instruction (stringclasses, 1 value) | input (stringlengths, 260 to 2.07k) | output (stringclasses, 10 values) |
---|---|---|
What field is the article from? | Title: DMS*: Minimizing Makespan for Multi-Agent Combinatorial Path Finding
Abstract: Multi-Agent Combinatorial Path Finding (MCPF) seeks collision-free paths for
multiple agents from their initial to goal locations, while visiting a set of
intermediate target locations in the middle of the paths. MCPF is challenging
as it involves both planning collision-free paths for multiple agents and
target sequencing, i.e., solving traveling salesman problems to assign targets
to and find the visiting order for the agents. Recent work develops methods to
address MCPF while minimizing the sum of individual arrival times at goals.
Such a problem formulation may result in paths with different arrival times and
lead to a long makespan, the maximum arrival time, among the agents. This paper
proposes a min-max variant of MCPF, denoted as MCPF-max, that minimizes the
makespan of the agents. While the existing methods (such as MS*) for MCPF can
be adapted to solve MCPF-max, we further develop two new techniques based on
MS* to defer the expensive target sequencing during planning to expedite the
overall computation. We analyze the properties of the resulting algorithm
Deferred MS* (DMS*), and test DMS* with up to 20 agents and 80 targets. We
demonstrate the use of DMS* on differential-drive robots. | Robotics |
What field is the article from? | Title: The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills
Abstract: This study explores the impact of AI-generated digital self-clones on
improving online presentation skills. We carried out a mixed-design experiment
involving 44 international students, comparing self-recorded videos (control)
with self-clone videos (AI group) for English presentation practice. The AI
videos utilized voice cloning, face swapping, lip-sync, and body-language
simulation to refine participants' original presentations in terms of
repetition, filler words, and pronunciation. Machine-rated scores indicated
enhancements in speech performance for both groups. Though the groups didn't
significantly differ, the AI group exhibited a heightened depth of reflection,
self-compassion, and a meaningful transition from a corrective to an enhancive
approach to self-critique. Within the AI group, congruence between
self-perception and AI self-clones resulted in diminished speech anxiety and
increased enjoyment. Our findings recommend the ethical employment of digital
self-clones to enhance the emotional and cognitive facets of skill development. | Human-Computer Interaction |
What field is the article from? | Title: Artificial Intelligence in Sustainable Vertical Farming
Abstract: As global challenges of population growth, climate change, and resource
scarcity intensify, the agricultural landscape is at a critical juncture.
Sustainable vertical farming emerges as a transformative solution to address
these challenges by maximizing crop yields in controlled environments. This
paradigm shift necessitates the integration of cutting-edge technologies, with
Artificial Intelligence (AI) at the forefront. The paper provides a
comprehensive exploration of the role of AI in sustainable vertical farming,
investigating its potential, challenges, and opportunities. The review
synthesizes the current state of AI applications, encompassing machine
learning, computer vision, the Internet of Things (IoT), and robotics, in
optimizing resource usage, automating tasks, and enhancing decision-making. It
identifies gaps in research, emphasizing the need for optimized AI models,
interdisciplinary collaboration, and the development of explainable AI in
agriculture. The implications extend beyond efficiency gains, considering
economic viability, reduced environmental impact, and increased food security.
The paper concludes by offering insights for stakeholders and suggesting
avenues for future research, aiming to guide the integration of AI technologies
in sustainable vertical farming for a resilient and sustainable future in
agriculture. | Computers and Society |
What field is the article from? | Title: ECLM: Efficient Edge-Cloud Collaborative Learning with Continuous Environment Adaptation
Abstract: Pervasive mobile AI applications primarily employ one of the two learning
paradigms: cloud-based learning (with powerful large models) or on-device
learning (with lightweight small models). Despite their own advantages, neither
paradigm can effectively handle dynamic edge environments with frequent data
distribution shifts and on-device resource fluctuations, inevitably suffering
from performance degradation. In this paper, we propose ECLM, an edge-cloud
collaborative learning framework for rapid model adaptation for dynamic edge
environments. We first propose a novel block-level model decomposition design
to decompose the original large cloud model into multiple combinable modules.
By flexibly combining a subset of the modules, this design enables the
derivation of compact, task-specific sub-models for heterogeneous edge devices
from the large cloud model, and the seamless integration of new knowledge
learned on these devices into the cloud model periodically. As such, ECLM
ensures that the cloud model always provides up-to-date sub-models for edge
devices. We further propose an end-to-end learning framework that incorporates
the modular model design into an efficient model adaptation pipeline including
an offline on-cloud model prototyping and training stage, and an online
edge-cloud collaborative adaptation stage. Extensive experiments over various
datasets demonstrate that ECLM significantly improves model performance (e.g.,
18.89% accuracy increase) and resource efficiency (e.g., 7.12x communication
cost reduction) in adapting models to dynamic edge environments by efficiently
collaborating the edge and the cloud models. | Machine Learning |
What field is the article from? | Title: The Generative AI Paradox: "What It Can Create, It May Not Understand"
Abstract: The recent wave of generative AI has sparked unprecedented global attention,
with both excitement and concern over potentially superhuman levels of
artificial intelligence: models now take only seconds to produce outputs that
would challenge or exceed the capabilities even of expert humans. At the same
time, models still show basic errors in understanding that would not be
expected even in non-expert humans. This presents us with an apparent paradox:
how do we reconcile seemingly superhuman capabilities with the persistence of
errors that few humans would make? In this work, we posit that this tension
reflects a divergence in the configuration of intelligence in today's
generative models relative to intelligence in humans. Specifically, we propose
and test the Generative AI Paradox hypothesis: generative models, having been
trained directly to reproduce expert-like outputs, acquire generative
capabilities that are not contingent upon -- and can therefore exceed -- their
ability to understand those same types of outputs. This contrasts with humans,
for whom basic understanding almost always precedes the ability to generate
expert-level outputs. We test this hypothesis through controlled experiments
analyzing generation vs. understanding in generative models, across both
language and image modalities. Our results show that although models can
outperform humans in generation, they consistently fall short of human
capabilities in measures of understanding, as well as weaker correlation
between generation and understanding performance, and more brittleness to
adversarial inputs. Our findings support the hypothesis that models' generative
capability may not be contingent upon understanding capability, and call for
caution in interpreting artificial intelligence by analogy to human
intelligence. | Artificial Intelligence |
What field is the article from? | Title: Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment
Abstract: To ensure AI safety, instruction-tuned Large Language Models (LLMs) are
specifically trained to ensure alignment, which refers to making models behave
in accordance with human intentions. While these models have demonstrated
commendable results on various safety benchmarks, the vulnerability of their
safety alignment has not been extensively studied. This is particularly
troubling given the potential harm that LLMs can inflict. Existing attack
methods on LLMs often rely on poisoned training data or the injection of
malicious prompts. These approaches compromise the stealthiness and
generalizability of the attacks, making them susceptible to detection.
Additionally, these models often demand substantial computational resources for
implementation, making them less practical for real-world applications.
Inspired by recent success in modifying model behavior through steering vectors
without the need for optimization, and drawing on its effectiveness in
red-teaming LLMs, we conducted experiments employing activation steering to
target four key aspects of LLMs: truthfulness, toxicity, bias, and harmfulness
- across a varied set of attack settings. To establish a universal attack
strategy applicable to diverse target alignments without depending on manual
analysis, we automatically select the intervention layer based on contrastive
layer search. Our experiment results show that activation attacks are highly
effective and add little or no overhead to attack efficiency. Additionally, we
discuss potential countermeasures against such activation attacks. Our code and
data are available at https://github.com/wang2226/Backdoor-Activation-Attack
Warning: this paper contains content that can be offensive or upsetting. | Cryptography and Security |
What field is the article from? | Title: MOCHa: Multi-Objective Reinforcement Mitigating Caption Hallucinations
Abstract: While recent years have seen rapid progress in image-conditioned text
generation, image captioning still suffers from the fundamental issue of
hallucinations, the generation of spurious details that cannot be inferred from
the given image. Dedicated methods for reducing hallucinations in image
captioning largely focus on closed-vocabulary object tokens, ignoring most
types of hallucinations that occur in practice. In this work, we propose MOCHa,
an approach that harnesses advancements in reinforcement learning (RL) to
address the sequence-level nature of hallucinations in an open-world setup. To
optimize for caption fidelity to the input image, we leverage ground-truth
reference captions as proxies to measure the logical consistency of generated
captions. However, optimizing for caption fidelity alone fails to preserve the
semantic adequacy of generations; therefore, we propose a multi-objective
reward function that jointly targets these qualities, without requiring any
strong supervision. We demonstrate that these goals can be simultaneously
optimized with our framework, enhancing performance for various captioning
models of different scales. Our qualitative and quantitative results
demonstrate MOCHa's superior performance across various established metrics. We
also demonstrate the benefit of our method in the open-vocabulary setting. To
this end, we contribute OpenCHAIR, a new benchmark for quantifying
open-vocabulary hallucinations in image captioning models, constructed using
generative foundation models. We will release our code, benchmark, and trained
models. | Computer Vision |
What field is the article from? | Title: Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests
Abstract: To what degree should we ascribe cognitive capacities to Large Language
Models (LLMs), such as the ability to reason about intentions and beliefs known
as Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11
base- and instruction-tuned LLMs on capabilities relevant to ToM beyond the
dominant false-belief paradigm, including non-literal language usage and
recursive intentionality; (ii) using newly rewritten versions of standardized
tests to gauge LLMs' robustness; (iii) prompting and scoring for open as well as
closed questions; and (iv) benchmarking LLM performance against that of
children aged 7-10 on the same tasks. We find that instruction-tuned LLMs from
the GPT family outperform other models, and often also children. Base-LLMs are
mostly unable to solve ToM tasks, even with specialized prompting. We suggest
that the interlinked evolution and development of language and ToM may help
explain what instruction-tuning adds: rewarding cooperative communication that
takes into account interlocutor and context. We conclude by arguing for a
nuanced perspective on ToM in LLMs. | Computational Linguistics |
What field is the article from? | Title: Leveraging Large Language Models to Build and Execute Computational Workflows
Abstract: The recent development of large language models (LLMs) with multi-billion
parameters, coupled with the creation of user-friendly application programming
interfaces (APIs), has paved the way for automatically generating and executing
code in response to straightforward human queries. This paper explores how
these emerging capabilities can be harnessed to facilitate complex scientific
workflows, eliminating the need for traditional coding methods. We present
initial findings from our attempt to integrate Phyloflow with OpenAI's
function-calling API, and outline a strategy for developing a comprehensive
workflow management system based on these concepts. | Artificial Intelligence |
What field is the article from? | Title: Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning
Abstract: Neural networks have revolutionized various domains, exhibiting remarkable
accuracy in tasks like natural language processing and computer vision.
However, their vulnerability to slight alterations in input samples poses
challenges, particularly in safety-critical applications like autonomous
driving. Current approaches, such as introducing distortions during training,
fall short in addressing unforeseen corruptions. This paper proposes an
innovative adversarial contrastive learning framework to enhance neural network
robustness simultaneously against adversarial attacks and common corruptions.
By generating instance-wise adversarial examples and optimizing contrastive
loss, our method fosters representations that resist adversarial perturbations
and remain robust in real-world scenarios. Subsequent contrastive learning then
strengthens the similarity between clean samples and their adversarial
counterparts, fostering representations resistant to both adversarial attacks
and common distortions. By focusing on improving performance under adversarial
and real-world conditions, our approach aims to bolster the robustness of
neural networks in safety-critical applications, such as autonomous vehicles
navigating unpredictable weather conditions. We anticipate that this framework
will contribute to advancing the reliability of neural networks in challenging
environments, facilitating their widespread adoption in mission-critical
scenarios. | Computer Vision |
What field is the article from? | Title: Towards the Inferrence of Structural Similarity of Combinatorial Landscapes
Abstract: One of the most common problem-solving heuristics is by analogy. For a given
problem, a solver can be viewed as a strategic walk on its fitness landscape.
Thus if a solver works for one problem instance, we expect it will also be
effective for other instances whose fitness landscapes essentially share
structural similarities with each other. However, due to the black-box nature
of combinatorial optimization, it is far from trivial to infer such similarity
in real-world scenarios. To bridge this gap, by using local optima network as a
proxy of fitness landscapes, this paper proposes to leverage graph data mining
techniques to conduct qualitative and quantitative analyses to explore the
latent topological structural information embedded in those landscapes. By
conducting large-scale empirical experiments on three classic combinatorial
optimization problems, we gain concrete evidence to support the existence of
structural similarity between landscapes of the same classes within neighboring
dimensions. We also interrogated the relationship between landscapes of
different problem classes. | Machine Learning |
What field is the article from? | Title: Virtual Action Actor-Critic Framework for Exploration (Student Abstract)
Abstract: Efficient exploration for an agent is challenging in reinforcement learning
(RL). In this paper, a novel actor-critic framework, namely virtual action
actor-critic (VAAC), is proposed to address the challenge of efficient
exploration in RL. This work is inspired by humans' ability to imagine the
potential outcomes of their actions without actually taking them. In order to
emulate this ability, VAAC introduces a new actor called virtual actor (VA),
alongside the conventional actor-critic framework. Unlike the conventional
actor, the VA takes the virtual action to anticipate the next state without
interacting with the environment. With the virtual policy following a Gaussian
distribution, the VA is trained to maximize the anticipated novelty of the
subsequent state resulting from a virtual action. If any next state resulting
from available actions does not exhibit high anticipated novelty, training the
VA leads to an increase in the virtual policy entropy. Hence, high virtual
policy entropy represents that there is no room for exploration. The proposed
VAAC aims to maximize a modified Q function, which combines cumulative rewards
and the negative sum of virtual policy entropy. Experimental results show that
the VAAC improves the exploration performance compared to existing algorithms. | Machine Learning |
What field is the article from? | Title: FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning
Abstract: Federated Learning (FL) is a collaborative method for training models while
preserving data privacy in decentralized settings. However, FL encounters
challenges related to data heterogeneity, which can result in performance
degradation. In our study, we observe that as data heterogeneity increases,
feature representation in the FedAVG model deteriorates more significantly
compared to classifier weight. Additionally, we observe that as data
heterogeneity increases, the gap between higher feature norms for observed
classes, obtained from local models, and feature norms of unobserved classes
widens, in contrast to the behavior of classifier weight norms. This widening
gap extends to encompass the feature norm disparities between local and the
global models. To address these issues, we introduce Federated Averaging with
Feature Normalization Update (FedFN), a straightforward learning method. We
demonstrate the superior performance of FedFN through extensive experiments,
even when applied to pretrained ResNet18. Subsequently, we confirm the
applicability of FedFN to foundation models. | Machine Learning |
What field is the article from? | Title: Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models
Abstract: Personalization, the ability to tailor a system to individual users, is an
essential factor in user experience with natural language processing (NLP)
systems. With the emergence of Large Language Models (LLMs), a key question is
how to leverage these models to better personalize user experiences. To
personalize a language model's output, a straightforward approach is to
incorporate past user data into the language model prompt, but this approach
can result in lengthy inputs exceeding limitations on input length and
incurring latency and cost issues. Existing approaches tackle such challenges
by selectively extracting relevant user data (i.e. selective retrieval) to
construct a prompt for downstream tasks. However, retrieval-based methods are
limited by potential information loss, lack of more profound user
understanding, and cold-start challenges. To overcome these limitations, we
propose a novel summary-augmented approach by extending retrieval-augmented
personalization with task-aware user summaries generated by LLMs. The summaries
can be generated and stored offline, enabling real-world systems with runtime
constraints like voice assistants to leverage the power of LLMs. Experiments
show that our method, with 75% less retrieved user data, is on par with or outperforms
retrieval augmentation on most tasks in the LaMP personalization benchmark. We
demonstrate that offline summarization via LLMs and runtime retrieval enables
better performance for personalization on a range of tasks under practical
constraints. | Computational Linguistics |
What field is the article from? | Title: Bias Resilient Multi-Step Off-Policy Goal-Conditioned Reinforcement Learning
Abstract: In goal-conditioned reinforcement learning (GCRL), sparse rewards present
significant challenges, often obstructing efficient learning. Although
multi-step GCRL can boost this efficiency, it can also lead to off-policy
biases in target values. This paper dives deep into these biases, categorizing
them into two distinct categories: "shooting" and "shifting". Recognizing that
certain behavior policies can hasten policy refinement, we present solutions
designed to capitalize on the positive aspects of these biases while minimizing
their drawbacks, enabling the use of larger step sizes to speed up GCRL. An
empirical study demonstrates that our approach ensures a resilient and robust
improvement, even in ten-step learning scenarios, leading to superior learning
efficiency and performance that generally surpass the baseline and several
state-of-the-art multi-step GCRL benchmarks. | Machine Learning |
What field is the article from? | Title: Transfer of Reinforcement Learning-Based Controllers from Model- to Hardware-in-the-Loop
Abstract: The process of developing control functions for embedded systems is
resource-, time-, and data-intensive, often resulting in sub-optimal cost and
solution approaches. Reinforcement Learning (RL) has great potential for
autonomously training agents to perform complex control tasks with minimal
human intervention. Due to costly data generation and safety constraints,
however, its application is mostly limited to purely simulated domains. To use
RL effectively in embedded system function development, the generated agents
must be able to handle real-world applications. In this context, this work
focuses on accelerating the training process of RL agents by combining Transfer
Learning (TL) and X-in-the-Loop (XiL) simulation. For the use case of transient
exhaust gas re-circulation control for an internal combustion engine, use of a
computationally cheap Model-in-the-Loop (MiL) simulation is made to select a
suitable algorithm, fine-tune hyperparameters, and finally train candidate
agents for the transfer. These pre-trained RL agents are then fine-tuned in a
Hardware-in-the-Loop (HiL) system via TL. The transfer revealed the need for
adjusting the reward parameters when advancing to real hardware. Further, the
comparison between a purely HiL-trained and a transferred agent showed a
reduction of training time by a factor of 5.9. The results emphasize the
necessity to train RL agents with real hardware, and demonstrate that the
maturity of the transferred policies affects both training time and
performance, highlighting the strong synergies between TL and XiL simulation. | Machine Learning |
What field is the article from? | Title: Open Knowledge Base Canonicalization with Multi-task Unlearning
Abstract: The construction of large open knowledge bases (OKBs) is integral to many
applications in the field of mobile computing. Noun phrases and relational
phrases in OKBs often suffer from redundancy and ambiguity, which calls for the
investigation on OKB canonicalization. However, in order to meet the
requirements of some privacy protection regulations and to ensure the
timeliness of the data, the canonicalized OKB often needs to remove some
sensitive information or outdated data. The machine unlearning in OKB
canonicalization is an excellent solution to the above problem. Current
solutions address OKB canonicalization by devising advanced clustering
algorithms and using knowledge graph embedding (KGE) to further facilitate the
canonicalization process. Effective schemes are urgently needed to fully
synergise machine unlearning with clustering and KGE learning. To this end, we
put forward a multi-task unlearning framework, namely MulCanon, to tackle
the machine unlearning problem in OKB canonicalization. Specifically, the noise
characteristics in the diffusion model are utilized to achieve the effect of
machine unlearning for data in OKB. MulCanon unifies the learning objectives of
diffusion model, KGE and clustering algorithms, and adopts a two-step
multi-task learning paradigm for training. A thorough experimental study on
popular OKB canonicalization datasets validates that MulCanon achieves advanced
machine unlearning effects. | Artificial Intelligence |
What field is the article from? | Title: Panoptica -- instance-wise evaluation of 3D semantic and instance segmentation maps
Abstract: This paper introduces panoptica, a versatile and performance-optimized
package designed for computing instance-wise segmentation quality metrics from
2D and 3D segmentation maps. panoptica addresses the limitations of existing
metrics and provides a modular framework that complements the original
intersection over union-based panoptic quality with other metrics, such as the
distance metric Average Symmetric Surface Distance. The package is open-source,
implemented in Python, and accompanied by comprehensive documentation and
tutorials. panoptica employs a three-step metrics computation process to cover
diverse use cases. The efficacy of panoptica is demonstrated on various
real-world biomedical datasets, where an instance-wise evaluation is
instrumental for an accurate representation of the underlying clinical task.
Overall, we envision panoptica as a valuable tool facilitating in-depth
evaluation of segmentation methods. | Computer Vision |
What field is the article from? | Title: Enhancing Functional Data Analysis with Sequential Neural Networks: Advantages and Comparative Study
Abstract: Functional Data Analysis (FDA) is a statistical domain developed to handle
functional data characterized by high dimensionality and complex data
structures. Sequential Neural Networks (SNNs) are specialized neural networks
capable of processing sequence data, a fundamental aspect of functional data.
Despite their great flexibility in modeling functional data, SNNs have been
inadequately employed in the FDA community. One notable advantage of SNNs is
the ease of implementation, making them accessible to a broad audience beyond
academia. Conversely, FDA-based methodologies present challenges, particularly
for practitioners outside the field, due to their intricate complexity. In
light of this, we propose utilizing SNNs in FDA applications and demonstrate
their effectiveness through comparative analyses against popular FDA regression
models based on numerical experiments and real-world data analysis. SNN
architectures allow us to surpass the limitations of traditional FDA methods,
offering scalability, flexibility, and improved analytical performance. Our
findings highlight the potential of SNN-based methodologies as powerful tools
for data applications involving functional data. | Machine Learning |
What field is the article from? | Title: CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models
Abstract: In this paper, we present CharacterGLM, a series of models built upon
ChatGLM, with model sizes ranging from 6B to 66B parameters. Our CharacterGLM
is designed for generating Character-based Dialogues (CharacterDial), which
aims to equip a conversational AI system with character customization for
satisfying people's inherent social desires and emotional needs. On top of
CharacterGLM, we can customize various AI characters or social agents by
configuring their attributes (identities, interests, viewpoints, experiences,
achievements, social relationships, etc.) and behaviors (linguistic features,
emotional expressions, interaction patterns, etc.). Our model outperforms most
mainstream closed-source large language models, including the GPT series,
especially in terms of consistency, human-likeness, and engagement according to
manual evaluations. We will release our 6B version of CharacterGLM and a subset
of training data to facilitate further research development in the direction of
character-based dialogue generation. | Computational Linguistics |
What field is the article from? | Title: Probabilistic Inference in Reinforcement Learning Done Right
Abstract: A popular perspective in Reinforcement learning (RL) casts the problem as
probabilistic inference on a graphical model of the Markov decision process
(MDP). The core object of study is the probability of each state-action pair
being visited under the optimal policy. Previous approaches to approximate this
quantity can be arbitrarily poor, leading to algorithms that do not implement
genuine statistical inference and consequently do not perform well in
challenging problems. In this work, we undertake a rigorous Bayesian treatment
of the posterior probability of state-action optimality and clarify how it
flows through the MDP. We first reveal that this quantity can indeed be used to
generate a policy that explores efficiently, as measured by regret.
Unfortunately, computing it is intractable, so we derive a new variational
Bayesian approximation yielding a tractable convex optimization problem and
establish that the resulting policy also explores efficiently. We call our
approach VAPOR and show that it has strong connections to Thompson sampling,
K-learning, and maximum entropy exploration. We conclude with some experiments
demonstrating the performance advantage of a deep RL version of VAPOR. | Machine Learning |
What field is the article from? | Title: Adapt Anything: Tailor Any Image Classifiers across Domains And Categories Using Text-to-Image Diffusion Models
Abstract: We do not pursue a novel method in this paper, but aim to study if a modern
text-to-image diffusion model can tailor any task-adaptive image classifier
across domains and categories. Existing domain adaptive image classification
works exploit both source and target data for domain alignment so as to
transfer the knowledge learned from the labeled source data to the unlabeled
target data. However, with the development of text-to-image diffusion models,
we wonder if the high-fidelity synthetic data from the text-to-image generator
can serve as a surrogate of the source data in the real world. In this way, we do
not need to collect and annotate the source data for each domain adaptation
task in a one-for-one manner. Instead, we utilize only one off-the-shelf
text-to-image model to synthesize images with category labels derived from the
corresponding text prompts, and then leverage the surrogate data as a bridge to
transfer the knowledge embedded in the task-agnostic text-to-image generator to
the task-oriented image classifier via domain adaptation. Such a one-for-all
adaptation paradigm allows us to adapt anything in the world using only one
text-to-image generator as well as the corresponding unlabeled target data.
Extensive experiments validate the feasibility of the proposed idea, which even
surpasses the state-of-the-art domain adaptation works using the source data
collected and annotated in the real world. | Computer Vision |
What field is the article from? | Title: A Review On Table Recognition Based On Deep Learning
Abstract: Table recognition is using the computer to automatically understand the
table, to detect the position of the table from the document or picture, and to
correctly extract and identify the internal structure and content of the table.
After earlier mainstream approaches based on heuristic rules and machine
learning, the development of deep learning techniques has brought a new
paradigm to this field. This review mainly discusses the table recognition
problem from five aspects. The first part introduces data sets, benchmarks, and
commonly used evaluation indicators. This section selects representative data
sets, benchmarks, and evaluation indicators that are frequently used by
researchers. The second part introduces the table recognition model. This
survey introduces the development of the table recognition model, especially
the table recognition model based on deep learning. It is generally accepted
that table recognition is divided into two stages: table detection and table
structure recognition. This section introduces the models that follow this
paradigm (TD and TSR). The third part covers end-to-end methods; this section
introduces some scholars' attempts to use an end-to-end approach to solve the
table recognition problem once and for all. The fourth part covers data-centric
methods, such as data augmentation and benchmark alignment. The fifth part
summarizes and compares the experimental
data in the field of form recognition, and analyzes the mainstream and more
advantageous methods. Finally, this paper also discusses the possible
development direction and trend of form processing in the future, to provide
some ideas for researchers in the field of table recognition. (Resource will be
released at https://github.com/Wa1den-jy/Topic-on-Table-Recognition .) | Computer Vision |
What field is the article from? | Title: ARIA: On the interaction between Architectures, Aggregation methods and Initializations in federated visual classification
Abstract: Federated Learning (FL) is a collaborative training paradigm that allows for
privacy-preserving learning of cross-institutional models by eliminating the
exchange of sensitive data and instead relying on the exchange of model
parameters between the clients and a server. Despite individual studies on how
client models are aggregated, and, more recently, on the benefits of ImageNet
pre-training, there is a lack of understanding of the effect the architecture
chosen for the federation has, and of how the aforementioned elements
interconnect. To this end, we conduct the first joint
ARchitecture-Initialization-Aggregation study and benchmark ARIAs across a
range of medical image classification tasks. We find that, contrary to current
practices, ARIA elements have to be chosen together to achieve the best
possible performance. Our results also shed light on good choices for each
element depending on the task, the effect of normalisation layers, and the
utility of SSL pre-training, pointing to potential directions for designing
FL-specific architectures and training pipelines. | Computer Vision |
What field is the article from? | Title: Causal Structure Learning Supervised by Large Language Model
Abstract: Causal discovery from observational data is pivotal for deciphering complex
relationships. Causal Structure Learning (CSL), which focuses on deriving
causal Directed Acyclic Graphs (DAGs) from data, faces challenges due to vast
DAG spaces and data sparsity. The integration of Large Language Models (LLMs),
recognized for their causal reasoning capabilities, offers a promising
direction to enhance CSL by infusing it with knowledge-based causal inferences.
However, existing approaches utilizing LLMs for CSL have encountered issues,
including unreliable constraints from imperfect LLM inferences and the
computational intensity of full pairwise variable analyses. In response, we
introduce the Iterative LLM Supervised CSL (ILS-CSL) framework. ILS-CSL
innovatively integrates LLM-based causal inference with CSL in an iterative
process, refining the causal DAG using feedback from LLMs. This method not only
utilizes LLM resources more efficiently but also generates more robust and
high-quality structural constraints compared to previous methodologies. Our
comprehensive evaluation across eight real-world datasets demonstrates
ILS-CSL's superior performance, setting a new standard in CSL efficacy and
showcasing its potential to significantly advance the field of causal
discovery. The codes are available at
\url{https://github.com/tyMadara/ILS-CSL}. | Artificial Intelligence |
What field is the article from? | Title: AviationGPT: A Large Language Model for the Aviation Domain
Abstract: The advent of ChatGPT and GPT-4 has captivated the world with large language
models (LLMs), demonstrating exceptional performance in question-answering,
summarization, and content generation. The aviation industry is characterized
by an abundance of complex, unstructured text data, replete with technical
jargon and specialized terminology. Moreover, labeled data for model building
are scarce in this domain, resulting in low usage of aviation text data. The
emergence of LLMs presents an opportunity to transform this situation, but
there is a lack of LLMs specifically designed for the aviation domain. To
address this gap, we propose AviationGPT, which is built on open-source LLaMA-2
and Mistral architectures and continuously trained on a wealth of carefully
curated aviation datasets. Experimental results reveal that AviationGPT offers
users multiple advantages, including the versatility to tackle diverse natural
language processing (NLP) problems (e.g., question-answering, summarization,
document writing, information extraction, report querying, data cleaning, and
interactive data exploration). It also provides accurate and contextually
relevant responses within the aviation domain and significantly improves
performance (e.g., over a 40% performance gain in tested cases). With
AviationGPT, the aviation industry is better equipped to address more complex
research problems and enhance the efficiency and safety of National Airspace
System (NAS) operations. | Computational Linguistics |
What field is the article from? | Title: Conceptual Model Interpreter for Large Language Models
Abstract: Large Language Models (LLMs) recently demonstrated capabilities for
generating source code in common programming languages. Additionally,
commercial products such as ChatGPT 4 started to provide code interpreters,
allowing for the automatic execution of generated code fragments, instant
feedback, and the possibility to develop and refine in a conversational
fashion. With an exploratory research approach, this paper applies code
generation and interpretation to conceptual models. The concept and prototype
of a conceptual model interpreter is explored, capable of rendering visual
models generated in textual syntax by state-of-the-art LLMs such as Llama 2 and
ChatGPT 4. In particular, these LLMs can generate textual syntax for the
PlantUML and Graphviz modeling software that is automatically rendered within a
conversational user interface. The first result is an architecture describing
the components necessary to interact with interpreters and LLMs through APIs or
locally, providing support for many commercial and open source LLMs and
interpreters. Secondly, experimental results for models generated with ChatGPT
4 and Llama 2 are discussed in two cases covering UML and, on an instance
level, graphs created from custom data. The results indicate the possibility of
modeling iteratively in a conversational fashion. | Software Engineering |
What field is the article from? | Title: Fin-QD: A Computational Design Framework for Soft Grippers: Integrating MAP-Elites and High-fidelity FEM
Abstract: Computational design can excite the full potential of soft robotics that has
the drawbacks of being highly nonlinear from material, structure, and contact.
To date, enthusiastic research interest has been demonstrated for
individual soft fingers, but the frame design space (how each soft finger is
assembled) remains largely unexplored. Computational design remains
challenging for finger-based soft grippers that must grip across multiple
geometrically distinct object types successfully. Including the design space for
the gripper frame can bring huge difficulties for conventional optimisation
algorithms and fitness calculation methods due to the exponential growth of
high-dimensional design space. This work proposes an automated computational
design optimisation framework that generates gripper diversity to individually
grasp geometrically distinct object types based on a quality-diversity
approach. This work first discusses a significantly large design space (28
design parameters) for a finger-based soft gripper, including the
rarely-explored design space of finger arrangement that is converted to various
configurations to arrange individual soft fingers. Then, a contact-based Finite
Element Modelling (FEM) is proposed in SOFA to output high-fidelity grasping
data for fitness evaluation and feature measurements. Finally, diverse gripper
designs are obtained from the framework while considering features such as the
volume and workspace of grippers. This work bridges the gap of computationally
exploring the vast design space of finger-based soft grippers while grasping
large geometrically distinct object types with a simple control scheme. | Robotics |
What field is the article from? | Title: ROAM: memory-efficient large DNN training via optimized operator ordering and memory layout
Abstract: As deep learning models continue to increase in size, the memory requirements
for training have surged. While high-level techniques like offloading,
recomputation, and compression can alleviate memory pressure, they also
introduce overheads. However, a memory-efficient execution plan that includes a
reasonable operator execution order and tensor memory layout can significantly
increase the models' memory efficiency and reduce overheads from high-level
techniques. In this paper, we propose ROAM, which operates at the computation
graph level to derive a memory-efficient execution plan with optimized operator order
and tensor memory layout for models. We first propose sophisticated theories
that carefully consider model structure and training memory load to support
optimization for large complex graphs that have not been well supported in the
past. An efficient tree-based algorithm is further proposed to search task
divisions automatically, along with delivering high performance and
effectiveness to solve the problem. Experiments show that ROAM achieves a
substantial memory reduction of 35.7%, 13.3%, and 27.2% compared to Pytorch and
two state-of-the-art methods and offers a remarkable 53.7x speedup. The
evaluation conducted on the expansive GPT2-XL further validates ROAM's
scalability. | Machine Learning |
What field is the article from? | Title: An Integrative Paradigm for Enhanced Stroke Prediction: Synergizing XGBoost and xDeepFM Algorithms
Abstract: Stroke prediction plays a crucial role in preventing and managing this
debilitating condition. In this study, we address the challenge of stroke
prediction using a comprehensive dataset, and propose an ensemble model that
combines the power of XGBoost and xDeepFM algorithms. Our work aims to improve
upon existing stroke prediction models by achieving higher accuracy and
robustness. Through rigorous experimentation, we validate the effectiveness of
our ensemble model using the AUC metric. Through comparing our findings with
those of other models in the field, we gain valuable insights into the merits
and drawbacks of various approaches. This, in turn, contributes significantly
to the progress of machine learning and deep learning techniques specifically
in the domain of stroke prediction. | Computer Vision |
What field is the article from? | Title: Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs
Abstract: Generalizable articulated object manipulation is essential for home-assistant
robots. Recent efforts focus on imitation learning from demonstrations or
reinforcement learning in simulation, however, due to the prohibitive costs of
real-world data collection and precise object simulation, it still remains
challenging for these works to achieve broad adaptability across diverse
articulated objects. Recently, many works have tried to utilize the strong
in-context learning ability of Large Language Models (LLMs) to achieve
generalizable robotic manipulation, but most of this research focuses on
high-level task planning, sidelining low-level robotic control. In this work,
building on the idea that the kinematic structure of the object determines how
we can manipulate it, we propose a kinematic-aware prompting framework that
prompts LLMs with kinematic knowledge of objects to generate low-level motion
trajectory waypoints, supporting various object manipulation. To effectively
prompt LLMs with the kinematic structure of different objects, we design a
unified kinematic knowledge parser, which represents various articulated
objects as a unified textual description containing kinematic joints and
contact location. Building upon this unified description, a kinematic-aware
planner model is proposed to generate precise 3D manipulation waypoints via a
designed kinematic-aware chain-of-thoughts prompting method. Our evaluation
spanned 48 instances across 16 distinct categories, revealing that our
framework not only outperforms traditional methods on 8 seen categories but
also shows a powerful zero-shot capability for 8 unseen articulated object
categories. Moreover, the real-world experiments on 7 different object
categories prove our framework's adaptability in practical scenarios. Code is
released at
\href{https://github.com/GeWu-Lab/LLM_articulated_object_manipulation/tree/main}{here}. | Robotics |
What field is the article from? | Title: Formal Methods for Autonomous Systems
Abstract: Formal methods refer to rigorous, mathematical approaches to system
development and have played a key role in establishing the correctness of
safety-critical systems. The main building blocks of formal methods are models
and specifications, which are analogous to behaviors and requirements in system
design and give us the means to verify and synthesize system behaviors with
formal guarantees.
This monograph provides a survey of the current state of the art on
applications of formal methods in the autonomous systems domain. We consider
correct-by-construction synthesis under various formulations, including closed
systems, reactive, and probabilistic settings. Beyond synthesizing systems in
known environments, we address the concept of uncertainty and bound the
behavior of systems that employ learning using formal methods. Further, we
examine the synthesis of systems with monitoring, a mitigation technique for
ensuring that once a system deviates from expected behavior, it knows a way of
returning to normalcy. We also show how to overcome some limitations of formal
methods themselves with learning. We conclude with future directions for formal
methods in reinforcement learning, uncertainty, privacy, explainability of
formal methods, and regulation and certification. | Artificial Intelligence |
What field is the article from? | Title: NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors
Abstract: Developing intelligent robots for complex manipulation tasks in household and
factory settings remains challenging due to long-horizon tasks, contact-rich
manipulation, and the need to generalize across a wide variety of object shapes
and scene layouts. While Task and Motion Planning (TAMP) offers a promising
solution, its assumptions such as kinodynamic models limit applicability in
novel contexts. Neural object descriptors (NODs) have shown promise in object
and scene generalization but face limitations in addressing broader tasks. Our
proposed TAMP-based framework, NOD-TAMP, extracts short manipulation
trajectories from a handful of human demonstrations, adapts these trajectories
using NOD features, and composes them to solve broad long-horizon tasks.
Validated in a simulation environment, NOD-TAMP effectively tackles varied
challenges and outperforms existing methods, establishing a cohesive framework
for manipulation planning. For videos and other supplemental material, see the
project website: https://sites.google.com/view/nod-tamp/. | Robotics |
What field is the article from? | Title: zkFDL: An efficient and privacy-preserving decentralized federated learning with zero knowledge proof
Abstract: Federated learning (FL) has been frequently used in various fields of study
and businesses. Traditional centralized FL systems suffer from serious issues.
To address these concerns, decentralized federated learning (DFL) systems have
been introduced in recent years in which with the help of blockchains, try to
achieve more integrity and efficiency. On the other hand, privacy-preserving is
an uncovered part of these systems. To address this, and also scaling the
blockchain-based computations, we propose a zero knowledge proof (ZKP) based
aggregator (zkDFL) that allows clients to share their large-scale model
parameters with a trusted centralized server without revealing their individual
data to other clients. We utilize blockchain technology to manage the
aggregation algorithm via smart contracts. The server performs a ZKP algorithm
to prove to the clients that the aggregation is done according to the accepted
algorithm. The server can also prove that all inputs of clients have been used.
We evaluate our measure through a public dataset about wearable internet of
things. As demonstrated by numerical evaluations, zkDFL introduces
verifiability of correctness of aggregation process and enhances the privacy
protection and scalability of DFL systems, while the gas cost has declined
significantly. | Cryptography and Security |
What field is the article from? | Title: Appearance Codes using Joint Embedding Learning of Multiple Modalities
Abstract: The use of appearance codes in recent work on generative modeling has enabled
novel view renders with variable appearance and illumination, such as day-time
and night-time renders of a scene. A major limitation of this technique is the
need to re-train new appearance codes for every scene on inference, so in this
work we address this problem by proposing a framework that learns a joint
embedding space for the appearance and structure of the scene by enforcing a
contrastive loss constraint between different modalities. We apply our
framework to a simple Variational Auto-Encoder model on the RADIATE dataset
\cite{sheeny2021radiate} and qualitatively demonstrate that we can generate new
renders of night-time photos using day-time appearance codes without additional
optimization iterations. Additionally, we compare our model to a baseline VAE
that uses the standard per-image appearance code technique and show that our
approach achieves generations of similar quality without learning appearance
codes for any unseen images on inference. | Computer Vision |
What field is the article from? | Title: Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach
Abstract: Efficient traffic signal control is critical for reducing traffic congestion
and improving overall transportation efficiency. The dynamic nature of traffic
flow has prompted researchers to explore Reinforcement Learning (RL) for
traffic signal control (TSC). Compared with traditional methods, RL-based
solutions have shown preferable performance. However, the application of
RL-based traffic signal controllers in the real world is limited by the low
sample efficiency and high computational requirements of these solutions. In
this work, we propose DTLight, a simple yet powerful lightweight Decision
Transformer-based TSC method that can learn policy from easily accessible
offline datasets. DTLight novelly leverages knowledge distillation to learn a
lightweight controller from a well-trained larger teacher model to reduce
implementation computation. Additionally, it integrates adapter modules to
mitigate the expenses associated with fine-tuning, which makes DTLight
practical for online adaptation with minimal computation and only a few
fine-tuning steps during real deployment. Moreover, DTLight is further enhanced
to be more applicable to real-world TSC problems. Extensive experiments on
synthetic and real-world scenarios show that DTLight pre-trained purely on
offline datasets can outperform state-of-the-art online RL-based methods in
most scenarios. Experiment results also show that online fine-tuning further
improves the performance of DTLight by up to 42.6% over the best online RL
baseline methods. In this work, we also introduce Datasets specifically
designed for TSC with offline RL (referred to as DTRL). Our datasets and code
are publicly available. | Machine Learning |
What field is the article from? | Title: An Empathetic User-Centric Chatbot for Emotional Support
Abstract: This paper explores the intersection of Otome Culture and artificial
intelligence, particularly focusing on how Otome-oriented games fulfill the
emotional needs of young women. These games, which are deeply rooted in a
subcultural understanding of love, provide players with feelings of
satisfaction, companionship, and protection through carefully crafted narrative
structures and character development. With the proliferation of Large Language
Models (LLMs), there is an opportunity to transcend traditional static game
narratives and create dynamic, emotionally responsive interactions. We present
a case study of Tears of Themis, where we have integrated LLM technology to
enhance the interactive experience. Our approach involves augmenting existing
game narratives with a Question and Answer (QA) system, enriched through data
augmentation and emotional enhancement techniques, resulting in a chatbot that
offers realistic and supportive companionship. | Human-Computer Interaction |
What field is the article from? | Title: GPT in Data Science: A Practical Exploration of Model Selection
Abstract: There is an increasing interest in leveraging Large Language Models (LLMs)
for managing structured data and enhancing data science processes. Despite the
potential benefits, this integration poses significant questions regarding
their reliability and decision-making methodologies. It highlights the
importance of various factors in the model selection process, including the
nature of the data, problem type, performance metrics, computational resources,
interpretability vs accuracy, assumptions about data, and ethical
considerations. Our objective is to elucidate and express the factors and
assumptions guiding GPT-4's model selection recommendations. We employ a
variability model to depict these factors and use toy datasets to evaluate both
the model and the implementation of the identified heuristics. By contrasting
these outcomes with heuristics from other platforms, our aim is to determine
the effectiveness and distinctiveness of GPT-4's methodology. This research is
committed to advancing our comprehension of AI decision-making processes,
especially in the realm of model selection within data science. Our efforts are
directed towards creating AI systems that are more transparent and
comprehensible, contributing to a more responsible and efficient practice in
data science. | Artificial Intelligence |
What field is the article from? | Title: A Survey of AI Text-to-Image and AI Text-to-Video Generators
Abstract: Text-to-Image and Text-to-Video AI generation models are revolutionary
technologies that use deep learning and natural language processing (NLP)
techniques to create images and videos from textual descriptions. This paper
investigates cutting-edge approaches in the discipline of Text-to-Image and
Text-to-Video AI generations. The survey provides an overview of the existing
literature as well as an analysis of the approaches used in various studies. It
covers data preprocessing techniques, neural network types, and evaluation
metrics used in the field. In addition, the paper discusses the challenges and
limitations of Text-to-Image and Text-to-Video AI generations, as well as
future research directions. Overall, these models have promising potential for
a wide range of applications such as video production, content creation, and
digital marketing. | Computer Vision |
What field is the article from? | Title: LLM-TAKE: Theme Aware Keyword Extraction Using Large Language Models
Abstract: Keyword extraction is one of the core tasks in natural language processing.
Classic extraction models are notorious for having a short attention span, which
makes it hard for them to draw relational connections among words and
sentences that are far from each other. This, in turn, makes their usage
prohibitive for generating keywords that are inferred from the context of the
whole text. In this paper, we explore using Large Language Models (LLMs) to
generate keywords for items that are inferred from the items' textual
metadata. Our modeling framework includes several stages to refine the
results by avoiding keywords that are non-informative or sensitive
and by reducing the hallucinations common in LLMs. We call our LLM-based framework
Theme-Aware Keyword Extraction (LLM-TAKE). We propose two variations of the
framework for generating extractive and abstractive themes for products in an
e-commerce setting. We perform an extensive set of experiments on three real
datasets and show that our modeling framework can enhance accuracy-based and
diversity-based metrics when compared with benchmark models. | Information Retrieval |
What field is the article from? | Title: SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization
Abstract: In real-world scenarios, achieving domain generalization (DG) presents
significant challenges as models are required to generalize to unknown target
distributions. Generalizing to unseen multi-modal distributions poses even
greater difficulties due to the distinct properties exhibited by different
modalities. To overcome the challenges of achieving domain generalization in
multi-modal scenarios, we propose SimMMDG, a simple yet effective multi-modal
DG framework. We argue that mapping features from different modalities into the
same embedding space impedes model generalization. To address this, we propose
splitting the features within each modality into modality-specific and
modality-shared components. We employ supervised contrastive learning on the
modality-shared features to ensure they possess joint properties and impose
distance constraints on modality-specific features to promote diversity. In
addition, we introduce a cross-modal translation module to regularize the
learned features, which can also be used for missing-modality generalization.
We demonstrate that our framework is theoretically well-supported and achieves
strong performance in multi-modal DG on the EPIC-Kitchens dataset and the novel
Human-Animal-Cartoon (HAC) dataset introduced in this paper. Our source code
and HAC dataset are available at https://github.com/donghao51/SimMMDG. | Computer Vision |
What field is the article from? | Title: A Comparative Analysis of Large Language Models for Code Documentation Generation
Abstract: This paper presents a comprehensive comparative analysis of Large Language
Models (LLMs) for generation of code documentation. Code documentation is an
essential part of the software writing process. The paper evaluates models such
as GPT-3.5, GPT-4, Bard, Llama2, and Starchat on various parameters like
Accuracy, Completeness, Relevance, Understandability, Readability and Time
Taken for different levels of code documentation. Our evaluation employs a
checklist-based system to minimize subjectivity, providing a more objective
assessment. We find that, barring StarChat, all LLMs consistently outperform
the original documentation. Notably, the closed-source models GPT-3.5, GPT-4, and
Bard exhibit superior performance across various parameters compared to the
open-source/source-available LLMs, namely Llama 2 and StarChat. Considering the
time taken for generation, GPT-4 demonstrated the longest duration, followed by
Llama 2 and Bard, with ChatGPT and StarChat having comparable generation times.
Additionally, file-level documentation performed considerably worse
across all parameters (except time taken) compared to inline and
function-level documentation. | Software Engineering |
What field is the article from? | Title: Can persistent homology whiten Transformer-based black-box models? A case study on BERT compression
Abstract: Large Language Models (LLMs) like BERT have gained significant prominence due
to their remarkable performance in various natural language processing tasks.
However, they come with substantial computational and memory costs.
Additionally, they are essentially black-box models, challenging to explain and
interpret. In this article, we propose Optimus BERT Compression and
Explainability (OBCE), a methodology to bring explainability to BERT models
using persistent homology, aiming to measure the importance of each neuron by
studying the topological characteristics of their outputs. As a result, we can
compress BERT significantly by reducing the number of parameters (58.47% of the
original parameters for BERT Base, 52.3% for BERT Large). We evaluated our
methodology on the standard GLUE Benchmark, comparing the results with
state-of-the-art techniques and achieving outstanding results. Consequently,
our methodology can "whiten" BERT models by providing explainability to its
neurons and reducing the model's size, making it more suitable for deployment
on resource-constrained devices. | Machine Learning |
What field is the article from? | Title: Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning
Abstract: Graph Neural Networks (GNNs) have paved their way to becoming a cornerstone in
graph-related learning tasks. From a theoretical perspective, the expressive
power of GNNs is primarily characterised by their ability to
distinguish non-isomorphic graphs. It is well known that most
conventional GNNs are upper-bounded by the Weisfeiler-Lehman graph isomorphism test
(1-WL). In this work, we study the expressive power of graph neural networks
through the lens of graph partitioning. This follows from our observation that
permutation-invariant graph partitioning enables a powerful way of exploring
structural interactions among vertex sets and subgraphs, and can help uplift
the expressive power of GNNs efficiently. Based on this, we first establish a
theoretical connection between graph partitioning and graph isomorphism. Then
we introduce a novel GNN architecture, namely Graph Partitioning Neural
Networks (GPNNs). We theoretically analyse how a graph partitioning scheme and
different kinds of structural interactions relate to the k-WL hierarchy.
Empirically, we demonstrate its superior performance over existing GNN models
in a variety of graph benchmark tasks. | Machine Learning |
What field is the article from? | Title: FAIRLABEL: Correcting Bias in Labels
Abstract: There are several algorithms for measuring fairness of ML models. A
fundamental assumption in these approaches is that the ground truth is fair or
unbiased. In real-world datasets, however, the ground truth often contains data
that is a result of historical and societal biases and discrimination. Models
trained on these datasets will inherit and propagate the biases to the model
outputs. We propose FAIRLABEL, an algorithm which detects and corrects biases
in labels. The goal of FAIRLABEL is to reduce the Disparate Impact (DI) across
groups while maintaining high accuracy in predictions. We propose metrics to
measure the quality of bias correction and validate FAIRLABEL on synthetic
datasets and show that the label correction is correct 86.7% of the time vs.
71.9% for a baseline model. We also apply FAIRLABEL on benchmark datasets such
as UCI Adult, German Credit Risk, and Compas datasets and show that the
Disparate Impact Ratio increases by as much as 54.2%. | Machine Learning |
What field is the article from? | Title: Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs
Abstract: Though prompting LLMs with various reasoning structures produces reasoning
proofs along with answers, these proofs are not ensured to be causal and
reliable due to the inherent defects of LLMs. To address such deficiencies, we
present a neuro-symbolic integration method, in which a neural LLM is used to
represent the knowledge of the problem while an LLM-free symbolic solver is
adopted to do deliberative reasoning using the knowledge. Specifically, our
customized meta-interpreters allow the production of reasoning proofs and
support flexible search strategies. These reasoning proofs are ensured to be
causal and reliable because of the deterministic executing nature of the
symbolic solvers. Empirically, on ProofWriter, our method surpasses the CoT
baseline by nearly double in accuracy and more than triple in proof similarity.
On GSM8K, our method also shows accuracy improvements and nearly doubled proof
similarity. Our code is released at https://github.com/DAMO-NLP-SG/CaRing | Artificial Intelligence |
What field is the article from? | Title: Diffusion Models for Reinforcement Learning: A Survey
Abstract: Diffusion models have emerged as a prominent class of generative models,
surpassing previous methods regarding sample quality and training stability.
Recent works have shown the advantages of diffusion models in improving
reinforcement learning (RL) solutions, including as trajectory planners,
expressive policy classes, data synthesizers, etc. This survey aims to provide
an overview of the advancements in this emerging field and hopes to inspire new
avenues of research. First, we examine several challenges encountered by
current RL algorithms. Then, we present a taxonomy of existing methods based on
the roles played by diffusion models in RL and explore how the existing
challenges are addressed. We further outline successful applications of
diffusion models in various RL-related tasks while discussing the limitations
of current approaches. Finally, we conclude the survey and offer insights into
future research directions, focusing on enhancing model performance and
applying diffusion models to broader tasks. We are actively maintaining a
GitHub repository for papers and other related resources in applying diffusion
models in RL: https://github.com/apexrl/Diff4RLSurvey | Machine Learning |
What field is the article from? | Title: Analyzing and Explaining Image Classifiers via Diffusion Guidance
Abstract: While deep learning has led to huge progress in complex image classification
tasks like ImageNet, unexpected failure modes, e.g. via spurious features, call
into question how reliably these classifiers work in the wild. Furthermore, for
safety-critical tasks the black-box nature of their decisions is problematic,
and explanations or at least methods which make decisions plausible are needed
urgently. In this paper, we address these problems by generating images that
optimize a classifier-derived objective using a framework for guided image
generation. We analyze the behavior and decisions of image classifiers by
visual counterfactual explanations (VCEs), detection of systematic mistakes by
analyzing images where classifiers maximally disagree, and visualization of
neurons to verify potential spurious features. In this way, we validate
existing observations, e.g. the shape bias of adversarially robust models, as
well as novel failure modes, e.g. systematic errors of zero-shot CLIP
classifiers, or identify harmful spurious features. Moreover, our VCEs
outperform previous work while being more versatile. | Computer Vision |
What field is the article from? | Title: GPT-4V(ision) as A Social Media Analysis Engine
Abstract: Recent research has offered insights into the extraordinary capabilities of
Large Multimodal Models (LMMs) in various general vision and language tasks.
There is growing interest in how LMMs perform in more specialized domains.
Social media content, inherently multimodal, blends text, images, videos, and
sometimes audio. Understanding social multimedia content remains a challenging
problem for contemporary machine learning frameworks. In this paper, we explore
GPT-4V(ision)'s capabilities for social multimedia analysis. We select five
representative tasks, including sentiment analysis, hate speech detection, fake
news identification, demographic inference, and political ideology detection,
to evaluate GPT-4V. Our investigation begins with a preliminary quantitative
analysis for each task using existing benchmark datasets, followed by a careful
review of the results and a selection of qualitative samples that illustrate
GPT-4V's potential in understanding multimodal social media content. GPT-4V
demonstrates remarkable efficacy in these tasks, showcasing strengths such as
joint understanding of image-text pairs, contextual and cultural awareness, and
extensive commonsense knowledge. Despite the overall impressive capacity of
GPT-4V in the social media domain, there remain notable challenges. GPT-4V
struggles with tasks involving multilingual social multimedia comprehension and
has difficulties in generalizing to the latest trends in social media.
Additionally, it exhibits a tendency to generate erroneous information in the
context of evolving celebrity and politician knowledge, reflecting the known
hallucination problem. The insights gleaned from our findings underscore a
promising future for LMMs in enhancing our comprehension of social media
content and its users through the analysis of multimodal information. | Computer Vision |
What field is the article from? | Title: Fast Sampling via De-randomization for Discrete Diffusion Models
Abstract: Diffusion models have emerged as powerful tools for high-quality data
generation, such as image generation. Despite their success in continuous spaces,
discrete diffusion models, which apply to domains such as texts and natural
languages, remain under-studied and often suffer from slow generation speed. In
this paper, we propose a novel de-randomized diffusion process, which leads to
an accelerated algorithm for discrete diffusion models. Our technique
significantly reduces the number of function evaluations (i.e., calls to the
neural network), making the sampling process much faster. Furthermore, we
introduce a continuous-time (i.e., infinite-step) sampling algorithm that can
provide even better sample qualities than its discrete-time (finite-step)
counterpart. Extensive experiments on natural language generation and machine
translation tasks demonstrate the superior performance of our method in terms
of both generation speed and sample quality over existing methods for discrete
diffusion models. | Machine Learning |
What field is the article from? | Title: Confounder Balancing in Adversarial Domain Adaptation for Pre-Trained Large Models Fine-Tuning
Abstract: The excellent generalization, contextual learning, and emergent abilities of
pre-trained large models (PLMs) allow them to handle specific tasks without direct
training data, making them strong foundation models for adversarial
domain adaptation (ADA) methods that transfer knowledge learned from the source
domain to target domains. However, existing ADA methods fail to account for the
confounder properly, which is the root cause of the source data distribution
that differs from the target domains. This study proposes an adversarial domain
adaptation with confounder balancing for PLMs fine-tuning (ADA-CBF). The
ADA-CBF includes a PLM as the foundation model for a feature extractor, a
domain classifier and a confounder classifier, and they are jointly trained
with an adversarial loss. This loss is designed to improve the domain-invariant
representation learning by diluting the discrimination in the domain
classifier. At the same time, the adversarial loss also balances the confounder
distribution among source and unmeasured domains in training. Compared to
existing ADA methods, ADA-CBF can correctly identify confounders in
domain-invariant features, thereby eliminating the confounder biases in the
extracted features from PLMs. The confounder classifier in ADA-CBF is designed
as a plug-and-play component and can be applied in confounder-measurable,
unmeasurable, or partially measurable environments. Empirical results on
natural language processing and computer vision downstream tasks show that
ADA-CBF outperforms the newest GPT-4, LLaMA2, ViT and ADA methods. | Machine Learning |
What field is the article from? | Title: Panel Transitions for Genre Analysis in Visual Narratives
Abstract: Understanding how humans communicate and perceive narratives is important for
media technology research and development. This is particularly important in
current times when there are tools and algorithms that are easily available for
amateur users to create high-quality content. Narrative media develops over
time a set of recognizable patterns of features across similar artifacts. Genre
is one such grouping of artifacts for narrative media with similar patterns,
tropes, and story structures. While much work has been done on genre-based
classifications in text and video, we present a novel approach to do a
multi-modal analysis of genre based on comics and manga-style visual
narratives. We present a systematic feature analysis of an annotated dataset
that includes a variety of western and eastern visual books with annotations
for high-level narrative patterns. We then present a detailed analysis of the
contributions of high-level features to genre classification for this medium.
We highlight some of the limitations and challenges of our existing
computational approaches in modeling subjective labels. Our contributions to
the community are: a dataset of annotated manga books, a multi-modal analysis
of visual panels and text in a constrained and popular medium through
high-level features, and a systematic process for incorporating subjective
narrative patterns in computational models. | Artificial Intelligence |
What field is the article from? | Title: DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency
Abstract: This paper introduces DONUT-hole, a sparse OCR-free visual document
understanding (VDU) model that addresses the limitations of its predecessor
model, dubbed DONUT. The DONUT model, leveraging a transformer architecture,
overcoming the challenges of separate optical character recognition (OCR) and
visual semantic understanding (VSU) components. However, its deployment in
production environments and edge devices is hindered by high memory and
computational demands, particularly in large-scale request services. To
overcome these challenges, we propose an optimization strategy based on
knowledge distillation and model pruning. Our paradigm to produce DONUT-hole
reduces the model density by 54\% while preserving performance. We also achieve
a global representational similarity index between DONUT and DONUT-hole based
on centered kernel alignment (CKA) metric of 0.79. Moreover, we evaluate the
effectiveness of DONUT-hole in the document image key information extraction
(KIE) task, highlighting its potential for developing more efficient VDU
systems for logistic companies. | Computer Vision |
What field is the article from? | Title: Human-Centric Autonomous Systems With LLMs for User Command Reasoning
Abstract: Autonomous driving has made remarkable advancements in
recent years, evolving into a tangible reality. However, a human-centric
large-scale adoption hinges on meeting a variety of multifaceted requirements.
To ensure that the autonomous system meets the user's intent, it is essential
to accurately discern and interpret user commands, especially in complex or
emergency situations. To this end, we propose to leverage the reasoning
capabilities of Large Language Models (LLMs) to infer system requirements from
in-cabin users' commands. Through a series of experiments that include
different LLM models and prompt designs, we explore the few-shot multivariate
binary classification accuracy of system requirements from natural language
textual commands. We confirm the general ability of LLMs to understand and
reason about prompts but underline that their effectiveness is conditioned on
the quality of both the LLM model and the design of appropriate sequential
prompts. Code and models are public with the link
\url{https://github.com/KTH-RPL/DriveCmd_LLM}. | Computational Linguistics |
What field is the article from? | Title: An adversarial attack approach for eXplainable AI evaluation on deepfake detection models
Abstract: With the rising concern over model interpretability, the application of
eXplainable AI (XAI) tools to deepfake detection models has been a topic of
interest recently. In image classification tasks, XAI tools highlight pixels
influencing the decision given by a model. This helps in troubleshooting the
model and determining areas that may require further tuning of parameters. With
a wide range of tools available in the market, choosing the right tool for a
model becomes necessary as each one may highlight different sets of pixels for
a given image. There is a need to evaluate different tools and decide the best
performing ones among them. Generic XAI evaluation methods like insertion or
removal of salient pixels/segments are applicable for general image
classification tasks but may produce less meaningful results when applied on
deepfake detection models due to their functionality. In this paper, we perform
experiments to show that generic removal/insertion XAI evaluation methods are
not suitable for deepfake detection models. We also propose and implement an
XAI evaluation approach specifically suited for deepfake detection models. | Computer Vision |
What field is the article from? | Title: Reinforcement Replaces Supervision: Query focused Summarization using Deep Reinforcement Learning
Abstract: Query-focused Summarization (QfS) deals with systems that generate summaries
from document(s) based on a query. Motivated by the insight that Reinforcement
Learning (RL) provides a generalization to Supervised Learning (SL) for Natural
Language Generation, and thereby performs better (empirically) than SL, we use
an RL-based approach for this task of QfS. Additionally, we also resolve the
conflict of employing RL in Transformers with Teacher Forcing. We develop
multiple Policy Gradient networks, trained on various reward signals: ROUGE,
BLEU, and Semantic Similarity, which lead to a 10-point improvement over the
State-of-the-Art approach on the ROUGE-L metric for a benchmark dataset (ELI5).
We also show the performance of our approach in a zero-shot setting for another
benchmark dataset (DebatePedia) -- our approach leads to results comparable to
baselines, which were specifically trained on DebatePedia. To aid the RL
training, we propose a better semantic similarity reward, enabled by a novel
Passage Embedding scheme developed using Cluster Hypothesis. Lastly, we
contribute a gold-standard test dataset to further research in QfS and
Long-form Question Answering (LfQA). | Computational Linguistics |
What field is the article from? | Title: MGCT: Mutual-Guided Cross-Modality Transformer for Survival Outcome Prediction using Integrative Histopathology-Genomic Features
Abstract: The rapidly emerging field of deep learning-based computational pathology has
shown promising results in utilizing whole slide images (WSIs) to objectively
prognosticate cancer patients. However, most prognostic methods are currently
limited to either histopathology or genomics alone, which inevitably reduces
their potential to accurately predict patient prognosis. However, integrating
WSIs and genomic features presents three main challenges: (1) the enormous
heterogeneity of gigapixel WSIs which can reach sizes as large as
150,000x150,000 pixels; (2) the absence of a spatially corresponding
relationship between histopathology images and genomic molecular data; and (3)
the existing early, late, and intermediate multimodal feature fusion strategies
struggle to capture the explicit interactions between WSIs and genomics. To
ameliorate these issues, we propose the Mutual-Guided Cross-Modality
Transformer (MGCT), a weakly-supervised, attention-based multimodal learning
framework that can combine histology features and genomic features to model the
genotype-phenotype interactions within the tumor microenvironment. To validate
the effectiveness of MGCT, we conduct experiments using nearly 3,600 gigapixel
WSIs across five different cancer types sourced from The Cancer Genome Atlas
(TCGA). Extensive experimental results consistently emphasize that MGCT
outperforms the state-of-the-art (SOTA) methods. | Computer Vision |
What field is the article from? | Title: Emotion-Aware Music Recommendation System: Enhancing User Experience Through Real-Time Emotional Context
Abstract: This study addresses the deficiency in conventional music recommendation
systems by focusing on the vital role of emotions in shaping users' music
choices. These systems often disregard the emotional context, relying
predominantly on past listening behavior and failing to consider the dynamic
and evolving nature of users' emotional preferences. This gap leads to several
limitations. Users may receive recommendations that do not match their current
mood, which diminishes the quality of their music experience. Furthermore,
without accounting for emotions, the systems might overlook undiscovered or
lesser-known songs that have a profound emotional impact on users. To combat
these limitations, this research introduces an AI model that incorporates
emotional context into the song recommendation process. By accurately detecting
users' real-time emotions, the model can generate personalized song
recommendations that align with the user's emotional state. This approach aims
to enhance the user experience by offering music that resonates with their
current mood, elicits the desired emotions, and creates a more immersive and
meaningful listening experience. By considering emotional context in the song
recommendation process, the proposed model offers an opportunity for a more
personalized and emotionally resonant musical journey. | Information Retrieval |
What field is the article from? | Title: GPT-4 and Safety Case Generation: An Exploratory Analysis
Abstract: In the ever-evolving landscape of software engineering, the emergence of
large language models (LLMs) and conversational interfaces, exemplified by
ChatGPT, is nothing short of revolutionary. While their potential is undeniable
across various domains, this paper sets out to explore uncharted territory: the
generation of safety cases. In this paper, our primary objective is to delve into the existing
knowledge base of GPT-4, focusing specifically on its understanding of the Goal
Structuring Notation (GSN), a well-established notation for visually
representing safety cases. Subsequently, we perform four distinct experiments with
GPT-4. These experiments are designed to assess its capacity for generating
safety cases within a defined system and application domain. To measure the
performance of GPT-4 in this context, we compare the results it generates with
ground-truth safety cases created for an X-ray system and a
Machine-Learning (ML)-enabled component for tire noise recognition (TNR) in a
vehicle. This allowed us to gain valuable insights into the model's generative
capabilities. Our findings indicate that GPT-4 demonstrates the capacity to
produce safety arguments that are moderately accurate and reasonable.
Furthermore, it exhibits the capability to generate safety cases that closely
align with the semantic content of the reference safety cases used as
ground-truths in our experiments. | Software Engineering |
What field is the article from? | Title: Conversational AI Threads for Visualizing Multidimensional Datasets
Abstract: Generative Large Language Models (LLMs) show potential in data analysis, yet
their full capabilities remain uncharted. Our work explores the capabilities of
LLMs for creating and refining visualizations via conversational interfaces. We
used an LLM to conduct a re-analysis of a prior Wizard-of-Oz study examining
the use of chatbots for conducting visual analysis. We surfaced the strengths
and weaknesses of LLM-driven analytic chatbots, finding that they fell short in
supporting progressive visualization refinements. From these findings, we
developed AI Threads, a multi-threaded analytic chatbot that enables analysts
to proactively manage conversational context and improve the efficacy of its
outputs. We evaluate its usability through a crowdsourced study (n=40) and
in-depth interviews with expert analysts (n=10). We further demonstrate the
capabilities of AI Threads on a dataset outside the LLM's training corpus. Our
findings show the potential of LLMs while also surfacing challenges and
fruitful avenues for future research. | Human-Computer Interaction |
What field is the article from? | Title: CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models
Abstract: Images produced by text-to-image diffusion models might not always faithfully
represent the semantic intent of the provided text prompt, where the model
might overlook or entirely fail to produce certain objects. Existing solutions
often require custom-tailored functions for each of these problems, leading
to sub-optimal results, especially for complex prompts. Our work introduces a
novel perspective by tackling this challenge in a contrastive context. Our
approach intuitively promotes the segregation of objects in attention maps
while also maintaining that pairs of related attributes are kept close to each
other. We conduct extensive experiments across a wide variety of scenarios,
each involving unique combinations of objects, attributes, and scenes. These
experiments effectively showcase the versatility, efficiency, and flexibility
of our method in working with both latent and pixel-based diffusion models,
including Stable Diffusion and Imagen. Moreover, we publicly share our source
code to facilitate further research. | Computer Vision |
What field is the article from? | Title: Reinforcement Learning from Diffusion Feedback: Q* for Image Search
Abstract: Large vision-language models are steadily gaining personalization
capabilities at the cost of fine-tuning or data augmentation. We present two
models for image generation using model-agnostic learning that align semantic
priors with generative capabilities. RLDF, or Reinforcement Learning from
Diffusion Feedback, is a singular approach for visual imitation through
prior-preserving reward function guidance. This employs Q-learning (with
standard Q*) for generation and follows a semantic-rewarded trajectory for
image search through finite encoding-tailored actions. The second proposed
method, noisy diffusion gradient, is optimization driven. At the root of both
methods is a special CFG encoding that we propose for continual semantic
guidance. Using only a single input image and no text input, RLDF generates
high-quality images over varied domains including retail, sports and
agriculture showcasing class-consistency and strong visual diversity. Project
website is available at https://infernolia.github.io/RLDF. | Computer Vision |
What field is the article from? | Title: Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue
Abstract: The emergence of large language models (LLMs) further improves the
capabilities of open-domain dialogue systems and can generate fluent, coherent,
and diverse responses. However, LLMs still lack an important ability:
communication skills, which makes them more like information-seeking tools than
anthropomorphic chatbots. To make LLMs more anthropomorphic and proactive
during the conversation, we add five communication skills to the response
generation process: topic transition, proactively asking questions, concept
guidance, empathy, and frequent summarising. The addition of communication skills
increases the interest of users in the conversation and attracts them to chat
for longer. To enable LLMs to better understand and use communication skills, we
design and add the inner monologue to LLMs. The complete process is achieved
through prompt engineering and in-context learning. To evaluate communication
skills, we construct a benchmark named Cskills for evaluating various
communication skills, which can also more comprehensively evaluate the dialogue
generation ability of the model. Experimental results show that the proposed
CSIM strategy improves the backbone models and outperforms the baselines in
both automatic and human evaluations. | Computational Linguistics |
What field is the article from? | Title: Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism
Abstract: Large language models (LLMs) have demonstrated impressive language
understanding and generation capabilities, enabling them to answer a wide range
of questions across various domains. However, these models are not flawless and
often produce responses that contain errors or misinformation. These
inaccuracies, commonly referred to as hallucinations, render LLMs unreliable
and even unusable in many scenarios. In this paper, our focus is on mitigating
the issue of hallucination in LLMs, particularly in the context of
question-answering. Instead of attempting to answer all questions, we explore a
refusal mechanism that instructs LLMs to refuse to answer challenging questions
in order to avoid errors. We then propose a simple yet effective solution
called Learn to Refuse (L2R), which incorporates the refusal mechanism to
enable LLMs to recognize and refuse to answer questions that they find
difficult to address. To achieve this, we utilize a structured knowledge base
to represent all the LLM's understanding of the world, enabling it to provide
traceable gold knowledge. This knowledge base is separate from the LLM and
initially empty, and it is progressively expanded with validated knowledge.
When an LLM encounters questions outside its domain, the system recognizes its
knowledge scope and determines whether it can answer the question
independently. Additionally, we introduce a method for automatically and
efficiently expanding the knowledge base of LLMs. Through qualitative and
quantitative analysis, we demonstrate that our approach enhances the
controllability and reliability of LLMs. | Computational Linguistics |
What field is the article from? | Title: A Unified Approach to Count-Based Weakly-Supervised Learning
Abstract: High-quality labels are often very scarce, whereas unlabeled data with
inferred weak labels occurs more naturally. In many cases, these weak labels
dictate the frequency of each respective class over a set of instances. In this
paper, we develop a unified approach to learning from such weakly-labeled data,
which we call count-based weakly-supervised learning. At the heart of our
approach is the ability to compute the probability of exactly k out of n
outputs being set to true. This computation is differentiable, exact, and
efficient. Building upon the previous computation, we derive a count loss
penalizing the model for deviations in its distribution from an arithmetic
constraint defined over label counts. We evaluate our approach on three common
weakly-supervised learning paradigms and observe that our proposed approach
achieves state-of-the-art or highly competitive results across all three of the
paradigms. | Machine Learning |
What field is the article from? | Title: In-Context Ability Transfer for Question Decomposition in Complex QA
Abstract: Answering complex questions is a challenging task that requires question
decomposition and multistep reasoning for arriving at the solution. While
existing supervised and unsupervised approaches are specialized to a certain
task and involve training, recently proposed prompt-based approaches offer
generalizable solutions to tackle a wide variety of complex question-answering
(QA) tasks. However, existing prompt-based approaches that are effective for
complex QA tasks involve expensive hand annotations from experts in the form of
rationales and are not generalizable to newer complex QA scenarios and tasks.
We propose ICAT (In-Context Ability Transfer), which induces reasoning
capabilities in LLMs without any LLM fine-tuning or manual annotation of
in-context samples. We transfer the ability to decompose complex questions to
simpler questions or generate step-by-step rationales to LLMs, by careful
selection from available data sources of related tasks. We also propose an
automated uncertainty-aware exemplar selection approach for selecting examples
from transfer data sources. Finally, we conduct large-scale experiments on a
variety of complex QA tasks involving numerical reasoning, compositional
complex QA, and heterogeneous complex QA which require decomposed reasoning. We
show that ICAT convincingly outperforms existing prompt-based solutions without
involving any model training, showcasing the benefits of re-using existing
abilities. | Computational Linguistics |
What field is the article from? | Title: Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training
Abstract: In this paper, we target the adaptive source driven 3D scene editing task by
proposing a CustomNeRF model that unifies a text description or a reference
image as the editing prompt. However, obtaining desired editing results
that conform to the editing prompt is nontrivial since there exist two
significant challenges, including accurate editing of only foreground regions
and multi-view consistency given a single-view reference image. To tackle the
first challenge, we propose a Local-Global Iterative Editing (LGIE) training
scheme that alternates between foreground region editing and full-image
editing, aimed at foreground-only manipulation while preserving the background.
For the second challenge, we also design a class-guided regularization that
exploits class priors within the generation model to alleviate the
inconsistency problem among different views in image-driven editing. Extensive
experiments show that our CustomNeRF produces precise editing results under
various real scenes for both text- and image-driven settings. | Computer Vision |
What field is the article from? | Title: DSR-Diff: Depth Map Super-Resolution with Diffusion Model
Abstract: Color-guided depth map super-resolution (CDSR) improves the spatial resolution
of a low-quality depth map with the corresponding high-quality color map,
benefiting various applications such as 3D reconstruction, virtual reality, and
augmented reality. While conventional CDSR methods typically rely on
convolutional neural networks or transformers, diffusion models (DMs) have
demonstrated notable effectiveness in high-level vision tasks. In this work, we
present a novel CDSR paradigm that utilizes a diffusion model within the latent
space to generate guidance for depth map super-resolution. The proposed method
comprises a guidance generation network (GGN), a depth map super-resolution
network (DSRN), and a guidance recovery network (GRN). The GGN is specifically
designed to generate the guidance while managing its compactness. Additionally,
we integrate a simple but effective feature fusion module and a
transformer-style feature extraction module into the DSRN, enabling it to
leverage guided priors in the extraction, fusion, and reconstruction of
multi-modal images. Taking into account both accuracy and efficiency, our
proposed method has shown superior performance in extensive experiments when
compared to state-of-the-art methods. Our codes will be made available at
https://github.com/shiyuan7/DSR-Diff. | Computer Vision |
What field is the article from? | Title: AFPQ: Asymmetric Floating Point Quantization for LLMs
Abstract: Large language models (LLMs) show great performance in various tasks, but
face deployment challenges from limited memory capacity and bandwidth. Low-bit
weight quantization can save memory and accelerate inference. Although
floating-point (FP) formats show good performance in LLM quantization, they
tend to perform poorly with small group sizes or sub-4 bits. We find the reason
is that the absence of asymmetry in previous FP quantization makes it
unsuitable for handling asymmetric value distribution of LLM weight tensors. In
this work, we propose asymmetric FP quantization (AFPQ), which sets separate
scales for positive and negative values. Our method leads to large accuracy
improvements and can be easily plugged into other quantization methods,
including GPTQ and AWQ, for better performance. Besides, no additional storage
is needed compared with asymmetric integer (INT) quantization. The code is
available at https://github.com/zhangsichengsjtu/AFPQ. | Computational Linguistics |
What field is the article from? | Title: Keeping Users Engaged During Repeated Administration of the Same Questionnaire: Using Large Language Models to Reliably Diversify Questions
Abstract: Standardized, validated questionnaires are vital tools in HCI research and
healthcare, offering dependable self-report data. However, their repeated use
in longitudinal or pre-post studies can induce respondent fatigue, impacting
data quality via response biases and decreased response rates. We propose
utilizing large language models (LLMs) to generate diverse questionnaire
versions while retaining good psychometric properties. In a longitudinal study,
participants engaged with our agent system and responded daily for two weeks to
either a standardized depression questionnaire or one of two LLM-generated
questionnaire variants, alongside a validated depression questionnaire.
Psychometric testing revealed consistent covariation between the external
criterion and the focal measure administered across the three conditions,
demonstrating the reliability and validity of the LLM-generated variants.
Participants found the repeated administration of the standardized
questionnaire significantly more repetitive compared to the variants. Our
findings highlight the potential of LLM-generated variants to invigorate
questionnaires, fostering engagement and interest without compromising
validity. | Human-Computer Interaction |
What field is the article from? | Title: Robust Domain Misinformation Detection via Multi-modal Feature Alignment
Abstract: Social media misinformation harms individuals and societies and is
amplified by fast-growing multi-modal content (i.e., texts and images),
which accounts for higher "credibility" than text-only news pieces. Although
existing supervised misinformation detection methods have obtained acceptable
performances in key setups, they may require large amounts of labeled data from
various events, which can be time-consuming and tedious. In turn, directly
training a model by leveraging a publicly available dataset may fail to
generalize due to domain shifts between the training data (a.k.a. source
domains) and the data from target domains. Most prior work on domain shift
focuses on a single modality (e.g., text modality) and ignores the scenario
where sufficient unlabeled target domain data may not be readily available in
an early stage. The lack of data often happens due to the dynamic propagation
trend (i.e., the number of posts related to fake news increases slowly before
catching the public attention). We propose a novel robust domain and
cross-modal approach (\textbf{RDCM}) for multi-modal misinformation detection.
It reduces the domain shift by aligning the joint distribution of textual and
visual modalities through an inter-domain alignment module and bridges the
semantic gap between both modalities through a cross-modality alignment module.
We also propose a framework that simultaneously considers application scenarios
of domain generalization (in which the target domain data is unavailable) and
domain adaptation (in which unlabeled target domain data is available).
Evaluation results on two public multi-modal misinformation detection datasets
(Pheme and Twitter Datasets) evince the superiority of the proposed model. The
formal implementation of this paper can be found in this link:
https://github.com/less-and-less-bugs/RDCM | Artificial Intelligence |
What field is the article from? | Title: Making Data Work Count
Abstract: In this paper, we examine the work of data annotation. Specifically, we focus
on the role of counting or quantification in organising annotation work. Based
on an ethnographic study of data annotation in two outsourcing centres in
India, we observe that counting practices and its associated logics are an
integral part of day-to-day annotation activities. In particular, we call
attention to the presumption of total countability observed in annotation - the
notion that everything, from tasks, datasets and deliverables, to workers, work
time, quality and performance, can be managed by applying the logics of
counting. To examine this, we draw on sociological and socio-technical
scholarship on quantification and develop the lens of a 'regime of counting'
that makes explicit the specific counts, practices, actors and structures that
underpin the pervasive counting in annotation. We find that within the AI
supply chain and data work, counting regimes aid the assertion of authority by
the AI clients (also called requesters) over annotation processes, constituting
them as reductive, standardised, and homogenous. We illustrate how this has
implications for i) how annotation work and workers get valued, ii) the role
human discretion plays in annotation, and iii) broader efforts to introduce
accountable and more just practices in AI. Through these implications, we
illustrate the limits of operating within the logic of total countability.
Instead, we argue for a view of counting as partial - located in distinct
geographies, shaped by specific interests and accountable in only limited ways.
This, we propose, sets the stage for a fundamentally different orientation to
counting and what counts in data annotation. | Human-Computer Interaction |
What field is the article from? | Title: Closed Drafting as a Case Study for First-Principle Interpretability, Memory, and Generalizability in Deep Reinforcement Learning
Abstract: Closed drafting or "pick and pass" is a popular game mechanic where each
round players select a card or other playable element from their hand and pass
the rest to the next player. In this paper, we establish first-principle
methods for studying the interpretability, generalizability, and memory of Deep
Q-Network (DQN) models playing closed drafting games. In particular, we use a
popular family of closed drafting games called "Sushi Go Party", in which we
achieve state-of-the-art performance. We fit decision rules to interpret the
decision-making strategy of trained DRL agents by comparing them to the ranking
preferences of different types of human players. As Sushi Go Party can be
expressed as a set of closely-related games based on the set of cards in play,
we quantify the generalizability of DRL models trained on various sets of
cards, establishing a method to benchmark agent performance as a function of
environment unfamiliarity. Using the explicitly calculable memory of other
players' hands in closed drafting games, we create measures of the ability of
DRL models to learn memory. | Machine Learning |
What field is the article from? | Title: Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting
Abstract: Balancing the trade-off between accuracy and robustness is a long-standing
challenge in time series forecasting. While most existing robust algorithms
have achieved a certain, albeit suboptimal, performance level on clean data, sustaining the same
performance level in the presence of data perturbations remains extremely hard.
In this paper, we study a wide array of perturbation scenarios and propose
novel defense mechanisms against adversarial attacks using real-world telecom
data. We compare our strategy against two existing adversarial training
algorithms under a range of maximal allowed perturbations, defined using
the $\ell_{\infty}$-norm, in $[0.1, 0.4]$. Our findings reveal that our hybrid
strategy, which is composed of a classifier to detect adversarial examples, a
denoiser to eliminate noise from the perturbed data samples, and a standard
forecaster, achieves the best performance on both clean and perturbed data. Our
optimal model can retain up to $92.02\%$ of the performance of the original
forecasting model in terms of Mean Squared Error (MSE) on clean data, while
being more robust than the standard adversarially trained models on perturbed
data. Its MSE is 2.71$\times$ and 2.51$\times$ lower than those of the competing
methods on normal and perturbed data, respectively. In addition, the components
of our models can be trained in parallel, resulting in better computational
efficiency. Our results indicate that we can optimally balance the trade-off
between the performance and robustness of forecasting models by improving the
classifier and denoiser, even in the presence of sophisticated and destructive
poisoning attacks. | Machine Learning |
What field is the article from? | Title: Foveation in the Era of Deep Learning
Abstract: In this paper, we tackle the challenge of actively attending to visual scenes
using a foveated sensor. We introduce an end-to-end differentiable foveated
active vision architecture that leverages a graph convolutional network to
process foveated images, and a simple yet effective formulation for foveated
image sampling. Our model learns to iteratively attend to regions of the image
relevant for classification. We conduct detailed experiments on a variety of
image datasets, comparing the performance of our method with previous
approaches to foveated vision while measuring how different choices, such as the
degree of foveation and the number of fixations the network performs, affect
object recognition performance. We find that our model
outperforms a state-of-the-art CNN and foveated vision architectures of
comparable parameters and a given pixel or computation budget. | Computer Vision |
What field is the article from? | Title: AI-Generated Images Introduce Invisible Relevance Bias to Text-Image Retrieval
Abstract: With the advancement of generation models, AI-generated content (AIGC) is
becoming more realistic, flooding the Internet. A recent study suggests that
this phenomenon has elevated the issue of source bias in text retrieval for web
searches. Specifically, neural retrieval models tend to rank generated texts
higher than human-written texts. In this paper, we extend the study of this
bias to cross-modal retrieval. Firstly, we successfully construct a suitable
benchmark to explore the existence of the bias. Subsequent extensive
experiments on this benchmark reveal that AI-generated images introduce an
invisible relevance bias to text-image retrieval models. Specifically, our
experiments show that text-image retrieval models tend to rank the AI-generated
images higher than the real images, even though the AI-generated images do not
exhibit more visually relevant features to the query than real images. This
invisible relevance bias is prevalent across retrieval models with varying
training data and architectures. Furthermore, our subsequent exploration
reveals that the inclusion of AI-generated images in the training data of the
retrieval models exacerbates the invisible relevance bias. The above phenomenon
triggers a vicious cycle, which makes the invisible relevance bias become more
and more serious. To elucidate the potential causes of invisible relevance and
address the aforementioned issues, we introduce an effective training method
aimed at alleviating the invisible relevance bias. Subsequently, we apply our
proposed debiasing method to retroactively identify the causes of invisible
relevance, revealing that the AI-generated images induce the image encoder to
embed additional information into their representation. This information
exhibits a certain consistency across generated images with different semantics
and can make the retriever estimate a higher relevance score. | Information Retrieval |
What field is the article from? | Title: Cross-Domain Robustness of Transformer-based Keyphrase Generation
Abstract: Modern models for text generation show state-of-the-art results in many
natural language processing tasks. In this work, we explore the effectiveness
of abstractive text summarization models for keyphrase selection. A list of
keyphrases is an important element of a text in databases and repositories of
electronic documents. In our experiments, abstractive text summarization models
fine-tuned for keyphrase generation show quite high results for a target text
corpus. However, in most cases, the zero-shot performance on other corpora and
domains is significantly lower. We investigate cross-domain limitations of
abstractive text summarization models for keyphrase generation. We present an
evaluation of the fine-tuned BART models for the keyphrase selection task
across six benchmark corpora for keyphrase extraction including scientific
texts from two domains and news texts. We explore the role of transfer learning
between different domains to improve the BART model performance on small text
corpora. Our experiments show that preliminary fine-tuning on out-of-domain
corpora can be effective under conditions of a limited number of samples. | Computational Linguistics |
What field is the article from? | Title: SCOPE-RL: A Python Library for Offline Reinforcement Learning and Off-Policy Evaluation
Abstract: This paper introduces SCOPE-RL, a comprehensive open-source Python software
designed for offline reinforcement learning (offline RL), off-policy evaluation
(OPE), and selection (OPS). Unlike most existing libraries that focus solely on
either policy learning or evaluation, SCOPE-RL seamlessly integrates these two
key aspects, facilitating flexible and complete implementations of both offline
RL and OPE processes. SCOPE-RL puts particular emphasis on its OPE modules,
offering a range of OPE estimators and robust evaluation-of-OPE protocols. This
approach enables more in-depth and reliable OPE compared to other packages. For
instance, SCOPE-RL enhances OPE by estimating the entire reward distribution
under a policy rather than its mere point-wise expected value. Additionally,
SCOPE-RL provides a more thorough evaluation-of-OPE by presenting the
risk-return tradeoff in OPE results, extending beyond mere accuracy evaluations
in existing OPE literature. SCOPE-RL is designed with user accessibility in
mind. Its user-friendly APIs, comprehensive documentation, and a variety of
easy-to-follow examples assist researchers and practitioners in efficiently
implementing and experimenting with various offline RL methods and OPE
estimators, tailored to their specific problem contexts. The documentation of
SCOPE-RL is available at https://scope-rl.readthedocs.io/en/latest/. | Machine Learning |
What field is the article from? | Title: Enhancing Lightweight Neural Networks for Small Object Detection in IoT Applications
Abstract: Advances in lightweight neural networks have revolutionized computer vision
in a broad range of IoT applications, encompassing remote monitoring and
process automation. However, the detection of small objects, which is crucial
for many of these applications, remains an underexplored area in current
computer vision research, particularly for embedded devices. To address this
gap, the paper proposes a novel adaptive tiling method that can be used on top
of any existing object detector including the popular FOMO network for object
detection on microcontrollers. Our experimental results show that the proposed
tiling method can boost the F1-score by up to 225% while reducing the average
object count error by up to 76%. Furthermore, the findings of this work suggest
that using a soft F1 loss over the popular binary cross-entropy loss can
significantly reduce the negative impact of imbalanced data. Finally, we
validate our approach by conducting experiments on the Sony Spresense
microcontroller, showcasing the proposed method's ability to strike a balance
between detection performance, low latency, and minimal memory consumption. | Computer Vision |
What field is the article from? | Title: Jellyfish: A Large Language Model for Data Preprocessing
Abstract: In this paper, we present Jellyfish, an open-source LLM that serves as a universal task
solver for data preprocessing (DP). Built on the Llama 2 13B model, Jellyfish is instruction-tuned
with the datasets of several typical DP tasks including error detection, data
imputation, schema matching, and entity matching, and delivers generalizability
to other tasks. Remarkably, Jellyfish can operate on a local, single, and
low-priced GPU with its 13 billion parameters, ensuring data security and
enabling further tuning. Its proficiency in understanding natural language
allows users to manually craft instructions for DP tasks. Unlike many existing
methods that heavily rely on prior knowledge, Jellyfish acquires domain
knowledge during its tuning process and integrates optional knowledge injection
during inference. A distinctive feature of Jellyfish is its interpreter, which
elucidates its output decisions. To construct Jellyfish, we develop a series of
pre-tuning and DP-tuning techniques. Jellyfish is equipped with an instance
serializer, which automatically translates raw data into model prompts, and a
knowledge injector, which optionally introduces task- and dataset-specific
knowledge to enhance DP performance. Our evaluation of Jellyfish, using a range
of real datasets, shows its competitiveness compared to state-of-the-art
methods and its strong generalizability to unseen tasks. Jellyfish's
performance rivals that of GPT series models, and its interpreter offers
enhanced reasoning capabilities compared to GPT-3.5. Furthermore, our
evaluation highlights the effectiveness of the techniques employed in
constructing Jellyfish. Our model is available at Hugging Face:
https://huggingface.co/NECOUDBFM/Jellyfish . | Artificial Intelligence |
What field is the article from? | Title: FormalGeo: The First Step Toward Human-like IMO-level Geometric Automated Reasoning
Abstract: This is the first paper in a series of work we have accomplished over the
past three years. In this paper, we have constructed a consistent formal plane
geometry system. This will serve as a crucial bridge between IMO-level plane
geometry challenges and readable AI automated reasoning. Within this formal
framework, we have been able to seamlessly integrate modern AI models with our
formal system. AI is now capable of providing deductive reasoning solutions to
IMO-level plane geometry problems, just like handling other natural languages,
and these proofs are readable, traceable, and verifiable. We propose the
geometry formalization theory (GFT) to guide the development of the geometry
formal system. Based on the GFT, we have established the FormalGeo, which
consists of 88 geometric predicates and 196 theorems. It can represent,
validate, and solve IMO-level geometry problems. We have also crafted the FGPS
(formal geometry problem solver) in Python. It serves as both an interactive
assistant for verifying problem-solving processes and an automated problem
solver. We've annotated the formalgeo7k and formalgeo-imo datasets. The former
contains 6,981 geometry problems (expanded to 133,818 through data augmentation),
while the latter includes 18 challenging IMO-level geometry problems (expanded to
2,627 and continuously increasing). All annotated problems include
detailed formal language descriptions and solutions. Implementation of the
formal system and experiments validate the correctness and utility of the GFT.
The backward depth-first search method only yields a 2.42% problem-solving
failure rate, and we can incorporate deep learning techniques to achieve a lower
one. The source code of FGPS and the datasets are available at
https://github.com/BitSecret/FGPS. | Artificial Intelligence |
What field is the article from? | Title: Guided Flows for Generative Modeling and Decision Making
Abstract: Classifier-free guidance is a key component for enhancing the performance of
conditional generative models across diverse tasks. While it has previously
demonstrated remarkable improvements for the sample quality, it has only been
exclusively employed for diffusion models. In this paper, we integrate
classifier-free guidance into Flow Matching (FM) models, an alternative
simulation-free approach that trains Continuous Normalizing Flows (CNFs) based
on regressing vector fields. We explore the usage of \emph{Guided Flows} for a
variety of downstream applications. We show that Guided Flows significantly
improves the sample quality in conditional image generation and zero-shot
text-to-speech synthesis, boasting state-of-the-art performance. Notably, we
are the first to apply flow models for plan generation in the offline
reinforcement learning setting, showcasing a 10x speedup in computation
compared to diffusion models while maintaining comparable performance. | Machine Learning |
What field is the article from? | Title: MetaReVision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition
Abstract: Humans have the ability to learn novel compositional concepts by recalling
and generalizing primitive concepts acquired from past experiences. Inspired by
this observation, in this paper, we propose MetaReVision, a retrieval-enhanced
meta-learning model to address the visually grounded compositional concept
learning problem. The proposed MetaReVision consists of a retrieval module and
a meta-learning module which are designed to incorporate retrieved primitive
concepts as a supporting set to meta-train vision-language models for grounded
compositional concept recognition. Through meta-learning from episodes
constructed by the retriever, MetaReVision learns a generic compositional
representation that can be fast updated to recognize novel compositional
concepts. We create CompCOCO and CompFlickr to benchmark the grounded
compositional concept learning. Our experimental results show that MetaReVision
outperforms other competitive baselines and the retrieval module plays an
important role in this compositional learning process. | Computational Linguistics |
What field is the article from? | Title: Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation
Abstract: Chain-of-Thought (CoT) guides large language models (LLMs) to reason
step-by-step, and can motivate their logical reasoning ability. While effective
for logical tasks, CoT is not conducive to creative problem-solving, which often
requires out-of-the-box thinking and is crucial for innovation. In
this paper, we explore the Leap-of-Thought (LoT) abilities within LLMs -- a
non-sequential, creative paradigm involving strong associations and knowledge
leaps. To this end, we study LLMs on the popular Oogiri game, which requires
participants to have good creativity and strong associative thinking in order to
respond unexpectedly and humorously to a given image, text, or both, and is thus
well suited to studying LoT. Then, to investigate LLMs' LoT ability in the
Oogiri game, we first build a multimodal and multilingual Oogiri-GO dataset
which contains over 130,000 samples from the Oogiri game, and observe the
insufficient LoT ability or failures of most existing LLMs on the Oogiri game.
Accordingly, we introduce a creative Leap-of-Thought (CLoT) paradigm to improve
LLM's LoT ability. CLoT first formulates the Oogiri-GO dataset into
LoT-oriented instruction tuning data to train the pretrained LLM to achieve
certain LoT humor generation and discrimination abilities. Then CLoT designs an
explorative self-refinement that encourages the LLM to generate more creative
LoT data via exploring parallels between seemingly unrelated concepts and
selects high-quality data to train itself for self-refinement. CLoT not only
excels in humor generation in the Oogiri game but also boosts creative
abilities in various tasks like cloud guessing game and divergent association
task. These findings advance our understanding and offer a pathway to improve
LLMs' creative capacities for innovative applications across domains. The
dataset, code, and models will be released online.
https://zhongshsh.github.io/CLoT/. | Artificial Intelligence |
What field is the article from? | Title: Designing Long-term Group Fair Policies in Dynamical Systems
Abstract: Neglecting the effect that decisions have on individuals (and thus, on the
underlying data distribution) when designing algorithmic decision-making
policies may increase inequalities and unfairness in the long term - even if
fairness considerations were taken in the policy design process. In this paper,
we propose a novel framework for achieving long-term group fairness in
dynamical systems, in which current decisions may affect an individual's
features in the next step, and thus, future decisions. Specifically, our
framework allows us to identify a time-independent policy that converges, if
deployed, to the targeted fair stationary state of the system in the long term,
independently of the initial data distribution. We model the system dynamics
with a time-homogeneous Markov chain and optimize the policy leveraging the
Markov chain convergence theorem to ensure unique convergence. We provide
examples of different targeted fair states of the system, encompassing a range
of long-term goals for society and policymakers. Furthermore, we show how our
approach facilitates the evaluation of different long-term targets by examining
their impact on the group-conditional population distribution in the long term
and how it evolves until convergence. | Artificial Intelligence |
What field is the article from? | Title: Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback
Abstract: Ideally, we would place a robot in a real-world environment and leave it
there improving on its own by gathering more experience autonomously. However,
algorithms for autonomous robotic learning have been challenging to realize in
the real world. While this has often been attributed to the challenge of sample
complexity, even sample-efficient techniques are hampered by two major
challenges - the difficulty of providing well "shaped" rewards, and the
difficulty of continual reset-free training. In this work, we describe a system
for real-world reinforcement learning that enables agents to show continual
improvement by training directly in the real world without requiring
painstaking effort to hand-design reward functions or reset mechanisms. Our
system leverages occasional non-expert human-in-the-loop feedback from remote
users to learn informative distance functions to guide exploration while
leveraging a simple self-supervised learning algorithm for goal-directed policy
learning. We show that in the absence of resets, it is particularly important
to account for the current "reachability" of the exploration policy when
deciding which regions of the space to explore. Based on this insight, we
instantiate a practical learning system - GEAR, which enables robots to simply
be placed in real-world environments and left to train autonomously without
interruption. The system streams robot experience to a web interface only
requiring occasional asynchronous feedback from remote, crowdsourced,
non-expert humans in the form of binary comparative feedback. We evaluate this
system on a suite of robotic tasks in simulation and demonstrate its
effectiveness at learning behaviors both in simulation and the real world.
Project website https://guided-exploration-autonomous-rl.github.io/GEAR/. | Machine Learning |
What field is the article from? | Title: Multi-State Brain Network Discovery
Abstract: Brain network discovery aims to find nodes and edges from the spatio-temporal
signals obtained by neuroimaging data, such as fMRI scans of human brains.
Existing methods tend to derive representative or average brain networks,
assuming observed signals are generated by only a single brain activity state.
However, the human brain usually involves multiple activity states, which
jointly determine the brain activities. The brain regions and their
connectivity usually exhibit intricate patterns that are difficult to capture
with only a single-state network. Recent studies find that brain parcellation
and connectivity change according to the brain activity state. We refer to such
brain networks as multi-state, and this mixture can help us understand human
behavior. Thus, compared to a single-state network, a multi-state network can
prevent us from losing crucial information about the cognitive brain network. To
achieve this, we propose a new model called MNGL (Multi-state Network Graphical
Lasso), which successfully models multi-state brain networks by combining CGL
(coherent graphical lasso) with GMM (Gaussian Mixture Model). Using both
synthetic and real world ADHD 200 fMRI datasets, we demonstrate that MNGL
outperforms recent state-of-the-art alternatives by discovering more
explanatory and realistic results. | Machine Learning |
What field is the article from? | Title: Safe Reinforcement Learning in Tensor Reproducing Kernel Hilbert Space
Abstract: This paper delves into the problem of safe reinforcement learning (RL) in a
partially observable environment with the aim of achieving safe-reachability
objectives. In traditional partially observable Markov decision processes
(POMDP), ensuring safety typically involves estimating the belief in latent
states. However, accurately estimating an optimal Bayesian filter in POMDP to
infer latent states from observations in a continuous state space poses a
significant challenge, largely due to the intractable likelihood. To tackle
this issue, we propose a stochastic model-based approach that guarantees RL
safety almost surely in the face of unknown system dynamics and partial
observation environments. We leveraged the Predictive State Representation
(PSR) and Reproducing Kernel Hilbert Space (RKHS) to represent future
multi-step observations analytically, and the results in this context are
provable. Furthermore, we derived essential operators from the kernel Bayes'
rule, enabling the recursive estimation of future observations using various
operators. Under the assumption of \textit{undercompleness}, a polynomial
sample complexity is established for the RL algorithm for the infinite size of
observation and action spaces, ensuring an $\epsilon-$suboptimal safe policy
guarantee. | Machine Learning |
What field is the article from? | Title: TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots
Abstract: Tactile information is important for robust performance in robotic tasks that
involve physical interaction, such as object manipulation. However, with more
data included in the reasoning and control process, modeling behavior becomes
increasingly difficult. Deep Reinforcement Learning (DRL) produced promising
results for learning complex behavior in various domains, including
tactile-based manipulation in robotics. In this work, we present our
open-source reinforcement learning environments for the TIAGo service robot.
They produce tactile sensor measurements that resemble those of a real
sensorised gripper for TIAGo, encouraging research in transfer learning of DRL
policies. Lastly, we show preliminary training results of a learned force
control policy and compare it to a classical PI controller. | Robotics |
What field is the article from? | Title: Generalized Contrastive Divergence: Joint Training of Energy-Based Model and Diffusion Model through Inverse Reinforcement Learning
Abstract: We present Generalized Contrastive Divergence (GCD), a novel objective
function for training an energy-based model (EBM) and a sampler simultaneously.
GCD generalizes Contrastive Divergence (Hinton, 2002), a celebrated algorithm
for training EBM, by replacing Markov Chain Monte Carlo (MCMC) distribution
with a trainable sampler, such as a diffusion model. In GCD, the joint training
of EBM and a diffusion model is formulated as a minimax problem, which reaches
an equilibrium when both models converge to the data distribution. The minimax
learning with GCD bears interesting equivalence to inverse reinforcement
learning, where the energy corresponds to a negative reward, the diffusion
model is a policy, and the real data is expert demonstrations. We present
preliminary yet promising results showing that joint training is beneficial for
both EBM and a diffusion model. GCD enables EBM training without MCMC while
improving the sample quality of a diffusion model. | Machine Learning |
What field is the article from? | Title: StochGradAdam: Accelerating Neural Networks Training with Stochastic Gradient Sampling
Abstract: In the rapidly advancing domain of deep learning optimization, this paper
unveils the StochGradAdam optimizer, a novel adaptation of the well-regarded
Adam algorithm. Central to StochGradAdam is its gradient sampling technique.
This method not only ensures stable convergence but also leverages the
advantages of selective gradient consideration, fostering robust training by
potentially mitigating the effects of noisy or outlier data and enhancing the
exploration of the loss landscape for more dependable convergence. In both
image classification and segmentation tasks, StochGradAdam has demonstrated
superior performance compared to the traditional Adam optimizer. By judiciously
sampling a subset of gradients at each iteration, the optimizer is well suited to
managing intricate models. The paper provides a comprehensive exploration
of StochGradAdam's methodology, from its mathematical foundations to bias
correction strategies, heralding a promising advancement in deep learning
training techniques. | Machine Learning |
What field is the article from? | Title: Technical Report on the Learning of Case Relevance in Case-Based Reasoning with Abstract Argumentation
Abstract: Case-based reasoning is known to play an important role in several legal
settings. In this paper we focus on a recent approach to case-based reasoning,
supported by an instantiation of abstract argumentation whereby arguments
represent cases and attack between arguments results from outcome disagreement
between cases and a notion of relevance. In this context, relevance is
connected to a form of specificity among cases. We explore how relevance can be
learnt automatically in practice with the help of decision trees, and explore
the combination of case-based reasoning with abstract argumentation (AA-CBR)
and learning of case relevance for prediction in legal settings. Specifically,
we show that, for two legal datasets, AA-CBR and decision-tree-based learning
of case relevance perform competitively in comparison with decision trees. We
also show that AA-CBR with decision-tree-based learning of case relevance
results in a more compact representation than their decision tree counterparts,
which could be beneficial for obtaining cognitively tractable explanations. | Artificial Intelligence |
What field is the article from? | Title: Weakly-supervised Deep Cognate Detection Framework for Low-Resourced Languages Using Morphological Knowledge of Closely-Related Languages
Abstract: Exploiting cognates for transfer learning in under-resourced languages is an
exciting opportunity for language understanding tasks, including unsupervised
machine translation, named entity recognition and information retrieval.
Previous approaches mainly focused on supervised cognate detection tasks based
on orthographic, phonetic or state-of-the-art contextual language models, which
under-perform for most under-resourced languages. This paper proposes a novel
language-agnostic weakly-supervised deep cognate detection framework for
under-resourced languages using morphological knowledge from closely related
languages. We train an encoder to gain morphological knowledge of a language
and transfer the knowledge to perform unsupervised and weakly-supervised
cognate detection tasks with and without the pivot language for the
closely-related languages. While unsupervised, it overcomes the need for
hand-crafted annotation of cognates. We performed experiments on different
published cognate detection datasets across language families and observed not
only a significant improvement over the state of the art but also that our method
outperformed state-of-the-art supervised and unsupervised methods. Our
model can be extended to a wide range of languages from any language family as
it overcomes the requirement of the annotation of the cognate pairs for
training. The code and dataset building scripts can be found at
https://github.com/koustavagoswami/Weakly_supervised-Cognate_Detection | Computational Linguistics |
What field is the article from? | Title: Proving Conjectures Acquired by Composing Multiple Biases
Abstract: We present the proofs of the conjectures mentioned in the paper published in
the proceedings of the 2024 AAAI conference [1], and discovered by the
decomposition methods presented in the same paper. | Artificial Intelligence |
What field is the article from? | Title: MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning
Abstract: The membership inference attack (MIA) is a popular paradigm for compromising
the privacy of a machine learning (ML) model. MIA exploits the natural
inclination of ML models to overfit upon the training data. MIAs are trained to
distinguish between training and testing prediction confidence to infer
membership information. Federated Learning (FL) is a privacy-preserving ML
paradigm that enables multiple clients to train a unified model without
disclosing their private data. In this paper, we propose an enhanced Membership
Inference Attack with the Batch-wise generated Attack Dataset (MIA-BAD), a
modification to the MIA approach. We find that the MIA is more accurate
when the attack dataset is generated batch-wise. This quantitatively decreases
the attack dataset while qualitatively improving it. We show how training an ML
model through FL, has some distinct advantages and investigate how the threat
introduced with the proposed MIA-BAD approach can be mitigated with FL
approaches. Finally, we demonstrate the qualitative effects of the proposed
MIA-BAD methodology by conducting extensive experiments with various target
datasets, variable numbers of federated clients, and training batch sizes. | Cryptography and Security |
What field is the article from? | Title: Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors
Abstract: The goal of knowledge graph completion (KGC) is to predict missing links in a
KG using trained facts that are already known. Recently, pre-trained language
model (PLM)-based methods that utilize both textual and structural information
have been emerging, but their performance lags behind that of state-of-the-art
(SOTA) structure-based methods, or they lose their inductive inference
capabilities in the process of fusing structure embeddings into the text encoder. In
this paper, we propose a novel method to effectively unify structure
information and language semantics without losing the power of inductive
reasoning. We adopt entity anchors, and these anchors and the textual descriptions
of KG elements are fed together into the PLM-based encoder to learn unified
representations. In addition, the proposed method utilizes additional random
negative samples, which can be reused in each mini-batch during contrastive
learning to learn generalized entity representations. We verify the
effectiveness of our proposed method through various experiments and
analyses. The experimental results on standard benchmarks widely used in the link
prediction task show that the proposed model outperforms existing SOTA KGC
models. In particular, our method shows the largest performance improvement on
FB15K-237, where it is competitive with SOTA structure-based KGC methods. | Artificial Intelligence |
What field is the article from? | Title: Aiming to Minimize Alcohol-Impaired Road Fatalities: Utilizing Fairness-Aware and Domain Knowledge-Infused Artificial Intelligence
Abstract: Approximately 30% of all traffic fatalities in the United States are
attributed to alcohol-impaired driving. This means that, despite stringent laws
against this offense in every state, the frequency of drunk driving accidents
is alarming, resulting in approximately one person being killed every 45
minutes. The process of charging individuals with Driving Under the Influence
(DUI) is intricate and can sometimes be subjective, involving multiple stages
such as observing the vehicle in motion, interacting with the driver, and
conducting Standardized Field Sobriety Tests (SFSTs). Biases have been observed
through racial profiling, leading to some groups and geographical areas facing
fewer DUI tests, resulting in many actual DUI incidents going undetected,
ultimately leading to a higher number of fatalities. To tackle this issue, our
research introduces an Artificial Intelligence-based predictor that is both
fairness-aware and incorporates domain knowledge to analyze DUI-related
fatalities in different geographic locations. Through this model, we gain
intriguing insights into the interplay between various demographic groups,
including age, race, and income. By utilizing the provided information to
allocate policing resources in a more equitable and efficient manner, there is
potential to reduce DUI-related fatalities and have a significant impact on
road safety. | Machine Learning |
What field is the article from? | Title: BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting
Abstract: Deep neural networks (DNNs) are susceptible to backdoor attacks, where
malicious functionality is embedded to allow attackers to trigger incorrect
classifications. Old-school backdoor attacks use strong trigger features that
can easily be learned by victim models. Despite robustness against input
variation, the robustness however increases the likelihood of unintentional
trigger activations. This leaves traces to existing defenses, which find
approximate replacements for the original triggers that can activate the
backdoor without being identical to the original trigger via, e.g., reverse
engineering and sample overlay.
In this paper, we propose and investigate a new characteristic of backdoor
attacks, namely, backdoor exclusivity, which measures the ability of backdoor
triggers to remain effective in the presence of input variation. Building upon
the concept of backdoor exclusivity, we propose Backdoor Exclusivity LifTing
(BELT), a novel technique which suppresses the association between the backdoor
and fuzzy triggers to enhance backdoor exclusivity for defense evasion.
Extensive evaluation on three popular backdoor benchmarks validates that our
approach substantially enhances the stealthiness of four old-school backdoor
attacks, which, after backdoor exclusivity lifting, are able to evade six
state-of-the-art backdoor countermeasures at almost no cost to the attack
success rate and normal utility. For example, one of the earliest backdoor
attacks BadNet, enhanced by BELT, evades most of the state-of-the-art defenses
including ABS and MOTH which would otherwise recognize the backdoored model. | Cryptography and Security |
What field is the article from? | Title: EHRTutor: Enhancing Patient Understanding of Discharge Instructions
Abstract: Large language models have shown success as a tutor in education in various
fields. Educating patients about their clinical visits plays a pivotal role in
patients' adherence to their treatment plans post-discharge. This paper
presents EHRTutor, an innovative multi-component framework leveraging the Large
Language Model (LLM) for patient education through conversational
question-answering. EHRTutor first formulates questions pertaining to the
electronic health record discharge instructions. It then educates the patient
through conversation by administering each question as a test. Finally, it
generates a summary at the end of the conversation. Evaluation results using
LLMs and domain experts have shown a clear preference for EHRTutor over the
baseline. Moreover, EHRTutor also offers a framework for generating synthetic
patient education dialogues that can be used for future in-house system
training. | Computational Linguistics |
What field is the article from? | Title: ChatTraffic: Text-to-Traffic Generation via Diffusion Model
Abstract: Traffic prediction is one of the most significant foundations in Intelligent
Transportation Systems (ITS). Traditional traffic prediction methods rely only
on historical traffic data to predict traffic trends and face two main
challenges: 1) insensitivity to unusual events, and 2) poor performance in
long-term prediction. In this work, we explore how generative models combined
with text describing the traffic system can be applied for traffic generation
and name the task Text-to-Traffic Generation (TTG). The key challenge of the
TTG task is how to associate text with the spatial structure of the road
network and traffic data for generating traffic situations. To this end, we
propose ChatTraffic, the first diffusion model for text-to-traffic generation.
To guarantee the consistency between synthetic and real data, we augment a
diffusion model with the Graph Convolutional Network (GCN) to extract spatial
correlations of traffic data. In addition, we construct a large dataset
containing text-traffic pairs for the TTG task. We benchmarked our model
qualitatively and quantitatively on the released dataset. The experimental
results indicate that ChatTraffic can generate realistic traffic situations
from the text. Our code and dataset are available at
https://github.com/ChyaZhang/ChatTraffic. | Machine Learning |