arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---|
2402.11572 | 2024-02-18T12:36:23Z | Cobra Effect in Reference-Free Image Captioning Metrics | [
"Zheng Ma",
"Changxin Wang",
"Yawen Ouyang",
"Fei Zhao",
"Jianbing Zhang",
"Shujian Huang",
"Jiajun Chen"
] | Evaluating the compatibility between textual descriptions and corresponding
images represents a core endeavor within multi-modal research. In recent years,
a proliferation of reference-free methods, leveraging visual-language
pre-trained models (VLMs), has emerged. Empirical evidence has substantiated
that these innovative approaches exhibit a higher correlation with human
judgment, marking a significant advancement in the field. However, does a
higher correlation with human evaluations alone suffice to establish that a
metric is complete? In response to this question, in this paper, we study whether
there are any deficiencies in reference-free metrics. Specifically, inspired by
the Cobra Effect, we utilize metric scores as rewards to direct the captioning
model toward generating descriptions that closely align with the metric's
criteria. If a certain metric has flaws, it will be exploited by the model and
reflected in the generated sentences. Our findings reveal that descriptions
guided by these metrics contain significant flaws, e.g. incoherent statements
and excessive repetition. Subsequently, we propose a novel method termed
Self-Improving to rectify the identified shortcomings within these metrics. We
employ GPT-4V as an evaluative tool to assess generated sentences and the
result reveals that our approach achieves state-of-the-art (SOTA) performance.
In addition, we also introduce a challenging evaluation benchmark called Flaws
Caption to evaluate reference-free image captioning metrics comprehensively.
Our code is available at
https://github.com/aaronma2020/robust_captioning_metric | [
"cs.CL"
] | false |
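The probing setup described above — using a reference-free metric's score as the reward signal for the captioner — can be sketched as a REINFORCE-style update. This is a minimal illustration, not the paper's implementation; `caption_model.sample` and `metric_score` are assumed interfaces.

```python
import torch

def metric_as_reward_step(caption_model, metric_score, image, optimizer):
    """One policy-gradient step steering the captioner toward whatever the
    reference-free metric rewards (sketch; interfaces are assumptions)."""
    # Sample a caption and keep per-token log-probabilities (assumed API).
    caption_ids, log_probs = caption_model.sample(image)
    # Reference-free reward: score the (image, caption) pair directly,
    # e.g. with a CLIPScore-like VLM metric -- no reference captions needed.
    reward = metric_score(image, caption_ids)
    # REINFORCE: maximize E[reward] by minimizing -reward * log p(caption).
    loss = -reward * log_probs.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

If the metric has a blind spot (say, it rewards repetition), gradient ascent on its score surfaces captions that exploit exactly that flaw — the Cobra Effect the paper measures.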
2402.11573 | 2024-02-18T12:41:01Z | BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval
Augmented Long-Context Large Language Models | [
"Kun Luo",
"Zheng Liu",
"Shitao Xiao",
"Kang Liu"
] | Large language models (LLMs) call for extension of context to handle many
critical applications. However, the existing approaches are prone to expensive
costs and inferior quality of context extension. In this work, we propose
Extensible Embedding, which realizes high-quality extension of the LLM's
context with strong flexibility and cost-effectiveness. Extensible embedding
stands as an enhancement of the typical token embedding, representing the
information for an extensible scope of context instead of a single token. By
leveraging such compact input units of higher information density, the LLM can
access a vast scope of context even with a small context window. Extensible
embedding is systematically optimized in architecture and training method,
which leads to multiple advantages. 1) High flexibility of context extension,
which flexibly supports ad-hoc extension of diverse context lengths. 2) Strong
sample efficiency of training, which enables the embedding model to be learned
in a cost-effective way. 3) Superior compatibility with the existing LLMs,
where the extensible embedding can be seamlessly introduced as a plug-in
component. Comprehensive evaluations on long-context language modeling and
understanding tasks verify extensible embedding as an effective, efficient,
flexible, and compatible method to extend the LLM's context. | [
"cs.CL"
] | false |
2402.11577 | 2024-02-18T12:50:19Z | Extensible Embedding: A Flexible Multipler For LLM's Context Length | [
"Ninglu Shao",
"Shitao Xiao",
"Zheng Liu",
"Peitian Zhang"
] | Large language models (LLMs) call for extension of context to handle many
critical applications. However, the existing approaches are prone to expensive
costs and inferior quality of context extension. In this work, we propose
Extensible Embedding, which realizes high-quality extension of LLM's context
with strong flexibility and cost-effectiveness. Extensible embedding stands as
an enhancement of the typical token embedding, representing the information for
an extensible scope of context instead of a single token. By leveraging such
compact input units of higher information density, the LLM can access a vast
scope of context even with a small context window. Extensible embedding is
systematically optimized in architecture and training method, which leads to
multiple advantages. 1) High flexibility of context extension, which flexibly
supports ad-hoc extension of diverse context lengths. 2) Strong sample
efficiency of training, which enables the embedding model to be learned in a
cost-effective way. 3) Superior compatibility with the existing LLMs, where the
extensible embedding can be seamlessly introduced as a plug-in component.
Comprehensive evaluations on long-context language modeling and understanding
tasks verify extensible embedding as an effective, efficient, flexible, and
compatible method to extend the LLM's context. | [
"cs.CL"
] | false |
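Both entries above describe the same core mechanism: one input embedding standing in for a whole chunk of context. A minimal sketch of that idea follows; the chunk encoder and shapes are illustrative assumptions, not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn

class ExtensibleEmbedding(nn.Module):
    """Minimal sketch: compress a chunk of k token embeddings into a single
    higher-density input embedding, so a context window of n slots can
    cover roughly n * k tokens."""
    def __init__(self, d_model: int, chunk_size: int):
        super().__init__()
        self.chunk_size = chunk_size
        self.compress = nn.Linear(chunk_size * d_model, d_model)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, d_model), seq_len divisible by chunk_size
        b, s, d = token_embeds.shape
        chunks = token_embeds.view(b, s // self.chunk_size, self.chunk_size * d)
        return self.compress(chunks)  # (batch, seq_len / chunk_size, d_model)
```

A frozen LLM could consume these compact units as plug-in "token" embeddings, extending its effective context by the chunking factor — which matches the plug-in compatibility both abstracts emphasize.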
2402.11597 | 2024-02-18T14:25:19Z | Multi-Task Inference: Can Large Language Models Follow Multiple
Instructions at Once? | [
"Guijin Son",
"Sangwon Baek",
"Sangdae Nam",
"Ilgyun Jeong",
"Seungone Kim"
] | Large language models (LLMs) are typically prompted to follow a single
instruction per inference call. In this work, we analyze whether LLMs also hold
the capability to handle multiple instructions simultaneously, denoted as
Multi-Task Inference. For this purpose, we introduce the MTI Bench (Multi-Task
Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000
instances across 25 tasks. Each task in the MTI Bench involves 2 to 3
sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces
the total inference time by 1.46 times on average, since it does not require
multiple inference calls. Interestingly, contrary to the expectation that LLMs
would perform better when tasks are divided, we find that state-of-the-art
LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved
performance with Multi-Task Inference compared to Single-Task Inference on the
MTI Bench. We release the MTI Bench dataset and our code at this link
https://github.com/guijinSON/MTI-Bench. | [
"cs.CL"
] | false |
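The Multi-Task Inference setting amounts to a prompting change: bundle the sub-tasks into one call rather than issuing one call per instruction. A hypothetical prompt builder is sketched below; the wording is illustrative, not taken from MTI Bench.

```python
def build_multi_task_prompt(instructions: list[str], shared_input: str) -> str:
    """Pack several instructions into one inference call instead of one
    call per instruction (hypothetical sketch of Multi-Task Inference)."""
    numbered = "\n".join(f"Task {i+1}: {t}" for i, t in enumerate(instructions))
    return (
        f"Input:\n{shared_input}\n\n"
        f"Complete ALL of the following tasks on the input above.\n"
        f"{numbered}\n\n"
        f"Answer each task in order, labeled 'Task 1:', 'Task 2:', ..."
    )
```

One inference call now covers the 2-3 sub-tasks, which is where the reported ~1.46x average speedup over separate single-task calls comes from.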
2402.11608 | 2024-02-18T14:57:53Z | Metric-Learning Encoding Models Identify Processing Profiles of
Linguistic Features in BERT's Representations | [
"Louis Jalouzot",
"Robin Sobczyk",
"Bastien Lhopitallier",
"Jeanne Salle",
"Nur Lan",
"Emmanuel Chemla",
"Yair Lakretz"
] | We introduce Metric-Learning Encoding Models (MLEMs) as a new approach to
understand how neural systems represent the theoretical features of the objects
they process. As a proof-of-concept, we apply MLEMs to neural representations
extracted from BERT, and track a wide variety of linguistic features (e.g.,
tense, subject person, clause type, clause embedding). We find that: (1)
linguistic features are ordered: they separate representations of sentences to
different degrees in different layers; (2) neural representations are organized
hierarchically: in some layers, we find clusters of representations nested
within larger clusters, following successively important linguistic features;
(3) linguistic features are disentangled in middle layers: distinct, selective
units are activated by distinct linguistic features. Methodologically, MLEMs
are superior (4) to multivariate decoding methods, being more robust to type-I
errors, and (5) to univariate encoding methods, being able to predict both
local and distributed representations. Together, this demonstrates the utility
of Metric-Learning Encoding Models for studying how linguistic features are
neurally encoded in language models and the advantage of MLEMs over traditional
methods. MLEMs can be extended to other domains (e.g. vision) and to other
neural systems, such as the human brain. | [
"cs.CL"
] | false |
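In spirit, a metric-learning encoding model fits a weighted distance over theoretical features so that it predicts distances between neural representations; the learned weights then order the features by how strongly they separate representations. A toy version under that reading (an assumption for illustration; the authors' exact formulation may differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_mlem(reps: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Regress pairwise representation distances on per-feature mismatch
    indicators; the learned non-negative weights rank how strongly each
    linguistic feature separates sentence representations.

    reps:     (n_sentences, d)  layer activations, e.g. from BERT
    features: (n_sentences, f)  categorical feature codes (tense, person, ...)
    """
    n = len(reps)
    i, j = np.triu_indices(n, k=1)                         # all sentence pairs
    dists = np.linalg.norm(reps[i] - reps[j], axis=1)      # pairwise distances
    mismatch = (features[i] != features[j]).astype(float)  # 1 where a feature differs
    model = LinearRegression(positive=True).fit(mismatch, dists)
    return model.coef_  # one importance weight per linguistic feature
```

Fitting this per layer would yield the kind of layer-wise feature ordering the abstract describes.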
2402.11625 | 2024-02-18T15:33:24Z | SpeCrawler: Generating OpenAPI Specifications from API Documentation
Using Large Language Models | [
"Koren Lazar",
"Matan Vetzler",
"Guy Uziel",
"David Boaz",
"Esther Goldbraich",
"David Amid",
"Ateret Anaby-Tavor"
] | In the digital era, the widespread use of APIs is evident. However, scalable
utilization of APIs poses a challenge due to structure divergence observed in
online API documentation. This underscores the need for automatic tools to
facilitate API consumption. A viable approach involves the conversion of
documentation into an API Specification format. While previous attempts have
been made using rule-based methods, these approaches encountered difficulties
in generalizing across diverse documentation. In this paper we introduce
SpeCrawler, a comprehensive system that utilizes large language models (LLMs)
to generate OpenAPI Specifications from diverse API documentation through a
carefully crafted pipeline. By creating a standardized format for numerous
APIs, SpeCrawler aids in streamlining integration processes within API
orchestrating systems and facilitating the incorporation of tools into LLMs.
The paper explores SpeCrawler's methodology, supported by empirical evidence
and case studies, demonstrating its efficacy through LLM capabilities. | [
"cs.CL"
] | false |
2402.11633 | 2024-02-18T16:20:43Z | Self-seeding and Multi-intent Self-instructing LLMs for Generating
Intent-aware Information-Seeking dialogs | [
"Arian Askari",
"Roxana Petcu",
"Chuan Meng",
"Mohammad Aliannejadi",
"Amin Abolghasemi",
"Evangelos Kanoulas",
"Suzan Verberne"
] | Identifying user intents in information-seeking dialogs is crucial for a
system to meet users' information needs. Intent prediction (IP) is challenging
and demands sufficient dialogs with human-labeled intents for training.
However, manually annotating intents is resource-intensive. While large
language models (LLMs) have been shown to be effective in generating synthetic
data, there is no study on using LLMs to generate intent-aware
information-seeking dialogs. In this paper, we focus on leveraging LLMs for
zero-shot generation of large-scale, open-domain, and intent-aware
information-seeking dialogs. We propose SOLID, which has novel self-seeding and
multi-intent self-instructing schemes. The former improves the generation
quality by using the LLM's own knowledge scope to initiate dialog generation;
the latter prompts the LLM to generate utterances sequentially, and mitigates
the need for manual prompt design by asking the LLM to autonomously adapt its
prompt instruction when generating complex multi-intent utterances.
Furthermore, we propose SOLID-RL, which is further trained to generate a dialog
in one step on the data generated by SOLID. We propose a length-based quality
estimation mechanism to assign varying weights to SOLID-generated dialogs based
on their quality during the training process of SOLID-RL. We use SOLID and
SOLID-RL to generate more than 300k intent-aware dialogs, surpassing the size
of existing datasets. Experiments show that IP methods trained on dialogs
generated by SOLID and SOLID-RL achieve better IP quality than ones trained on
human-generated dialogs. | [
"cs.CL"
] | false |
2402.11638 | 2024-02-18T16:36:00Z | Stumbling Blocks: Stress Testing the Robustness of Machine-Generated
Text Detectors Under Attacks | [
"Yichen Wang",
"Shangbin Feng",
"Abe Bohan Hou",
"Xiao Pu",
"Chao Shen",
"Xiaoming Liu",
"Yulia Tsvetkov",
"Tianxing He"
] | The widespread use of large language models (LLMs) is increasing the demand
for methods that detect machine-generated text to prevent misuse. The goal of
our study is to stress test the detectors' robustness to malicious attacks
under realistic scenarios. We comprehensively study the robustness of popular
machine-generated text detectors under attacks from diverse categories:
editing, paraphrasing, prompting, and co-generating. Our attacks assume limited
access to the generator LLMs, and we compare the performance of detectors on
different attacks under different budget levels. Our experiments reveal that
almost none of the existing detectors remain robust under all the attacks, and
all detectors exhibit different loopholes. Averaged across all detectors,
performance drops by 35% across all attacks. Further, we investigate the
reasons behind these defects and propose initial out-of-the-box patches to
improve robustness. | [
"cs.CL"
] | false |
2402.11655 | 2024-02-18T17:26:51Z | Competition of Mechanisms: Tracing How Language Models Handle Facts and
Counterfactuals | [
"Francesco Ortu",
"Zhijing Jin",
"Diego Doimo",
"Mrinmaya Sachan",
"Alberto Cazzaniga",
"Bernhard Schölkopf"
] | Interpretability research aims to bridge the gap between the empirical
success and our scientific understanding of the inner workings of large
language models (LLMs). However, most existing research in this area has focused on
analyzing a single mechanism, such as how models copy or recall factual
knowledge. In this work, we propose the formulation of competition of
mechanisms, which instead of individual mechanisms focuses on the interplay of
multiple mechanisms, and traces how one of them becomes dominant in the final
prediction. We uncover how and where the competition of mechanisms happens
within LLMs using two interpretability methods, logit inspection and attention
modification. Our findings show traces of the mechanisms and their competition
across various model components, and reveal attention positions that
effectively control the strength of certain mechanisms. Our code and data are
at https://github.com/francescortu/Competition_of_Mechanisms. | [
"cs.CL"
] | false |
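Of the two methods named, logit inspection is the easier to sketch: decode each layer's residual stream through the model's own unembedding to see which prediction is winning at that depth. Below is a simplified version for a GPT-2-style Hugging Face model; it is an illustration of the general technique, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def logit_inspection(model, tokenizer, prompt: str, top_k: int = 5):
    """Project every layer's hidden state through the final layer norm and
    unembedding matrix (a 'logit lens'), printing each layer's top tokens.
    Assumes a GPT-2-style module layout (model.transformer, model.lm_head)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model(ids, output_hidden_states=True)
    for layer, h in enumerate(out.hidden_states):
        # h: (1, seq_len, d_model); inspect the last position only
        logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
        top = logits.topk(top_k).indices[0]
        print(f"layer {layer:2d}:", tokenizer.decode(top))
```

Watching the top token flip between a factual and a counterfactual completion across layers is exactly the kind of trace of competing mechanisms the abstract refers to.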
2402.11683 | 2024-02-18T19:13:52Z | One Prompt To Rule Them All: LLMs for Opinion Summary Evaluation | [
"Tejpalsingh Siledar",
"Swaroop Nath",
"Sankara Sri Raghava Ravindra Muddu",
"Rupasai Rangaraju",
"Swaprava Nath",
"Pushpak Bhattacharyya",
"Suman Banerjee",
"Amey Patil",
"Sudhanshu Shekhar Singh",
"Muthusamy Chelliah",
"Nikesh Garera"
] | Evaluation of opinion summaries using conventional reference-based metrics
rarely provides a holistic evaluation and has been shown to have a relatively
low correlation with human judgments. Recent studies suggest using Large
Language Models (LLMs) as reference-free metrics for NLG evaluation, however,
they remain unexplored for opinion summary evaluation. Moreover, limited
opinion summary evaluation datasets inhibit progress. To address this, we
release the SUMMEVAL-OP dataset covering 7 dimensions related to the evaluation
of opinion summaries: fluency, coherence, relevance, faithfulness, aspect
coverage, sentiment consistency, and specificity. We investigate Op-I-Prompt, a
dimension-independent prompt, and Op-Prompts, a dimension-dependent set of
prompts for opinion summary evaluation. Experiments indicate that Op-I-Prompt
emerges as a good alternative for evaluating opinion summaries, achieving an
average Spearman correlation of 0.70 with humans, outperforming all previous
approaches. To the best of our knowledge, we are the first to investigate LLMs
as evaluators on both closed-source and open-source models in the opinion
summarization domain. | [
"cs.CL"
] | false |
2402.11700 | 2024-02-18T20:47:10Z | Why Lift so Heavy? Slimming Large Language Models by Cutting Off the
Layers | [
"Shuzhou Yuan",
"Ercong Nie",
"Bolei Ma",
"Michael Färber"
] | Large Language Models (LLMs) possess outstanding capabilities in addressing
various natural language processing (NLP) tasks. However, the sheer size of
these models poses challenges in terms of storage, training and inference due
to the inclusion of billions of parameters through layer stacking. While
traditional approaches such as model pruning or distillation offer ways for
reducing model size, they often come at the expense of performance retention.
In our investigation, we systematically explore the approach of reducing the
number of layers in LLMs. Surprisingly, we observe that even with fewer layers,
LLMs maintain similar or better performance levels, particularly in
prompt-based fine-tuning for text classification tasks. Remarkably, in certain
cases, models with a single layer outperform their fully layered counterparts.
These findings offer valuable insights for future work aimed at mitigating the
size constraints of LLMs while preserving their performance, thereby opening
avenues for significantly more efficient use of LLMs. | [
"cs.CL"
] | false |
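The layer-cutting idea is simple to express for a Hugging Face model: truncate the stack of transformer blocks before fine-tuning. The sketch below uses GPT-2 purely for illustration; the paper's exact models and setup may differ.

```python
from transformers import GPT2LMHeadModel

def cut_layers(model_name: str = "gpt2", keep: int = 1) -> GPT2LMHeadModel:
    """Keep only the first `keep` transformer blocks of a GPT-2-style model
    before prompt-based fine-tuning (illustrative sketch)."""
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.transformer.h = model.transformer.h[:keep]  # drop the upper blocks
    model.config.n_layer = keep                       # keep config consistent
    return model
```

`cut_layers("gpt2", keep=1)` yields a single-layer variant; the paper reports that such slimmed models can match or even beat their full-depth counterparts on some prompt-based text-classification tasks.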
2402.11710 | 2024-02-18T21:20:33Z | A Note on Bias to Complete | [
"Jia Xu",
"Mona Diab"
] | Minimizing social bias strengthens societal bonds, promoting shared
understanding and better decision-making. We revisit the definition of bias by
discovering new bias types (e.g., societal status) in dynamic environments and
describe them relative to context, such as culture, region, time, and personal
background. Our framework includes eight hypotheses about bias and a
bias-minimizing strategy for each assumption, as well as five methods proposed as
solutions in LLMs. The realization of the framework is yet to be completed. | [
"cs.CL"
] | false |
2402.11711 | 2024-02-18T21:25:09Z | MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement
Learning for Discrete Prompt Optimization | [
"Yasaman Jafari",
"Dheeraj Mekala",
"Rose Yu",
"Taylor Berg-Kirkpatrick"
] | RL-based techniques can be used to search for prompts that, when fed into a
target language model, maximize a set of user-specified reward functions.
However, in many target applications, the natural reward functions are in
tension with one another -- for example, content preservation vs. style
matching in style transfer tasks. Current techniques focus on maximizing the
average of reward functions, which does not necessarily lead to prompts that
achieve balance across rewards -- an issue that has been well-studied in the
multi-objective and robust optimization literature. In this paper, we adapt
several techniques for multi-objective optimization to RL-based discrete prompt
optimization -- two that consider volume of the Pareto reward surface, and
another that chooses an update direction that benefits all rewards
simultaneously. We conduct an empirical analysis of these methods on two NLP
tasks: style transfer and machine translation, each using three competing
reward functions. Our experiments demonstrate that multi-objective methods that
directly optimize volume perform better and achieve a better balance of all
rewards than those that attempt to find monotonic update directions. | [
"cs.CL"
] | false |
2402.11712 | 2024-02-18T21:28:06Z | Modelling Political Coalition Negotiations Using LLM-based Agents | [
"Farhad Moghimifar",
"Yuan-Fang Li",
"Robert Thomson",
"Gholamreza Haffari"
] | Coalition negotiations are a cornerstone of parliamentary democracies,
characterised by complex interactions and strategic communications among
political parties. Despite its significance, the modelling of these
negotiations has remained unexplored within the domain of Natural Language
Processing (NLP), mostly due to a lack of proper data. In this paper, we
introduce coalition negotiations as a novel NLP task, and model it as a
negotiation between large language model-based agents. We introduce a
multilingual dataset, POLCA, comprising manifestos of European political
parties and coalition agreements over a number of elections in these countries.
This dataset addresses the challenge of the current scope limitations in
political negotiation modelling by providing a diverse, real-world basis for
simulation. Additionally, we propose a hierarchical Markov decision process
designed to simulate the process of coalition negotiation between political
parties and predict the outcomes. We evaluate the performance of
state-of-the-art large language models (LLMs) as agents in handling coalition
negotiations, offering insights into their capabilities and paving the way for
future advancements in political modelling. | [
"cs.CL"
] | false |
2402.11436 | 2024-02-18T03:10:39Z | Perils of Self-Feedback: Self-Bias Amplifies in Large Language Models | [
"Wenda Xu",
"Guanglei Zhu",
"Xuandong Zhao",
"Liangming Pan",
"Lei Li",
"William Yang Wang"
] | Recent studies show that self-feedback improves large language models (LLMs)
on certain tasks while worsening performance on others. We discovered that this
contrast is due to LLMs' bias towards their own output. In this paper, we formally
define LLMs' self-bias -- the tendency to favor their own generation -- using two
statistics. We analyze six LLMs on translation, constrained text generation,
and mathematical reasoning tasks. We find that self-bias is prevalent in all
examined LLMs across multiple languages and tasks. Our analysis reveals that
while the self-refine pipeline improves the fluency and understandability of
model outputs, it further amplifies self-bias. To mitigate such biases, we
discover that larger model size and external feedback with accurate assessment
can significantly reduce bias in the self-refine pipeline, leading to actual
performance improvement in downstream tasks. | [
"cs.CL",
"cs.AI"
] | false |
2402.11505 | 2024-02-18T08:32:59Z | Federated Fine-tuning of Large Language Models under Heterogeneous
Language Tasks and Client Resources | [
"Jiamu Bai",
"Daoyuan Chen",
"Bingchen Qian",
"Liuyi Yao",
"Yaliang Li"
] | Federated Learning (FL) has recently been applied to the parameter-efficient
fine-tuning of Large Language Models (LLMs). While promising, it raises
significant challenges due to the heterogeneous resources and data
distributions of clients. This study introduces FlexLoRA, a simple yet effective
aggregation scheme for LLM fine-tuning, which mitigates the "buckets effect" in
traditional FL that restricts the potential of clients with ample resources by
tying them to the capabilities of the least-resourced participants. FlexLoRA
allows for dynamic adjustment of local LoRA ranks, fostering the development of
a global model imbued with broader, less task-specific knowledge. By
synthesizing a full-size LoRA weight from individual client contributions and
employing Singular Value Decomposition (SVD) for weight redistribution,
FlexLoRA fully leverages heterogeneous client resources. Involving over 1,600
clients performing diverse NLP tasks, our experiments validate the efficacy of
FlexLoRA, with the federated global model achieving up to a 3.1% average
improvement in downstream NLP task performance. FlexLoRA's practicality is
further underscored by its seamless integration with existing LoRA-based FL
methods and theoretical analysis, offering a path toward scalable,
privacy-preserving federated tuning for LLMs. | [
"cs.CL",
"cs.AI"
] | false |
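The aggregation scheme in the abstract — synthesize a full-size LoRA delta from heterogeneous-rank client updates, then SVD-truncate back to each client's rank — can be sketched as follows. This is a simplification read off the abstract, not the released code.

```python
import numpy as np

def flexlora_aggregate(client_updates, client_weights, client_ranks):
    """FlexLoRA-style aggregation sketch:
    1) each client i contributes a full-size delta B_i @ A_i at its own rank;
    2) the server averages the full-size deltas;
    3) SVD truncation redistributes the global update at each client's rank.

    client_updates: list of (B_i, A_i), B_i: (d, r_i), A_i: (r_i, k)
    client_weights: aggregation weights summing to 1 (e.g., data share)
    client_ranks:   the local LoRA rank each client can afford
    """
    delta = sum(w * (B @ A) for w, (B, A) in zip(client_weights, client_updates))
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    redistributed = []
    for r in client_ranks:           # truncate back to each client's rank
        B_new = U[:, :r] * S[:r]     # (d, r): columns scaled by singular values
        A_new = Vt[:r, :]            # (r, k)
        redistributed.append((B_new, A_new))
    return redistributed
```

Because every client contributes at its own rank, well-resourced clients are no longer tied to the least-resourced participant — the "buckets effect" the abstract describes.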
2402.11518 | 2024-02-18T09:21:12Z | Large Language Model-driven Meta-structure Discovery in Heterogeneous
Information Network | [
"Lin Chen",
"Fengli Xu",
"Nian Li",
"Zhenyu Han",
"Meng Wang",
"Yong Li",
"Pan Hui"
] | Heterogeneous information networks (HIN) have gained increasing popularity
for being able to capture complex relations between nodes of diverse types.
Meta-structure was proposed to identify important patterns of relations on HIN,
which has been proven effective for extracting rich semantic information and
facilitating graph neural networks to learn expressive representations.
However, hand-crafted meta-structures pose challenges for scaling up, which
draws wide research attention for developing automatic meta-structure search
algorithms. Previous efforts concentrate on searching for meta-structures with
good empirical prediction performance, overlooking explainability. Thus, they
often produce meta-structures prone to overfitting and incomprehensible to
humans. To address this, we draw inspiration from the emergent reasoning
abilities of large language models (LLMs). We propose a novel REasoning
meta-STRUCTure search (ReStruct) framework that integrates LLM reasoning into
the evolutionary procedure. ReStruct uses a grammar translator to encode
meta-structures into natural language sentences, and leverages the reasoning
power of LLMs to evaluate semantically feasible meta-structures. ReStruct also
employs performance-oriented evolutionary operations. These two competing
forces jointly optimize for semantic explainability and empirical performance
of meta-structures. We also design a differential LLM explainer that can
produce natural language explanations for the discovered meta-structures, and
refine the explanation by reasoning through the search history. Experiments on
five datasets demonstrate that ReStruct achieves SOTA performance in node
classification and link recommendation tasks. Additionally, a survey study
involving 73 graduate students shows that the meta-structures and natural
language explanations generated by ReStruct are substantially more
comprehensible. | [
"cs.LG",
"cs.CL"
] | false |
2402.11534 | 2024-02-18T10:15:38Z | PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability | [
"Dayuan Fu",
"Jianzhao Huang",
"Siyuan Lu",
"Guanting Dong",
"Yejie Wang",
"Keqing He",
"Weiran Xu"
] | Addressing the discrepancies between predictions and actual outcomes often
aids individuals in expanding their thought processes and engaging in
reflection, thereby facilitating reasoning in the correct direction. In this
paper, we introduce $\textbf{PreAct}$, an agent framework that integrates
$\textbf{pre}$diction with $\textbf{rea}$soning and $\textbf{act}$ion.
Leveraging the information provided by predictions, a large language model
(LLM) based agent can offer more diversified and strategically oriented
reasoning, which in turn leads to more effective actions that help the agent
complete complex tasks. Our experiments demonstrate that PreAct outperforms the
ReAct approach in accomplishing complex tasks and that PreAct can be
co-enhanced when combined with Reflexion methods. We prompt the model with
different numbers of historical predictions and find that historical
predictions have a sustained positive effect on LLM planning. The differences
in single-step reasoning between PreAct and ReAct show that PreAct indeed
offers advantages in terms of diversity and strategic directivity over ReAct. | [
"cs.CL",
"cs.AI"
] | false |
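Concretely, PreAct's predict-then-reason loop can be sketched as two chained LLM calls per step. The prompt wording below is illustrative and `llm` is an assumed text-in/text-out callable, not the paper's implementation.

```python
def preact_step(llm, task: str, history: list[str]) -> str:
    """One PreAct-style step (sketch): predict plausible outcomes first,
    then condition the ReAct-style reasoning and action on the predictions."""
    ctx = "\n".join(history)
    prediction = llm(
        f"{ctx}\nTask: {task}\n"
        f"Predict possible outcomes of the next action and what each would imply:"
    )
    thought_action = llm(
        f"{ctx}\nPredictions:\n{prediction}\n"
        f"Given these predictions, reason step by step and choose the next action:"
    )
    history += [f"Prediction: {prediction}", f"Step: {thought_action}"]
    return thought_action
```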
2402.11542 | 2024-02-18T10:44:48Z | Question Answering Over Spatio-Temporal Knowledge Graph | [
"Xinbang Dai",
"Huiying Li",
"Guilin Qi"
] | Spatio-temporal knowledge graphs (STKGs) extend the concept of knowledge
graphs (KGs) by incorporating time and location information. While the research
community has focused on Knowledge Graph Question Answering (KGQA), the field of
answering questions that incorporate both spatial and temporal information based on
STKGs remains largely unexplored. Furthermore, a lack of comprehensive datasets
has also hindered progress in this area. To address this issue, we present
STQAD, a dataset comprising 10,000 natural language questions for
spatio-temporal knowledge graph question answering (STKGQA). Unfortunately,
various state-of-the-art KGQA approaches fall far short of achieving
satisfactory performance on our dataset. In response, we propose STCQA, a new
spatio-temporal KGQA approach that utilizes a novel STKG embedding method named
STComplEx. By extracting temporal and spatial information from a question, our
QA model can better comprehend the question and retrieve accurate answers from
the STKG. Through extensive experiments, we demonstrate the quality of our
dataset and the effectiveness of our STKGQA method. | [
"cs.CL",
"cs.AI",
"I.2.4; I.2.7"
] | false |
2402.11626 | 2024-02-18T15:41:31Z | Metacognitive Retrieval-Augmented Large Language Models | [
"Yujia Zhou",
"Zheng Liu",
"Jiajie Jin",
"Jian-Yun Nie",
"Zhicheng Dou"
] | Retrieval-augmented generation has become central in natural language
processing due to its efficacy in generating factual content. While
traditional methods employ single-time retrieval, more recent approaches have
shifted towards multi-time retrieval for multi-hop reasoning tasks. However,
these strategies are bound by predefined reasoning steps, potentially leading
to inaccuracies in response generation. This paper introduces MetaRAG, an
approach that combines the retrieval-augmented generation process with
metacognition. Drawing from cognitive psychology, metacognition allows an
entity to self-reflect and critically evaluate its cognitive processes. By
integrating this, MetaRAG enables the model to monitor, evaluate, and plan its
response strategies, enhancing its introspective reasoning abilities. Through a
three-step metacognitive regulation pipeline, the model can identify
inadequacies in initial cognitive responses and fix them. Empirical
evaluations show that MetaRAG significantly outperforms existing methods. | [
"cs.CL",
"cs.IR"
] | false |
2402.11628 | 2024-02-18T16:03:04Z | Discrete Neural Algorithmic Reasoning | [
"Gleb Rodionov",
"Liudmila Prokhorenkova"
] | Neural algorithmic reasoning aims to capture computations with neural
networks by training models to imitate the execution of classical
algorithms. While common architectures are expressive enough to contain the
correct model in the weight space, current neural reasoners struggle to
generalize well on out-of-distribution data. On the other hand, classical
computations are not affected by distribution shifts as they can be described
as transitions between discrete computational states. In this work, we propose
to force neural reasoners to maintain the execution trajectory as a combination
of finite predefined states. Trained with supervision on the algorithm's state
transitions, such models are able to perfectly align with the original
algorithm. To show this, we evaluate our approach on the SALSA-CLRS benchmark,
where we get perfect test scores for all tasks. Moreover, the proposed
architectural choice allows us to prove the correctness of the learned
algorithms for any test data. | [
"cs.LG",
"cs.CL"
] | false |
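One simple way to realize "execution trajectories as combinations of finite predefined states" is to snap hidden vectors onto a fixed codebook with a straight-through estimator. This is a plausible sketch of such a mechanism, not necessarily the paper's construction.

```python
import torch

def discretize(h: torch.Tensor, states: torch.Tensor) -> torch.Tensor:
    """Snap each hidden vector to the nearest of K predefined state vectors,
    with a straight-through estimator so gradients still flow to `h`.
    h: (batch, d); states: (K, d)."""
    idx = torch.cdist(h, states).argmin(dim=-1)  # nearest discrete state
    h_q = states[idx]                            # quantized hidden states
    return h + (h_q - h).detach()                # straight-through trick
```

Restricting the reasoner to transitions between such discrete states is what makes it possible, in principle, to check the learned algorithm's correctness on any input.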
2402.11671 | 2024-02-18T18:20:57Z | Autocorrect for Estonian texts: final report from project EKTB25 | [
"Agnes Luhtaru",
"Martin Vainikko",
"Krista Liin",
"Kais Allkivi-Metsoja",
"Jaagup Kippar",
"Pille Eslon",
"Mark Fishel"
] | The project was funded in 2021-2023 by the National Programme of Estonian
Language Technology. Its main aim was to develop spelling and grammar
correction tools for the Estonian language. The main challenge was the very
small amount of available error correction data needed for such development. To
mitigate this, (1) we annotated more correction data for model training and
testing, (2) we tested transfer-learning, i.e. retraining machine learning
models created for other tasks, so as not to depend solely on correction data,
(3) we compared the developed method and model with alternatives, including
large language models. We also developed automatic evaluation, which can
calculate the accuracy and yield of corrections by error category, so that the
effectiveness of different methods can be compared in detail.
There has been a breakthrough in large language models during the project:
GPT4, a commercial language model with Estonian-language support, has been
created. We took the existence of this model into account when adjusting plans,
and in the report we present a comparison with GPT4's ability to correct
Estonian-language text.
The final results show that the approach we have developed provides better
scores than GPT4 and the result is usable but not entirely reliable yet. The
report also contains ideas on how GPT4 and other major language models can be
implemented in the future, focusing on open-source solutions.
All results of this project are open-data/open-source, with licenses that
allow them to be used for purposes including commercial ones. | [
"cs.CL",
"cs.AI"
] | false |
2402.11684 | 2024-02-18T19:26:49Z | ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language
Model | [
"Guiming Hardy Chen",
"Shunian Chen",
"Ruifei Zhang",
"Junying Chen",
"Xiangbo Wu",
"Zhiyi Zhang",
"Zhihong Chen",
"Jianquan Li",
"Xiang Wan",
"Benyou Wang"
] | Recent advancements in Large Vision-Language Models (LVLMs) have enabled
processing of multimodal inputs in language models but require significant
computational resources for deployment, especially in edge devices. This study
aims to bridge the performance gap between traditional-scale LVLMs and
resource-friendly lite versions by adopting high-quality training data. To do
this, a synthetic dataset is created by leveraging GPT-4V's ability to generate
detailed captions, complex reasoning instructions and detailed answers from
images. The resulting model trained on our data, ALLaVA, achieves performance
on 12 benchmarks competitive with LVLMs of up to 3B parameters. This work highlights the
feasibility of adopting high-quality data in crafting more efficient LVLMs. Our
online demo is available at \url{https://allava.freedomai.cn}. | [
"cs.CL",
"cs.AI"
] | false |
2402.11709 | 2024-02-18T21:13:05Z | GNNavi: Navigating the Information Flow in Large Language Models by
Graph Neural Network | [
"Shuzhou Yuan",
"Ercong Nie",
"Michael Färber",
"Helmut Schmid",
"Hinrich Schütze"
] | Large Language Models (LLMs) exhibit strong In-Context Learning (ICL)
capabilities when prompts with demonstrations are applied to them. However,
fine-tuning remains crucial to further enhance their adaptability.
Prompt-based fine-tuning proves to be an effective fine-tuning method in
low-data scenarios, but high demands on computing resources limit its
practicality. We address this issue by introducing GNNavi, a prompt-based
parameter-efficient fine-tuning (PEFT) approach. GNNavi leverages insights into
ICL's information flow dynamics, which indicates that label words act in
prompts as anchors for information propagation. GNNavi employs a Graph Neural
Network (GNN) layer to precisely guide the aggregation and distribution of
information flow during the processing of prompts by hardwiring the desired
information flow into the GNN. Our experiments on text classification tasks
with GPT-2 and Llama2 show that GNNavi surpasses standard prompt-based fine-tuning
methods in few-shot settings by updating just 0.2% to 0.5% of parameters. We
compare GNNavi with prevalent PEFT approaches, such as prefix tuning, LoRA and
Adapter in terms of performance and efficiency. Our analysis reveals that
GNNavi enhances information flow and ensures a clear aggregation process. | [
"cs.CL",
"cs.AI"
] | false |
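A schematic of the hardwired information flow GNNavi describes: a single message-passing layer over prompt positions whose fixed adjacency routes demonstration tokens into their label-word anchors, and anchors into the final position. The layer below is an assumption for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class AnchorFlowGNN(nn.Module):
    """One GNN layer over prompt positions with a fixed adjacency that
    hardwires the desired information flow (sketch of the GNNavi idea)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)  # the only trained weights

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (seq_len, d_model); adj: (seq_len, seq_len) row-normalized 0/1
        # mask with edges token -> label anchor and anchor -> last position.
        msg = adj @ self.proj(h)  # aggregate along the hardwired edges
        return h + msg            # residual update of the hidden states
```

Training only such a small inserted layer is consistent with the abstract's report of updating just 0.2% to 0.5% of parameters.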
2402.11723 | 2024-02-18T22:27:42Z | Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing
with Language Models | [
"Paramveer S. Dhillon",
"Somayeh Molaei",
"Jiaqi Li",
"Maximilian Golub",
"Shaochun Zheng",
"Lionel P. Robert"
] | Advances in language modeling have paved the way for novel human-AI
co-writing experiences. This paper explores how varying levels of scaffolding
from large language models (LLMs) shape the co-writing process. Employing a
within-subjects field experiment with a Latin square design, we asked
participants (N=131) to respond to argumentative writing prompts under three
randomly sequenced conditions: no AI assistance (control), next-sentence
suggestions (low scaffolding), and next-paragraph suggestions (high
scaffolding). Our findings reveal a U-shaped impact of scaffolding on writing
quality and productivity (words/time). While low scaffolding did not
significantly improve writing quality or productivity, high scaffolding led to
significant improvements, especially benefiting non-regular writers and less
tech-savvy users. No significant cognitive burden was observed while using the
scaffolded writing tools, but a moderate decrease in text ownership and
satisfaction was noted. Our results have broad implications for the design of
AI-powered writing tools, including the need for personalized scaffolding
mechanisms. | [
"cs.HC",
"cs.CL"
] | false |
2402.11417 | 2024-02-18T01:20:00Z | LoRETTA: Low-Rank Economic Tensor-Train Adaptation for
Ultra-Low-Parameter Fine-Tuning of Large Language Models | [
"Yifan Yang",
"Jiajun Zhou",
"Ngai Wong",
"Zheng Zhang"
] | Various parameter-efficient fine-tuning (PEFT) techniques have been proposed
to enable computationally efficient fine-tuning while maintaining model
performance. However, existing PEFT methods are still limited by the growing
number of trainable parameters with the rapid deployment of Large Language
Models (LLMs). To address this challenge, we present LoRETTA, an
ultra-parameter-efficient framework that significantly reduces trainable
parameters through tensor-train decomposition. Specifically, we propose two
methods, named {LoRETTA}$_{adp}$ and {LoRETTA}$_{rep}$. The former employs
tensorized adapters, offering a high-performance yet lightweight approach for
the fine-tuning of LLMs. The latter emphasizes fine-tuning via weight
parameterization with a set of small tensor factors. LoRETTA achieves
comparable or better performance than most widely used PEFT methods with up to
$100\times$ fewer parameters on the LLaMA-2-7B models. Furthermore, empirical
results demonstrate that the proposed method effectively improves training
efficiency, enjoys better multi-task learning performance, and enhances the
anti-overfitting capability. Plug-and-play codes built upon the Huggingface
framework and PEFT library will be released. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
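The parameter savings of tensor-train adaptation come from storing a layer's weight update as a chain of small cores rather than a dense matrix. Below is a minimal sketch in that spirit; the shapes and ranks are illustrative assumptions, not LoRETTA's exact parameterization.

```python
import torch
import torch.nn as nn

class TTLinearDelta(nn.Module):
    """Weight update of a (768 x 1024) layer reshaped into a 4-way tensor
    and stored as a tensor-train chain of small cores (illustrative)."""
    def __init__(self, shape=(32, 24, 32, 32), tt_rank: int = 4):
        super().__init__()
        # shape factors: d_in = 32 * 24 = 768, d_out = 32 * 32 = 1024
        ranks = [1, tt_rank, tt_rank, tt_rank, 1]
        self.cores = nn.ParameterList(
            nn.Parameter(0.01 * torch.randn(ranks[i], s, ranks[i + 1]))
            for i, s in enumerate(shape)
        )
        self.shape = shape

    def full_delta(self) -> torch.Tensor:
        # Contract the TT chain back into the dense update matrix.
        t = self.cores[0]
        for core in self.cores[1:]:
            t = torch.einsum("...a,abc->...bc", t, core)
        return t.reshape(self.shape[0] * self.shape[1],
                         self.shape[2] * self.shape[3])
```

With these shapes the cores hold roughly 1.2k parameters against ~786k for the dense update, illustrating the scale of reduction the abstract reports.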
2402.11441 | 2024-02-18T03:36:26Z | InfuserKI: Enhancing Large Language Models with Knowledge Graphs via
Infuser-Guided Knowledge Integration | [
"Fali Wang",
"Runxue Bao",
"Suhang Wang",
"Wenchao Yu",
"Yanchi Liu",
"Wei Cheng",
"Haifeng Chen"
] | Though Large Language Models (LLMs) have shown remarkable open-generation
capabilities across diverse domains, they struggle with knowledge-intensive
tasks. To alleviate this issue, knowledge integration methods have been
proposed to enhance LLMs with domain-specific knowledge graphs using external
modules. However, they suffer from data inefficiency as they require both known
and unknown knowledge for fine-tuning. Thus, we study a novel problem of
integrating unknown knowledge into LLMs efficiently without unnecessary overlap
of known knowledge. Injecting new knowledge poses the risk of forgetting
previously acquired knowledge. To tackle this, we propose a novel
Infuser-Guided Knowledge Integration (InfuserKI) framework that utilizes
transformer internal states to determine whether to enhance the original LLM
output with additional information, thereby effectively mitigating knowledge
forgetting. Evaluations on the UMLS-2.5k and MetaQA domain knowledge graphs
demonstrate that InfuserKI can effectively acquire new knowledge and outperform
state-of-the-art baselines by 9% and 6%, respectively, in reducing knowledge
forgetting. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2402.11469 | 2024-02-18T05:58:25Z | A Curious Case of Searching for the Correlation between Training Data
and Adversarial Robustness of Transformer Textual Models | [
"Cuong Dang",
"Dung D. Le",
"Thai Le"
] | Existing works have shown that fine-tuned textual transformer models achieve
state-of-the-art prediction performances but are also vulnerable to adversarial
text perturbations. Traditional adversarial evaluation is often done
\textit{only after} fine-tuning the models, ignoring the training data. In
this paper, we show that there is also a strong correlation between
training data and model robustness. To this end, we extract 13 different
features representing a wide range of input fine-tuning corpora properties and
use them to predict the adversarial robustness of the fine-tuned models.
Focusing mostly on encoder-only transformer models BERT and RoBERTa with
additional results for BART, ELECTRA and GPT2, we provide diverse evidence to
support our argument. First, empirical analyses show that (a) extracted
features can be used with a lightweight classifier such as Random Forest to
effectively predict the attack success rate and (b) features with the most
influence on the model robustness have a clear correlation with the robustness.
Second, our framework can be used as a fast and effective additional tool for
robustness evaluation since it (a) saves 30x-193x runtime compared to the
traditional technique, (b) is transferable across models, (c) can be used under
adversarial training, and (d) robust to statistical randomness. Our code will
be publicly available. | [
"cs.LG",
"cs.CL",
"cs.CR"
] | false |
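At its core, the paper's predictive framework fits a lightweight regressor from corpus-level features to attack success rate. A sketch with placeholder data follows: the 13 real features would be extracted from the fine-tuning corpora, and random arrays stand in here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Each row describes one fine-tuning corpus; the target is the attack
# success rate measured on the model fine-tuned on that corpus.
X = np.random.rand(100, 13)  # 13 corpus-level features (placeholder data)
y = np.random.rand(100)      # attack success rate per fine-tuned model

rf = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, y, cv=5, scoring="r2").mean())

# After fitting, feature_importances_ indicates which corpus properties
# correlate most with adversarial robustness.
rf.fit(X, y)
print(rf.feature_importances_)
```

Predicting robustness this way, instead of actually running attacks, is the source of the 30x-193x runtime savings the abstract reports.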
2402.11485 | 2024-02-18T07:24:34Z | LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models
with Entity-based Data Augmentation | [
"Ikuya Yamada",
"Ryokan Ri"
] | Adapting English-based large language models (LLMs) to other languages has
become increasingly popular due to the efficiency and potential of
cross-lingual transfer. However, existing language adaptation methods often
overlook the benefits of cross-lingual supervision. In this study, we introduce
LEIA, a language adaptation tuning method that utilizes Wikipedia entity names
aligned across languages. This method involves augmenting the target language
corpus with English entity names and training the model using left-to-right
language modeling. We assess LEIA on diverse question answering datasets using
7B-parameter LLMs, demonstrating significant performance gains across various
non-English languages. The source code is available at
https://github.com/studio-ousia/leia. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2402.11569 | 2024-02-18T12:33:54Z | Developing Autonomous Robot-Mediated Behavior Coaching Sessions with
Haru | [
"Matouš Jelínek",
"Eric Nichols",
"Randy Gomez"
] | This study presents an empirical investigation into the design and impact of
autonomous dialogues in human-robot interaction for behavior change coaching.
We focus on the use of Haru, a tabletop social robot, and explore the
implementation of the Tiny Habits method for fostering positive behavior
change. The core of our study lies in developing a fully autonomous dialogue
system that maximizes Haru's emotional expressiveness and unique personality.
Our methodology involved iterative design and extensive testing of the dialogue
system, ensuring it effectively embodied the principles of the Tiny Habits
method while also incorporating strategies for trust-raising and
trust-dampening. The effectiveness of the final version of the dialogue was
evaluated in an experimental study with human participants (N=12). The results
indicated a significant improvement in perceptions of Haru's liveliness,
interactivity, and neutrality. Additionally, our study contributes to the
broader understanding of dialogue design in social robotics, offering practical
insights for future developments in the field. | [
"cs.RO",
"cs.AI",
"cs.CL"
] | false |
2402.11571 | 2024-02-18T12:35:52Z | Ain't Misbehavin' -- Using LLMs to Generate Expressive Robot Behavior in
Conversations with the Tabletop Robot Haru | [
"Zining Wang",
"Paul Reisert",
"Eric Nichols",
"Randy Gomez"
] | Social robots aim to establish long-term bonds with humans through engaging
conversation. However, traditional conversational approaches, reliant on
scripted interactions, often fall short in maintaining engaging conversations.
This paper addresses this limitation by integrating large language models
(LLMs) into social robots to achieve more dynamic and expressive conversations.
We introduce a fully-automated conversation system that leverages LLMs to
generate robot responses with expressive behaviors, congruent with the robot's
personality. We incorporate robot behavior with two modalities: 1) a
text-to-speech (TTS) engine capable of various delivery styles, and 2) a
library of physical actions for the robot. We develop a custom,
state-of-the-art emotion recognition model to dynamically select the robot's
tone of voice and utilize emojis from LLM output as cues for generating robot
actions. A demo of our system is available here. To illuminate design and
implementation issues, we conduct a pilot study where volunteers chat with a
social robot using our proposed system, and we analyze their feedback,
conducting a rigorous error analysis of chat transcripts. Feedback was
overwhelmingly positive, with participants commenting on the robot's empathy,
helpfulness, naturalness, and entertainment. Most negative feedback was due to
automatic speech recognition (ASR) errors which had limited impact on
conversations. However, we observed a small class of errors, such as the LLM
repeating itself or hallucinating fictitious information and human responses,
that have the potential to derail conversations, raising important issues for
LLM application. | [
"cs.RO",
"cs.AI",
"cs.CL"
] | false |
2402.11639 | 2024-02-18T16:37:32Z | In-Context Learning with Transformers: Softmax Attention Adapts to
Function Lipschitzness | [
"Liam Collins",
"Advait Parulekar",
"Aryan Mokhtari",
"Sujay Sanghavi",
"Sanjay Shakkottai"
] | A striking property of transformers is their ability to perform in-context
learning (ICL), a machine learning framework in which the learner is presented
with a novel context during inference implicitly through some data, and tasked
with making a prediction in that context. As such, the learner must adapt to
the context without additional training. We explore the role of softmax
attention in an ICL setting where each context encodes a regression task. We
show that an attention unit learns a window that it uses to implement a
nearest-neighbors predictor adapted to the landscape of the pretraining tasks.
Specifically, we show that this window widens with decreasing Lipschitzness and
increasing label noise in the pretraining tasks. We also show that on low-rank,
linear problems, the attention unit learns to project onto the appropriate
subspace before inference. Further, we show that this adaptivity relies
crucially on the softmax activation and thus cannot be replicated by the linear
activation often studied in prior theoretical analyses. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
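The nearest-neighbor reading of softmax attention is easy to make concrete: the ICL prediction is a softmax-weighted average of context labels, i.e. a Nadaraya-Watson estimator, and the paper's "window" corresponds to how sharply the weights concentrate. A schematic follows, with `inv_width` standing in for the learned scaling.

```python
import numpy as np

def softmax_attention_predict(X_ctx, y_ctx, x_query, inv_width: float):
    """Predict the query label as a softmax-weighted average of context
    labels -- a Nadaraya-Watson estimator whose window narrows as
    inv_width grows (schematic of the paper's setting).
    X_ctx: (n, d) context inputs; y_ctx: (n,) labels; x_query: (d,)."""
    scores = inv_width * (X_ctx @ x_query)  # attention logits
    w = np.exp(scores - scores.max())
    w /= w.sum()                            # softmax attention weights
    return w @ y_ctx                        # weighted label average
```

Per the paper's analysis, pretraining on smoother (lower-Lipschitz) or noisier tasks drives the learned window wider — a smaller effective `inv_width`, averaging over more context examples.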
2402.11681 | 2024-02-18T19:11:58Z | Opening the black box of language acquisition | [
"Jérôme Michaud",
"Anna Jon-and"
] | Recent advances in large language models using deep learning techniques have
renewed interest in how languages can be learned from data. However, it is
unclear whether or how these models represent grammatical information from the
learned languages. In addition, the models must be pre-trained on large corpora
before they can be used. In this work, we propose an alternative, more
transparent and cognitively plausible architecture for learning language.
Instead of using deep learning, our approach uses a minimal cognitive
architecture based on sequence memory and chunking. The learning mechanism is
based on the principles of reinforcement learning. We test our architecture on
a number of natural-like toy languages. Results show that the model can learn
these artificial languages from scratch and extract grammatical information
that supports learning. Our study demonstrates the power of this simple
architecture and stresses the importance of sequence memory as a key component
of the language learning process. Since other animals do not seem to have a
faithful sequence memory, this may explain why only humans have developed
complex languages. | [
"cs.CL",
"cs.NA",
"math.NA"
] | false |
2402.11728 | 2024-02-18T22:55:26Z | Numerical Claim Detection in Finance: A New Financial Dataset,
Weak-Supervision Model, and Market Analysis | [
"Agam Shah",
"Arnav Hiray",
"Pratvi Shah",
"Arkaprabha Banerjee",
"Anushka Singh",
"Dheeraj Eidnani",
"Bhaskar Chaudhury",
"Sudheer Chava"
] | In this paper, we investigate the influence of claims in analyst reports and
earnings calls on financial market returns, considering them as significant
quarterly events for publicly traded companies. To facilitate a comprehensive
analysis, we construct a new financial dataset for the claim detection task in
the financial domain. We benchmark various language models on this dataset and
propose a novel weak-supervision model that incorporates the knowledge of
subject matter experts (SMEs) in the aggregation function, outperforming
existing approaches. Furthermore, we demonstrate the practical utility of our
proposed model by constructing a novel measure, ``optimism''. We also observe
the dependence of earnings surprise and return on our optimism measure. Our
dataset, models, and code will be made publicly available (under a CC BY 4.0
license) on GitHub and Hugging Face. | [
"cs.CL",
"cs.LG",
"q-fin.CP"
] | false |
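The weak-supervision idea — combine noisy labelers with an aggregation function that encodes SME knowledge — can be illustrated with a weighted vote. The paper's actual aggregation is more involved, and the weights below are hypothetical.

```python
import numpy as np

def sme_weighted_vote(label_matrix: np.ndarray, sme_weights: np.ndarray):
    """Combine weak labelers' votes {-1: no claim, +1: claim, 0: abstain}
    using per-labeler weights assigned by subject-matter experts (sketch).
    label_matrix: (n_examples, n_labelers); sme_weights: (n_labelers,)."""
    scores = label_matrix @ sme_weights  # weighted vote per example
    return (scores > 0).astype(int)      # 1 = numerical claim detected

votes = np.array([[1, -1, 1], [0, 1, 1], [-1, -1, 0]])
weights = np.array([0.5, 0.2, 0.3])      # hypothetical SME weights
print(sme_weighted_vote(votes, weights)) # -> [1 1 0]
```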
2402.12408 | 2024-02-18T11:24:34Z | ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation | [
"Zihao Tang",
"Zheqi Lv",
"Shengyu Zhang",
"Fei Wu",
"Kun Kuang"
] | The rapid advancement of Large Language Models (LLMs) has revolutionized
various sectors by automating routine tasks, marking a step toward the
realization of Artificial General Intelligence (AGI). However, they still
struggle to accommodate the diverse and specific needs of users and simplify
the utilization of AI models for the average user. In response, we propose
ModelGPT, a novel framework designed to determine and generate AI models
specifically tailored to the data or task descriptions provided by the user,
leveraging the capabilities of LLMs. Given user requirements, ModelGPT is able
to provide tailored models up to 270x faster than the previous paradigms
(e.g. all-parameter or LoRA finetuning). Comprehensive experiments on NLP, CV,
and Tabular datasets attest to the effectiveness of our framework in making AI
models more accessible and user-friendly. Our code is available at
https://github.com/IshiKura-a/ModelGPT. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2402.14834 | 2024-02-18T05:40:33Z | MSynFD: Multi-hop Syntax aware Fake News Detection | [
"Liang Xiao",
"Qi Zhang",
"Chongyang Shi",
"Shoujin Wang",
"Usman Naseem",
"Liang Hu"
] | The proliferation of social media platforms has fueled the rapid
dissemination of fake news, posing threats to real-world society. Existing
methods use multimodal data or contextual information to enhance the detection
of fake news by analyzing news content and/or its social context. However,
these methods often overlook essential textual news content (articles) and
heavily rely on sequential modeling and global attention to extract semantic
information. These existing methods fail to handle the complex, subtle twists
in news articles, such as syntax-semantics mismatches and prior biases, leading
to lower performance and potential failure when modalities or social context
are missing. To bridge these significant gaps, we propose a novel multi-hop
syntax aware fake news detection (MSynFD) method, which incorporates
complementary syntax information to deal with subtle twists in fake news.
Specifically, we introduce a syntactical dependency graph and design a
multi-hop subgraph aggregation mechanism to capture multi-hop syntax. It
extends the effect of word perception, leading to effective noise filtering and
adjacent relation enhancement. Subsequently, a sequential relative
position-aware Transformer is designed to capture the sequential information,
together with an elaborate keyword debiasing module to mitigate the prior bias.
Extensive experimental results on two public benchmark datasets verify the
effectiveness and superior performance of our proposed MSynFD over
state-of-the-art detection models. | [
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2402.14835 | 2024-02-18T07:15:03Z | MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge
Editing | [
"Jiaqi Li",
"Miaozeng Du",
"Chuanyi Zhang",
"Yongrui Chen",
"Nan Hu",
"Guilin Qi",
"Haiyun Jiang",
"Siyuan Cheng",
"Bozhong Tian"
] | Multimodal knowledge editing represents a critical advancement in enhancing
the capabilities of Multimodal Large Language Models (MLLMs). Despite its
potential, current benchmarks predominantly focus on coarse-grained knowledge,
leaving the intricacies of fine-grained (FG) multimodal entity knowledge
largely unexplored. This gap presents a notable challenge, as FG entity
recognition is pivotal for the practical deployment and effectiveness of MLLMs
in diverse real-world scenarios. To bridge this gap, we introduce MIKE, a
comprehensive benchmark and dataset specifically designed for the FG multimodal
entity knowledge editing. MIKE encompasses a suite of tasks tailored to assess
different perspectives, including Vanilla Name Answering, Entity-Level Caption,
and Complex-Scenario Recognition. In addition, a new form of knowledge editing,
Multi-step Editing, is introduced to evaluate the editing efficiency. Through
our extensive evaluations, we demonstrate that the current state-of-the-art
methods face significant challenges in tackling our proposed benchmark,
underscoring the complexity of FG knowledge editing in MLLMs. Our findings
spotlight the urgent need for novel approaches in this domain, setting a clear
agenda for future research and development efforts within the community. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2402.14836 | 2024-02-18T16:51:02Z | Stealthy Attack on Large Language Model based Recommendation | [
"Jinghao Zhang",
"Yuting Liu",
"Qiang Liu",
"Shu Wu",
"Guibing Guo",
"Liang Wang"
] | Recently, powerful large language models (LLMs) have been instrumental in
propelling the progress of recommender systems (RS). However, while these
systems have flourished, their susceptibility to security threats has been
largely overlooked. In this work, we reveal that the introduction of LLMs into
recommendation models presents new security vulnerabilities due to their
emphasis on the textual content of items. We demonstrate that attackers can
significantly boost an item's exposure by merely altering its textual content
during the testing phase, without requiring direct interference with the
model's training process. Additionally, the attack is notably stealthy, as it
does not affect the overall recommendation performance and the modifications to
the text are subtle, making it difficult for users and platforms to detect. Our
comprehensive experiments across four mainstream LLM-based recommendation
models demonstrate the superior efficacy and stealthiness of our approach. Our
work unveils a significant security gap in LLM-based recommendation systems and
paves the way for future research on protecting these systems. | [
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2402.14837 | 2024-02-18T23:03:56Z | An Empirical Categorization of Prompting Techniques for Large Language
Models: A Practitioner's Guide | [
"Oluwole Fagbohun",
"Rachel M. Harrison",
"Anton Dereventsov"
] | Due to rapid advancements in the development of Large Language Models (LLMs),
programming these models with prompts has recently gained significant
attention. However, the sheer number of available prompt engineering techniques
creates an overwhelming landscape for practitioners looking to utilize these
tools. For the most efficient and effective use of LLMs, it is important to
compile a comprehensive list of prompting techniques and establish a
standardized, interdisciplinary categorization framework. In this survey, we
examine some of the most well-known prompting techniques from both academic and
practical viewpoints and classify them into seven distinct categories. We
present an overview of each category, aiming to clarify their unique
contributions and showcase their practical applications in real-world examples
in order to equip fellow practitioners with a structured framework for
understanding and categorizing prompting techniques tailored to their specific
domains. We believe that this approach will help simplify the complex landscape
of prompt engineering and enable more effective utilization of LLMs in various
applications. By providing practitioners with a systematic approach to prompt
categorization, we aim to assist in navigating the intricacies of effective
prompt design for conversational pre-trained LLMs and inspire new possibilities
in their respective fields. | [
"cs.CL",
"cs.AI",
"cs.HC",
"cs.LG"
] | false |
2402.16880 | 2024-02-18T12:44:15Z | BESA: Pruning Large Language Models with Blockwise Parameter-Efficient
Sparsity Allocation | [
"Peng Xu",
"Wenqi Shao",
"Mengzhao Chen",
"Shitao Tang",
"Kaipeng Zhang",
"Peng Gao",
"Fengwei An",
"Yu Qiao",
"Ping Luo"
] | Large language models (LLMs) have demonstrated outstanding performance in
various tasks, such as text summarization and question answering.
While their performance is impressive, the computational footprint due to their
vast number of parameters can be prohibitive. Existing solutions such as
SparseGPT and Wanda attempt to alleviate this issue through weight pruning.
However, their layer-wise approach results in significant perturbation to the
model's output and requires meticulous hyperparameter tuning, such as the
pruning rate, which can adversely affect overall model performance. To address
this, this paper introduces a novel LLM pruning technique dubbed blockwise
parameter-efficient sparsity allocation (BESA) by applying a blockwise
reconstruction loss. In contrast to the typical layer-wise pruning techniques,
BESA is characterized by two distinctive attributes: i) it targets the overall
pruning error with respect to individual transformer blocks, and ii) it
allocates layer-specific sparsity in a differentiable manner, both of which
ensure reduced performance degradation after pruning. Our experiments show that
BESA achieves state-of-the-art performance, efficiently pruning LLMs like
LLaMA1 and LLaMA2 with 7B to 70B parameters on a single A100 GPU in just five
hours. Code is available at
\href{https://github.com/OpenGVLab/LLMPrune-BESA}{here}. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
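The blockwise reconstruction idea above lends itself to a small sketch. The following is a greedy, magnitude-pruning stand-in for BESA, not the paper's differentiable allocation: per-layer sparsities inside one block are raised step by step, always choosing the layer whose pruning perturbs the block output least. All names and the 0.1 step size are illustrative assumptions.

```python
# Greedy blockwise sparsity allocation (illustrative stand-in for BESA).
import copy
import torch
import torch.nn as nn

def magnitude_prune_(linear: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of a linear layer in place."""
    w = linear.weight.data
    k = int(sparsity * w.numel())
    if k == 0:
        return
    thresh = w.abs().flatten().kthvalue(k).values
    w[w.abs() <= thresh] = 0.0

def allocate_block_sparsity(block: nn.Module, x: torch.Tensor,
                            target: float = 0.5, step: float = 0.1):
    """Raise per-layer sparsity until the block-average target is reached,
    each time pruning the layer that hurts the block reconstruction least."""
    with torch.no_grad():
        dense_out = block(x)                       # reference (dense) output
    layers = [m for m in block.modules() if isinstance(m, nn.Linear)]
    rates = {id(m): 0.0 for m in layers}
    while sum(rates.values()) / len(layers) < target:
        best = None
        for m in layers:
            trial = copy.deepcopy(block)           # try pruning layer m further
            for mt, mo in zip(trial.modules(), block.modules()):
                if mo is m:
                    magnitude_prune_(mt, rates[id(m)] + step)
            with torch.no_grad():
                loss = (trial(x) - dense_out).pow(2).mean().item()
            if best is None or loss < best[0]:
                best = (loss, m)
        rates[id(best[1])] += step
        magnitude_prune_(best[1], rates[id(best[1])])
    return [rates[id(m)] for m in layers]          # per-layer sparsities

block = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))
print(allocate_block_sparsity(block, torch.randn(32, 16)))
```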
2403.00782 | 2024-02-18T10:28:18Z | Ploutos: Towards interpretable stock movement prediction with financial
large language model | [
"Hanshuang Tong",
"Jun Li",
"Ning Wu",
"Ming Gong",
"Dongmei Zhang",
"Qi Zhang"
] | Recent advancements in large language models (LLMs) have opened new pathways
for many domains. However, the full potential of LLMs in financial investments
remains largely untapped. There are two main challenges for typical deep
learning-based methods for quantitative finance. First, they struggle to fuse
textual and numerical information flexibly for stock movement prediction.
Second, traditional methods lack clarity and interpretability, which impedes
their application in scenarios where the justification for predictions is
essential. To solve the above challenges, we propose Ploutos, a novel financial
LLM framework that consists of PloutosGen and PloutosGPT. The PloutosGen
contains multiple primary experts that can analyze different modal data, such
as text and numbers, and provide quantitative strategies from different
perspectives. Then PloutosGPT combines their insights and predictions and
generates interpretable rationales. To generate accurate and faithful
rationales, the training strategy of PloutosGPT leverages a rearview-mirror
prompting mechanism to guide GPT-4 to generate rationales and a dynamic token
weighting mechanism to finetune the LLM by increasing the weight of key tokens. Extensive
experiments show our framework outperforms the state-of-the-art methods on both
prediction accuracy and interpretability. | [
"q-fin.ST",
"cs.AI",
"cs.CL"
] | false |
2403.00784 | 2024-02-18T23:22:40Z | Utilizing BERT for Information Retrieval: Survey, Applications,
Resources, and Challenges | [
"Jiajia Wang",
"Jimmy X. Huang",
"Xinhui Tu",
"Junmei Wang",
"Angela J. Huang",
"Md Tahmid Rahman Laskar",
"Amran Bhuiyan"
] | Recent years have witnessed a substantial increase in the use of deep
learning to solve various natural language processing (NLP) problems. Early
deep learning models were constrained by their sequential or unidirectional
nature, such that they struggled to capture the contextual relationships across
text inputs. The introduction of bidirectional encoder representations from
transformers (BERT) leads to a robust encoder for the transformer model that
can understand the broader context and deliver state-of-the-art performance
across various NLP tasks. This has inspired researchers and practitioners to
apply BERT to practical problems, such as information retrieval (IR). A survey
that focuses on a comprehensive analysis of prevalent approaches that apply
pretrained transformer encoders like BERT to IR can thus be useful for academia
and the industry. In light of this, we revisit a variety of BERT-based methods
in this survey, cover a wide range of techniques of IR, and group them into six
high-level categories: (i) handling long documents, (ii) integrating semantic
information, (iii) balancing effectiveness and efficiency, (iv) predicting the
weights of terms, (v) query expansion, and (vi) document expansion. We also
provide links to resources, including datasets and toolkits, for BERT-based IR
systems. A key highlight of our survey is the comparison between BERT's
encoder-based models and the latest generative Large Language Models (LLMs),
such as ChatGPT, which rely on decoders. Despite the popularity of LLMs, we
find that for specific tasks, fine-tuned BERT encoders still outperform them,
at a lower deployment cost. Finally, we summarize the comprehensive outcomes of
the survey and suggest directions for future research in the area. | [
"cs.IR",
"cs.AI",
"cs.CL"
] | false |
2403.15399 | 2024-02-18T07:35:01Z | ChatGPT in Linear Algebra: Strides Forward, Steps to Go | [
"Eli Bagno",
"Thierry Dana-Picard",
"Shulamit Reches"
] | As soon as a new technology emerges, the education community explores its
affordances and the possibilities of applying it in education. In this paper, we
analyze sessions with ChatGPT around topics in basic Linear Algebra. We reflect
on the progress made by ChatGPT over the past year in our area of interest,
emphasising the vast improvement in its handling of Linear Algebra problems. In
particular, we address the question of whether this software can serve as a
teaching assistant or even somehow replace the human teacher. As of the time
this paper is written, the answer is generally negative. For the small part
where the answer can be positive, some reflections about an original
instrumental genesis are given.
Communication with the software gives the impression of talking to a human, and
sometimes the question is whether the software understands the question or not.
Therefore, the reader's attention is drawn to the fact that ChatGPT works on a
statistical basis and not according to reflection and understanding. | [
"cs.CY",
"cs.CL",
"cs.LG"
] | false |
2402.11656 | 2024-02-18T17:27:51Z | Integrating Pre-Trained Language Model with Physical Layer
Communications | [
"Ju-Hyung Lee",
"Dong-Ho Lee",
"Joohan Lee",
"Jay Pujara"
] | The burgeoning field of on-device AI communication, where devices exchange
information directly through embedded foundation models, such as language
models (LMs), requires robust, efficient, and generalizable communication
frameworks. However, integrating these frameworks with existing wireless
systems and effectively managing noise and bit errors pose significant
challenges. In this work, we introduce a practical on-device AI communication
framework, integrated with physical layer (PHY) communication functions,
demonstrated through its performance on a link-level simulator. Our framework
incorporates end-to-end training with channel noise to enhance resilience,
incorporates vector quantized variational autoencoders (VQ-VAE) for efficient
and robust communication, and utilizes pre-trained encoder-decoder transformers
for improved generalization capabilities. Simulations, across various
communication scenarios, reveal that our framework achieves a 50% reduction in
transmission size while demonstrating substantial generalization ability and
noise robustness under standardized 3GPP channel models. | [
"cs.IT",
"cs.CL",
"cs.LG",
"eess.SP",
"math.IT"
] | false |
2402.11604 | 2024-02-18T14:42:47Z | Self-evolving Autoencoder Embedded Q-Network | [
"J. Senthilnath",
"Bangjian Zhou",
"Zhen Wei Ng",
"Deeksha Aggarwal",
"Rajdeep Dutta",
"Ji Wei Yoon",
"Aye Phyu Phyu Aung",
"Keyu Wu",
"Min Wu",
"Xiaoli Li"
] | In the realm of sequential decision-making tasks, the exploration capability
of a reinforcement learning (RL) agent is paramount for achieving high rewards
through interactions with the environment. To enhance this crucial ability, we
propose SAQN, a novel approach wherein a self-evolving autoencoder (SA) is
embedded with a Q-Network (QN). In SAQN, the self-evolving autoencoder
architecture adapts and evolves as the agent explores the environment. This
evolution enables the autoencoder to capture a diverse range of raw
observations and represent them effectively in its latent space. By leveraging
the disentangled states extracted from the encoder-generated latent space, the
QN is trained to determine optimal actions that improve rewards. During the
evolution of the autoencoder architecture, a bias-variance regulatory strategy
is employed to elicit the optimal response from the RL agent. This strategy
involves two key components: (i) fostering the growth of nodes to retain
previously acquired knowledge, ensuring a rich representation of the
environment, and (ii) pruning the least contributing nodes to maintain a more
manageable and tractable latent space. Extensive experimental evaluations
conducted on three distinct benchmark environments and a real-world molecular
environment demonstrate that the proposed SAQN significantly outperforms
state-of-the-art counterparts. The results highlight the effectiveness of the
self-evolving autoencoder and its collaboration with the Q-Network in tackling
sequential decision-making tasks. | [
"cs.LG"
] | false |
2402.11722 | 2024-02-18T22:16:43Z | Invertible Fourier Neural Operators for Tackling Both Forward and
Inverse Problems | [
"Da Long",
"Shandian Zhe"
] | Fourier Neural Operator (FNO) is a popular operator learning method, which
has demonstrated state-of-the-art performance across many tasks. However, FNO
is mainly used in forward prediction, yet a large family of applications rely
on solving inverse problems. In this paper, we propose an invertible Fourier
Neural Operator (iFNO) that tackles both the forward and inverse problems. We
designed a series of invertible Fourier blocks in the latent channel space to
share the model parameters, efficiently exchange the information, and mutually
regularize the learning for the bi-directional tasks. We integrated a
variational auto-encoder to capture the intrinsic structures within the input
space and to enable posterior inference so as to overcome challenges of
ill-posedness, data shortage, noise, etc. We developed a three-step process of
pre-training and fine-tuning for efficient training. The evaluations on five
benchmark problems have demonstrated the effectiveness of our approach. | [
"cs.LG"
] | false |
2402.11740 | 2024-02-18T23:54:35Z | Extraction of nonlinearity in neural networks and model compression with
Koopman operator | [
"Naoki Sugishita",
"Kayo Kinjo",
"Jun Ohkubo"
] | Nonlinearity plays a crucial role in deep neural networks. In this paper, we
first investigate the degree to which the nonlinearity of the neural network is
essential. For this purpose, we employ the Koopman operator, extended dynamic
mode decomposition, and the tensor-train format. The results imply that
restricted nonlinearity is enough for the classification of handwritten
numbers. Then, we propose a model compression method for deep neural networks,
which could be beneficial to handling large networks in resource-constrained
environments. Leveraging the Koopman operator, the proposed method enables us
to use linear algebra in the internal processing of neural networks. We
numerically show that the proposed method performs comparably or better than
conventional methods in highly compressed model settings for the handwritten
number recognition task. | [
"cs.LG"
] | false |
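For readers unfamiliar with the Koopman machinery the abstract builds on, here is a minimal extended dynamic mode decomposition (EDMD) sketch; the monomial dictionary and logistic-map data are toy assumptions, and the paper's tensor-train component is not shown.

```python
# Minimal EDMD: fit a finite Koopman matrix K with Psi(X) K ≈ Psi(Y).
import numpy as np

def edmd(X, Y, dictionary):
    PX, PY = dictionary(X), dictionary(Y)        # lifted snapshots
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)  # least-squares Koopman matrix
    return K

# Monomial dictionary up to degree 2 for a scalar state.
dictionary = lambda x: np.hstack([np.ones_like(x), x, x**2])

# One-step pairs (x, x') from the logistic map x' = 3.7 x (1 - x).
x = np.random.default_rng(0).random((500, 1))
y = 3.7 * x * (1 - x)
K = edmd(x, y, dictionary)

# Predict by lifting, applying K, and reading off the linear coordinate.
x0 = np.array([[0.3]])
print((dictionary(x0) @ K)[:, 1], 3.7 * 0.3 * 0.7)   # both ≈ 0.777
```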
2402.11494 | 2024-02-18T07:49:22Z | Graph Out-of-Distribution Generalization via Causal Intervention | [
"Qitian Wu",
"Fan Nie",
"Chenxiao Yang",
"Tianyi Bao",
"Junchi Yan"
] | Out-of-distribution (OOD) generalization has gained increasing attention for
learning on graphs, as graph neural networks (GNNs) often exhibit performance
degradation with distribution shifts. The challenge is that distribution shifts
on graphs involve intricate interconnections between nodes, and the environment
labels are often absent in data. In this paper, we adopt a bottom-up
data-generative perspective and reveal a key observation through causal
analysis: the crux of GNNs' failure in OOD generalization lies in the latent
confounding bias from the environment. The latter misguides the model to
leverage environment-sensitive correlations between ego-graph features and
target nodes' labels, resulting in undesirable generalization on new unseen
nodes. Built upon this analysis, we introduce a conceptually simple yet
principled approach for training robust GNNs under node-level distribution
shifts, without prior knowledge of environment labels. Our method resorts to a
new learning objective derived from causal inference that coordinates an
environment estimator and a mixture-of-expert GNN predictor. The new approach
can counteract the confounding bias in training data and facilitate learning
generalizable predictive relations. Extensive experiments demonstrate that our
model can effectively enhance generalization with various types of distribution
shifts and yield up to 27.4\% accuracy improvement over state-of-the-art methods on
graph OOD generalization benchmarks. Source codes are available at
https://github.com/fannie1208/CaNet. | [
"cs.LG",
"cs.SI"
] | false |
2402.11495 | 2024-02-18T07:51:20Z | URLBERT:A Contrastive and Adversarial Pre-trained Model for URL
Classification | [
"Yujie Li",
"Yanbin Wang",
"Haitao Xu",
"Zhenhao Guo",
"Zheng Cao",
"Lun Zhang"
] | URLs play a crucial role in understanding and categorizing web content,
particularly in tasks related to security control and online recommendations.
While pre-trained models are currently dominating various fields, the domain of
URL analysis still lacks specialized pre-trained models. To address this gap,
this paper introduces URLBERT, the first pre-trained representation learning
model applied to a variety of URL classification or detection tasks. We first
train a URL tokenizer on a corpus of billions of URLs to address URL data
tokenization. Additionally, we propose two novel pre-training tasks: (1)
self-supervised contrastive learning tasks, which strengthen the model's
understanding of URL structure and the capture of category differences by
distinguishing different variants of the same URL; (2) virtual adversarial
training, aimed at improving the model's robustness in extracting semantic
features from URLs. Finally, our proposed methods are evaluated on tasks
including phishing URL detection, web page classification, and ad filtering,
achieving state-of-the-art performance. Importantly, we also explore multi-task
learning with URLBERT, and experimental results demonstrate that multi-task
learning models based on URLBERT exhibit effectiveness equivalent to
independently fine-tuned models, showing the simplicity of URLBERT in handling
complex task requirements. The code for our work is available at
https://github.com/Davidup1/URLBERT. | [
"cs.CR",
"cs.LG"
] | false |
2402.11538 | 2024-02-18T10:38:34Z | PASCL: Supervised Contrastive Learning with Perturbative Augmentation
for Particle Decay Reconstruction | [
"Junjian Lu",
"Siwei Liu",
"Dmitrii Kobylianski",
"Etienne Dreyer",
"Eilam Gross",
"Shangsong Liang"
] | In high-energy physics, particles produced in collision events decay in
the form of a hierarchical tree structure, where only the final decay products
can be observed using detectors. However, the large combinatorial space of
possible tree structures makes it challenging to recover the actual decay
process given a set of final particles. To better analyse the hierarchical tree
structure, we propose a graph-based deep learning model to infer the tree
structure to reconstruct collision events. In particular, we use a compact
matrix representation termed the lowest common ancestor generations (LCAG)
matrix, to encode the particle decay tree structure. Then, we introduce a
perturbative augmentation technique applied to node features, aiming to mimic
experimental uncertainties and increase data diversity. We further propose a
supervised graph contrastive learning algorithm to utilize the information of
inter-particle relations from multiple decay processes. Extensive experiments
show that our proposed supervised graph contrastive learning with perturbative
augmentation (PASCL) method outperforms state-of-the-art baseline models on an
existing physics-based dataset, significantly improving the reconstruction
accuracy. This method provides a more effective training strategy for models
with the same parameters and makes way for more accurate and efficient
high-energy particle physics data analysis. | [
"hep-ph",
"cs.LG"
] | false |
2402.11565 | 2024-02-18T12:24:45Z | Continual Learning on Graphs: Challenges, Solutions, and Opportunities | [
"Xikun Zhang",
"Dongjin Song",
"Dacheng Tao"
] | Continual learning on graph data has recently attracted paramount attention
for its aim to resolve the catastrophic forgetting problem on existing tasks
while adapting the sequentially updated model to newly emerged graph tasks.
While there have been efforts to summarize progress on continual learning
research over Euclidean data, e.g., images and texts, a systematic review of
progress in continual learning on graphs, a.k.a. continual graph learning (CGL)
or lifelong graph learning, is still in demand. Graph data are far more complex
in terms of data structures and application scenarios, making CGL task
settings, model designs, and applications extremely challenging. To bridge the
gap, we provide a comprehensive review of existing continual graph learning
(CGL) algorithms by elucidating the different task settings and categorizing
the existing methods based on their characteristics. We compare the CGL methods
with traditional continual learning techniques and analyze the applicability of
the traditional continual learning techniques to CGL tasks. Additionally, we
review the benchmark works that are crucial to CGL research. Finally, we
discuss the remaining challenges and propose several future directions. We will
maintain an up-to-date GitHub repository featuring a comprehensive list of CGL
algorithms, accessible at
https://github.com/UConn-DSIS/Survey-of-Continual-Learning-on-Graphs. | [
"cs.LG",
"cs.AI"
] | false |
2402.11594 | 2024-02-18T14:12:15Z | Simplifying Hyperparameter Tuning in Online Machine Learning -- The
spotRiverGUI | [
"Thomas Bartz-Beielstein"
] | Batch Machine Learning (BML) reaches its limits when dealing with very large
amounts of streaming data. This is especially true for available memory,
handling drift in data streams, and processing new, unknown data. Online
Machine Learning (OML) is an alternative to BML that overcomes the limitations
of BML. OML is able to process data in a sequential manner, which is especially
useful for data streams. The `river` package is a Python OML-library, which
provides a variety of online learning algorithms for classification,
regression, clustering, anomaly detection, and more. The `spotRiver` package
provides a framework for hyperparameter tuning of OML models. The
`spotRiverGUI` is a graphical user interface for the `spotRiver` package. The
`spotRiverGUI` releases the user from the burden of manually searching for the
optimal hyperparameter setting. After the data is provided, users can compare
different OML algorithms from the powerful `river` package in a convenient way
and tune the selected algorithms very efficiently. | [
"cs.LG",
"cs.AI",
"90C26",
"I.2.6; G.1.6"
] | false |
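To make the online-learning setting concrete, a minimal prequential loop with the `river` package looks as follows; the dataset and model are illustrative choices, and the `spotRiver`/`spotRiverGUI` hyperparameter-tuning layer on top is not shown.

```python
# Prequential (test-then-train) evaluation of an online model with river.
from river import datasets, linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.Accuracy()

for x, y in datasets.Phishing():      # a small built-in binary stream
    y_pred = model.predict_one(x)     # predict before seeing the label
    metric.update(y, y_pred)
    model.learn_one(x, y)             # then update on the single instance

print(metric)
```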
2402.11654 | 2024-02-18T17:17:17Z | Model-Free $μ$-Synthesis: A Nonsmooth Optimization Perspective | [
"Darioush Keivan",
"Xingang Guo",
"Peter Seiler",
"Geir Dullerud",
"Bin Hu"
] | In this paper, we revisit model-free policy search on an important robust
control benchmark, namely $\mu$-synthesis. In the general output-feedback
setting, there do not exist convex formulations for this problem, and hence
global optimality guarantees are not expected. Apkarian (2011) presented a
nonconvex nonsmooth policy optimization approach for this problem, and achieved
state-of-the-art design results using subgradient-based policy search
algorithms which generate update directions in a model-based manner. Despite
the lack of convexity and global optimality guarantees, these subgradient-based
policy search methods have led to impressive numerical results in practice.
Built upon such a policy optimization perspective, our paper extends these
subgradient-based search methods to a model-free setting. Specifically, we
examine the effectiveness of two model-free policy optimization strategies: the
model-free non-derivative sampling method and the zeroth-order policy search
with uniform smoothing. We performed an extensive numerical study to
demonstrate that both methods consistently replicate the design outcomes
achieved by their model-based counterparts. Additionally, we provide some
theoretical justifications showing that convergence guarantees to stationary
points can be established for our model-free $\mu$-synthesis under some
assumptions related to the coerciveness of the cost function. Overall, our
results demonstrate that derivative-free policy optimization offers a
competitive and viable approach for solving general output-feedback
$\mu$-synthesis problems in the model-free setting. | [
"math.OC",
"cs.LG"
] | false |
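A minimal sketch of the second strategy, zeroth-order policy search with smoothing, is given below; the nonsmooth toy cost stands in for the $\mu$-synthesis objective, and all constants are assumptions.

```python
# Two-point zeroth-order gradient estimate of a smoothed cost.
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, delta=1e-2, n_dirs=32):
    d, g = x.size, np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)        # direction uniform on the sphere
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * d * u
    return g / n_dirs

# Nonsmooth toy cost standing in for the mu-synthesis objective.
f = lambda k: np.max(np.abs(k - np.array([1.0, -2.0, 0.5])))

k = np.zeros(3)
for _ in range(500):
    k -= 0.01 * zo_gradient(f, k)     # plain gradient step on the estimate
print(k)                              # moves toward [1, -2, 0.5]
```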
2402.11664 | 2024-02-18T17:55:59Z | Interpretable Short-Term Load Forecasting via Multi-Scale Temporal
Decomposition | [
"Yuqi Jiang",
"Yan Li",
"Yize Chen"
] | Rapid progress in machine learning and deep learning has enabled a wide range
of applications in the electricity load forecasting of power systems, for
instance, univariate and multivariate short-term load forecasting. Though the
strong capabilities of learning the non-linearity of the load patterns and the
high prediction accuracy have been achieved, the interpretability of typical
deep learning models for electricity load forecasting is less studied. This
paper proposes an interpretable deep learning method, which learns a linear
combination of neural networks that each attends to an input time feature. We
also propose a multi-scale time series decomposition method to deal with the
complex time patterns. Case studies have been carried out on the Belgian
central grid load dataset and the proposed model demonstrated better accuracy
compared to the frequently applied baseline model. Specifically, the proposed
multi-scale temporal decomposition achieves the best MSE, MAE and RMSE of 0.52,
0.57 and 0.72 respectively. As for interpretability, the proposed method on
the one hand displays generalization capability, and on the other demonstrates
both feature and temporal interpretability compared to other baseline methods.
Besides, global time-feature interpretabilities are also obtained, which allows
us to capture the overall patterns, trends, and
cyclicality in load data while also revealing the significance of various
time-related features in forming the final outputs. | [
"cs.LG",
"eess.SP"
] | false |
2402.11737 | 2024-02-18T23:41:38Z | Compression Repair for Feedforward Neural Networks Based on Model
Equivalence Evaluation | [
"Zihao Mo",
"Yejiang Yang",
"Shuaizheng Lu",
"Weiming Xiang"
] | In this paper, we propose a method of repairing compressed Feedforward Neural
Networks (FNNs) based on equivalence evaluation of two neural networks. In the
repairing framework, a novel neural network equivalence evaluation method is
developed to compute the output discrepancy between two neural networks. The
output discrepancy can quantitatively characterize the output difference
produced by compression procedures. Based on the computed output discrepancy,
the repairing method first initializes a new training set for the compressed
networks to narrow down the discrepancy between the two neural networks and
improve the performance of the compressed network. Then, we repair the
compressed FNN by re-training based on the training set. We apply our developed
method to the MNIST dataset to demonstrate the effectiveness and advantages of
our proposed repair method. | [
"cs.LG",
"cs.AI"
] | false |
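The repair loop described above can be sketched in a few lines: sample an input pool, score the output discrepancy between the two networks, and re-train the compressed one on the worst inputs with the original's outputs as targets. The models, pool, and hyperparameters here are placeholders, and the sampled discrepancy is only a stand-in for the paper's formal equivalence evaluation.

```python
# Sketch of compression repair via output-discrepancy-guided re-training.
import torch
import torch.nn.functional as F

def output_discrepancy(f_orig, f_comp, x):
    with torch.no_grad():
        return (f_orig(x) - f_comp(x)).norm(dim=-1)     # per-sample gap

def repair(f_orig, f_comp, x_pool, epochs=20, top_frac=0.2, lr=1e-3):
    gap = output_discrepancy(f_orig, f_comp, x_pool)
    idx = gap.topk(max(1, int(top_frac * len(x_pool)))).indices
    x_train = x_pool[idx]                    # inputs with the largest gap
    with torch.no_grad():
        y_train = f_orig(x_train)            # original outputs as targets
    opt = torch.optim.Adam(f_comp.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.mse_loss(f_comp(x_train), y_train).backward()
        opt.step()
    return f_comp
```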
2402.11410 | 2024-02-18T00:53:05Z | An Elementary Predictor Obtaining $2\sqrt{T}$ Distance to Calibration | [
"Eshwar Ram Arunachaleswaran",
"Natalie Collina",
"Aaron Roth",
"Mirah Shi"
] | Blasiok et al. [2023] proposed distance to calibration as a natural measure
of calibration error that unlike expected calibration error (ECE) is
continuous. Recently, Qiao and Zheng [2024] gave a non-constructive argument
establishing the existence of an online predictor that can obtain $O(\sqrt{T})$
distance to calibration in the adversarial setting, which is known to be
impossible for ECE. They leave as an open problem finding an explicit,
efficient algorithm. We resolve this problem and give an extremely simple,
efficient, deterministic algorithm that obtains distance to calibration error
at most $2\sqrt{T}$. | [
"cs.LG",
"cs.DS",
"stat.ML"
] | false |
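For context on the quantity being improved upon: the discontinuous baseline, binned ECE, is computed as below. This is only the measure the abstract contrasts against; the paper's deterministic $2\sqrt{T}$ predictor itself is not reproduced here, and the bin count is an assumption.

```python
# Binned expected calibration error (ECE) of forecasts p against outcomes y.
import numpy as np

def ece(p, y, n_bins=10):
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():       # bin mass times |avg outcome - avg forecast|
            total += mask.mean() * abs(y[mask].mean() - p[mask].mean())
    return total

p = np.array([0.1, 0.1, 0.9, 0.9])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(ece(p, y))             # 0.1: forecasts are slightly too hedged
```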
2402.11427 | 2024-02-18T02:19:02Z | OptEx: Expediting First-Order Optimization with Approximately
Parallelized Iterations | [
"Yao Shu",
"Jiongfeng Fang",
"Ying Tiffany He",
"Fei Richard Yu"
] | First-order optimization (FOO) algorithms are pivotal in numerous
computational domains such as machine learning and signal denoising. However,
their application to complex tasks like neural network training often entails
significant inefficiencies due to the need for many sequential iterations for
convergence. In response, we introduce first-order optimization expedited with
approximately parallelized iterations (OptEx), the first framework that
enhances the efficiency of FOO by leveraging parallel computing to mitigate its
iterative bottleneck. OptEx employs kernelized gradient estimation to make use
of gradient history for future gradient prediction, enabling parallelization of
iterations -- a strategy once considered impractical because of the inherent
iterative dependency in FOO. We provide theoretical guarantees for the
reliability of our kernelized gradient estimation and the iteration complexity
of SGD-based OptEx, confirming that estimation errors diminish to zero as
historical gradients accumulate and that SGD-based OptEx enjoys an effective
acceleration rate of $\Omega(\sqrt{N})$ over standard SGD given a parallelism
of $N$. We also use extensive empirical studies, including synthetic functions,
reinforcement learning tasks, and neural network training across various
datasets, to underscore the substantial efficiency improvements achieved by
OptEx. | [
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
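The kernelized gradient estimation at the core of OptEx can be sketched as kernel ridge regression from past (iterate, gradient) pairs; the RBF kernel, ridge term, and quadratic toy problem below are assumptions rather than the paper's exact construction.

```python
# Predict the gradient at a new iterate from gradient history via kernel ridge.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def predict_gradient(X_hist, G_hist, x_new, lam=1e-6):
    K = rbf(X_hist, X_hist)
    k = rbf(x_new[None, :], X_hist)                       # (1, t)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_hist)), G_hist)
    return (k @ alpha)[0]                                 # predicted gradient

# Toy problem: grad f(x) = A x, so the history is easy to generate.
A = np.diag([1.0, 4.0])
X = np.random.default_rng(0).standard_normal((30, 2))     # past iterates
G = X @ A                                                 # past gradients
x_new = np.array([0.3, -0.2])
print(predict_gradient(X, G, x_new), A @ x_new)           # approximately equal
```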
2402.11433 | 2024-02-18T02:55:19Z | Improved Indoor Localization with Machine Learning Techniques for IoT
applications | [
"M. W. P. Maduranga"
] | The rise of the Internet of Things (IoT) and mobile internet applications has
spurred interest in location-based services (LBS) for commercial, military, and
social applications. While the global positioning system (GPS) dominates
outdoor localization, its efficacy wanes indoors due to signal challenges.
Indoor localization systems leverage wireless technologies like Wi-Fi, ZigBee,
Bluetooth, and UWB, selected based on context. Received signal strength indicator
(RSSI) technology, known for its accuracy and simplicity, is widely adopted.
This study employs machine learning algorithms in three phases: supervised
regressors, supervised classifiers, and ensemble methods for RSSI-based indoor
localization. Additionally, it introduces a weighted least squares technique
and pseudo-linear solution approach to address non-linear RSSI measurement
equations by approximating them with linear equations. An experimental testbed,
utilizing diverse wireless technologies and anchor nodes, is designed for data
collection, employing IoT cloud architectures. Pre-processing involves
investigating filters for data refinement before algorithm training. The study
employs machine learning models like linear regression, polynomial regression,
support vector regression, random forest regression, and decision tree
regressor across various wireless technologies. These models estimate the
geographical coordinates of a moving target node, and their performance is
evaluated using metrics such as accuracy, root mean square errors, precision,
recall, sensitivity, coefficient of determination, and the F1-score. The
experiment's outcomes provide insights into the effectiveness of different
supervised machine learning techniques in terms of localization accuracy and
robustness in indoor environments. | [
"cs.LG",
"cs.NI",
"eess.SP"
] | false |
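The regression phase of such a pipeline is straightforward to sketch with scikit-learn; the synthetic log-distance path-loss data below stands in for the paper's testbed measurements, and random forest is just one of the regressors the study compares.

```python
# Predict (x, y) target coordinates from RSSI readings at four anchors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)

pos = rng.uniform(0, 10, size=(2000, 2))                 # true positions (m)
d = np.linalg.norm(pos[:, None, :] - anchors[None], axis=-1)
rssi = -40 - 20 * np.log10(d + 0.1) + rng.normal(0, 2, d.shape)  # dBm + noise

X_tr, X_te, y_tr, y_te = train_test_split(rssi, pos, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"localization RMSE: {rmse:.2f} m")
```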
2402.11463 | 2024-02-18T05:35:01Z | Attractor Memory for Long-Term Time Series Forecasting: A Chaos
Perspective | [
"Jiaxi Hu",
"Yuehong Hu",
"Wei Chen",
"Ming Jin",
"Shirui Pan",
"Qingsong Wen",
"Yuxuan Liang"
] | In long-term time series forecasting (LTSF) tasks, existing deep learning
models overlook the crucial characteristic that discrete time series originate
from underlying continuous dynamic systems, resulting in a lack of
extrapolation and evolution capabilities. Recognizing the chaotic nature of
real-world data, our model, \textbf{\textit{Attraos}}, incorporates chaos
theory into LTSF, perceiving real-world time series as observations from
unknown high-dimensional chaotic dynamic systems. Under the concept of
attractor invariance, Attraos utilizes the proposed multi-scale dynamic memory
unit to memorize historical dynamics structure and predicts by a
frequency-enhanced local evolution strategy. Detailed theoretical analysis and
abundant empirical evidence consistently show that Attraos outperforms various
LTSF methods on mainstream LTSF datasets and chaotic datasets. | [
"cs.LG",
"cs.AI",
"nlin.CD"
] | false |
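As background for this chaos-theoretic view, the classical first step for exposing attractor structure in a scalar series is delay embedding; the sketch below is that standard reconstruction, not Attraos's multi-scale dynamic memory unit.

```python
# Takens-style delay embedding of a scalar time series.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Row i is [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

# Observations from the chaotic logistic map.
x = np.empty(2000)
x[0] = 0.2
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

Z = delay_embed(x, dim=3, tau=1)
print(Z.shape)    # (1998, 3): points on the reconstructed attractor
```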
2402.11472 | 2024-02-18T06:22:01Z | DDIPrompt: Drug-Drug Interaction Event Prediction based on Graph Prompt
Learning | [
"Yingying Wang",
"Yun Xiong",
"Xixi Wu",
"Xiangguo Sun",
"Jiawei Zhang"
] | Recently, Graph Neural Networks have become increasingly prevalent in
predicting adverse drug-drug interactions (DDI) due to their proficiency in
modeling the intricate associations between atoms and functional groups within
and across drug molecules. However, they are still hindered by two significant
challenges: (1) the issue of highly imbalanced event distribution, which is a
common but critical problem in medical datasets where certain interactions are
vastly underrepresented. This imbalance poses a substantial barrier to
achieving accurate and reliable DDI predictions. (2) the scarcity of labeled
data for rare events, which is a pervasive issue in the medical field where
rare yet potentially critical interactions are often overlooked or
under-studied due to limited available data. In response, we offer DDIPrompt,
an innovative panacea inspired by the recent advancements in graph prompting.
Our framework aims to address these issues by leveraging the intrinsic
knowledge from pre-trained models, which can be efficiently deployed with
minimal downstream data. Specifically, to solve the first challenge, DDIPrompt
employs augmented links between drugs, considering both structural and
interactive proximity. It features a hierarchical pre-training strategy that
comprehends intra-molecular structures and inter-molecular interactions,
fostering a comprehensive and unbiased understanding of drug properties. For
the second challenge, we implement a prototype-enhanced prompting mechanism
during inference. This mechanism, refined by few-shot examples from each
category, effectively harnesses the rich pre-training knowledge to enhance
prediction accuracy, particularly for these rare but crucial interactions.
Comprehensive evaluations on two benchmark datasets demonstrate the superiority
of DDIPrompt, particularly in predicting rare DDI events. | [
"q-bio.BM",
"cs.AI",
"cs.LG"
] | false |
2402.11552 | 2024-02-18T11:49:38Z | Empirical Density Estimation based on Spline Quasi-Interpolation with
applications to Copulas clustering modeling | [
"Cristiano Tamborrino",
"Antonella Falini",
"Francesca Mazzia"
] | Density estimation is a fundamental technique employed in various fields to
model and to understand the underlying distribution of data. The primary
objective of density estimation is to estimate the probability density function
of a random variable. This process is particularly valuable when dealing with
univariate or multivariate data and is essential for tasks such as clustering,
anomaly detection, and generative modeling. In this paper we propose the
mono-variate approximation of the density using spline quasi-interpolation and
apply it in the context of clustering modeling. The clustering technique
used is based on the construction of suitable multivariate distributions which
rely on the estimation of the monovariate empirical densities (marginals). Such
an approximation is achieved by using the proposed spline quasi-interpolation,
while the joint distributions to model the sought clustering partition are
constructed with the use of copula functions. In particular, since copulas can
capture the dependence between the features of the data independently from the
marginal distributions, a finite mixture copula model is proposed. The
presented algorithm is validated on artificial and real datasets. | [
"stat.ML",
"cs.LG",
"cs.NA",
"math.NA"
] | false |
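The flavor of the marginal-density step can be conveyed by smoothing a normalized histogram with a spline; note that this uses a generic SciPy smoothing spline, not the paper's quasi-interpolation operator, and the bin count and smoothing factor are assumptions.

```python
# Spline-smoothed histogram as a simple univariate density estimate.
import numpy as np
from scipy.interpolate import splev, splrep

def spline_density(samples, n_bins=40, smooth=0.01):
    hist, edges = np.histogram(samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    tck = splrep(centers, hist, s=smooth)          # cubic smoothing spline
    grid = np.linspace(edges[0], edges[-1], 400)
    dens = np.clip(splev(grid, tck), 0.0, None)    # densities are nonnegative
    dens /= np.trapz(dens, grid)                   # renormalize to mass 1
    return grid, dens

samples = np.random.default_rng(0).normal(size=5000)
grid, dens = spline_density(samples)
print(dens[np.argmin(np.abs(grid))])   # near 1/sqrt(2*pi) ≈ 0.399 at x = 0
```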
2402.11637 | 2024-02-18T16:34:12Z | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning,
yet it remains susceptible to various attacks, from user to server-side
vulnerabilities. Poisoning attacks are particularly notable among user-side
attacks, as participants upload malicious model updates to deceive the global
model, often intending to promote or demote specific targeted items. This study
investigates strategies for executing promotion attacks in federated
recommender systems.
Current poisoning attacks on federated recommender systems often rely on
additional information, such as the local training data of genuine users or
item popularity. However, such information is challenging for the potential
attacker to obtain. Thus, there is a need to develop an attack that requires no
extra information apart from item embeddings obtained from the server. In this
paper, we introduce a novel fake user based poisoning attack named PoisonFRS to
promote the attacker-chosen targeted item in federated recommender systems
without requiring knowledge about user-item rating data, user attributes, or
the aggregation rule used by the server. Extensive experiments on multiple
real-world datasets demonstrate that PoisonFRS can effectively promote the
attacker-chosen targeted item to a large portion of genuine users and
outperform current benchmarks that rely on additional information about the
system. We further observe that the model updates from both genuine and fake
users are indistinguishable within the latent space. | [
"cs.CR",
"cs.IR",
"cs.LG"
] | false |
2402.11650 | 2024-02-18T17:02:39Z | Theoretical foundations for programmatic reinforcement learning | [
"Guruprerana Shabadi",
"Nathanaël Fijalkow",
"Théo Matricon"
] | The field of Reinforcement Learning (RL) is concerned with algorithms for
learning optimal policies in unknown stochastic environments. Programmatic RL
studies representations of policies as programs, meaning involving higher order
constructs such as control loops. Despite attracting a lot of attention at the
intersection of the machine learning and formal methods communities, very
little is known on the theoretical front about programmatic RL: what are good
classes of programmatic policies? How large are optimal programmatic policies?
How can we learn them? The goal of this paper is to give first answers to these
questions, initiating a theoretical study of programmatic RL. | [
"cs.LG",
"cs.LO",
"cs.PL"
] | false |
2402.11658 | 2024-02-18T17:32:53Z | Dynamic planning in hierarchical active inference | [
"Matteo Priorelli",
"Ivilin Peev Stoianov"
] | By dynamic planning, we refer to the ability of the human brain to infer and
impose motor trajectories related to cognitive decisions. A recent paradigm,
active inference, brings fundamental insights into the adaptation of biological
organisms, constantly striving to minimize prediction errors to restrict
themselves to life-compatible states. Over the past years, many studies have
shown how human and animal behavior could be explained in terms of an active
inferential process -- either as discrete decision-making or continuous motor
control -- inspiring innovative solutions in robotics and artificial
intelligence. Still, the literature lacks a comprehensive outlook on how to
effectively plan actions in changing environments. Setting ourselves the goal
of modeling tool use, we delve into the topic of dynamic planning in active
inference, keeping in mind two crucial aspects of biological goal-directed
behavior: the capacity to understand and exploit affordances for object
manipulation, and to learn the hierarchical interactions between the self and
the environment, including other agents. We start from a simple unit and
gradually describe more advanced structures, comparing recently proposed design
choices and providing basic examples for each section. This study distances
itself from traditional views centered on neural networks and reinforcement
learning, and points toward a yet unexplored direction in active inference:
hybrid representations in hierarchical models. | [
"cs.AI",
"cs.LG",
"cs.RO"
] | false |
2402.11674 | 2024-02-18T18:33:48Z | A Fast Algorithm to Simulate Nonlinear Resistive Networks | [
"Benjamin Scellier"
] | In the quest for energy-efficient artificial intelligence systems, resistor
networks are attracting interest as an alternative to conventional GPU-based
neural networks. These networks leverage the physics of electrical circuits for
inference and can be optimized with local training techniques such as
equilibrium propagation. Despite their potential advantage in terms of power
consumption, the challenge of efficiently simulating these resistor networks
has been a significant bottleneck to assess their scalability, with current
methods either being limited to linear networks or relying on realistic, yet
slow circuit simulators like SPICE. Assuming ideal circuit elements, we
introduce a novel approach for the simulation of nonlinear resistive networks,
which we frame as a quadratic programming problem with linear inequality
constraints, and which we solve using a fast, exact coordinate descent
algorithm. Our simulation methodology significantly outperforms existing
SPICE-based simulations, enabling the training of networks up to 325 times
larger at speeds 150 times faster, resulting in a 50,000-fold improvement in
the ratio of network size to epoch duration. Our approach, adaptable to other
electrical components, can foster more rapid progress in the simulations of
nonlinear electrical networks. | [
"cs.ET",
"cond-mat.dis-nn",
"cs.LG"
] | false |
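The computational core described above, exact coordinate descent on a box-constrained convex quadratic program, is compact enough to sketch; the random matrix below is toy data rather than an actual circuit's conductance structure.

```python
# Exact coordinate descent for: minimize 0.5 x^T A x - b^T x, lo <= x <= hi.
import numpy as np

def qp_coordinate_descent(A, b, lo, hi, n_sweeps=100):
    x = np.clip(np.zeros_like(b), lo, hi)
    for _ in range(n_sweeps):
        for i in range(len(b)):
            # 1-D minimizer over x_i with the others fixed, clipped to the box.
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = np.clip(r / A[i, i], lo[i], hi[i])
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)               # symmetric positive definite
b = rng.standard_normal(5)
lo, hi = -0.1 * np.ones(5), 0.1 * np.ones(5)
print(qp_coordinate_descent(A, b, lo, hi))
```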
2402.11687 | 2024-02-18T19:35:30Z | Evaluating Efficacy of Model Stealing Attacks and Defenses on Quantum
Neural Networks | [
"Satwik Kundu",
"Debarshi Kundu",
"Swaroop Ghosh"
] | Cloud hosting of quantum machine learning (QML) models exposes them to a
range of vulnerabilities, the most significant of which is the model stealing
attack. In this study, we assess the efficacy of such attacks in the realm of
quantum computing. We conducted comprehensive experiments on various datasets
with multiple QML model architectures. Our findings revealed that model
stealing attacks can produce clone models achieving up to $0.9\times$ and
$0.99\times$ clone test accuracy when trained using Top-$1$ and Top-$k$ labels,
respectively ($k:$ num\_classes). To defend against these attacks, we leverage
the unique properties of current noisy hardware and perturb the victim model
outputs and hinder the attacker's training process. In particular, we propose:
1) hardware variation-induced perturbation (HVIP) and 2) hardware and
architecture variation-induced perturbation (HAVIP). Although noise and
architectural variability can provide up to $\sim16\%$ output obfuscation, our
comprehensive analysis revealed that models cloned under noisy conditions tend
to be resilient, suffering little to no performance degradation due to such
obfuscations. Despite limited success with our defense techniques, this outcome
has led to an important discovery: QML models trained on noisy hardware are
naturally resistant to perturbation or obfuscation-based defenses or attacks. | [
"quant-ph",
"cs.CR",
"cs.LG"
] | false |
2402.11729 | 2024-02-18T23:01:28Z | Prospector Heads: Generalized Feature Attribution for Large Models &
Data | [
"Gautam Machiraju",
"Alexander Derry",
"Arjun Desai",
"Neel Guha",
"Amir-Hossein Karimi",
"James Zou",
"Russ Altman",
"Christopher Ré",
"Parag Mallick"
] | Feature attribution, the ability to localize regions of the input data that
are relevant for classification, is an important capability for machine
learning models in scientific and biomedical domains. Current methods for
feature attribution, which rely on "explaining" the predictions of end-to-end
classifiers, suffer from imprecise feature localization and are inadequate for
use with small sample sizes and high-dimensional datasets due to computational
challenges. We introduce prospector heads, an efficient and interpretable
alternative to explanation-based methods for feature attribution that can be
applied to any encoder and any data modality. Prospector heads generalize
across modalities through experiments on sequences (text), images (pathology),
and graphs (protein structures), outperforming baseline attribution methods by
up to 49 points in mean localization AUPRC. We also demonstrate how prospector
heads enable improved interpretation and discovery of class-specific patterns
in the input data. Through their high performance, flexibility, and
generalizability, prospectors provide a framework for improving trust and
transparency for machine learning models in complex domains. | [
"cs.LG",
"cs.AI",
"q-bio.QM"
] | false |
2402.11736 | 2024-02-18T23:39:00Z | Monte Carlo with kernel-based Gibbs measures: Guarantees for
probabilistic herding | [
"Martin Rouault",
"Rémi Bardenet",
"Mylène Maïda"
] | Kernel herding belongs to a family of deterministic quadratures that seek to
minimize the worst-case integration error over a reproducing kernel Hilbert
space (RKHS). In spite of strong experimental support, it has proven
difficult to show that this worst-case error decreases at a faster rate than
the standard square root of the number of quadrature nodes, at least in the
usual case where the RKHS is infinite-dimensional. In this theoretical paper,
we study a joint probability distribution over quadrature nodes, whose support
tends to minimize the same worst-case error as kernel herding. We prove that it
does outperform i.i.d. Monte Carlo, in the sense of coming with a tighter
concentration inequality on the worst-case integration error. While not
improving the rate yet, this demonstrates that the mathematical tools of the
study of Gibbs measures can help us understand to what extent kernel herding and
its variants improve on computationally cheaper methods. Moreover, we provide
early experimental evidence that a faster rate of convergence, though not
worst-case, is likely. | [
"cs.LG",
"math.PR",
"stat.ML"
] | false |
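For reference, plain deterministic kernel herding, whose randomized Gibbs-measure counterpart the paper studies, is sketched below; the RBF kernel, candidate grid, and Gaussian target are illustrative assumptions.

```python
# Deterministic kernel herding: greedily minimize the worst-case RKHS error.
import numpy as np

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

rng = np.random.default_rng(0)
target = rng.normal(0, 1, size=(2000, 1))     # samples approximating p
cand = np.linspace(-4, 4, 400)[:, None]       # candidate node locations

mu = rbf(cand, target).mean(axis=1)           # kernel mean embedding of p
nodes = []
for t in range(20):
    score = mu.copy()
    if nodes:                                 # penalize already-chosen nodes
        score -= rbf(cand, np.array(nodes)).mean(axis=1) * t / (t + 1)
    nodes.append(cand[np.argmax(score)])
print(np.array(nodes).ravel())                # spread over high-density region
```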
2402.11739 | 2024-02-18T23:49:18Z | A Transition System Abstraction Framework for Neural Network Dynamical
System Models | [
"Yejiang Yang",
"Zihao Mo",
"Hoang-Dung Tran",
"Weiming Xiang"
] | This paper proposes a transition system abstraction framework for neural
network dynamical system models to enhance the model interpretability, with
applications to complex dynamical systems such as human behavior learning and
verification. To begin with, the localized working zone will be segmented into
multiple localized partitions under the data-driven Maximum Entropy (ME)
partitioning method. Then, the transition matrix will be obtained based on the
set-valued reachability analysis of neural networks. Finally, applications to
human handwriting dynamics learning and verification are given to validate our
proposed abstraction framework, which demonstrates the advantages of enhancing
the interpretability of the black-box model, i.e., our proposed framework is
able to abstract a data-driven neural network model into a transition system,
making the neural network model interpretable through verifying specifications
described in Computational Tree Logic (CTL) languages. | [
"eess.SY",
"cs.LG",
"cs.SY"
] | false |
2402.11826 | 2024-02-19T04:39:16Z | Unveiling the Depths: A Multi-Modal Fusion Framework for Challenging
Scenarios | [
"Jialei Xu",
"Xianming Liu",
"Junjun Jiang",
"Kui Jiang",
"Rui Li",
"Kai Cheng",
"Xiangyang Ji"
] | Monocular depth estimation from RGB images plays a pivotal role in 3D vision.
However, its accuracy can deteriorate in challenging environments such as
nighttime or adverse weather conditions. While long-wave infrared cameras offer
stable imaging in such challenging conditions, they are inherently
low-resolution, lacking rich texture and semantics as delivered by the RGB
image. Current methods focus solely on a single modality due to the
difficulty of identifying and integrating faithful depth cues from both sources.
To address these issues, this paper presents a novel approach that identifies
and integrates dominant cross-modality depth features with a learning-based
framework. Concretely, we independently compute the coarse depth maps with
separate networks by fully utilizing the individual depth cues from each
modality. As the advantageous depth spreads across both modalities, we propose
a novel confidence loss steering a confidence predictor network to yield a
confidence map specifying latent potential depth areas. With the resulting
confidence map, we propose a multi-modal fusion network that fuses the final
depth in an end-to-end manner. Harnessing the proposed pipeline, our method
demonstrates the ability of robust depth estimation in a variety of difficult
scenarios. Experimental results on the challenging MS$^2$ and ViViD++ datasets
demonstrate the effectiveness and robustness of our method. | [
"cs.CV"
] | false |
2402.11831 | 2024-02-19T04:45:15Z | Rock Classification Based on Residual Networks | [
"Sining Zhoubian",
"Yuyang Wang",
"Zhihuan Jiang"
] | Rock Classification is an essential geological problem since it provides
important formation information. However, exploration of this problem using
convolutional neural networks is not sufficient. To tackle this problem, we
propose two approaches using residual neural networks. We first adopt data
augmentation methods to enlarge our dataset. By modifying kernel sizes,
normalization methods and composition based on ResNet34, we achieve an accuracy
of 70.1% on the test dataset, with an increase of 3.5% compared to regular
Resnet34. Furthermore, using a similar backbone like BoTNet that incorporates
multihead self attention, we additionally use internal residual connections in
our model. This boosts the model's performance, achieving an accuracy of 73.7%
on the test dataset. We also explore how the number of bottleneck transformer
blocks may influence model performance. We discover that models with more than
one bottleneck transformer block may not further improve performance. Finally,
we believe that our approach can inspire future work related to this problem
and our model design can facilitate the development of new residual model
architectures. | [
"cs.CV"
] | false |
2402.11840 | 2024-02-19T05:06:52Z | An Endoscopic Chisel: Intraoperative Imaging Carves 3D Anatomical Models | [
"Jan Emily Mangulabnan",
"Roger D. Soberanis-Mukul",
"Timo Teufel",
"Manish Sahu",
"Jose L. Porras",
"S. Swaroop Vedula",
"Masaru Ishii",
"Gregory Hager",
"Russell H. Taylor",
"Mathias Unberath"
] | Purpose: Preoperative imaging plays a pivotal role in sinus surgery where CTs
offer patient-specific insights into complex anatomy, enabling real-time
intraoperative navigation to complement endoscopy imaging. However, surgery
elicits anatomical changes not represented in the preoperative model,
generating an inaccurate basis for navigation during surgery progression.
Methods: We propose a first vision-based approach to update the preoperative
3D anatomical model leveraging intraoperative endoscopic video for navigated
sinus surgery where relative camera poses are known. We rely on comparisons of
intraoperative monocular depth estimates and preoperative depth renders to
identify modified regions. The new depths are integrated in these regions
through volumetric fusion in a truncated signed distance function
representation to generate an intraoperative 3D model that reflects tissue
manipulation.
Results: We quantitatively evaluate our approach by sequentially updating
models for a five-step surgical progression in an ex vivo specimen. We compute
the error between correspondences from the updated model and ground-truth
intraoperative CT in the region of anatomical modification. The resulting
models show a decrease in error during surgical progression, as opposed to
an increase when no update is employed.
Conclusion: Our findings suggest that preoperative 3D anatomical models can
be updated using intraoperative endoscopy video in navigated sinus surgery.
Future work will investigate improvements to monocular depth estimation as well
as removing the need for external navigation systems. The resulting ability to
continuously update the patient model may provide surgeons with a more precise
understanding of the current anatomical state and paves the way toward a
digital twin paradigm for sinus surgery. | [
"cs.CV"
] | false |
2402.11843 | 2024-02-19T05:13:39Z | WildFake: A Large-scale Challenging Dataset for AI-Generated Images
Detection | [
"Yan Hong",
"Jianfu Zhang"
] | The extraordinary ability of generative models has enabled the generation of
images with such high quality that human beings cannot distinguish Artificial
Intelligence (AI) generated images from real-life photographs. The development
of generation techniques opened up new opportunities but concurrently
introduced potential risks to privacy, authenticity, and security. Therefore,
the task of detecting AI-generated imagery is of paramount importance to
prevent illegal activities. To assess the generalizability and robustness of
AI-generated image detection, we present a large-scale dataset, referred to as
WildFake, comprising state-of-the-art generators, diverse object categories,
and real-world applications. The WildFake dataset has the following advantages: 1)
Rich Content with Wild collection: WildFake collects fake images from the
open-source community, enriching its diversity with a broad range of image
classes and image styles. 2) Hierarchical structure: WildFake contains fake
images synthesized by different types of generators from GANs, diffusion
models, to other generative models. These key strengths enhance the
generalization and robustness of detectors trained on WildFake, thereby
demonstrating WildFake's considerable relevance and effectiveness for
AI-generated detectors in real-world scenarios. Moreover, our extensive
evaluation experiments are tailored to yield profound insights into the
capabilities of different levels of generative models, a distinctive advantage
afforded by WildFake's unique hierarchical structure. | [
"cs.CV"
] | false |
2402.11849 | 2024-02-19T05:34:08Z | ComFusion: Personalized Subject Generation in Multiple Specific Scenes
From Single Image | [
"Yan Hong",
"Jianfu Zhang"
] | Recent advancements in personalizing text-to-image (T2I) diffusion models
have shown the capability to generate images based on personalized visual
concepts using a limited number of user-provided examples. However, these
models often struggle with maintaining high visual fidelity, particularly in
manipulating scenes as defined by textual inputs. Addressing this, we introduce
ComFusion, a novel approach that leverages pretrained models generating
composition of a few user-provided subject images and predefined-text scenes,
effectively fusing visual-subject instances with textual-specific scenes,
resulting in the generation of high-fidelity instances within diverse scenes.
ComFusion integrates a class-scene prior preservation regularization, which
leverages composites of the subject class and scene-specific knowledge from
pretrained models to enhance generation fidelity. Additionally, ComFusion uses
coarse generated images, ensuring they align effectively with both the instance
image and scene texts. Consequently, ComFusion maintains a delicate balance
between capturing the essence of the subject and maintaining scene
fidelity. Extensive evaluations of ComFusion against various baselines in T2I
personalization have demonstrated its qualitative and quantitative superiority. | [
"cs.CV"
] | false |
2402.11882 | 2024-02-19T06:43:25Z | NOTE: Notable generation Of patient Text summaries through Efficient
approach based on direct preference optimization | [
"Imjin Ahn",
"Hansle Gwon",
"Young-Hak Kim",
"Tae Joon Jun",
"Sanghyun Park"
] | The discharge summary is one of the critical documents in the patient journey,
encompassing all events experienced during hospitalization, including multiple
visits, medications, tests, surgery/procedures, and admissions/discharge.
Providing a summary of the patient's progress is crucial, as it significantly
influences future care and planning. Consequently, clinicians face the
laborious and resource-intensive task of manually collecting, organizing, and
combining all the necessary data for a discharge summary. Therefore, we propose
"NOTE", which stands for "Notable generation Of patient Text summaries through
an Efficient approach based on direct preference optimization". NOTE is based
on the Medical Information Mart for Intensive Care-III (MIMIC-III) dataset and summarizes a
single hospitalization of a patient. Patient events are sequentially combined
and used to generate a discharge summary for each hospitalization. In the
present circumstances, large language models' application programming
interfaces (LLMs' APIs) are widely available, but importing and exporting
medical data presents significant challenges due to privacy protection policies
in healthcare institutions. Moreover, to ensure optimal performance, it is
essential to implement a lightweight model for internal servers or programs
within the hospital. Therefore, we utilized DPO and parameter-efficient
fine-tuning (PEFT) techniques to apply a fine-tuning method that guarantees superior
performance. To demonstrate the practical application of the developed NOTE, we
provide webpage-based demonstration software. In the future, we aim to make
the software available for actual use by clinicians in hospitals. NOTE can be
utilized to generate not only discharge summaries but also various other
summaries throughout a patient's journey, thereby alleviating the labor-intensive
workload of clinicians and aiming for increased efficiency. | [
"cs.CV",
"J.3"
] | false |
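The DPO objective used by NOTE has a compact standard form (Rafailov et al., 2023), sketched below; it assumes per-summary log-probabilities have already been computed under the trained policy and a frozen reference model, and beta is a tunable assumption.

```python
# Direct preference optimization (DPO) loss over chosen/rejected summaries.
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Inputs are summed log-probs of each summary under policy/reference."""
    logits = beta * ((pi_chosen - pi_rejected) - (ref_chosen - ref_rejected))
    return -F.logsigmoid(logits).mean()

# Toy check: the loss drops as the policy prefers the chosen summary more.
ref_c, ref_r = torch.tensor([-40.0]), torch.tensor([-42.0])
print(dpo_loss(torch.tensor([-39.0]), torch.tensor([-45.0]), ref_c, ref_r))
print(dpo_loss(torch.tensor([-45.0]), torch.tensor([-39.0]), ref_c, ref_r))
```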
2402.11909 | 2024-02-19T07:48:29Z | One2Avatar: Generative Implicit Head Avatar For Few-shot User Adaptation | [
"Zhixuan Yu",
"Ziqian Bai",
"Abhimitra Meka",
"Feitong Tan",
"Qiangeng Xu",
"Rohit Pandey",
"Sean Fanello",
"Hyun Soo Park",
"Yinda Zhang"
] | Traditional methods for constructing high-quality, personalized head avatars
from monocular videos demand extensive face captures and training time, posing
a significant challenge for scalability. This paper introduces a novel approach
to create high quality head avatar utilizing only a single or a few images per
user. We learn a generative model for 3D animatable photo-realistic head avatar
from a multi-view dataset of expressions from 2407 subjects, and leverage it as
a prior for creating a personalized avatar from few-shot images. Different from
previous 3D-aware face generative models, our prior is built with a
3DMM-anchored neural radiance field backbone, which we show to be more
effective for avatar creation through auto-decoding based on few-shot inputs.
We also handle unstable 3DMM fitting by jointly optimizing the 3DMM fitting and
camera calibration, which leads to better few-shot adaptation. Our method
demonstrates compelling results and outperforms existing state-of-the-art
methods for few-shot avatar adaptation, paving the way for more efficient and
personalized avatar creation. | [
"cs.CV"
] | false |
2402.11913 | 2024-02-19T07:59:16Z | PhySU-Net: Long Temporal Context Transformer for rPPG with
Self-Supervised Pre-training | [
"Marko Savic",
"Guoying Zhao"
] | Remote photoplethysmography (rPPG) is a promising technology that consists of
contactless measuring of cardiac activity from facial videos. Most recent
approaches utilize convolutional networks with limited temporal modeling
capability or ignore long temporal context. Supervised rPPG methods are also
severely limited by scarce data availability. In this work, we propose
PhySU-Net, the first long spatial-temporal map rPPG transformer network and a
self-supervised pre-training strategy that exploits unlabeled data to improve
our model. Our strategy leverages traditional methods and image masking to
provide pseudo-labels for self-supervised pre-training. Our model is tested on
two public datasets (OBF and VIPL-HR) and shows superior performance in
supervised training. Furthermore, we demonstrate that our self-supervised
pre-training strategy further improves our model's performance by leveraging
representations learned from unlabeled data. | [
"cs.CV"
] | false |
2402.11928 | 2024-02-19T08:17:13Z | Separating common from salient patterns with Contrastive Representation
Learning | [
"Robin Louiset",
"Edouard Duchesnay",
"Antoine Grigis",
"Pietro Gori"
] | Contrastive Analysis is a sub-field of Representation Learning that aims at
separating common factors of variation between two datasets, a background
(e.g., healthy subjects) and a target (e.g., diseased subjects), from the
salient factors of variation, only present in the target dataset. Despite their
relevance, current models based on Variational Auto-Encoders have shown poor
performance in learning semantically-expressive representations. On the other
hand, Contrastive Representation Learning has shown tremendous performance
leaps in various applications (classification, clustering, etc.). In this work,
we propose to leverage the ability of Contrastive Learning to learn
semantically expressive representations well adapted for Contrastive Analysis.
We reformulate it under the lens of the InfoMax Principle and identify two
Mutual Information terms to maximize and one to minimize. We decompose the
first two terms into an Alignment and a Uniformity term, as commonly done in
Contrastive Learning. Then, we motivate a novel Mutual Information minimization
strategy to prevent information leakage between common and salient
distributions. We validate our method, called SepCLR, on three visual datasets
and three medical datasets, specifically conceived to assess the pattern
separation capability in Contrastive Analysis. Code available at
https://github.com/neurospin-projects/2024_rlouiset_sep_clr. | [
"cs.CV"
] | false |
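
The alignment and uniformity decomposition mentioned in the SepCLR abstract is the standard one from contrastive learning. A minimal sketch of the two terms, assuming L2-normalized embeddings; the hyperparameters and the pairing with the paper's mutual-information minimization are not reproduced here.

```python
import torch
import torch.nn.functional as F

def alignment_loss(x, y, alpha=2):
    """Pull embeddings of positive pairs together; x, y: (N, D), normalized."""
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniformity_loss(x, t=2):
    """Spread embeddings uniformly on the unit hypersphere; x: (N, D)."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

z1 = F.normalize(torch.randn(128, 64), dim=1)
z2 = F.normalize(z1 + 0.1 * torch.randn(128, 64), dim=1)  # augmented views
print(alignment_loss(z1, z2).item(), uniformity_loss(z1).item())
```
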
2402.11957 | 2024-02-19T08:59:58Z | Event-Based Motion Magnification | [
"Yutian Chen",
"Shi Guo",
"Fangzheng Yu",
"Feng Zhang",
"Jinwei Gu",
"Tianfan Xue"
] | Detecting and magnifying imperceptible high-frequency motions in real-world
scenarios has substantial implications for industrial and medical applications.
These motions are characterized by small amplitudes and high frequencies.
Traditional motion magnification methods rely on costly high-speed cameras or
active light sources, which limit the scope of their applications. In this
work, we propose a dual-camera system consisting of an event camera and a
conventional RGB camera for video motion magnification, combining
temporally dense information from the event stream with spatially dense data
from the RGB images. This innovative combination enables a broad and
cost-effective amplification of high-frequency motions. By revisiting the
physical camera model, we observe that estimating motion direction and
magnitude necessitates the integration of event streams with additional image
features. On this basis, we propose a novel deep network for event-based video
motion magnification that addresses two primary challenges: firstly, the high
frequency of motion induces a large number of interpolated frames (up to 80),
which our network mitigates with a Second-order Recurrent Propagation module
for better handling of long-term frame interpolations; and secondly, magnifying
subtle motions is sensitive to noise, which we address by utilizing a temporal
filter to amplify motion at specific frequencies and reduce noise impact. We
demonstrate the effectiveness and accuracy of our dual-camera system and
network through extensive experiments in magnifying small-amplitude,
high-frequency motions, offering a cost-effective and flexible solution for
motion detection and magnification. | [
"cs.CV"
] | false |
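
The temporal filter described in the event-based motion magnification abstract amounts to amplifying a narrow frequency band of each pixel's (or phase coefficient's) temporal trace. A sketch with SciPy, assuming a zero-phase Butterworth band-pass filter; the real method applies this idea inside a learned pipeline, so all parameters here are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def magnify_band(trace, fs, f_lo, f_hi, alpha=20.0, order=4):
    """Amplify motion in [f_lo, f_hi] Hz while leaving other bands untouched.

    trace: temporal signal, shape (T,) or (T, ...); fs: sampling rate in Hz;
    alpha: magnification factor applied to the selected band.
    """
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, trace, axis=0)  # zero-phase, so no temporal shift
    return trace + alpha * band

fs = 1000.0  # event streams support much higher effective frame rates
t = np.arange(0, 1, 1 / fs)
trace = 1e-3 * np.sin(2 * np.pi * 120 * t) + 1e-2 * np.random.randn(t.size)
magnified = magnify_band(trace, fs, f_lo=100, f_hi=140)
```
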
2402.12004 | 2024-02-19T09:52:41Z | Direct Consistency Optimization for Compositional Text-to-Image
Personalization | [
"Kyungmin Lee",
"Sangkyung Kwak",
"Kihyuk Sohn",
"Jinwoo Shin"
] | Text-to-image (T2I) diffusion models, when fine-tuned on a few personal
images, are able to generate visuals with a high degree of consistency.
However, they still fall short in synthesizing images of different scenarios or
styles that are possible in the original pretrained models. To address this, we
propose to fine-tune the T2I model by maximizing consistency to reference
images, while penalizing the deviation from the pretrained model. We devise a
novel training objective for T2I diffusion models that minimally fine-tunes the
pretrained model to achieve consistency. Our method, dubbed \emph{Direct
Consistency Optimization}, is as simple as regular diffusion loss, while
significantly enhancing the compositionality of personalized T2I models. Also,
our approach induces a new sampling method that controls the tradeoff between
image fidelity and prompt fidelity. Lastly, we emphasize the necessity of using
a comprehensive caption for reference images to further enhance the image-text
alignment. We show the efficacy of the proposed method on the T2I
personalization for subject, style, or both. In particular, our method results
in a superior Pareto frontier to the baselines. Generated examples and codes
are in our project page( https://dco-t2i.github.io/). | [
"cs.CV"
] | false |
2402.12043 | 2024-02-19T10:56:58Z | A Lightweight Parallel Framework for Blind Image Quality Assessment | [
"Qunyue Huang",
"Bin Fang"
] | Existing blind image quality assessment (BIQA) methods focus on designing
complicated networks based on convolutional neural networks (CNNs) or
transformers. In addition, some BIQA methods enhance the performance of the
model in a two-stage training manner. Despite the significant advancements,
these methods markedly raise the parameter count of the model, thus requiring
more training time and computational resources. To tackle the above issues, we
propose a lightweight parallel framework (LPF) for BIQA. First, we extract the
visual features using a pre-trained feature extraction network. Furthermore, we
construct a simple yet effective feature embedding network (FEN) to transform
the visual features, aiming to generate the latent representations that contain
salient distortion information. To improve the robustness of the latent
representations, we present two novel self-supervised subtasks, including a
sample-level category prediction task and a batch-level quality comparison
task. The sample-level category prediction task is presented to help the model
with coarse-grained distortion perception. The batch-level quality comparison
task is formulated to enhance the training data and thus improve the robustness
of the latent representations. Finally, the latent representations are fed into
a distortion-aware quality regression network (DaQRN), which simulates the
human vision system (HVS) and thus generates accurate quality scores.
Experimental results on multiple benchmark datasets demonstrate that the
proposed method achieves superior performance over state-of-the-art approaches.
Moreover, extensive analyses prove that the proposed method has lower
computational complexity and faster convergence speed. | [
"cs.CV"
] | false |
2402.12099 | 2024-02-19T12:28:45Z | Human Video Translation via Query Warping | [
"Haiming Zhu",
"Yangyang Xu",
"Shengfeng He"
] | In this paper, we present QueryWarp, a novel framework for temporally
coherent human motion video translation. Existing diffusion-based video editing
approaches rely solely on key and value tokens to ensure temporal
consistency, which sacrifices the preservation of local and structural regions.
In contrast, we aim to consider complementary query priors by constructing the
temporal correlations among query tokens from different frames. Initially, we
extract appearance flows from source poses to capture continuous human
foreground motion. Subsequently, during the denoising process of the diffusion
model, we employ appearance flows to warp the previous frame's query token,
aligning it with the current frame's query. This query warping imposes explicit
constraints on the outputs of self-attention layers, effectively guaranteeing
temporally coherent translation. We perform experiments on various human motion
video translation tasks, and the results demonstrate that our QueryWarp
framework surpasses state-of-the-art methods both qualitatively and
quantitatively. | [
"cs.CV"
] | false |
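
The query warping at the heart of QueryWarp can be pictured as flow-based resampling of the previous frame's query feature map. A hedged PyTorch sketch using `grid_sample`; the layout (queries reshaped onto a spatial grid) and the pixel-displacement flow convention are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def warp_query_tokens(prev_query, flow):
    """Warp the previous frame's query tokens with an appearance flow.

    prev_query: (B, C, H, W) query tokens reshaped onto the spatial grid.
    flow: (B, 2, H, W) pixel displacements (dx, dy) into the current frame.
    """
    b, _, h, w = prev_query.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().to(flow)     # (2, H, W)
    coords = base.unsqueeze(0) + flow                 # absolute target coords
    # Normalize to [-1, 1] as required by grid_sample.
    gx = 2 * coords[:, 0] / (w - 1) - 1
    gy = 2 * coords[:, 1] / (h - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)              # (B, H, W, 2)
    return F.grid_sample(prev_query, grid, align_corners=True)

q_prev = torch.randn(1, 64, 32, 32)
zero_flow = torch.zeros(1, 2, 32, 32)                 # identity warp
assert torch.allclose(warp_query_tokens(q_prev, zero_flow), q_prev, atol=1e-4)
```
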
2402.12128 | 2024-02-19T13:24:46Z | 3D Vascular Segmentation Supervised by 2D Annotation of Maximum
Intensity Projection | [
"Zhanqiang Guo",
"Zimeng Tan",
"Jianjiang Feng",
"Jie Zhou"
] | Vascular structure segmentation plays a crucial role in medical analysis and
clinical applications. The practical adoption of fully supervised segmentation
models is impeded by the intricacy and time-consuming nature of annotating
vessels in the 3D space. This has spurred the exploration of weakly-supervised
approaches that reduce reliance on expensive segmentation annotations. Despite
this, existing weakly supervised methods employed in organ segmentation, which
encompass points, bounding boxes, or scribbles, have exhibited suboptimal
performance when handling sparse vascular structures. To alleviate this issue,
we employ maximum intensity projection (MIP) to decrease the dimensionality of
3D volume to 2D image for efficient annotation, and the 2D labels are utilized
to provide guidance and oversight for training 3D vessel segmentation model.
Initially, we generate pseudo-labels for 3D blood vessels using the annotations
of 2D projections. Subsequently, taking into account the acquisition method of
the 2D labels, we introduce a weakly-supervised network that fuses 2D-3D deep
features via MIP to further improve segmentation performance. Furthermore, we
integrate confidence learning and uncertainty estimation to refine the
generated pseudo-labels, followed by fine-tuning the segmentation network. Our
method is validated on five datasets (including cerebral vessel, aorta and
coronary artery), demonstrating highly competitive performance in segmenting
vessels and the potential to significantly reduce the time and effort required
for vessel annotation. Our code is available at:
https://github.com/gzq17/Weakly-Supervised-by-MIP. | [
"cs.CV"
] | false |
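
The dimensionality-reduction step in the vascular segmentation entry is just a maximum intensity projection, and initial pseudo-labels can be lifted back along each projection ray. A NumPy sketch under the simplifying assumption that the single brightest voxel per annotated ray is marked; the paper's confidence learning and 2D-3D feature fusion are not shown.

```python
import numpy as np

def mip_pseudo_labels(volume, mask_2d):
    """MIP a volume along its first axis and back-project a 2D annotation.

    volume: (D, H, W) intensities; mask_2d: (H, W) binary vessel annotation
    drawn on the MIP image. Returns the MIP and a sparse 3D pseudo-label.
    """
    mip = volume.max(axis=0)           # 2D image presented to the annotator
    depth = volume.argmax(axis=0)      # brightest voxel along each ray
    pseudo = np.zeros(volume.shape, dtype=np.uint8)
    hh, ww = np.nonzero(mask_2d)
    pseudo[depth[hh, ww], hh, ww] = 1  # one voxel per annotated ray
    return mip, pseudo

vol = np.random.rand(64, 128, 128)
ann = np.zeros((128, 128), dtype=np.uint8)
ann[60:70, 40:90] = 1                  # an annotated vessel segment
mip, labels_3d = mip_pseudo_labels(vol, ann)
print(mip.shape, labels_3d.sum())      # (128, 128), 500 labeled voxels
```
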
2402.12138 | 2024-02-19T13:38:15Z | Perceiving Longer Sequences With Bi-Directional Cross-Attention
Transformers | [
"Markus Hiller",
"Krista A. Ehinger",
"Tom Drummond"
] | We present a novel bi-directional Transformer architecture (BiXT) which
scales linearly with input size in terms of computational cost and memory
consumption, but does not suffer the drop in performance or limitation to only
one input modality seen with other efficient Transformer-based approaches. BiXT
is inspired by the Perceiver architectures but replaces iterative attention
with an efficient bi-directional cross-attention module in which input tokens
and latent variables attend to each other simultaneously, leveraging a
naturally emerging attention-symmetry between the two. This approach unlocks a
key bottleneck experienced by Perceiver-like architectures and enables the
processing and interpretation of both semantics (`what') and location (`where')
to develop alongside each other over multiple layers -- allowing its direct
application to dense and instance-based tasks alike. By combining efficiency
with the generality and performance of a full Transformer architecture, BiXT
can process longer sequences like point clouds or images at higher feature
resolutions and achieves competitive performance across a range of tasks like
point cloud part segmentation, semantic image segmentation and image
classification. | [
"cs.CV"
] | false |
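
The attention symmetry BiXT exploits can be sketched by computing a single latent-token similarity matrix and normalizing it along different axes for the two directions. A minimal single-head sketch; the projections, multi-head splitting, and residual/MLP blocks of the real architecture are omitted.

```python
import torch

def bidirectional_cross_attention(latents, tokens):
    """Latents and tokens attend to each other via one similarity matrix.

    latents: (B, M, D) learned latent vectors; tokens: (B, N, D) inputs.
    Cost is O(M*N), i.e. linear in sequence length N for a fixed M.
    """
    scale = latents.shape[-1] ** -0.5
    sim = torch.einsum("bmd,bnd->bmn", latents, tokens) * scale
    latents_out = torch.softmax(sim, dim=-1) @ tokens                  # latents <- tokens
    tokens_out = torch.softmax(sim, dim=-2).transpose(1, 2) @ latents  # tokens <- latents
    return latents_out, tokens_out

lat = torch.randn(2, 32, 64)
tok = torch.randn(2, 4096, 64)       # e.g. a long point-cloud or image sequence
new_lat, new_tok = bidirectional_cross_attention(lat, tok)
print(new_lat.shape, new_tok.shape)  # (2, 32, 64) (2, 4096, 64)
```
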
2402.12184 | 2024-02-19T14:47:23Z | Colorizing Monochromatic Radiance Fields | [
"Yean Cheng",
"Renjie Wan",
"Shuchen Weng",
"Chengxuan Zhu",
"Yakun Chang",
"Boxin Shi"
] | Though Neural Radiance Fields (NeRF) can produce colorful 3D representations
of the world by using a set of 2D images, such ability becomes non-existent
when only monochromatic images are provided. Since color is necessary for
representing the world, reproducing color from monochromatic radiance fields
becomes crucial. To achieve this goal, instead of manipulating the
monochromatic radiance fields directly, we consider it as a
representation-prediction task in the Lab color space. By first constructing
the luminance and density representation using monochromatic images, our
prediction stage can recreate color representation on the basis of an image
colorization module. We then reproduce a colorful implicit model through the
representation of luminance, density, and color. Extensive experiments have
been conducted to validate the effectiveness of our approaches. Our project
page: https://liquidammonia.github.io/color-nerf. | [
"cs.CV"
] | false |
2402.12185 | 2024-02-19T14:48:23Z | ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for
Complicated Chart Reasoning | [
"Renqiu Xia",
"Bo Zhang",
"Hancheng Ye",
"Xiangchao Yan",
"Qi Liu",
"Hongbin Zhou",
"Zijun Chen",
"Min Dou",
"Botian Shi",
"Junchi Yan",
"Yu Qiao"
] | Recently, many versatile Multi-modal Large Language Models (MLLMs) have
continued to emerge. However, their capacity to query information depicted in
visual charts and engage in reasoning based on the queried contents remains
under-explored. In this paper, to comprehensively and rigorously benchmark the
ability of the off-the-shelf MLLMs in the chart domain, we construct ChartX, a
multi-modal evaluation set covering 18 chart types, 7 chart tasks, 22
disciplinary topics, and high-quality chart data. Besides, we develop ChartVLM
to offer a new perspective on handling multi-modal tasks that strongly depend
on interpretable patterns, such as reasoning tasks in the field of charts or
geometric images. We evaluate the chart-related ability of mainstream MLLMs and
our ChartVLM on the proposed ChartX evaluation set. Extensive experiments
demonstrate that ChartVLM surpasses both versatile and chart-related large
models, achieving results comparable to GPT-4V. We believe that our study can
pave the way for further exploration in creating a more comprehensive chart
evaluation set and developing more interpretable multi-modal models. Both
ChartX and ChartVLM are available at:
https://github.com/UniModal4Reasoning/ChartVLM | [
"cs.CV"
] | false |
2402.12238 | 2024-02-19T15:48:55Z | Mixed Gaussian Flow for Diverse Trajectory Prediction | [
"Jiahe Chen",
"Jinkun Cao",
"Dahua Lin",
"Kris Kitani",
"Jiangmiao Pang"
] | Existing trajectory prediction studies intensively leverage generative
models. Normalizing flows are one such genre, with the advantage of being
invertible to derive the probability density of predicted trajectories.
However, mapping from a standard Gaussian by a flow-based model hurts the
capacity to capture complicated patterns of trajectories, ignoring the
under-represented motion intentions in the training data. To solve the problem,
we propose a flow-based model to transform a mixed Gaussian prior into the
future trajectory manifold. The model shows a better capacity for generating
diverse trajectory patterns. Also, by associating each sub-Gaussian with a
certain subspace of trajectories, we can generate future trajectories with
controllable motion intentions. In such a fashion, the flow-based model is not
encouraged anymore to simply seek the highest-likelihood region of the intended
manifold, but rather a family of controlled manifolds with explicit interpretability. Our
proposed method is demonstrated to show state-of-the-art performance in the
quantitative evaluation of sampling well-aligned trajectories in top-M
generated candidates. We also demonstrate that it can generate diverse,
controllable, and out-of-distribution trajectories. Code is available at
https://github.com/mulplue/MGF. | [
"cs.CV"
] | false |
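
The mixed Gaussian prior in MGF replaces the flow's standard normal base distribution with a tractable mixture, keeping densities computable while making sampling controllable by component. A PyTorch sketch of such a prior; the component semantics (one per motion intention) and all parameters here are illustrative.

```python
import torch
from torch.distributions import (Categorical, Independent,
                                 MixtureSameFamily, Normal)

def mixed_gaussian_prior(means, log_stds, logits):
    """Mixture-of-Gaussians base distribution for a normalizing flow.

    means, log_stds: (K, D) per-component parameters; logits: (K,) weights.
    """
    mixture = Categorical(logits=logits)
    components = Independent(Normal(means, log_stds.exp()), 1)
    return MixtureSameFamily(mixture, components)

K, D = 4, 2  # e.g. four motion intentions, 2D latent
prior = mixed_gaussian_prior(torch.randn(K, D), torch.zeros(K, D),
                             torch.zeros(K))
z = prior.sample((100,))     # diverse latents to push through the flow
log_p = prior.log_prob(z)    # exact density remains available
print(z.shape, log_p.shape)  # torch.Size([100, 2]) torch.Size([100])
```
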
2402.12376 | 2024-02-19T18:59:07Z | FiT: Flexible Vision Transformer for Diffusion Model | [
"Zeyu Lu",
"Zidong Wang",
"Di Huang",
"Chengyue Wu",
"Xihui Liu",
"Wanli Ouyang",
"Lei Bai"
] | Nature is infinitely resolution-free. In the context of this reality,
existing diffusion models, such as Diffusion Transformers, often face
challenges when processing image resolutions outside of their trained domain.
To overcome this limitation, we present the Flexible Vision Transformer (FiT),
a transformer architecture specifically designed for generating images with
unrestricted resolutions and aspect ratios. Unlike traditional methods that
perceive images as static-resolution grids, FiT conceptualizes images as
sequences of dynamically-sized tokens. This perspective enables a flexible
training strategy that effortlessly adapts to diverse aspect ratios during both
training and inference phases, thus promoting resolution generalization and
eliminating biases induced by image cropping. Enhanced by a meticulously
adjusted network structure and the integration of training-free extrapolation
techniques, FiT exhibits remarkable flexibility in resolution extrapolation
generation. Comprehensive experiments demonstrate the exceptional performance
of FiT across a broad range of resolutions, showcasing its effectiveness both
within and beyond its training resolution distribution. Repository available at
https://github.com/whlzy/FiT. | [
"cs.CV"
] | true |
2402.12377 | 2024-02-19T18:59:41Z | Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based
View Synthesis | [
"Christian Reiser",
"Stephan Garbin",
"Pratul P. Srinivasan",
"Dor Verbin",
"Richard Szeliski",
"Ben Mildenhall",
"Jonathan T. Barron",
"Peter Hedman",
"Andreas Geiger"
] | While surface-based view synthesis algorithms are appealing due to their low
computational requirements, they often struggle to reproduce thin structures.
In contrast, more expensive methods that model the scene's geometry as a
volumetric density field (e.g. NeRF) excel at reconstructing fine geometric
detail. However, density fields often represent geometry in a "fuzzy" manner,
which hinders exact localization of the surface. In this work, we modify
density fields to encourage them to converge towards surfaces, without
compromising their ability to reconstruct thin structures. First, we employ a
discrete opacity grid representation instead of a continuous density field,
which allows opacity values to discontinuously transition from zero to one at
the surface. Second, we anti-alias by casting multiple rays per pixel, which
allows occlusion boundaries and subpixel structures to be modelled without
using semi-transparent voxels. Third, we minimize the binary entropy of the
opacity values, which facilitates the extraction of surface geometry by
encouraging opacity values to binarize towards the end of training. Lastly, we
develop a fusion-based meshing strategy followed by mesh simplification and
appearance model fitting. The compact meshes produced by our model can be
rendered in real-time on mobile devices and achieve significantly higher view
synthesis quality compared to existing mesh-based approaches. | [
"cs.CV"
] | true |
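
The third ingredient of the binary opacity grids entry, minimizing the binary entropy of opacities, is a one-line regularizer. A sketch in PyTorch, assuming the loss weight and its annealing schedule are handled elsewhere in training:

```python
import torch

def binary_entropy_loss(opacity, eps=1e-6):
    """Drive per-sample opacities in (0, 1) toward the endpoints 0 or 1.

    The entropy -o*log(o) - (1-o)*log(1-o) peaks at o = 0.5, so minimizing
    it encourages binarization and thus a well-localized surface.
    """
    o = opacity.clamp(eps, 1 - eps)
    return -(o * o.log() + (1 - o) * (1 - o).log()).mean()

opacities = torch.rand(10_000, requires_grad=True)
loss = binary_entropy_loss(opacities)
loss.backward()  # gradients push values away from 0.5
```
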
2402.12519 | 2024-02-19T20:29:49Z | System Identification of Neural Systems: Going Beyond Images to
Modelling Dynamics | [
"Mai Gamal",
"Mohamed Rashad",
"Eman Ehab",
"Seif Eldawlatly",
"Mennatullah Siam"
] | Vast literature has compared the recordings of biological neurons in the
brain to deep neural networks. The ultimate goal is to interpret deep networks
or to better understand and encode biological neural systems. Recently, there
has been a debate on whether system identification is possible and how much it
can tell us about brain computation. System identification determines
whether one model represents the brain computation more faithfully than
another. Nonetheless, previous work did not consider the time aspect and how
video and dynamics (e.g., motion) modelling in deep networks relate to these
biological neural systems within a large-scale comparison. Towards this end, we
propose a system identification study focused on comparing single image vs.
video understanding models with respect to the visual cortex recordings. Our
study encompasses two sets of experiments; a real environment setup and a
simulated environment setup. The study also encompasses more than 30 models
and, unlike prior works, we focus on convolutional vs. transformer-based,
single vs. two-stream, and fully vs. self-supervised video understanding
models. The goal is to capture a greater variety of architectures that model
dynamics. As such, this signifies the first large-scale study of video
understanding models from a neuroscience perspective. Our results in the
simulated experiments, show that system identification can be attained to a
certain level in differentiating image vs. video understanding models.
Moreover, we provide key insights on how video understanding models predict
visual cortex responses: video understanding models predict them better than image
understanding models; convolutional models are better in the early-mid regions
than transformer-based ones, except for multiscale transformers, which remain good
at predicting these regions; and two-stream models are better than single-stream
ones. | [
"cs.CV"
] | false |
2402.12522 | 2024-02-19T20:33:46Z | An evaluation of Deep Learning based stereo dense matching dataset shift
from aerial images and a large scale stereo dataset | [
"Teng Wu",
"Bruno Vallet",
"Marc Pierrot-Deseilligny",
"Ewelina Rupnik"
] | Dense matching is crucial for 3D scene reconstruction since it enables the
recovery of scene 3D geometry from image acquisition. Deep Learning (DL)-based
methods have shown effectiveness in the special case of epipolar stereo
disparity estimation in the computer vision community. DL-based methods depend
heavily on the quality and quantity of training datasets. However, generating
ground-truth disparity maps for real scenes remains a challenging task in the
photogrammetry community. To address this challenge, we propose a method for
generating ground-truth disparity maps directly from Light Detection and
Ranging (LiDAR) and images to produce a large and diverse dataset for six
aerial datasets across four different areas and two areas with different
resolution images. We also introduce a LiDAR-to-image co-registration
refinement to the framework that takes special precautions regarding occlusions
and refrains from disparity interpolation to avoid precision loss. Evaluating
11 dense matching methods across datasets with diverse scene types, image
resolutions, and geometric configurations, we deeply investigate dataset shift:
GANet performs best with identical training and testing data, while PSMNet shows
robustness across different datasets, and we propose the best strategy for
training with a limited dataset. We will also provide the dataset
and training models; more information can be found at
https://github.com/whuwuteng/Aerial_Stereo_Dataset. | [
"cs.CV"
] | false |
2402.12536 | 2024-02-19T20:50:55Z | Designing High-Performing Networks for Multi-Scale Computer Vision | [
"Cédric Picron"
] | Since the emergence of deep learning, the computer vision field has
flourished with models improving at a rapid pace on more and more complex
tasks. We distinguish three main ways to improve a computer vision model: (1)
improving the data aspect by for example training on a large, more diverse
dataset, (2) improving the training aspect by for example designing a better
optimizer, and (3) improving the network architecture (or network for short).
In this thesis, we chose to improve the latter, i.e. improving the network
designs of computer vision models. More specifically, we investigate new
network designs for multi-scale computer vision tasks, which are tasks
requiring to make predictions about concepts at different scales. The goal of
these new network designs is to outperform existing baseline designs from the
literature. Specific care is taken to make sure the comparisons are fair, by
guaranteeing that the different network designs were trained and evaluated with
the same settings. Code is publicly available at
https://github.com/CedricPicron/DetSeg. | [
"cs.CV"
] | false |
2402.11760 | 2024-02-19T01:17:52Z | Reinforcement Learning as a Parsimonious Alternative to Prediction
Cascades: A Case Study on Image Segmentation | [
"Bharat Srikishan",
"Anika Tabassum",
"Srikanth Allu",
"Ramakrishnan Kannan",
"Nikhil Muralidhar"
] | Deep learning architectures have achieved state-of-the-art (SOTA) performance
on computer vision tasks such as object detection and image segmentation. This
may be attributed to the use of over-parameterized, monolithic deep learning
architectures executed on large datasets. Although such architectures lead to
increased accuracy, this is usually accompanied by a large increase in
computation and memory requirements during inference. While this is a non-issue
in traditional machine learning pipelines, the recent confluence of machine
learning and fields like the Internet of Things has rendered such large
architectures infeasible for execution in low-resource settings. In such
settings, previous efforts have proposed decision cascades where inputs are
passed through models of increasing complexity until desired performance is
achieved. However, we argue that cascaded prediction leads to increased
computational cost due to wasteful intermediate computations. To address this,
we propose PaSeR (Parsimonious Segmentation with Reinforcement Learning) a
non-cascading, cost-aware learning pipeline as an alternative to cascaded
architectures. Through experimental evaluation on real-world and standard
datasets, we demonstrate that PaSeR achieves better accuracy while minimizing
computational cost relative to cascaded models. Further, we introduce a new
metric IoU/GigaFlop to evaluate the balance between cost and performance. On
the real-world task of battery material phase segmentation, PaSeR yields a
minimum performance improvement of 174% on the IoU/GigaFlop metric with respect
to baselines. We also demonstrate PaSeR's adaptability to complementary models
trained on a noisy MNIST dataset, where it achieved a minimum performance
improvement on IoU/GigaFlop of 13.4% over SOTA models. Code and data are
available at https://github.com/scailab/paser . | [
"cs.LG",
"cs.CV"
] | false |
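
The IoU/GigaFlop metric introduced in the PaSeR entry is straightforward to compute; the exact normalization below (segmentation IoU divided by inference GigaFLOPs) is an assumption based on the metric's name, not code from the paper.

```python
def iou_per_gigaflop(iou, flops):
    """Segmentation quality per unit of inference compute.

    iou: mean intersection-over-union in [0, 1]; flops: total
    floating-point operations for one forward pass.
    """
    return iou / (flops / 1e9)

# A slightly less accurate but much cheaper model wins on this metric.
print(iou_per_gigaflop(0.80, 50e9))   # 0.016
print(iou_per_gigaflop(0.85, 500e9))  # 0.0017
```
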
2402.11788 | 2024-02-19T02:31:36Z | MM-SurvNet: Deep Learning-Based Survival Risk Stratification in Breast
Cancer Through Multimodal Data Fusion | [
"Raktim Kumar Mondol",
"Ewan K. A. Millar",
"Arcot Sowmya",
"Erik Meijering"
] | Survival risk stratification is an important step in clinical decision making
for breast cancer management. We propose a novel deep learning approach for
this purpose by integrating histopathological imaging, genetic and clinical
data. It employs vision transformers, specifically the MaxViT model, for image
feature extraction, and self-attention to capture intricate image relationships
at the patient level. A dual cross-attention mechanism fuses these features
with genetic data, while clinical data is incorporated at the final layer to
enhance predictive accuracy. Experiments on the public TCGA-BRCA dataset show
that our model, trained using the negative log likelihood loss function, can
achieve superior performance with a mean C-index of 0.64, surpassing existing
methods. This advancement facilitates tailored treatment strategies,
potentially leading to improved patient outcomes. | [
"cs.CV",
"cs.AI"
] | false |
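
The mean C-index reported by MM-SurvNet measures how well predicted risks rank patients by survival time. A sketch using the `lifelines` package (an assumed tool, not named by the paper) on synthetic data:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
risk = rng.normal(size=200)                         # predicted risk scores
times = np.exp(-risk + 0.5 * rng.normal(size=200))  # higher risk, shorter life
observed = rng.integers(0, 2, size=200)             # 1 = event, 0 = censored

# concordance_index expects higher scores for longer survival, so negate risk.
cindex = concordance_index(times, -risk, observed)
print(f"C-index: {cindex:.2f}")  # 0.5 is random ranking, 1.0 is perfect
```
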
2402.11812 | 2024-02-19T03:59:32Z | Interpretable Embedding for Ad-hoc Video Search | [
"Jiaxin Wu",
"Chong-Wah Ngo"
] | Answering queries with semantic concepts has long been the mainstream approach
for video search. Recently, however, its performance has been surpassed by the
concept-free approach, which embeds queries in a joint space with videos. Nevertheless, the
embedded features as well as search results are not interpretable, hindering
subsequent steps in video browsing and query reformulation. This paper
integrates feature embedding and concept interpretation into a neural network
for unified dual-task learning. In this way, an embedding is associated with a
list of semantic concepts as an interpretation of video content. This paper
empirically demonstrates that, by using either the embedding features or
concepts, considerable search improvement is attainable on TRECVid benchmarked
datasets. Concepts are not only effective in pruning false positive videos, but
also highly complementary to concept-free search, leading to large margin of
improvement compared to state-of-the-art approaches. | [
"cs.CV",
"cs.MM"
] | false |
2402.11836 | 2024-02-19T04:58:40Z | DIO: Dataset of 3D Mesh Models of Indoor Objects for Robotics and
Computer Vision Applications | [
"Nillan Nimal",
"Wenbin Li",
"Ronald Clark",
"Sajad Saeedi"
] | The creation of accurate virtual models of real-world objects is imperative
to robotic simulations and applications such as computer vision, artificial
intelligence, and machine learning. This paper documents the different methods
employed for generating a database of mesh models of real-world objects. These
methods address the tedious and time-intensive process of manually generating
the models using CAD software. Essentially, DSLR/phone cameras were employed to
acquire images of target objects. These images were processed using a
photogrammetry software known as Meshroom to generate a dense surface
reconstruction of the scene. The result produced by Meshroom was edited and
simplified using MeshLab, a mesh-editing software to produce the final model.
Based on the obtained models, this process was effective in modelling the
geometry and texture of real-world objects with high fidelity. An active 3D
scanner was also utilized to accelerate the process for large objects. All
generated models and captured images are made available on the website of the
project. | [
"cs.RO",
"cs.CV"
] | false |
2402.11845 | 2024-02-19T05:15:13Z | Modularized Networks for Few-shot Hateful Meme Detection | [
"Rui Cao",
"Roy Ka-Wei Lee",
"Jing Jiang"
] | In this paper, we address the challenge of detecting hateful memes in the
low-resource setting where only a few labeled examples are available. Our
approach leverages the compositionality of Low-rank adaptation (LoRA), a widely
used parameter-efficient tuning technique. We commence by fine-tuning large
language models (LLMs) with LoRA on selected tasks pertinent to hateful meme
detection, thereby generating a suite of LoRA modules. These modules are
capable of essential reasoning skills for hateful meme detection. We then use
the few available annotated samples to train a module composer, which assigns
weights to the LoRA modules based on their relevance. The model's learnable
parameters are directly proportional to the number of LoRA modules. This
modularized network, underpinned by LLMs and augmented with LoRA modules,
exhibits enhanced generalization in the context of hateful meme detection. Our
evaluation spans three datasets designed for hateful meme detection in a
few-shot learning context. The proposed method demonstrates superior
performance to traditional in-context learning, which is also more
computationally intensive during inference. | [
"cs.CL",
"cs.CV"
] | false |
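
The module composer in the modularized-networks entry can be pictured as a learned convex combination of LoRA updates added to a frozen base weight. A hedged PyTorch sketch; in the paper the composer is trained from the few labeled examples, whereas here its weights are random placeholders.

```python
import torch

def compose_lora(base_weight, lora_modules, weights):
    """Combine several LoRA updates into one effective weight matrix.

    base_weight: (out, in) frozen weight; lora_modules: list of (B, A)
    pairs with B: (out, r) and A: (r, in); weights: (K,) composer outputs.
    """
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, lora_modules))
    return base_weight + delta

out_dim, in_dim, rank = 256, 128, 8
W0 = torch.randn(out_dim, in_dim)
modules = [(torch.randn(out_dim, rank), torch.randn(rank, in_dim))
           for _ in range(3)]
# One learnable scalar per module, as the abstract's parameter count implies.
composer = torch.softmax(torch.randn(3, requires_grad=True), dim=0)
W_effective = compose_lora(W0, modules, composer)
print(W_effective.shape)  # torch.Size([256, 128])
```
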
2402.11908 | 2024-02-19T07:48:25Z | Semantic Textual Similarity Assessment in Chest X-ray Reports Using a
Domain-Specific Cosine-Based Metric | [
"Sayeh Gholipour Picha",
"Dawood Al Chanti",
"Alice Caplier"
] | Medical language processing and deep learning techniques have emerged as
critical tools for improving healthcare, particularly in the analysis of
medical imaging and medical text data. These multimodal data fusion techniques
help to improve the interpretation of medical imaging and lead to increased
diagnostic accuracy, informed clinical decisions, and improved patient
outcomes. The success of these models relies on the ability to extract and
consolidate semantic information from clinical text. This paper addresses the
need for more robust methods to evaluate the semantic content of medical
reports. Conventional natural language processing approaches and metrics were
originally designed to consider semantic context in the general natural language
domain and machine translation, and often fail to capture the complex semantic
meanings inherent in medical content. In this study, we introduce a novel
approach designed specifically for assessing the semantic similarity between
generated medical reports and the ground truth. Our approach is validated,
demonstrating its efficiency in assessing domain-specific semantic similarity
within medical contexts. By applying our metric to state-of-the-art Chest X-ray
report generation models, we obtain results that not only align with
conventional metrics but also provide more contextually meaningful scores in
the considered medical domain. | [
"cs.CL",
"cs.CV"
] | false |
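
At its core, the domain-specific cosine-based metric compares embedding vectors of a generated and a reference report. A minimal NumPy sketch; the choice of a clinical-domain encoder to produce the embeddings is the paper's contribution and is only assumed here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for outputs of a clinical text encoder.
generated = np.array([0.90, 0.10, 0.30])
reference = np.array([0.80, 0.20, 0.35])
print(f"semantic similarity: {cosine_similarity(generated, reference):.3f}")
```
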
2402.11929 | 2024-02-19T08:17:21Z | DiLightNet: Fine-grained Lighting Control for Diffusion-based Image
Generation | [
"Chong Zeng",
"Yue Dong",
"Pieter Peers",
"Youkang Kong",
"Hongzhi Wu",
"Xin Tong"
] | This paper presents a novel method for exerting fine-grained lighting control
during text-driven diffusion-based image generation. While existing diffusion
models already have the ability to generate images under any lighting
condition, without additional guidance these models tend to correlate image
content and lighting. Moreover, text prompts lack the necessary expressional
power to describe detailed lighting setups. To provide the content creator with
fine-grained control over the lighting during image generation, we augment the
text-prompt with detailed lighting information in the form of radiance hints,
i.e., visualizations of the scene geometry with a homogeneous canonical
material under the target lighting. However, the scene geometry needed to
produce the radiance hints is unknown. Our key observation is that we only need
to guide the diffusion process, hence exact radiance hints are not necessary;
we only need to point the diffusion model in the right direction. Based on this
observation, we introduce a three stage method for controlling the lighting
during image generation. In the first stage, we leverage a standard pretrained
diffusion model to generate a provisional image under uncontrolled lighting.
Next, in the second stage, we resynthesize and refine the foreground object in
the generated image by passing the target lighting to a refined diffusion
model, named DiLightNet, using radiance hints computed on a coarse shape of the
foreground object inferred from the provisional image. To retain the texture
details, we multiply the radiance hints with a neural encoding of the
provisional synthesized image before passing it to DiLightNet. Finally, in the
third stage, we resynthesize the background to be consistent with the lighting
on the foreground object. We demonstrate and validate our lighting controlled
diffusion model on a variety of text prompts and lighting conditions. | [
"cs.CV",
"cs.GR"
] | true |
2402.11985 | 2024-02-19T09:30:05Z | Weakly Supervised Object Detection in Chest X-Rays with Differentiable
ROI Proposal Networks and Soft ROI Pooling | [
"Philip Müller",
"Felix Meissen",
"Georgios Kaissis",
"Daniel Rueckert"
] | Weakly supervised object detection (WSup-OD) increases the usefulness and
interpretability of image classification algorithms without requiring
additional supervision. The successes of multiple instance learning in this
task for natural images, however, do not translate well to medical images due
to the very different characteristics of their objects (i.e. pathologies). In
this work, we propose Weakly Supervised ROI Proposal Networks (WSRPN), a new
method for generating bounding box proposals on the fly using a specialized
region of interest-attention (ROI-attention) module. WSRPN integrates well with
classic backbone-head classification algorithms and is end-to-end trainable
with only image-label supervision. We experimentally demonstrate that our new
method outperforms existing methods in the challenging task of disease
localization in chest X-ray images. Code:
https://github.com/philip-mueller/wsrpn | [
"cs.CV",
"cs.LG"
] | false |