Titles | Abstracts | Years | Categories
---|---|---|---|
Can Deception Detection Go Deeper? Dataset, Evaluation, and Benchmark
for Deception Reasoning | Deception detection has attracted increasing attention due to its importance
in many practical scenarios. Currently, data scarcity harms the development of
this field. On the one hand, it is costly to hire participants to simulate
deception scenarios. On the other hand, it is difficult to collect videos
containing deceptive behaviors on the Internet. To address data scarcity, this
paper proposes a new data collection pipeline. Specifically, we use GPT-4 to
simulate a role-play between a suspect and a police officer. During
interrogation, the suspect lies to the police officer to evade responsibility
for the crime, while the police officer uncovers the truth and gathers
evidence. Compared with previous datasets, this strategy reduces data
collection costs, providing a promising way to increase the dataset size.
Meanwhile, we extend the traditional deception detection task to deception
reasoning, further providing evidence for deceptive parts. This dataset can
also be used to evaluate the complex reasoning capability of current large
language models and serve as a reasoning benchmark for further research.
| 2,024 | Computation and Language |
Perils of Self-Feedback: Self-Bias Amplifies in Large Language Models | Recent studies show that self-feedback improves large language models (LLMs)
on certain tasks while worsening others. We discovered that this contradiction
is due to LLMs' bias towards their own output. In this paper, we formally
define LLM self-bias -- the tendency of a model to favor its own generation -- using two
statistics. We analyze six LLMs on translation, constrained text generation,
and mathematical reasoning tasks. We find that self-bias is prevalent in all
examined LLMs across multiple languages and tasks. Our analysis reveals that
while the self-refine pipeline improves the fluency and understandability of
model outputs, it further amplifies self-bias. To mitigate such biases, we
discover that larger model size and external feedback with accurate assessment
can significantly reduce bias in the self-refine pipeline, leading to actual
performance improvement in downstream tasks.
| 2,024 | Computation and Language |
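The abstract above defines self-bias with two statistics. As a rough illustration only (the paper's exact definitions may differ), the sketch below measures self-bias as the average gap between a model's score for its own output and an external judge's score of the same output; the scoring scale and values are hypothetical.

```python
import statistics

def self_bias(self_scores, external_scores):
    """Mean inflation of a model's self-assessment over an external assessment.

    self_scores[i]     : the model's own quality score for its i-th output
    external_scores[i] : an external (human or metric) score for the same output
    A positive mean indicates the model systematically overrates its own generations.
    """
    gaps = [s - e for s, e in zip(self_scores, external_scores)]
    return statistics.mean(gaps), statistics.stdev(gaps)

# toy usage: the model rates its outputs roughly one point above the external judge
bias, spread = self_bias([8, 9, 7, 8], [7, 7.5, 6, 7.5])
print(round(bias, 2), round(spread, 2))  # 1.0 0.41
```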
InfuserKI: Enhancing Large Language Models with Knowledge Graphs via
Infuser-Guided Knowledge Integration | Though Large Language Models (LLMs) have shown remarkable open-generation
capabilities across diverse domains, they struggle with knowledge-intensive
tasks. To alleviate this issue, knowledge integration methods have been
proposed to enhance LLMs with domain-specific knowledge graphs using external
modules. However, they suffer from data inefficiency as they require both known
and unknown knowledge for fine-tuning. Thus, we study a novel problem of
integrating unknown knowledge into LLMs efficiently without unnecessary overlap
of known knowledge. Injecting new knowledge poses the risk of forgetting
previously acquired knowledge. To tackle this, we propose a novel
Infuser-Guided Knowledge Integration (InfuserKI) framework that utilizes
transformer internal states to determine whether to enhance the original LLM
output with additional information, thereby effectively mitigating knowledge
forgetting. Evaluations on the UMLS-2.5k and MetaQA domain knowledge graphs
demonstrate that InfuserKI can effectively acquire new knowledge and outperform
state-of-the-art baselines by 9% and 6%, respectively, in reducing knowledge
forgetting.
| 2,024 | Computation and Language |
Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and
Improving LLMs | Large language models (LLMs) have achieved impressive human-like performance
across various reasoning tasks. However, their mastery of underlying
inferential rules still falls short of human capabilities. To investigate this,
we propose a logic scaffolding inferential rule generation framework, to
construct an inferential rule base, ULogic, comprising both primitive and
compositional rules across five domains. Our analysis of GPT-series models over
a rule subset reveals significant gaps in LLMs' logic understanding compared to
human performance, especially in compositional and structurally complex rules
with certain bias patterns. We further distill these rules into a smaller-scale
inference engine for flexible rule generation and enhancing downstream
reasoning. Through a multi-judger evaluation, our inference engine proves
effective in generating accurate, complex and abstract conclusions and
premises, and improves various commonsense reasoning tasks. Overall, our work
sheds light on LLMs' limitations in grasping inferential rules and suggests ways
to enhance their logical reasoning abilities~\footnote{Code and data are
available at \url{https://github.com/SiyuanWangw/ULogic}.}.
| 2,024 | Computation and Language |
Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM
Evaluation | This paper presents a benchmark self-evolving framework to dynamically
evaluate rapidly advancing Large Language Models (LLMs), aiming for a more
accurate assessment of their capabilities and limitations. We utilize a
multi-agent system to manipulate the context or question of original instances,
reframing new evolving instances with high confidence that dynamically extend
existing benchmarks. Towards a more scalable, robust and fine-grained
evaluation, we implement six reframing operations to construct evolving
instances testing LLMs against diverse queries, data noise and probing their
problem-solving sub-abilities. With this framework, we extend benchmark
datasets of four tasks. Experimental results show a general performance decline
in most LLMs against their original results. This decline under our scalable
and robust evaluations, alongside our fine-grained evaluation, more accurately
reflects models' capabilities. Moreover, our framework widens performance
discrepancies both between different models and within the same model across
various tasks, facilitating more informed model selection for specific tasks
(Code and data are available at
https://github.com/NanshineLoong/Self-Evolving-Benchmark).
| 2,024 | Computation and Language |
In-Context Example Ordering Guided by Label Distributions | By allowing models to predict without task-specific training, in-context
learning (ICL) with pretrained LLMs has enormous potential in NLP. However, a
number of problems persist in ICL. In particular, its performance is sensitive
to the choice and order of in-context examples. Given the same set of
in-context examples with different orderings, model performance may vary
from near random to near state-of-the-art. In this work, we formulate
in-context example ordering as an optimization problem. We examine three
problem settings that differ in the assumptions they make about what is known
about the task. Inspired by the idea of learning from label proportions, we
propose two principles for in-context example ordering guided by the model's
probability predictions. We apply our proposed principles to thirteen text
classification datasets and nine different autoregressive LLMs with 700M to 13B
parameters. We demonstrate that our approach outperforms the baselines by
improving the classification accuracy, reducing model miscalibration, and also
by selecting better in-context examples.
| 2,024 | Computation and Language |
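As an illustration of ordering in-context examples by label distributions, the sketch below scores each candidate ordering by how closely the model's predicted label distribution (queried through a hypothetical `label_probs` wrapper) matches the task's label distribution. It is a simplified stand-in for the paper's two principles, not their exact procedure, and exhaustive permutation search is only feasible for a handful of examples.

```python
import itertools
import math

def kl(p, q, eps=1e-9):
    """KL divergence between two discrete distributions given as dicts."""
    return sum(p[c] * math.log((p[c] + eps) / (q.get(c, 0.0) + eps)) for c in p)

def best_ordering(examples, task_label_dist, label_probs):
    """Return the ordering whose prompt yields a predicted label distribution
    closest to the task's label distribution.

    `label_probs(prompt)` is a hypothetical callable wrapping the LLM: it maps a
    prompt string to a dict of label probabilities (e.g., via a content-free query).
    """
    best, best_score = None, float("inf")
    for perm in itertools.permutations(examples):
        prompt = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in perm) + "\nInput:"
        score = kl(task_label_dist, label_probs(prompt))
        if score < best_score:
            best, best_score = perm, score
    return best

# toy usage with a stub model that always predicts a fixed distribution
examples = [("great movie", "pos"), ("boring plot", "neg")]
stub = lambda prompt: {"pos": 0.5, "neg": 0.5}
print(best_ordering(examples, {"pos": 0.5, "neg": 0.5}, stub))
```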
SciAgent: Tool-augmented Language Models for Scientific Reasoning | Scientific reasoning poses an excessive challenge for even the most advanced
Large Language Models (LLMs). To make this task more practical and solvable for
LLMs, we introduce a new task setting named tool-augmented scientific
reasoning. This setting supplements LLMs with scalable toolsets, and shifts the
focus from pursuing an omniscient problem solver to a proficient tool-user. To
facilitate research in this setting, we construct a tool-augmented training
corpus named MathFunc which encompasses over 30,000 samples and roughly 6,000
tools. Building on MathFunc, we develop SciAgent to retrieve, understand and,
if necessary, use tools for scientific problem solving. Additionally, we craft
a benchmark, SciToolBench, spanning five scientific domains to evaluate LLMs'
abilities with tool assistance. Extensive experiments on SciToolBench confirm
the effectiveness of SciAgent. Notably, SciAgent-Mistral-7B surpasses other
LLMs of the same size by more than 13% in absolute accuracy. Furthermore,
SciAgent-DeepMath-7B substantially outperforms ChatGPT.
| 2,024 | Computation and Language |
AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via
Controllable Question Decomposition | Recent advancements in large language models (LLMs) have shown promise in
multi-step reasoning tasks, yet their reliance on extensive manual labeling to
provide procedural feedback remains a significant impediment. To address this
challenge, in this paper, we propose a novel self-supervised framework AutoPRM
that efficiently enhances the fine-tuning of LLMs for intricate reasoning
challenges. Specifically, AutoPRM first decomposes complex problems into more
manageable subquestions with a controllable granularity switch, then
sequentially applies reinforcement learning to iteratively improve the
subquestion solver. Additionally, we propose context-guided decoding to avoid
reward tampering and guide the subquestion solver towards the solution of the
holistic problem. Extensive experiments show that AutoPRM significantly
improves performance on mathematical and commonsense reasoning tasks over SOTA.
More encouragingly, AutoPRM can be easily integrated with other orthogonal
reasoning pipelines.
| 2,024 | Computation and Language |
MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific
Data Visualization | Scientific data visualization plays a crucial role in research by enabling
the direct display of complex information and assisting researchers in
identifying implicit patterns. Despite its importance, the use of Large
Language Models (LLMs) for scientific data visualization remains rather
unexplored. In this study, we introduce MatPlotAgent, an efficient
model-agnostic LLM agent framework designed to automate scientific data
visualization tasks. Leveraging the capabilities of both code LLMs and
multi-modal LLMs, MatPlotAgent consists of three core modules: query
understanding, code generation with iterative debugging, and a visual feedback
mechanism for error correction. To address the lack of benchmarks in this
field, we present MatPlotBench, a high-quality benchmark consisting of 100
human-verified test cases. Additionally, we introduce a scoring approach that
utilizes GPT-4V for automatic evaluation. Experimental results demonstrate that
MatPlotAgent can improve the performance of various LLMs, including both
commercial and open-source models. Furthermore, the proposed evaluation method
shows a strong correlation with human-annotated scores.
| 2,024 | Computation and Language |
LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative
Tasks | LoRA employs lightweight modules to customize large language models (LLMs)
for each downstream task or domain, where different learned additional modules
represent diverse skills. Combining existing LoRAs to address new tasks can
enhance the reusability of learned LoRAs, particularly beneficial for tasks
with limited annotated data. Most prior works on LoRA combination primarily
rely on task-level weights for each involved LoRA, making different examples
and tokens share the same LoRA weights. However, in generative tasks, different
tokens may require different skills. Taking the Chinese math task
as an example, understanding the problem description may depend more on the
Chinese LoRA, while the calculation part may rely more on the math LoRA. To
this end, we propose LoRA-Flow, which utilizes dynamic weights to adjust the
impact of different LoRAs. The weights at each step are determined by a fusion
gate with extremely few parameters, which can be learned with only 200 training
examples. Experiments across six generative tasks demonstrate that our method
consistently outperforms baselines with task-level fusion weights. This
underscores the necessity of introducing dynamic fusion weights for LoRA
combination.
| 2,024 | Computation and Language |
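A minimal PyTorch sketch of the dynamic-fusion idea described above: a small gate maps each token's hidden state to per-LoRA weights, so different tokens can lean on different LoRA modules. Dimensions, initialization, and where the module is attached are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LoRAFusion(nn.Module):
    """Hypothetical per-token dynamic fusion of several LoRA modules."""
    def __init__(self, d_model, rank, num_loras):
        super().__init__()
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d_model) * 0.01) for _ in range(num_loras)])
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(d_model, rank)) for _ in range(num_loras)])
        # the fusion gate has very few parameters: d_model -> num_loras
        self.gate = nn.Linear(d_model, num_loras)

    def forward(self, h):                                  # h: (batch, seq, d_model)
        weights = torch.softmax(self.gate(h), dim=-1)      # (batch, seq, num_loras)
        out = torch.zeros_like(h)
        for i, (A, B) in enumerate(zip(self.A, self.B)):
            delta = h @ A.T @ B.T                          # low-rank update of LoRA i
            out = out + weights[..., i:i + 1] * delta      # weighted per token
        return out

# usage: the fused delta would be added to a frozen layer's output
fusion = LoRAFusion(d_model=768, rank=8, num_loras=2)
h = torch.randn(2, 16, 768)
print(fusion(h).shape)   # torch.Size([2, 16, 768])
```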
FactPICO: Factuality Evaluation for Plain Language Summarization of
Medical Evidence | Plain language summarization with LLMs can be useful for improving textual
accessibility of technical content. But how factual are these summaries in a
high-stakes domain like medicine? This paper presents FactPICO, a factuality
benchmark for plain language summarization of medical texts describing
randomized controlled trials (RCTs), which are the basis of evidence-based
medicine and can directly inform patient treatment. FactPICO consists of 345
plain language summaries of RCT abstracts generated from three LLMs (i.e.,
GPT-4, Llama-2, and Alpaca), with fine-grained evaluation and natural language
rationales from experts. We assess the factuality of critical elements of RCTs
in those summaries: Populations, Interventions, Comparators, Outcomes (PICO),
as well as the reported findings concerning these. We also evaluate the
correctness of the extra information (e.g., explanations) added by LLMs. Using
FactPICO, we benchmark a range of existing factuality metrics, including the
newly devised ones based on LLMs. We find that plain language summarization of
medical evidence is still challenging, especially when balancing between
simplicity and factuality, and that existing metrics correlate poorly with
expert judgments on the instance level.
| 2,024 | Computation and Language |
When Do LLMs Need Retrieval Augmentation? Mitigating LLMs'
Overconfidence Helps Retrieval Augmentation | Large Language Models (LLMs) have been found to have difficulty knowing they
do not possess certain knowledge and tend to provide specious answers in such
cases. Retrieval Augmentation (RA) has been extensively studied to mitigate
LLMs' hallucinations. However, due to the extra overhead and unassured quality
of retrieval, it may not be optimal to conduct RA all the time. A
straightforward idea is to only conduct retrieval when LLMs are uncertain about
a question. This motivates us to enhance the LLMs' ability to perceive their
knowledge boundaries to help RA. In this paper, we first quantitatively measure
this ability of LLMs and confirm their overconfidence. Then, we study how LLMs'
certainty about a question correlates with their dependence on external
retrieved information. We propose several methods to enhance LLMs' perception
of knowledge boundaries and show that they are effective in reducing
overconfidence. Additionally, equipped with these methods, LLMs can achieve
comparable or even better performance with RA using far fewer retrieval calls.
| 2,024 | Computation and Language |
DictLLM: Harnessing Key-Value Data Structures with Large Language Models
for Enhanced Medical Diagnostics | Structured data offers a sophisticated mechanism for the organization of
information. Existing methodologies for the text-serialization of structured
data in the context of large language models fail to adequately address the
heterogeneity inherent in key-value structured data. These methods are not
ideal and frequently result in larger input sizes and poor adaptability to
input changes. In this paper, we introduce DictLLM, an innovative framework
designed to improve the modeling of key-value structured data, like medical
laboratory reports, for generating medical diagnoses. DictLLM integrates three
key components: (1) group positional encoding to maintain permutation
invariance, (2) hierarchical attention bias to capture the inherent bias in
structured data, and (3) an optimal transport alignment layer that aligns the
embedding generated by the dictionary encoder with the LLM, thereby producing a
sequence of fixed-length virtual tokens. We carry out experiments with various
LLMs on a comprehensive real-world medical laboratory report dataset for
automatic diagnosis generation. Our findings show that DictLLM
significantly outperforms established baseline methods and few-shot GPT-4
implementations in terms of both Rouge-L and Knowledge F1 scores. Furthermore,
our evaluation of the framework's scalability and robustness, through a series
of experiments, underscores its exceptional capability in accurately modeling
the complex key-value data structure of medical dictionary data.
| 2,024 | Computation and Language |
LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models
with Entity-based Data Augmentation | Adapting English-based large language models (LLMs) to other languages has
become increasingly popular due to the efficiency and potential of
cross-lingual transfer. However, existing language adaptation methods often
overlook the benefits of cross-lingual supervision. In this study, we introduce
LEIA, a language adaptation tuning method that utilizes Wikipedia entity names
aligned across languages. This method involves augmenting the target language
corpus with English entity names and training the model using left-to-right
language modeling. We assess LEIA on diverse question answering datasets using
7B-parameter LLMs, demonstrating significant performance gains across various
non-English languages. The source code is available at
https://github.com/studio-ousia/leia.
| 2,024 | Computation and Language |
What's the Plan? Evaluating and Developing Planning-Aware Techniques for
LLMs | Planning is a fundamental task in artificial intelligence that involves
finding a sequence of actions that achieve a specified goal in a given
environment. Large language models (LLMs) are increasingly used for
applications that require planning capabilities, such as web or embodied
agents. In line with recent studies, we demonstrate through experimentation
that LLMs lack the skills required for planning. Based on these
observations, we advocate for the potential of a hybrid approach that combines
LLMs with classical planning methodology. Then, we introduce SimPlan, a novel
hybrid method, and evaluate its performance in a new, challenging setup. Our
extensive experiments across various planning domains demonstrate that SimPlan
significantly outperforms existing LLM-based planners.
| 2,024 | Computation and Language |
Benchmarking Knowledge Boundary for Large Language Model: A Different
Perspective on Model Evaluation | In recent years, substantial advancements have been made in the development
of large language models, achieving remarkable performance across diverse
tasks. To evaluate the knowledge ability of language models, previous studies
have proposed numerous benchmarks based on question-answering pairs. We argue
that it is not reliable and comprehensive to evaluate language models with a
fixed question or limited paraphrases as the query, since language models are
sensitive to prompts. Therefore, we introduce a novel concept named knowledge
boundary to encompass both prompt-agnostic and prompt-sensitive knowledge
within language models. Knowledge boundary avoids prompt sensitivity in
language model evaluations, rendering them more dependable and robust. To
explore the knowledge boundary for a given model, we propose a projected gradient
descent method with semantic constraints, a new algorithm designed to identify
the optimal prompt for each piece of knowledge. Experiments demonstrate a
superior performance of our algorithm in computing the knowledge boundary
compared to existing methods. Furthermore, we evaluate the ability of multiple
language models in several domains using the knowledge boundary.
| 2,024 | Computation and Language |
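To make the optimization above concrete, here is a toy sketch of projected gradient descent on a soft prompt: the prompt is updated to maximize a frozen scorer's output and then projected back into an L2 ball around its starting point, standing in for the paper's semantic constraint. The scorer, radius, and step size are all hypothetical.

```python
import torch

# frozen toy "scorer" standing in for the language model's answer probability
torch.manual_seed(0)
d, steps, lr, radius = 32, 50, 0.1, 1.0
scorer = torch.nn.Linear(d, 1)
for p in scorer.parameters():
    p.requires_grad_(False)

init = torch.randn(d)                        # embedding of the initial prompt
prompt = init.clone().requires_grad_(True)   # soft prompt to be optimized

for _ in range(steps):
    loss = -scorer(prompt).sum()             # maximize the scorer's output
    loss.backward()
    with torch.no_grad():
        prompt -= lr * prompt.grad
        # projection step: keep the prompt inside an L2 ball around its start,
        # a simple stand-in for the paper's semantic constraint
        offset = prompt - init
        norm = offset.norm()
        if norm > radius:
            prompt.copy_(init + offset * (radius / norm))
    prompt.grad.zero_()

print(float(scorer(init)), float(scorer(prompt)))   # score before vs. after
```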
Federated Fine-tuning of Large Language Models under Heterogeneous
Language Tasks and Client Resources | Federated Learning (FL) has recently been applied to the parameter-efficient
fine-tuning of Large Language Models (LLMs). While promising, it raises
significant challenges due to the heterogeneous resources and data
distributions of clients. This study introduces FlexLoRA, a simple yet effective
aggregation scheme for LLM fine-tuning, which mitigates the "buckets effect" in
traditional FL that restricts the potential of clients with ample resources by
tying them to the capabilities of the least-resourced participants. FlexLoRA
allows for dynamic adjustment of local LoRA ranks, fostering the development of
a global model imbued with broader, less task-specific knowledge. By
synthesizing a full-size LoRA weight from individual client contributions and
employing Singular Value Decomposition (SVD) for weight redistribution,
FlexLoRA fully leverages heterogeneous client resources. Involving over 1,600
clients performing diverse NLP tasks, our experiments validate the efficacy of
FlexLoRA, with the federated global model achieving up to a 3.1% average
improvement in downstream NLP task performance. FlexLoRA's practicality is
further underscored by its seamless integration with existing LoRA-based FL
methods and theoretical analysis, offering a path toward scalable,
privacy-preserving federated tuning for LLMs.
| 2,024 | Computation and Language |
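A rough sketch of the aggregation step described in the abstract above: full-size LoRA updates from heterogeneous clients are averaged, and the global update is redistributed to each client's local rank via truncated SVD. Weighting by client data size and the toy shapes are assumptions for illustration.

```python
import numpy as np

def flexlora_redistribute(client_deltas, client_weights, local_ranks):
    """Aggregate full-size LoRA updates and re-split them per client rank.

    client_deltas : list of full-size updates (d_out x d_in), each reconstructed
                    from a client's local LoRA (B @ A) at that client's own rank.
    client_weights: per-client aggregation weights (e.g., local data sizes).
    local_ranks   : the LoRA rank each client can afford locally.
    """
    w = np.asarray(client_weights, dtype=float)
    w /= w.sum()
    # 1) weighted average of the full-size updates forms the global update
    global_delta = sum(wi * di for wi, di in zip(w, client_deltas))

    # 2) truncated SVD redistributes the global update at each client's rank
    U, S, Vt = np.linalg.svd(global_delta, full_matrices=False)
    per_client = []
    for r in local_ranks:
        B = U[:, :r] * S[:r]      # (d_out, r)
        A = Vt[:r, :]             # (r, d_in)
        per_client.append((B, A))
    return global_delta, per_client

# toy usage: two clients with ranks 4 and 8
rng = np.random.default_rng(0)
deltas = [rng.normal(size=(64, 32)) * 0.01 for _ in range(2)]
_, per_client = flexlora_redistribute(deltas, [100, 300], [4, 8])
print(per_client[0][0].shape, per_client[1][1].shape)   # (64, 4) (8, 32)
```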
From Prejudice to Parity: A New Approach to Debiasing Large Language
Model Word Embeddings | Embeddings play a pivotal role in the efficacy of Large Language Models. They
are the bedrock on which these models grasp contextual relationships and foster
a more nuanced understanding of language and consequently perform remarkably on
a plethora of complex tasks that require a fundamental understanding of human
language. Given that these embeddings themselves often reflect or exhibit bias,
it stands to reason that these models may also inadvertently learn this bias.
In this work, we build on seminal prior work and propose DeepSoftDebias,
an algorithm that uses a neural network to perform 'soft debiasing'. We
exhaustively evaluate this algorithm across a variety of SOTA datasets,
accuracy metrics, and challenging NLP tasks. We find that DeepSoftDebias
outperforms the current state-of-the-art methods at reducing bias across
gender, race, and religion.
| 2,024 | Computation and Language |
Knowledge-to-SQL: Enhancing SQL Generation with Data Expert LLM | Generating accurate SQL for user queries (text-to-SQL) is a long-standing
problem since the generation of the SQL requires comprehending the query and
database and retrieving the accurate data from the database accordingly.
Existing models rely on the comprehensive ability of Large Language Models
(LLMs) to generate the SQL according to the database schema. However, there is
some necessary knowledge that is neither explicitly included in the database schema
nor learned by LLMs. Thus, the SQL generated for such
knowledge-insufficient queries may be inaccurate, which negatively impacts the
robustness of the text-to-SQL models. To deal with this situation, we propose
the Knowledge-to-SQL framework, which employs a tailored Data Expert LLM (DELLM)
to provide helpful knowledge for all types of text-to-SQL models. Specifically,
we provide the detailed design of DELLM, in terms of table reading, and the
basic fine-tuning process. We further provide a Preference Learning via
Database Feedback (PLDBF) training strategy to guide the DELLM to generate more
helpful knowledge for LLMs. Extensive experiments verify that DELLM can enhance
state-of-the-art LLMs on text-to-SQL tasks. The model structure and the
parameter weight of DELLM are released for further research.
| 2,024 | Computation and Language |
Unveiling the Secrets of Engaging Conversations: Factors that Keep Users
Hooked on Role-Playing Dialog Agents | With the growing humanlike nature of dialog agents, people are now engaging
in extended conversations that can stretch from brief moments to substantial
periods of time. Understanding the factors that contribute to sustaining these
interactions is crucial, yet existing studies primarily focus on short-term
simulations and rarely explore such prolonged, real conversations.
In this paper, we investigate the factors influencing retention rates in real
interactions with roleplaying models. By analyzing a large dataset of
interactions between real users and thousands of characters, we systematically
examine multiple factors and assess their impact on user retention rate.
Surprisingly, we find that the degree to which the bot embodies the roles it
plays has limited influence on retention rates, while the length of each turn
it speaks significantly affects retention rates. This study sheds light on the
critical aspects of user engagement with role-playing models and provides
valuable insights for future improvements in the development of large language
models for role-playing purposes.
| 2,024 | Computation and Language |
Advancing Translation Preference Modeling with RLHF: A Step Towards
Cost-Effective Solution | Faithfulness, expressiveness, and elegance are the constant pursuit in machine
translation. However, traditional metrics like \textit{BLEU} do not strictly
align with human preferences for translation quality. In this paper, we explore
leveraging reinforcement learning with human feedback (\textit{RLHF}) to
improve translation quality. It is non-trivial to collect a large high-quality
dataset of human comparisons between translations, especially for low-resource
languages. To address this issue, we propose a cost-effective preference
learning strategy, optimizing reward models by distinguishing between human and
machine translations. In this manner, the reward model learns the deficiencies
of machine translation compared to human translation and guides subsequent improvements in
machine translation. Experimental results demonstrate that \textit{RLHF} can
effectively enhance translation quality and this improvement benefits other
translation directions not trained with \textit{RLHF}. Further analysis
indicates that the model's language capabilities play a crucial role in
preference learning. A reward model with strong language capabilities can more
sensitively learn the subtle differences in translation quality and align
better with real human translation preferences.
| 2,024 | Computation and Language |
Chain-of-Instructions: Compositional Instruction Tuning on Large
Language Models | Fine-tuning large language models (LLMs) with a collection of large and
diverse instructions has improved the model's generalization to different
tasks, even for unseen tasks. However, most existing instruction datasets
include only single instructions, and models tuned on them struggle to follow
complex instructions composed of multiple subtasks (Wang et al., 2023a). In this work,
we propose a novel concept of compositional instructions called
chain-of-instructions (CoI), where the output of one instruction becomes an
input for the next like a chain. Unlike the conventional practice of solving
single instruction tasks, our proposed method encourages a model to solve each
subtask step by step until the final answer is reached. CoI-tuning (i.e.,
fine-tuning with CoI instructions) improves the model's ability to handle
instructions composed of multiple subtasks. CoI-tuned models also outperformed
baseline models on multilingual summarization, demonstrating the
generalizability of CoI models on unseen composite downstream tasks.
| 2,024 | Computation and Language |
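The chaining idea above is easy to picture in code: each instruction's output is piped into the next. The sketch below uses a hypothetical `call_llm` stub in place of a real model call and is not the paper's tuning setup.

```python
# stub: replace with a real LLM API call
def call_llm(prompt: str) -> str:
    return f"<output of: {prompt[:40]}...>"

def run_chain_of_instructions(instructions, initial_input):
    """Apply instructions sequentially, piping each output into the next step."""
    current = initial_input
    for i, instruction in enumerate(instructions, start=1):
        prompt = f"Step {i}: {instruction}\nInput: {current}\nOutput:"
        current = call_llm(prompt)
    return current

# toy usage: summarize, then translate the summary
chain = ["Summarize the text in one sentence.",
         "Translate the previous output into French."]
print(run_chain_of_instructions(chain, "A long article about solar energy..."))
```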
PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability | Addressing the discrepancies between predictions and actual outcomes often
aids individuals in expanding their thought processes and engaging in
reflection, thereby facilitating reasoning in the correct direction. In this
paper, we introduce $\textbf{PreAct}$, an agent framework that integrates
$\textbf{pre}$diction with $\textbf{rea}$soning and $\textbf{act}$ion.
Leveraging the information provided by predictions, a large language model
(LLM) based agent can offer more diversified and strategically oriented
reasoning, which in turn leads to more effective actions that help the agent
complete complex tasks. Our experiments demonstrate that PreAct outperforms the
ReAct approach in accomplishing complex tasks and that PreAct can be
co-enhanced when combined with Reflexion methods. We prompt the model with
different numbers of historical predictions and find that historical
predictions have a sustained positive effect on LLM planning. The differences
in single-step reasoning between PreAct and ReAct show that PreAct indeed
offers advantages in terms of diversity and strategic directivity over ReAct.
| 2,024 | Computation and Language |
Deciphering the Impact of Pretraining Data on Large Language Models
through Machine Unlearning | Through pretraining on a corpus with various sources, Large Language Models
(LLMs) have gained impressive performance. However, the impact of each
component of the pretraining corpus remains opaque. As a result, the
organization of the pretraining corpus is still empirical and may deviate from
the optimal. To address this issue, we systematically analyze the impact of 48
datasets from 5 major categories of pretraining data of LLMs and measure their
impacts on LLMs using benchmarks about nine major categories of model
capabilities. Our analyses provide empirical results about the contribution of
multiple corpora on the performances of LLMs, along with their joint impact
patterns, including complementary, orthogonal, and correlational relationships.
We also identify a set of ``high-impact data'' such as Books that is
significantly related to a set of model capabilities. These findings provide
insights into the organization of data to support more efficient pretraining of
LLMs.
| 2,024 | Computation and Language |
Counter-intuitive: Large Language Models Can Better Understand Knowledge
Graphs Than We Thought | Although the method of enhancing large language models' (LLMs') reasoning
ability and reducing their hallucinations through the use of knowledge graphs
(KGs) has received widespread attention, the exploration of how to enable LLMs
to integrate the structured knowledge in KGs on-the-fly remains inadequate.
Researchers often co-train KG embeddings and LLM parameters to equip LLMs with
the ability of comprehending KG knowledge. However, this resource-hungry
training paradigm significantly increases the model learning cost and is also
unsuitable for non-open-source, black-box LLMs. In this paper, we employ
complex question answering (CQA) as a task to assess the LLM's ability of
comprehending KG knowledge. We conducted a comprehensive comparison of KG
knowledge injection methods (from triples to natural language text), aiming to
explore the optimal prompting method for supplying KG knowledge to LLMs,
thereby enhancing their comprehension of KG. Contrary to our initial
expectations, our analysis revealed that LLMs effectively handle messy, noisy,
and linearized KG knowledge, outperforming methods that employ well-designed
natural language (NL) textual prompts. This counter-intuitive finding provides
substantial insights for future research on LLMs' comprehension of structured
knowledge.
| 2,024 | Computation and Language |
Question Answering Over Spatio-Temporal Knowledge Graph | Spatio-temporal knowledge graphs (STKGs) extend the concept of knowledge
graphs (KGs) by incorporating time and location information. While the research
community has focused on Knowledge Graph Question Answering (KGQA), answering
questions that incorporate both spatial and temporal information based on
STKGs remains largely unexplored. Furthermore, a lack of comprehensive datasets
has hindered progress in this area. To address this issue, we present
STQAD, a dataset comprising 10,000 natural language questions for
spatio-temporal knowledge graph question answering (STKGQA). Unfortunately,
various state-of-the-art KGQA approaches fall far short of achieving
satisfactory performance on our dataset. In response, we propose STCQA, a new
spatio-temporal KGQA approach that utilizes a novel STKG embedding method named
STComplEx. By extracting temporal and spatial information from a question, our
QA model can better comprehend the question and retrieve accurate answers from
the STKG. Through extensive experiments, we demonstrate the quality of our
dataset and the effectiveness of our STKGQA method.
| 2,024 | Computation and Language |
KMMLU: Measuring Massive Multitask Language Understanding in Korean | We propose KMMLU, a new Korean benchmark with 35,030 expert-level
multiple-choice questions across 45 subjects ranging from humanities to STEM.
Unlike previous Korean benchmarks that are translated from existing English
benchmarks, KMMLU is collected from original Korean exams, capturing linguistic
and cultural aspects of the Korean language. We test 26 publicly available
and proprietary LLMs, identifying significant room for improvement. The best
publicly available model achieves 50.54% on KMMLU, far below the average human
performance of 62.6%. This model was primarily trained for English and Chinese,
not Korean. Current LLMs tailored to Korean, such as Polyglot-Ko, perform far
worse. Surprisingly, even the most capable proprietary LLMs, e.g., GPT-4 and
HyperCLOVA X, achieve 59.95% and 53.40%, respectively. This suggests that
further work is needed to improve Korean LLMs, and KMMLU offers the right tool
to track this progress. We make our dataset publicly available on the Hugging
Face Hub and integrate the benchmark into EleutherAI's Language Model
Evaluation Harness.
| 2,024 | Computation and Language |
Syntactic Language Change in English and German: Metrics, Parsers, and
Convergences | Many studies have shown that human languages tend to optimize for lower
complexity and increased communication efficiency. Syntactic dependency
distance, which measures the linear distance between dependent words, is often
considered a key indicator of language processing difficulty and working memory
load. The current paper looks at diachronic trends in syntactic language change
in both English and German, using corpora of parliamentary debates from the
last c. 160 years. We base our observations on five dependency parsers,
including the widely used Stanford CoreNLP as well as 4 newer alternatives. Our
analysis of syntactic language change goes beyond linear dependency distance
and explores 15 metrics relevant to dependency distance minimization (DDM)
and/or based on tree graph properties, such as the tree height and degree
variance. Even though we have evidence that recent parsers trained on modern
treebanks are not heavily affected by data 'noise' such as spelling changes and
OCR errors in our historic data, we find that results of syntactic language
change are sensitive to the parsers involved, which is a caution against using
a single parser for evaluating syntactic language change as done in previous
work. We also show that syntactic language change over the time period
investigated is largely similar between English and German across the different
metrics explored: only 4% of cases we examine yield opposite conclusions
regarding upward and downward trends of syntactic metrics across German and
English. We also show that changes in syntactic measures seem to be more
frequent at the tails of sentence length distributions. To the best of our knowledge,
ours is the most comprehensive analysis of syntactic language change using modern NLP
technology in recent corpora of English and German.
| 2,024 | Computation and Language |
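Two of the syntactic metrics mentioned above, mean linear dependency distance and tree height, can be computed directly from a parse's head indices, as in this parser-agnostic sketch (the example sentence and head assignments are illustrative).

```python
# A sentence is represented as a list of head indices, where heads[i] is the
# 1-based index of token i+1's head and 0 marks the root.
def mean_dependency_distance(heads):
    dists = [abs(head - (i + 1)) for i, head in enumerate(heads) if head != 0]
    return sum(dists) / len(dists) if dists else 0.0

def tree_height(heads):
    def depth(node):            # node is a 1-based token index
        h = heads[node - 1]
        return 0 if h == 0 else 1 + depth(h)
    return max(depth(i + 1) for i in range(len(heads)))

# "She quickly read the book": heads = [3, 3, 0, 5, 3]
heads = [3, 3, 0, 5, 3]
print(mean_dependency_distance(heads))  # 1.5
print(tree_height(heads))               # 2
```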
LongAgent: Scaling Language Models to 128k Context through Multi-Agent
Collaboration | Large language models (LLMs) have demonstrated impressive performance in
understanding language and executing complex reasoning tasks. However, LLMs
with long context windows have been notorious for their expensive training
costs and high inference latency. Even the most advanced models such as GPT-4
and Claude2 often make mistakes when processing inputs of over $100k$ tokens, a
phenomenon also known as \textit{lost in the middle}. In this paper, we propose
\textsc{LongAgent}, a method based on multi-agent collaboration, which scales
LLMs (e.g., LLaMA) to a context of 128K and demonstrates potential superiority
in long-text processing compared to GPT-4. In \textsc{LongAgent}, a leader is
responsible for understanding user intent and directing team members to acquire
information from documents. Due to members' hallucinations, it is non-trivial
for a leader to obtain accurate information from the responses of dozens to
hundreds of members. To address this, we develop an \textit{inter-member
communication} mechanism to resolve response conflicts caused by hallucinations
through information sharing. Our experimental results indicate that
\textsc{LongAgent} offers a promising alternative for long-text processing. The
agent team instantiated with LLaMA-7B achieves significant improvements in
tasks such as 128k-long text retrieval and multi-hop question answering compared
to GPT-4.
| 2,024 | Computation and Language |
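A highly simplified sketch of the leader-member pattern described above: the leader chunks the document, queries member agents, and aggregates their answers by majority vote as a crude stand-in for the paper's inter-member communication mechanism. The `ask_member` function is a hypothetical stub.

```python
from collections import Counter

# stub: replace with a call to a short-context LLM restricted to this chunk
def ask_member(chunk: str, question: str) -> str:
    return "Dr. Lee" if "Dr. Lee" in chunk else "unknown"

def longagent_answer(document: str, question: str, chunk_size: int = 4000) -> str:
    # the leader splits the document and gathers member answers chunk by chunk
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    answers = [ask_member(c, question) for c in chunks]
    informative = [a for a in answers if a != "unknown"]
    # conflict resolution by majority vote over informative member responses
    return Counter(informative).most_common(1)[0][0] if informative else "unknown"

doc = "Filler text. " * 2000 + "The annual report was written by Dr. Lee. " + "More filler. " * 2000
print(longagent_answer(doc, "Who wrote the report?"))   # Dr. Lee
```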
Cobra Effect in Reference-Free Image Captioning Metrics | Evaluating the compatibility between textual descriptions and corresponding
images represents a core endeavor within multi-modal research. In recent years,
a proliferation of reference-free methods, leveraging visual-language
pre-trained models (VLMs), has emerged. Empirical evidence has substantiated
that these innovative approaches exhibit a higher correlation with human
judgment, marking a significant advancement in the field. However, does a
higher correlation with human evaluations alone sufficiently denote the
completeness of a metric? In response to this question, in this paper, we study whether
there are any deficiencies in reference-free metrics. Specifically, inspired by
the Cobra Effect, we utilize metric scores as rewards to direct the captioning
model toward generating descriptions that closely align with the metric's
criteria. If a certain metric has flaws, it will be exploited by the model and
reflected in the generated sentences. Our findings reveal that descriptions
guided by these metrics contain significant flaws, e.g. incoherent statements
and excessive repetition. Subsequently, we propose a novel method termed
Self-Improving to rectify the identified shortcomings within these metrics. We
employ GPT-4V as an evaluative tool to assess generated sentences and the
result reveals that our approach achieves state-of-the-art (SOTA) performance.
In addition, we also introduce a challenging evaluation benchmark called Flaws
Caption to evaluate reference-free image captioning metrics comprehensively.
Our code is available at
https://github.com/aaronma2020/robust_captioning_metric
| 2,024 | Computation and Language |
BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval
Augmented Long-Context Large Language Models | Large language models (LLMs) call for extension of context to handle many
critical applications. However, the existing approaches are prone to expensive
costs and inferior quality of context extension. In this work, we
propose Extensible Embedding, which realizes high-quality extension of the LLM's
context with strong flexibility and cost-effectiveness. An extensible embedding
stands as an enhancement of the typical token embedding, representing the
information for an extensible scope of context instead of a single token. By
leveraging such compact input units of higher information density, the LLM can
access a vast scope of context even with a small context window. Extensible
embedding is systematically optimized in architecture and training method,
which leads to multiple advantages. 1) High flexibility of context extension,
which flexibly supports ad-hoc extension of diverse context lengths. 2) Strong
sample efficiency of training, which enables the embedding model to be learned
in a cost-effective way. 3) Superior compatibility with the existing LLMs,
where the extensible embedding can be seamlessly introduced as a plug-in
component. Comprehensive evaluations on long-context language modeling and
understanding tasks verify extensible embedding as an effective, efficient,
flexible, and compatible method to extend the LLM's context.
| 2,024 | Computation and Language |
Extensible Embedding: A Flexible Multiplier For LLM's Context Length | Large language models (LLMs) call for extension of context to handle many
critical applications. However, the existing approaches are prone to expensive
costs and inferior quality of context extension. In this work, we propose
Extensible Embedding, which realizes high-quality extension of LLM's context
with strong flexibility and cost-effectiveness. Extensible embedding stand as
an enhancement of typical token embedding, which represents the information for
an extensible scope of context instead of a single token. By leveraging such
compact input units of higher information density, the LLM can access to a vast
scope of context even with a small context window. Extensible embedding is
systematically optimized in architecture and training method, which leads to
multiple advantages. 1) High flexibility of context extension, which flexibly
supports ad-hoc extension of diverse context lengths. 2) Strong sample
efficiency of training, which enables the embedding model to be learned in a
cost-effective way. 3) Superior compatibility with the existing LLMs, where the
extensible embedding can be seamlessly introduced as a plug-in component.
Comprehensive evaluations on long-context language modeling and understanding
tasks verify extensible embedding as an effective, efficient, flexible, and
compatible method to extend the LLM's context.
| 2,024 | Computation and Language |
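As a toy illustration of the compression idea shared by the two abstracts above, the sketch below pools each chunk of token embeddings into a single compact embedding so a small context window can cover a longer input; the pooling-plus-projection architecture is an assumption for illustration, not the proposed model.

```python
import torch
import torch.nn as nn

class ChunkCompressor(nn.Module):
    """Compress each chunk of token embeddings into one 'extensible' embedding."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, token_embs: torch.Tensor, chunk_size: int) -> torch.Tensor:
        # token_embs: (seq_len, d_model); pad so the sequence splits into chunks
        seq_len, d = token_embs.shape
        pad = (-seq_len) % chunk_size
        if pad:
            token_embs = torch.cat([token_embs, token_embs.new_zeros(pad, d)])
        chunks = token_embs.view(-1, chunk_size, d)       # (n_chunks, chunk, d)
        return self.proj(chunks.mean(dim=1))              # one embedding per chunk

compressor = ChunkCompressor(d_model=64)
long_input = torch.randn(1000, 64)                        # 1000 "tokens"
compact = compressor(long_input, chunk_size=8)
print(compact.shape)                                       # torch.Size([125, 64])
```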
Multi-Task Inference: Can Large Language Models Follow Multiple
Instructions at Once? | Large language models (LLMs) are typically prompted to follow a single
instruction per inference call. In this work, we analyze whether LLMs also hold
the capability to handle multiple instructions simultaneously, denoted as
Multi-Task Inference. For this purpose, we introduce the MTI Bench(Multi-Task
Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000
instances across 25 tasks. Each task in the MTI Bench involves 2 to 3
sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces
the total inference time by a factor of 1.46 on average since it does not require
multiple inference calls. Interestingly, contrary to the expectation that LLMs
would perform better when tasks are divided, we find that state-of-the-art
LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved
performance with Multi-Task Inference compared to Single-Task Inference on the
MTI Bench. We release the MTI Bench dataset and our code at this link
https://github.com/guijinSON/MTI-Bench.
| 2,024 | Computation and Language |
Metric-Learning Encoding Models Identify Processing Profiles of
Linguistic Features in BERT's Representations | We introduce Metric-Learning Encoding Models (MLEMs) as a new approach to
understand how neural systems represent the theoretical features of the objects
they process. As a proof-of-concept, we apply MLEMs to neural representations
extracted from BERT, and track a wide variety of linguistic features (e.g.,
tense, subject person, clause type, clause embedding). We find that: (1)
linguistic features are ordered: they separate representations of sentences to
different degrees in different layers; (2) neural representations are organized
hierarchically: in some layers, we find clusters of representations nested
within larger clusters, following successively important linguistic features;
(3) linguistic features are disentangled in middle layers: distinct, selective
units are activated by distinct linguistic features. Methodologically, MLEMs
are superior (4) to multivariate decoding methods, being more robust to type-I
errors, and (5) to univariate encoding methods, in being able to predict both
local and distributed representations. Together, this demonstrates the utility
of Metric-Learning Encoding Models for studying how linguistic features are
neurally encoded in language models and the advantage of MLEMs over traditional
methods. MLEMs can be extended to other domains (e.g. vision) and to other
neural systems, such as the human brain.
| 2,024 | Computation and Language |
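A small illustration of the metric-learning encoding idea above: learn one weight per linguistic feature so that a weighted feature-mismatch distance between sentence pairs predicts the distance between their neural representations. The features and representations below are synthetic stand-ins, not BERT states.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sent, n_feat, d_repr = 40, 5, 16
features = rng.integers(0, 2, size=(n_sent, n_feat))      # binary feature values
reprs = rng.normal(size=(n_sent, d_repr))                  # stand-in layer states

pairs = [(i, j) for i in range(n_sent) for j in range(i + 1, n_sent)]
# design matrix: one row per sentence pair, one column per feature mismatch
X = np.array([(features[i] != features[j]).astype(float) for i, j in pairs])
y = np.array([np.linalg.norm(reprs[i] - reprs[j]) for i, j in pairs])

# least-squares fit; large weights mark features the representations separate
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(enumerate(np.round(weights, 3))))
```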
Decoding News Narratives: A Critical Analysis of Large Language Models
in Framing Bias Detection | This work contributes to the expanding research on the applicability of LLMs
in social sciences by examining the performance of GPT-3.5 Turbo, GPT-4, and
Flan-T5 models in detecting framing bias in news headlines through zero-shot,
few-shot, and explainable prompting methods. A key insight from our evaluation
is the notable efficacy of explainable prompting in enhancing the reliability
of these models, highlighting the importance of explainable settings for social
science research on framing bias. GPT-4, in particular, demonstrated enhanced
performance in few-shot scenarios when presented with a range of relevant,
in-domain examples. FLAN-T5's poor performance indicates that smaller models
may require additional task-specific fine-tuning for framing bias
detection. Our study also found that models, particularly GPT-4, often
misinterpret emotional language as an indicator of framing bias, underscoring
the challenge of distinguishing between reporting genuine emotional expression
and the intentional use of framing bias in news headlines. We further evaluated the
models on two subsets of headlines where the presence or absence of framing
bias was either clear-cut or more contested, with the results suggesting that
these models can be useful in flagging potential annotation inaccuracies
within existing or new datasets. Finally, the study evaluates the models in
real-world conditions ("in the wild"), moving beyond the initial dataset
focused on U.S. Gun Violence, assessing the models' performance on framed
headlines covering a broad range of topics.
| 2,024 | Computation and Language |
SpeCrawler: Generating OpenAPI Specifications from API Documentation
Using Large Language Models | In the digital era, the widespread use of APIs is evident. However, scalable
utilization of APIs poses a challenge due to structure divergence observed in
online API documentation. This underscores the need for automatic tools to
facilitate API consumption. A viable approach involves the conversion of
documentation into an API Specification format. While previous attempts have
been made using rule-based methods, these approaches encountered difficulties
in generalizing across diverse documentation. In this paper we introduce
SpeCrawler, a comprehensive system that utilizes large language models (LLMs)
to generate OpenAPI Specifications from diverse API documentation through a
carefully crafted pipeline. By creating a standardized format for numerous
APIs, SpeCrawler aids in streamlining integration processes within API
orchestrating systems and facilitating the incorporation of tools into LLMs.
The paper explores SpeCrawler's methodology, supported by empirical evidence
and case studies, demonstrating its efficacy through LLM capabilities.
| 2,024 | Computation and Language |
Metacognitive Retrieval-Augmented Large Language Models | Retrieval-augmented generation has become central in natural language
processing due to their efficacy in generating factual content. While
traditional methods employ single-time retrieval, more recent approaches have
shifted towards multi-time retrieval for multi-hop reasoning tasks. However,
these strategies are bound by predefined reasoning steps, potentially leading
to inaccuracies in response generation. This paper introduces MetaRAG, an
approach that combines the retrieval-augmented generation process with
metacognition. Drawing from cognitive psychology, metacognition allows an
entity to self-reflect and critically evaluate its cognitive processes. By
integrating this, MetaRAG enables the model to monitor, evaluate, and plan its
response strategies, enhancing its introspective reasoning abilities. Through a
three-step metacognitive regulation pipeline, the model can identify
inadequacies in initial cognitive responses and fix them. Empirical
evaluations show that MetaRAG significantly outperforms existing methods.
| 2,024 | Computation and Language |
Self-seeding and Multi-intent Self-instructing LLMs for Generating
Intent-aware Information-Seeking dialogs | Identifying user intents in information-seeking dialogs is crucial for a
system to meet users' information needs. Intent prediction (IP) is challenging
and demands sufficient dialogs with human-labeled intents for training.
However, manually annotating intents is resource-intensive. While large
language models (LLMs) have been shown to be effective in generating synthetic
data, there is no study on using LLMs to generate intent-aware
information-seeking dialogs. In this paper, we focus on leveraging LLMs for
zero-shot generation of large-scale, open-domain, and intent-aware
information-seeking dialogs. We propose SOLID, which has novel self-seeding and
multi-intent self-instructing schemes. The former improves the generation
quality by using the LLM's own knowledge scope to initiate dialog generation;
the latter prompts the LLM to generate utterances sequentially, and mitigates
the need for manual prompt design by asking the LLM to autonomously adapt its
prompt instruction when generating complex multi-intent utterances.
Furthermore, we propose SOLID-RL, which is further trained to generate a dialog
in one step on the data generated by SOLID. We propose a length-based quality
estimation mechanism to assign varying weights to SOLID-generated dialogs based
on their quality during the training process of SOLID-RL. We use SOLID and
SOLID-RL to generate more than 300k intent-aware dialogs, surpassing the size
of existing datasets. Experiments show that IP methods trained on dialogs
generated by SOLID and SOLID-RL achieve better IP quality than ones trained on
human-generated dialogs.
| 2,024 | Computation and Language |
Stumbling Blocks: Stress Testing the Robustness of Machine-Generated
Text Detectors Under Attacks | The widespread use of large language models (LLMs) is increasing the demand
for methods that detect machine-generated text to prevent misuse. The goal of
our study is to stress test the detectors' robustness to malicious attacks
under realistic scenarios. We comprehensively study the robustness of popular
machine-generated text detectors under attacks from diverse categories:
editing, paraphrasing, prompting, and co-generating. Our attacks assume limited
access to the generator LLMs, and we compare the performance of detectors on
different attacks under different budget levels. Our experiments reveal that
almost none of the existing detectors remain robust under all the attacks, and
all detectors exhibit different loopholes. Averaging all detectors, the
performance drops by 35% across all attacks. Further, we investigate the
reasons behind these defects and propose initial out-of-the-box patches to
improve robustness.
| 2,024 | Computation and Language |
Learning From Failure: Integrating Negative Examples when Fine-tuning
Large Language Models as Agents | Large language models (LLMs) have achieved success in acting as agents, which
interact with environments through tools like search engines. However, LLMs are
not optimized specifically for tool use during training or alignment, limiting
their effectiveness as agents. To resolve this problem, previous work has
collected interaction trajectories between GPT-4 and environments, and
fine-tuned smaller models with them. As part of this, the standard approach has
been to simply discard trajectories that do not finish the task successfully,
which, on the one hand, leads to a significant waste of data and resources, and
on the other hand, has the potential to limit the possible optimization paths
during fine-tuning. In this paper, we contend that large language models can
learn from failures through appropriate data cleaning and fine-tuning
strategies. We conduct experiments on mathematical reasoning, multi-hop
question answering, and strategic question answering tasks. Experimental
results demonstrate that compared to solely using positive examples,
incorporating negative examples enhances model performance by a large margin.
| 2,024 | Computation and Language |
Competition of Mechanisms: Tracing How Language Models Handle Facts and
Counterfactuals | Interpretability research aims to bridge the gap between the empirical
success and our scientific understanding of the inner workings of large
language models (LLMs). However, most existing research in this area has focused on
analyzing a single mechanism, such as how models copy or recall factual
knowledge. In this work, we propose the formulation of competition of
mechanisms, which instead of individual mechanisms focuses on the interplay of
multiple mechanisms, and traces how one of them becomes dominant in the final
prediction. We uncover how and where the competition of mechanisms happens
within LLMs using two interpretability methods, logit inspection and attention
modification. Our findings show traces of the mechanisms and their competition
across various model components, and reveal attention positions that
effectively control the strength of certain mechanisms. Our code and data are
at https://github.com/francescortu/Competition_of_Mechanisms.
| 2,024 | Computation and Language |
Autocorrect for Estonian texts: final report from project EKTB25 | The project was funded in 2021-2023 by the National Programme of Estonian
Language Technology. Its main aim was to develop spelling and grammar
correction tools for the Estonian language. The main challenge was the very
small amount of available error correction data needed for such development. To
mitigate this, (1) we annotated more correction data for model training and
testing, (2) we tested transfer-learning, i.e. retraining machine learning
models created for other tasks, so as not to depend solely on correction data,
(3) we compared the developed method and model with alternatives, including
large language models. We also developed automatic evaluation, which can
calculate the accuracy and yield of corrections by error category, so that the
effectiveness of different methods can be compared in detail.
There has been a breakthrough in large language models during the project:
GPT4, a commercial language model with Estonian-language support, has been
created. We took into account the existence of the model when adjusting plans
and in the report we present a comparison with GPT4's ability to correct
Estonian-language text.
The final results show that the approach we have developed provides better
scores than GPT4 and the result is usable but not entirely reliable yet. The
report also contains ideas on how GPT4 and other major language models can be
implemented in the future, focusing on open-source solutions.
All results of this project are open-data/open-source, with licenses that
allow them to be used for purposes including commercial ones.
| 2,024 | Computation and Language |
A Multi-Aspect Framework for Counter Narrative Evaluation using Large
Language Models | Counter narratives - informed responses to hate speech contexts designed to
refute hateful claims and de-escalate encounters - have emerged as an effective
hate speech intervention strategy. While previous work has proposed automatic
counter narrative generation methods to aid manual interventions, the
evaluation of these approaches remains underdeveloped. Previous automatic
metrics for counter narrative evaluation lack alignment with human judgment as
they rely on superficial reference comparisons instead of incorporating key
aspects of counter narrative quality as evaluation criteria. To address prior
evaluation limitations, we propose a novel evaluation framework prompting LLMs
to provide scores and feedback for generated counter narrative candidates using
5 defined aspects derived from guidelines from counter narrative specialized
NGOs. We found that LLM evaluators achieve strong alignment to human-annotated
scores and feedback and outperform alternative metrics, indicating their
potential as multi-aspect, reference-free and interpretable evaluators for
counter narrative evaluation.
| 2,024 | Computation and Language |
Opening the black box of language acquisition | Recent advances in large language models using deep learning techniques have
renewed interest in how languages can be learned from data. However, it is
unclear whether or how these models represent grammatical information from the
learned languages. In addition, the models must be pre-trained on large corpora
before they can be used. In this work, we propose an alternative, more
transparent and cognitively plausible architecture for learning language.
Instead of using deep learning, our approach uses a minimal cognitive
architecture based on sequence memory and chunking. The learning mechanism is
based on the principles of reinforcement learning. We test our architecture on
a number of natural-like toy languages. Results show that the model can learn
these artificial languages from scratch and extract grammatical information
that supports learning. Our study demonstrates the power of this simple
architecture and stresses the importance of sequence memory as a key component
of the language learning process. Since other animals do not seem to have a
faithful sequence memory, this may explain why only humans have developed
complex languages.
| 2,024 | Computation and Language |
One Prompt To Rule Them All: LLMs for Opinion Summary Evaluation | Evaluation of opinion summaries using conventional reference-based metrics
rarely provides a holistic evaluation and has been shown to have a relatively
low correlation with human judgments. Recent studies suggest using Large
Language Models (LLMs) as reference-free metrics for NLG evaluation, however,
they remain unexplored for opinion summary evaluation. Moreover, limited
opinion summary evaluation datasets inhibit progress. To address this, we
release the SUMMEVAL-OP dataset covering 7 dimensions related to the evaluation
of opinion summaries: fluency, coherence, relevance, faithfulness, aspect
coverage, sentiment consistency, and specificity. We investigate Op-I-Prompt, a
dimension-independent prompt, and Op-Prompts, a dimension-dependent set of
prompts for opinion summary evaluation. Experiments indicate that Op-I-Prompt
emerges as a good alternative for evaluating opinion summaries achieving an
average Spearman correlation of 0.70 with humans, outperforming all previous
approaches. To the best of our knowledge, we are the first to investigate LLMs
as evaluators on both closed-source and open-source models in the opinion
summarization domain.
| 2,024 | Computation and Language |
ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language
Model | Recent advancements in Large Vision-Language Models (LVLMs) have enabled
processing of multimodal inputs in language models but require significant
computational resources for deployment, especially in edge devices. This study
aims to bridge the performance gap between traditional-scale LVLMs and
resource-friendly lite versions by adopting high-quality training data. To do
this, a synthetic dataset is created by leveraging GPT-4V's ability to generate
detailed captions, complex reasoning instructions and detailed answers from
images. The resulting model trained with our data, ALLaVA, achieves performance
on 12 benchmarks competitive with LVLMs of up to 3B parameters. This work highlights the
feasibility of adopting high-quality data in crafting more efficient LVLMs. Our
online demo is available at \url{https://allava.freedomai.cn}.
| 2,024 | Computation and Language |
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | Despite vision-language models' (VLMs) remarkable capabilities as versatile
visual assistants, two substantial challenges persist within the existing VLM
frameworks: (1) lacking task diversity in pretraining and visual instruction
tuning, and (2) annotation error and bias in GPT-4 synthesized instruction
tuning data. Both challenges lead to issues such as poor generalizability,
hallucination, and catastrophic forgetting. To address these challenges, we
construct Vision-Flan, the most diverse publicly available visual instruction
tuning dataset to date, comprising 187 diverse tasks and 1,664,261 instances
sourced from academic datasets, and each task is accompanied by an
expert-written instruction. In addition, we propose a two-stage instruction
tuning framework, in which VLMs are first fine-tuned on Vision-Flan and
further tuned on GPT-4 synthesized data. We find this two-stage tuning
framework significantly outperforms the traditional single-stage visual
instruction tuning framework and achieves the state-of-the-art performance
across a wide range of multi-modal evaluation benchmarks. Finally, we conduct
in-depth analyses to understand visual instruction tuning and our findings
reveal that: (1) GPT-4 synthesized data does not substantially enhance VLMs'
capabilities but rather modulates the model's responses to human-preferred
formats; (2) A minimal quantity (e.g., 1,000) of GPT-4 synthesized data can
effectively align VLM responses with human preference; (3) Visual instruction
tuning mainly helps large-language models (LLMs) to understand visual features.
| 2,024 | Computation and Language |
Why Lift so Heavy? Slimming Large Language Models by Cutting Off the
Layers | Large Language Models (LLMs) possess outstanding capabilities in addressing
various natural language processing (NLP) tasks. However, the sheer size of
these models poses challenges in terms of storage, training and inference due
to the inclusion of billions of parameters through layer stacking. While
traditional approaches such as model pruning or distillation offer ways for
reducing model size, they often come at the expense of performance retention.
In our investigation, we systematically explore the approach of reducing the
number of layers in LLMs. Surprisingly, we observe that even with fewer layers,
LLMs maintain similar or better performance levels, particularly in
prompt-based fine-tuning for text classification tasks. Remarkably, in certain
cases, models with a single layer outperform their fully layered counterparts.
These findings offer valuable insights for future work aimed at mitigating the
size constraints of LLMs while preserving their performance, thereby opening
avenues for significantly more efficient use of LLMs.
| 2,024 | Computation and Language |
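A minimal sketch of the layer-cutting idea reported in the entry above, assuming the Hugging Face `transformers` GPT-2 implementation (randomly initialized here so the snippet runs without downloads); the truncation point `keep_layers` is an illustrative choice, not the paper's setting.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Build a small, randomly initialized GPT-2 so the sketch runs without downloads.
config = GPT2Config(n_layer=12, n_head=12, n_embd=768)
model = GPT2LMHeadModel(config)

keep_layers = 3  # illustrative: keep only the first few transformer blocks
model.transformer.h = torch.nn.ModuleList(list(model.transformer.h)[:keep_layers])
model.config.n_layer = keep_layers

# The truncated model keeps the usual interface and could then be fine-tuned,
# e.g. with prompt-based fine-tuning for text classification.
input_ids = torch.randint(0, config.vocab_size, (1, 16))
with torch.no_grad():
    logits = model(input_ids).logits
print(logits.shape)  # (1, 16, vocab_size)
```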
GNNavi: Navigating the Information Flow in Large Language Models by
Graph Neural Network | Large Language Models (LLMs) exhibit strong In-Context Learning (ICL)
capabilities when prompts with demonstrations are applied to them. However,
fine-tuning still remains crucial to further enhance their adaptability.
Prompt-based fine-tuning proves to be an effective fine-tuning method in
low-data scenarios, but high demands on computing resources limit its
practicality. We address this issue by introducing GNNavi, a prompt-based
parameter-efficient fine-tuning (PEFT) approach. GNNavi leverages insights into
ICL's information flow dynamics, which indicate that label words act in
prompts as anchors for information propagation. GNNavi employs a Graph Neural
Network (GNN) layer to precisely guide the aggregation and distribution of
information flow during the processing of prompts by hardwiring the desired
information flow into the GNN. Our experiments on text classification tasks
with GPT-2 and Llama2 show that GNNavi surpasses standard prompt-based fine-tuning
methods in few-shot settings by updating just 0.2% to 0.5% of parameters. We
compare GNNavi with prevalent PEFT approaches, such as prefix tuning, LoRA and
Adapter in terms of performance and efficiency. Our analysis reveals that
GNNavi enhances information flow and ensures a clear aggregation process.
| 2,024 | Computation and Language |
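A toy sketch of the mechanism described in the entry above: a single graph layer over the prompt's hidden states whose hand-wired adjacency routes information from demonstration tokens into their label-word anchors and from the anchors into the final prediction position. The adjacency pattern, positions, and dimensions are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PromptGraphLayer(nn.Module):
    """One round of masked aggregation: H' = H + row_norm(A) @ proj(H)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, hidden, adj):
        # Row-normalize the adjacency so each node averages over its sources.
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return hidden + adj @ self.proj(hidden)

seq_len, hidden_size = 10, 32
hidden = torch.randn(seq_len, hidden_size)   # hidden states of one prompt
label_positions = [3, 7]                     # label words of two demonstrations
target_position = 9                          # position producing the prediction

# Hand-wired flow: each label word aggregates its demonstration span,
# and the target position aggregates only from the label-word anchors.
adj = torch.eye(seq_len)
adj[3, 0:3] = 1.0
adj[7, 4:7] = 1.0
adj[target_position, label_positions] = 1.0

layer = PromptGraphLayer(hidden_size)
out = layer(hidden, adj)
print(out.shape)  # torch.Size([10, 32])
```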
A Note on Bias to Complete | Minimizing social bias strengthens societal bonds, promoting shared
understanding and better decision-making. We revisit the definition of bias by
discovering new bias types (e.g., societal status) in dynamic environments and
describe them relative to context, such as culture, region, time, and personal
background. Our framework includes eight hypotheses about bias and a
bias-minimization strategy for each hypothesis, as well as five methods proposed
as solutions for LLMs. The realization of the framework is yet to be completed.
| 2,024 | Computation and Language |
MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement
Learning for Discrete Prompt Optimization | RL-based techniques can be used to search for prompts that, when fed into a
target language model, maximize a set of user-specified reward functions.
However, in many target applications, the natural reward functions are in
tension with one another -- for example, content preservation vs. style
matching in style transfer tasks. Current techniques focus on maximizing the
average of reward functions, which does not necessarily lead to prompts that
achieve balance across rewards -- an issue that has been well-studied in the
multi-objective and robust optimization literature. In this paper, we adapt
several techniques for multi-objective optimization to RL-based discrete prompt
optimization -- two that consider volume of the Pareto reward surface, and
another that chooses an update direction that benefits all rewards
simultaneously. We conduct an empirical analysis of these methods on two NLP
tasks: style transfer and machine translation, each using three competing
reward functions. Our experiments demonstrate that multi-objective methods that
directly optimize volume perform better and achieve a better balance of all
rewards than those that attempt to find monotonic update directions.
| 2,024 | Computation and Language |
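To make the contrast in the entry above concrete, here is a small numeric sketch of three ways to collapse competing rewards into one training signal: the plain average criticized above, a min-style criterion that only credits improvements benefiting all rewards, and a (hyper)volume-style product. The reward values are made up purely for illustration.

```python
import numpy as np

def average_reward(rewards):
    return float(np.mean(rewards))

def worst_case_reward(rewards):
    # Improving this objective requires improving the currently weakest reward,
    # so updates tend to benefit all rewards rather than trade them off.
    return float(np.min(rewards))

def volume_reward(rewards, reference=0.0):
    # Product of gains over a reference point: a simple analogue of the
    # dominated (hyper)volume of the Pareto reward surface.
    return float(np.prod(np.clip(np.asarray(rewards) - reference, 0.0, None)))

balanced = [0.55, 0.50]   # e.g., content preservation vs. style match
lopsided = [0.90, 0.15]

for name, r in [("balanced", balanced), ("lopsided", lopsided)]:
    print(name, average_reward(r), worst_case_reward(r), volume_reward(r))
# The average cannot distinguish the two prompts (0.525 vs 0.525),
# while the min and volume criteria clearly prefer the balanced one.
```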
Modelling Political Coalition Negotiations Using LLM-based Agents | Coalition negotiations are a cornerstone of parliamentary democracies,
characterised by complex interactions and strategic communications among
political parties. Despite its significance, the modelling of these
negotiations has remained unexplored within the domain of Natural Language
Processing (NLP), mostly due to a lack of suitable data. In this paper, we
introduce coalition negotiations as a novel NLP task, and model it as a
negotiation between large language model-based agents. We introduce a
multilingual dataset, POLCA, comprising manifestos of European political
parties and coalition agreements over a number of elections in these countries.
This dataset addresses the challenge of the current scope limitations in
political negotiation modelling by providing a diverse, real-world basis for
simulation. Additionally, we propose a hierarchical Markov decision process
designed to simulate the process of coalition negotiation between political
parties and predict the outcomes. We evaluate the performance of
state-of-the-art large language models (LLMs) as agents in handling coalition
negotiations, offering insights into their capabilities and paving the way for
future advancements in political modelling.
| 2,024 | Computation and Language |
How Susceptible are Large Language Models to Ideological Manipulation? | Large Language Models (LLMs) possess the potential to exert substantial
influence on public perceptions and interactions with information. This raises
concerns about the societal impact that could arise if the ideologies within
these models can be easily manipulated. In this work, we investigate how
effectively LLMs can learn and generalize ideological biases from their
instruction-tuning data. Our findings reveal a concerning vulnerability:
exposure to only a small amount of ideologically driven samples significantly
alters the ideology of LLMs. Notably, LLMs demonstrate a startling ability to
absorb ideology from one topic and generalize it to even unrelated ones. The
ease with which LLMs' ideologies can be skewed underscores the risks associated
with training data intentionally poisoned by malicious actors or biases
inadvertently introduced by data annotators. It also emphasizes the imperative for
robust safeguards to mitigate the influence of ideological manipulations on
LLMs.
| 2,024 | Computation and Language |
Numerical Claim Detection in Finance: A New Financial Dataset,
Weak-Supervision Model, and Market Analysis | In this paper, we investigate the influence of claims in analyst reports and
earnings calls on financial market returns, considering them as significant
quarterly events for publicly traded companies. To facilitate a comprehensive
analysis, we construct a new financial dataset for the claim detection task in
the financial domain. We benchmark various language models on this dataset and
propose a novel weak-supervision model that incorporates the knowledge of
subject matter experts (SMEs) in the aggregation function, outperforming
existing approaches. Furthermore, we demonstrate the practical utility of our
proposed model by constructing a novel measure, ``optimism". We also observe
that earnings surprise and return depend on our optimism measure. Our dataset,
models, and code will be made publicly available (under a CC BY 4.0 license) on
GitHub and Hugging Face.
| 2,024 | Computation and Language |
Machine-generated Text Localization | Machine-Generated Text (MGT) detection aims to identify a piece of text as
machine or human written. Prior work has primarily formulated MGT as a binary
classification task over an entire document, with limited work exploring cases
where only part of a document is machine generated. This paper provides the
first in-depth study of MGT that localizes the portions of a document that were
machine generated. Thus, if a bad actor were to change a key portion of a news
article to spread misinformation, whole document MGT detection may fail since
the vast majority is human written, but our approach can succeed due to its
granular approach. A key challenge in our MGT localization task is that short
spans of text, e.g., a single sentence, provide little information indicating
if it is machine generated due to its short length. To address this, we
leverage contextual information, where we predict whether multiple sentences
are machine or human written at once. This enables our approach to identify
changes in style or content to boost performance. A gain of 4-13% mean Average
Precision (mAP) over prior work demonstrates the effectiveness of our approach on
five diverse datasets: GoodNews, VisualNews, WikiText, Essay, and WP. We
release our implementation at
\href{https://github.com/Zhongping-Zhang/MGT_Localization}{this http URL}.
| 2,024 | Computation and Language |
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned
Language Models through Task Arithmetic | Aligned language models face a significant limitation as their fine-tuning
often results in compromised safety. To tackle this, we propose a simple method
RESTA that performs LLM safety realignment. RESTA stands for REstoring Safety
through Task Arithmetic. At its core, it involves a simple arithmetic addition
of a safety vector to the weights of the compromised model. We demonstrate the
effectiveness of RESTA in both parameter-efficient and full fine-tuning,
covering a wide range of downstream tasks, including instruction following in
Chinese, English, and Hindi, as well as problem-solving capabilities in Code
and Math. We also showcase the generalizability of RESTA on three existing
safety evaluation benchmarks and a multilingual benchmark dataset proposed as a
part of this work, consisting of 550 harmful questions covering 11 categories,
each with 5 sub-categories of harm. Overall, RESTA decreases the harmfulness of
the compromised model from 18.6% to 5.1% and from 9.2% to 1.5% in
parameter-efficient and full fine-tuning, respectively, while maintaining most
of the model's performance on the task. We release the source codes at:
https://github.com/declare-lab/resta.
| 2,024 | Computation and Language |
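A minimal sketch of the task-arithmetic step described in the entry above: compute a "safety vector" as the parameter difference between a safety-aligned checkpoint and its unaligned base, then add it to the weights of a fine-tuned (compromised) model. Tiny linear layers stand in for real checkpoints, and the scaling factor is an illustrative assumption rather than the paper's setting.

```python
import torch
import torch.nn as nn

def make_model(seed):
    torch.manual_seed(seed)
    return nn.Linear(8, 8)  # stand-in for an LLM checkpoint

base = make_model(0)          # unaligned base model
aligned = make_model(1)       # safety-aligned model
compromised = make_model(2)   # fine-tuned model whose safety degraded

# Safety vector: element-wise difference between aligned and base parameters.
safety_vector = {
    name: aligned.state_dict()[name] - base.state_dict()[name]
    for name in base.state_dict()
}

# Re-align by adding the safety vector back (optionally scaled).
scale = 1.0
restored_sd = {
    name: compromised.state_dict()[name] + scale * safety_vector[name]
    for name in safety_vector
}
compromised.load_state_dict(restored_sd)
print("re-aligned parameters:", sum(p.numel() for p in compromised.parameters()))
```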
In-Context Learning Demonstration Selection via Influence Analysis | Large Language Models (LLMs) have demonstrated their In-Context Learning
(ICL) capabilities, which provide an opportunity to perform few-shot learning
without any gradient update. Despite its multiple benefits, ICL generalization
performance is sensitive to the selected demonstrations. Selecting effective
demonstrations for ICL is still an open research challenge. To address this
challenge, we propose a demonstration selection method called InfICL which
analyzes influences of training samples through influence functions.
Identifying highly influential training samples can potentially aid in
uplifting the ICL generalization performance. To limit the running cost of
InfICL, we only employ the LLM to generate sample embeddings, and do not perform
any costly fine-tuning. We perform an empirical study on multiple real-world
datasets and show the merits of InfICL against state-of-the-art baselines.
| 2,024 | Computation and Language |
ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs | Safety is critical to the usage of large language models (LLMs). Multiple
techniques such as data filtering and supervised fine-tuning have been
developed to strengthen LLM safety. However, currently known techniques presume
that corpora used for safety alignment of LLMs are solely interpreted by
semantics. This assumption, however, does not hold in real-world applications,
which leads to severe vulnerabilities in LLMs. For example, users of forums
often use ASCII art, a form of text-based art, to convey image information. In
this paper, we propose a novel ASCII art-based jailbreak attack and introduce a
comprehensive benchmark Vision-in-Text Challenge (ViTC) to evaluate the
capabilities of LLMs in recognizing prompts that cannot be solely interpreted
by semantics. We show that five SOTA LLMs (GPT-3.5, GPT-4, Gemini, Claude, and
Llama2) struggle to recognize prompts provided in the form of ASCII art. Based
on this observation, we develop the jailbreak attack ArtPrompt, which leverages
the poor performance of LLMs in recognizing ASCII art to bypass safety measures
and elicit undesired behaviors from LLMs. ArtPrompt only requires black-box
access to the victim LLMs, making it a practical attack. We evaluate ArtPrompt
on five SOTA LLMs, and show that ArtPrompt can effectively and efficiently
induce undesired behaviors from all five LLMs.
| 2,024 | Computation and Language |
MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in
Generative LLMs | Generative Large Language Models (LLMs) are widely utilized for their
excellence in various tasks. However, their tendency to produce inaccurate or
misleading outputs poses a potential risk, particularly in high-stakes
environments. Therefore, estimating the correctness of generative LLM outputs
is an important task for enhanced reliability. Uncertainty Estimation (UE) in
generative LLMs is an evolving domain, where SOTA probability-based methods
commonly employ length-normalized scoring. In this work, we propose
Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized
scoring for UE methods. MARS is a novel scoring function that considers the
semantic contribution of each token in the generated sequence in the context of
the question. We demonstrate that integrating MARS into UE methods results in a
universal and significant improvement in UE performance. We conduct experiments
using three distinct closed-book question-answering datasets across five
popular pre-trained LLMs. Lastly, we validate the efficacy of MARS on a Medical
QA dataset. Code can be found at https://github.com/Ybakman/LLM_Uncertainity.
| 2,024 | Computation and Language |
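A small sketch contrasting the length-normalized scoring mentioned in the entry above with a meaning-aware variant that weights each token's log-probability by an importance weight. The weights here are hand-set for illustration; the paper derives them from each token's semantic contribution to answering the question.

```python
import numpy as np

def length_normalized_score(token_logprobs):
    return float(np.mean(token_logprobs))

def meaning_aware_score(token_logprobs, importance):
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()                      # importance weights form a distribution
    return float(np.dot(w, token_logprobs))

# "The capital of France is Paris" -- filler tokens vs. the answer-bearing token.
logprobs   = [-0.2, -0.1, -0.1, -0.3, -0.2, -2.5]
importance = [0.05, 0.05, 0.05, 0.05, 0.05, 0.75]  # hand-set: "Paris" matters most

print(length_normalized_score(logprobs))          # -0.567: dilutes the uncertain answer token
print(meaning_aware_score(logprobs, importance))  # -1.920: dominated by the token that matters
```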
ChatGPT Based Data Augmentation for Improved Parameter-Efficient
Debiasing of LLMs | Large Language models (LLMs), while powerful, exhibit harmful social biases.
Debiasing is often challenging due to computational costs, data constraints,
and potential degradation of multi-task language capabilities. This work
introduces a novel approach utilizing ChatGPT to generate synthetic training
data, aiming to enhance the debiasing of LLMs. We propose two strategies:
Targeted Prompting, which provides effective debiasing for known biases but
necessitates prior specification of bias in question; and General Prompting,
which, while slightly less effective, offers debiasing across various
categories. We leverage resource-efficient LLM debiasing using adapter tuning
and compare the effectiveness of our synthetic data to existing debiasing
datasets. Our results reveal that: (1) ChatGPT can efficiently produce
high-quality training data for debiasing other LLMs; (2) data produced via our
approach surpasses existing datasets in debiasing performance while also
preserving internal knowledge of a pre-trained LLM; and (3) synthetic data
exhibits generalizability across categories, effectively mitigating various
biases, including intersectional ones. These findings underscore the potential
of synthetic data in advancing the fairness of LLMs with minimal retraining
cost.
| 2,024 | Computation and Language |
Structured Chain-of-Thought Prompting for Few-Shot Generation of
Content-Grounded QA Conversations | We introduce a structured chain-of-thought (SCoT) prompting approach to
generating content-grounded multi-turn question-answer conversations using a
pre-trained large language model (LLM). At the core of our proposal is a
structured breakdown of the complex task into a number of states in a state
machine, so that actions corresponding to various subtasks, e.g., content
reading and utterance generation, can be executed in their own dedicated
states. Each state leverages a unique set of resources including prompts and
(optionally) additional tools to augment the generation process. Our
experimental results show that SCoT prompting with designated states for
hallucination mitigation increases agent faithfulness to grounding documents by
up to 16.8%. When used as training data, our open-domain conversations
synthesized from only 6 Wikipedia-based seed demonstrations train strong
conversational QA agents; in out-of-domain evaluation, for example, we observe
improvements of up to 13.9% over target domain gold data when the latter is
augmented with our generated examples.
| 2,024 | Computation and Language |
Uncovering Latent Human Wellbeing in Language Model Embeddings | Do language models implicitly learn a concept of human wellbeing? We explore
this through the ETHICS Utilitarianism task, assessing if scaling enhances
pretrained models' representations. Our initial finding reveals that, without
any prompt engineering or finetuning, the leading principal component from
OpenAI's text-embedding-ada-002 achieves 73.9% accuracy. This closely matches
the 74.6% of BERT-large finetuned on the entire ETHICS dataset, suggesting
pretraining conveys some understanding about human wellbeing. Next, we consider
four language model families, observing how Utilitarianism accuracy varies with
increased parameters. We find performance is nondecreasing with increased model
size when using sufficient numbers of principal components.
| 2,024 | Computation and Language |
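A sketch of the probing setup described in the entry above: fit PCA on embeddings of paired scenarios and use the projection onto the leading principal component to decide which scenario is more pleasant. Random vectors stand in for real `text-embedding-ada-002` outputs, so the printed accuracy is meaningless; only the procedure is illustrated.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pairs, dim = 200, 64

# Stand-in embeddings for (more pleasant, less pleasant) scenario pairs.
emb_a = rng.normal(size=(n_pairs, dim))
emb_b = rng.normal(size=(n_pairs, dim))

# Fit PCA on all scenario embeddings (no labels used).
pca = PCA(n_components=1).fit(np.vstack([emb_a, emb_b]))

# Score each scenario by its projection onto the first principal component;
# predict that the higher-scoring scenario in each pair is the more pleasant one.
score_a = pca.transform(emb_a)[:, 0]
score_b = pca.transform(emb_b)[:, 0]
pred = score_a > score_b

# With real embeddings, the sign of the component would be calibrated on a few
# labeled pairs; here all pairs are labeled "a is more pleasant" by construction.
accuracy = max(pred.mean(), 1 - pred.mean())
print(f"pairwise accuracy: {accuracy:.3f}")
```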
What Evidence Do Language Models Find Convincing? | Retrieval-augmented language models are being increasingly tasked with
subjective, contentious, and conflicting queries such as "is aspartame linked
to cancer". To resolve these ambiguous queries, one must search through a large
range of websites and consider "which, if any, of this evidence do I find
convincing?". In this work, we study how LLMs answer this question. In
particular, we construct ConflictingQA, a dataset that pairs controversial
queries with a series of real-world evidence documents that contain different
facts (e.g., quantitative results), argument styles (e.g., appeals to
authority), and answers (Yes or No). We use this dataset to perform sensitivity
and counterfactual analyses to explore which text features most affect LLM
predictions. Overall, we find that current models rely heavily on the relevance
of a website to the query, while largely ignoring stylistic features that
humans find important such as whether a text contains scientific references or
is written with a neutral tone. Taken together, these results highlight the
importance of RAG corpus quality (e.g., the need to filter misinformation), and
possibly even a shift in how LLMs are trained to better align with human
judgements.
| 2,024 | Computation and Language |
Unveiling the Magic: Investigating Attention Distillation in
Retrieval-augmented Generation | The retrieval-augmented generation framework can address the limitations of
large language models by enabling real-time knowledge updates for more accurate
answers. An efficient technique in the training phase of retrieval-augmented
models is attention distillation, which uses attention scores as a supervision signal
instead of manually annotated query-document pairs. Despite its growing
popularity, the detailed mechanisms behind the success of attention
distillation remain unexplored, particularly the specific patterns it leverages
to benefit training. In this paper, we address this gap by conducting a
comprehensive review of attention distillation workflow and identifying key
factors influencing the learning quality of retrieval-augmented language
models. We further propose indicators for optimizing models' training methods
and avoiding ineffective training.
| 2,024 | Computation and Language |
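A minimal sketch of the supervision signal discussed in the entry above: the retriever's document score distribution is pushed, via KL divergence, toward a target distribution derived from the reader's cross-attention mass on each retrieved passage. Both the scores and the attention mass are random stand-ins here.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_docs = 5

# Retriever similarity scores for the retrieved passages (to be trained).
retriever_scores = torch.randn(num_docs, requires_grad=True)

# Reader-side supervision: aggregate cross-attention each passage received,
# normalized into a distribution (random values stand in for real attention).
attention_mass = torch.rand(num_docs)
target = attention_mass / attention_mass.sum()

# Attention distillation objective: KL between target and retriever distribution.
log_probs = F.log_softmax(retriever_scores, dim=-1)
loss = F.kl_div(log_probs, target, reduction="batchmean")
loss.backward()
print(float(loss), retriever_scores.grad.shape)
```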
Generation Meets Verification: Accelerating Large Language Model
Inference with Smart Parallel Auto-Correct Decoding | This research aims to accelerate the inference speed of large language models
(LLMs) with billions of parameters. We propose \textbf{S}mart \textbf{P}arallel
\textbf{A}uto-\textbf{C}orrect d\textbf{E}coding (SPACE), an innovative
approach designed for achieving lossless acceleration of LLMs. By integrating
semi-autoregressive inference and speculative decoding capabilities, SPACE
uniquely enables autoregressive LLMs to parallelize token generation and
verification. This is realized through a specialized semi-autoregressive
supervised fine-tuning process that equips existing LLMs with the ability to
simultaneously predict multiple tokens. Additionally, an auto-correct decoding
algorithm facilitates the simultaneous generation and verification of token
sequences within a single model invocation. Through extensive experiments on a
range of LLMs, SPACE has demonstrated inference speedups ranging from 2.7x to 4.0x
on HumanEval-X while maintaining output quality.
| 2,024 | Computation and Language |
FIPO: Free-form Instruction-oriented Prompt Optimization with Preference
Dataset and Modular Fine-tuning Schema | In the quest to make the deep intelligence of Large Language Models
(LLMs) accessible in end-user interactions with bots, the art of prompt
crafting emerges as a critical yet complex task for the average user. In
contrast to previous model-oriented yet instruction-agnostic Automatic Prompt
Optimization methodologies, which yield polished results for predefined target
models while suffering rapid degradation with out-of-the-box models, we present
Free-form Instruction-oriented Prompt Optimization (FIPO). This approach is
supported by our large-scale prompt preference dataset and employs a modular
fine-tuning schema. The FIPO schema reimagines the optimization process into
manageable modules, anchored by a meta prompt that dynamically adapts content.
This allows for the flexible integration of the raw task instruction, the
optional instruction response, and the optional ground truth to produce finely
optimized task prompts. The FIPO preference dataset is meticulously constructed
using the optimal and suboptimal LLMs, undergoing rigorous cross-verification
by human experts and analytical models. Applying the insights from the data
with Tulu2 models and fine-tuning strategies, we validate the efficacy of FIPO
schema across five public benchmarks. Codes, data and scripts are here:
https://github.com/LuJunru/FIPO_Project.
| 2,024 | Computation and Language |
HU at SemEval-2024 Task 8A: Can Contrastive Learning Learn Embeddings to
Detect Machine-Generated Text? | This paper describes our system developed for SemEval-2024 Task 8,
"Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text
Detection." Machine-generated texts have been one of the main concerns due to
the use of large language models (LLM) in fake text generation, phishing,
cheating in exams, or even plagiarizing copyrighted materials. Many systems
have been developed to detect machine-generated text. Nonetheless, the majority
of these systems rely on the text-generating model, a limitation that is
impractical in real-world scenarios, as it's often impossible to know which
specific model the user has used for text generation. In this work, we propose
a single model based on contrastive learning, which uses ~40% of the baseline's
parameters (149M vs. 355M) but shows a comparable performance on the test
dataset (21st out of 137 participants). Our key finding is that even without an
ensemble of multiple models, a single base model can have comparable
performance with the help of data augmentation and contrastive learning.
| 2,024 | Computation and Language |
Where It Really Matters: Few-Shot Environmental Conservation Media
Monitoring for Low-Resource Languages | Environmental conservation organizations routinely monitor news content on
conservation in protected areas to maintain situational awareness of
developments that can have an environmental impact. Existing automated media
monitoring systems require large amounts of data labeled by domain experts,
which is only feasible at scale for high-resource languages like English.
However, such tools are most needed in the global south where news of interest
is mainly in local low-resource languages, and far fewer experts are available
to annotate datasets sustainably. In this paper, we propose NewsSerow, a method
to automatically recognize environmental conservation content in low-resource
languages. NewsSerow is a pipeline of summarization, in-context few-shot
classification, and self-reflection using large language models (LLMs). Using
at most 10 demonstration example news articles in Nepali, NewsSerow
significantly outperforms other few-shot methods and achieves comparable
performance with models fully fine-tuned using thousands of examples. The World
Wide Fund for Nature (WWF) has deployed NewsSerow for media monitoring in
Nepal, significantly reducing their operational burden, and ensuring that AI
tools for conservation actually reach the communities that need them the most.
NewsSerow has also been deployed for countries with other languages like
Colombia.
| 2,024 | Computation and Language |
Head-wise Shareable Attention for Large Language Models | Large Language Models (LLMs) suffer from a huge number of parameters, which
restricts their deployment on edge devices. Weight sharing is one promising
solution that encourages weight reuse, effectively reducing memory usage with
less performance drop. However, current weight sharing techniques primarily
focus on small-scale models like BERT and employ coarse-grained sharing rules,
e.g., layer-wise. This becomes limiting given the prevalence of LLMs, since
sharing an entire layer or block obviously diminishes the flexibility of weight
sharing. In this paper, we present a perspective on $\textit{$\textbf{head-wise
shareable attention for large language models}$}$. We further propose two
memory-efficient methods that share parameters across attention heads, with a
specific focus on LLMs. Both of them use the same dynamic strategy to select
the shared weight matrices. The first method directly reuses the pre-trained
weights without retraining, denoted as $\textbf{DirectShare}$. The second
method first post-trains with constraint on weight matrix similarity and then
shares, denoted as $\textbf{PostShare}$. Experimental results reveal our
head-wise shared models still maintain satisfactory capabilities, demonstrating
the feasibility of fine-grained weight sharing applied to LLMs.
| 2,024 | Computation and Language |
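A toy sketch of the DirectShare idea summarized in the entry above: slice an attention projection into per-head weight blocks, pick the most similar pair of heads by cosine similarity, and let one head reuse the other's block without retraining. The dimensions and single-pair tying are illustrative simplifications, not the paper's full selection strategy.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
hidden, num_heads = 64, 8
head_dim = hidden // num_heads

# Query projection of one attention layer, viewed as per-head blocks.
w_q = torch.randn(hidden, hidden)
head_blocks = w_q.view(num_heads, head_dim, hidden)  # one block per head

# Cosine similarity between flattened head blocks.
flat = F.normalize(head_blocks.reshape(num_heads, -1), dim=-1)
sim = flat @ flat.T
sim.fill_diagonal_(-1.0)                             # ignore self-similarity

# Tie the most similar pair: head j reuses head i's weights.
i, j = divmod(int(sim.argmax()), num_heads)
shared = head_blocks.clone()
shared[j] = shared[i]
w_q_shared = shared.reshape(hidden, hidden)
print(f"sharing head {j} -> head {i}, max cosine similarity {float(sim.max()):.3f}")
```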
Modularized Networks for Few-shot Hateful Meme Detection | In this paper, we address the challenge of detecting hateful memes in the
low-resource setting where only a few labeled examples are available. Our
approach leverages the compositionality of Low-rank adaptation (LoRA), a widely
used parameter-efficient tuning technique. We commence by fine-tuning large
language models (LLMs) with LoRA on selected tasks pertinent to hateful meme
detection, thereby generating a suite of LoRA modules. These modules are
capable of essential reasoning skills for hateful meme detection. We then use
the few available annotated samples to train a module composer, which assigns
weights to the LoRA modules based on their relevance. The model's learnable
parameters are directly proportional to the number of LoRA modules. This
modularized network, underpinned by LLMs and augmented with LoRA modules,
exhibits enhanced generalization in the context of hateful meme detection. Our
evaluation spans three datasets designed for hateful meme detection in a
few-shot learning context. The proposed method demonstrates superior
performance to traditional in-context learning, which is also more
computationally intensive during inference.
| 2,024 | Computation and Language |
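A small sketch of the module-composer step described in the entry above: given a set of task-specific LoRA modules, learnable composer weights mix their low-rank updates into a single effective delta for a frozen base weight. The shapes and the softmax mixing are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, d_out, rank, num_modules = 32, 32, 4, 3

base_weight = torch.randn(d_out, d_in)                        # frozen LLM weight
loras = [(torch.randn(d_out, rank) * 0.01, torch.randn(rank, d_in) * 0.01)
         for _ in range(num_modules)]                         # pre-trained (B, A) pairs

# Module composer: one learnable weight per LoRA module (the only trainable params).
composer_logits = nn.Parameter(torch.zeros(num_modules))

def composed_weight():
    mix = torch.softmax(composer_logits, dim=0)
    delta = sum(w * (B @ A) for w, (B, A) in zip(mix, loras))
    return base_weight + delta

x = torch.randn(5, d_in)
y = x @ composed_weight().T
loss = y.pow(2).mean()        # placeholder objective over the few labeled examples
loss.backward()
print(composer_logits.grad)   # only the composer weights receive gradients
```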
How Interpretable are Reasoning Explanations from Prompting Large
Language Models? | Prompt Engineering has garnered significant attention for enhancing the
performance of large language models across a multitude of tasks. Techniques
such as the Chain-of-Thought not only bolster task performance but also
delineate a clear trajectory of reasoning steps, offering a tangible form of
explanation for the audience. Prior works on interpretability assess the
reasoning chains yielded by Chain-of-Thought solely along a singular axis,
namely faithfulness. We present a comprehensive and multifaceted evaluation of
interpretability, examining not only faithfulness but also robustness and
utility across multiple commonsense reasoning benchmarks. Likewise, our
investigation is not confined to a single prompting technique; it expansively
covers a multitude of prevalent prompting techniques employed in large language
models, thereby ensuring a wide-ranging and exhaustive evaluation. In addition,
we introduce a simple interpretability alignment technique, termed
Self-Entailment-Alignment Chain-of-thought, that yields more than 70\%
improvements across multiple dimensions of interpretability. Code is available
at https://github.com/wj210/CoT_interpretability
| 2,024 | Computation and Language |
M2K-VDG: Model-Adaptive Multimodal Knowledge Anchor Enhanced
Video-grounded Dialogue Generation | Video-grounded dialogue generation (VDG) requires the system to generate a
fluent and accurate answer based on multimodal knowledge. However, the
difficulty in multimodal knowledge utilization brings serious hallucinations to
VDG models in practice. Although previous works mitigate the hallucination in a
variety of ways, they hardly take notice of the importance of the multimodal
knowledge anchor answer tokens. In this paper, we reveal via perplexity that
different VDG models experience varying hallucinations and exhibit diverse
anchor tokens. Based on this observation, we propose M2K-VDG, a model-adaptive
multimodal knowledge anchor enhancement framework for hallucination reduction.
Furthermore, we introduce the counterfactual effect for more accurate anchor
token detection. The experimental results on three popular benchmarks exhibit
the superiority of our approach over state-of-the-art methods, demonstrating
its effectiveness in reducing hallucinations.
| 2,024 | Computation and Language |
The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional
Supporters for Queer Youth | Queer youth face increased mental health risks, such as depression, anxiety,
and suicidal ideation. Hindered by negative stigma, they often avoid seeking
help and rely on online resources, which may provide incompatible information.
Although access to a supportive environment and reliable information is
invaluable, many queer youth worldwide have no access to such support. However,
this could soon change due to the rapid adoption of Large Language Models
(LLMs) such as ChatGPT. This paper aims to comprehensively explore the
potential of LLMs to revolutionize emotional support for queer youth. To this end,
we conduct a qualitative and quantitative analysis of LLM's interactions with
queer-related content. To evaluate response quality, we develop a novel
ten-question scale that is inspired by psychological standards and expert
input. We apply this scale to score several LLMs and human comments to posts
where queer youth seek advice and share experiences. We find that LLM responses
are supportive and inclusive, outscoring humans. However, they tend to be
generic, not empathetic enough, and lack personalization, resulting in
unreliable and potentially harmful advice. We discuss these challenges,
demonstrate that a dedicated prompt can improve the performance, and propose a
blueprint of an LLM-supporter that actively (but sensitively) seeks user
context to provide personalized, empathetic, and reliable responses. Our
annotated dataset is available for further research.
| 2,024 | Computation and Language |
ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large
Language Models with Reverse Prompt Contrastive Decoding | With the development of instruction-tuned large language models (LLMs),
improving the safety of LLMs has become more critical. However, the current
approaches for aligning LLM output with expected safety usually require
substantial training efforts, e.g., high-quality safety data and expensive
computational resources, which are costly and inefficient. To this end, we
present reverse prompt contrastive decoding (ROSE), a simple-yet-effective
method to directly boost the safety of existing instruction-tuned LLMs without
any additional training. The principle of ROSE is to improve the probability of
desired safe output via suppressing the undesired output induced by the
carefully-designed reverse prompts. Experiments on 6 safety and 2
general-purpose tasks show that, our ROSE not only brings consistent and
significant safety improvements (up to +13.8% safety score) upon 5 types of
instruction-tuned LLMs, but also benefits the general-purpose ability of LLMs.
In-depth analyses explore the underlying mechanism of ROSE, and reveal when and
where to use it.
| 2,024 | Computation and Language |
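A minimal sketch of the decoding rule described in the entry above: at each step, the next-token logits obtained under the ordinary system prompt are contrasted against the logits induced by a reverse (undesirable-behavior-encouraging) prompt, so tokens the reverse prompt favors are suppressed. The two logit vectors are random stand-ins and `alpha` is an illustrative hyperparameter.

```python
import torch

torch.manual_seed(0)
vocab_size, alpha = 100, 0.5

# Next-token logits from the same instruction-tuned LLM under two prompts:
logits_safe = torch.randn(vocab_size)      # with the ordinary system prompt
logits_reverse = torch.randn(vocab_size)   # with the carefully designed reverse prompt

# Contrastive decoding: tokens favored by the reverse prompt are suppressed
# in proportion to alpha, boosting the probability of the desired safe output.
contrastive_logits = logits_safe - alpha * logits_reverse
next_token = int(torch.argmax(contrastive_logits))
print("greedy token id:", next_token)
```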
Revisiting Knowledge Distillation for Autoregressive Language Models | Knowledge distillation (KD) is a common approach to compress a teacher model
to reduce its inference cost and memory footprint, by training a smaller
student model. However, in the context of autoregressive language models (LMs),
we empirically find that larger teacher LMs might result in a dramatically
poorer student. In response to this problem, we conduct a series of analyses
and reveal that different tokens have different teaching modes, neglecting
which will lead to performance degradation. Motivated by this, we propose a
simple yet effective adaptive teaching approach (ATKD) to improve the KD. The
core of ATKD is to reduce rote learning and make teaching more diverse and
flexible. Extensive experiments on 8 LM tasks show that, with the help of ATKD,
various baseline KD methods can achieve consistent and significant performance
gains (up to +3.04% average score) across all model types and sizes. More
encouragingly, ATKD can improve the student model generalization effectively.
| 2,024 | Computation and Language |
Have Seen Me Before? Automating Dataset Updates Towards Reliable and
Timely Evaluation | Due to the expanding capabilities and pre-training data, Large Language
Models (LLMs) are facing increasingly serious evaluation challenges. On one
hand, the data leakage issue causes over-estimation on existing benchmarks. On
the other hand, periodically curating datasets manually is costly. In this
paper, we propose to automate dataset updates for reliable and timely
evaluation. The basic idea is to generate unseen and high-quality testing
samples based on existing ones to mitigate leakage issues. Specifically, we
propose two strategies with systematic verification. First, the mimicking
strategy employs LLMs to create new samples resembling existing ones, preserving
the style of the original dataset to the maximum extent. Our
experiments demonstrate its evaluation stability across multiple instantiations
and its effectiveness in dealing with data leakage issues in most cases.
Second, for cases where the mimicking strategy works poorly, we design an
extending strategy that adjusts the difficulty of the generated samples
according to varying cognitive levels. This not only makes our evaluation more
systematic but, with a balanced difficulty, also discerns model capabilities
better at fine-grained levels.
| 2,024 | Computation and Language |
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning | Fine-tuning all parameters of large language models (LLMs) necessitates
substantial computational power and extended time. Latest advancements in
parameter-efficient fine-tuning (PEFT) techniques, such as Adapter tuning and
LoRA, allow for adjustments to only a minor fraction of the parameters of these
LLMs. Concurrently, it has been noted that the issue of over-smoothing
diminishes the effectiveness of these Transformer-based LLMs, resulting in
suboptimal performances in downstream tasks. In this paper, we present SIBO,
which is a SImple BOoster to enhance PEFT, by injecting an initial residual.
SIBO is straightforward and readily extensible to a range of state-of-the-art
PEFT techniques to alleviate over-smoothing and enhance performance. Extensive
experiments on 22 benchmark datasets demonstrate that SIBO significantly
enhances the performance of various strong baselines, achieving up to 15.7% and
23.5% improvement over existing PEFT methods on the arithmetic and commonsense
reasoning tasks, respectively.
| 2,024 | Computation and Language |
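A toy sketch of the booster described in the entry above: before the input of a PEFT module (here a bottleneck adapter), an initial residual, a scaled copy of the token's early-layer representation, is injected to counteract over-smoothing. The adapter shape and the scale `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BoostedAdapter(nn.Module):
    """Bottleneck adapter whose input is mixed with the initial token representation."""
    def __init__(self, hidden_size=64, bottleneck=8, lam=0.2):
        super().__init__()
        self.lam = lam
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden, initial):
        # Inject the initial residual before the PEFT module, then keep the usual skip.
        mixed = (1 - self.lam) * hidden + self.lam * initial
        return hidden + self.up(torch.relu(self.down(mixed)))

adapter = BoostedAdapter()
hidden = torch.randn(4, 16, 64)    # current layer's hidden states (batch, seq, dim)
initial = torch.randn(4, 16, 64)   # initial (layer-0) representations of the same tokens
out = adapter(hidden, initial)
print(out.shape)  # torch.Size([4, 16, 64])
```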
Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large
Language Models | Recent work has showcased the powerful capability of large language models
(LLMs) in recalling knowledge and reasoning. However, the reliability of LLMs
in combining these two capabilities into reasoning through multi-hop facts has
not been widely explored. This paper systematically investigates the
possibilities for LLMs to utilize shortcuts based on direct connections between
the initial and terminal entities of multi-hop knowledge. We first explore the
existence of factual shortcuts through Knowledge Neurons, revealing that: (i)
the strength of factual shortcuts is highly correlated with the frequency of
co-occurrence of initial and terminal entities in the pre-training corpora;
(ii) few-shot prompting leverages more shortcuts in answering multi-hop
questions compared to chain-of-thought prompting. Then, we analyze the risks
posed by factual shortcuts from the perspective of multi-hop knowledge editing.
Analysis shows that approximately 20% of the failures are attributed to
shortcuts, and the initial and terminal entities in these failure instances
usually have higher co-occurrences in the pre-training corpus. Finally, we
propose erasing shortcut neurons to mitigate the associated risks and find that
this approach significantly reduces failures in multiple-hop knowledge editing
caused by shortcuts.
| 2,024 | Computation and Language |
SoLA: Solver-Layer Adaption of LLM for Better Logic Reasoning | Considering the challenges faced by large language models (LLMs) on logical
reasoning, prior efforts have sought to transform problem-solving through tool
learning. While progress has been made on small-scale problems, solving
industrial cases remains difficult due to their large scale and intricate
expressions. In this paper, we propose a novel solver-layer adaptation (SoLA)
method, where we introduce a solver as a new layer of the LLM to differentially
guide solutions towards satisfiability. In SoLA, LLM aims to comprehend the
search space described in natural language and identify local solutions of the
highest quality, while the solver layer focuses solely on constraints not
satisfied by the initial solution. Leveraging MaxSAT as a bridge, we define
forward and backward transfer gradients, enabling the final model to converge
to a satisfied solution or prove unsatisfiability. The backdoor theory ensures
that SoLA can obtain accurate solutions within polynomial loops. We evaluate
the performance of SoLA on various datasets and empirically demonstrate its
consistent outperformance against existing symbolic solvers (including Z3 and
Kissat) and tool-learning methods in terms of efficiency in large-scale
problem-solving.
| 2,024 | Computation and Language |
Learning to Edit: Aligning LLMs with Knowledge Editing | Knowledge editing techniques, aiming to efficiently modify a minor proportion
of knowledge in large language models (LLMs) without negatively impacting
performance across other inputs, have garnered widespread attention. However,
existing methods predominantly rely on memorizing the updated knowledge,
impeding LLMs from effectively combining the new knowledge with their inherent
knowledge when answering questions. To this end, we propose a Learning to Edit
(LTE) framework, focusing on teaching LLMs to apply updated knowledge into
input questions, inspired by the philosophy of "Teach a man to fish." LTE
features a two-phase process: (i) the Alignment Phase, which fine-tunes LLMs on
a meticulously curated parallel dataset to make reliable, in-scope edits while
preserving out-of-scope information and linguistic proficiency; and (ii) the
Inference Phase, which employs a retrieval-based mechanism for real-time and
mass knowledge editing. By comparing our approach with seven advanced baselines
across four popular knowledge editing benchmarks and two LLM architectures, we
demonstrate LTE's superiority in knowledge editing performance, robustness in
both batch and sequential editing, minimal interference on general tasks, and
rapid editing speeds. The data and code are available at
https://github.com/YJiangcm/LTE.
| 2,024 | Computation and Language |
Direct Large Language Model Alignment Through Self-Rewarding Contrastive
Prompt Distillation | Aligning large language models (LLMs) with human expectations without
human-annotated preference data is an important problem. In this paper, we
propose a method to evaluate the response preference by using the output
probabilities of response pairs under contrastive prompt pairs, which could
achieve better performance on LLaMA2-7B and LLaMA2-13B compared to RLAIF. Based
on this, we propose an automatic alignment method, Direct Large Model Alignment
(DLMA). First, we use contrastive prompt pairs to automatically generate
preference data. Then, we continue to evaluate the generated preference data
using contrastive prompt pairs and calculate a self-rewarding score. Finally,
we use the DPO algorithm to effectively align LLMs by combining this
self-rewarding score. In the experimental stage, our DLMA method could surpass
the \texttt{RLHF} method without relying on human-annotated preference data.
| 2,024 | Computation and Language |
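A small sketch of the self-rewarding comparison described in the entry above: each candidate response is scored by the gap between its log-probability under a "positive" (e.g., helpful-and-harmless) prompt and under a "negative" contrastive prompt, and the response with the larger gap is treated as preferred. The log-probabilities below are hand-set stand-ins for sums of token log-probs from the LLM.

```python
def self_reward(logp_positive: float, logp_negative: float) -> float:
    # How much more the model prefers this response when prompted to be
    # helpful/harmless than when given the contrastive (negative) prompt.
    return logp_positive - logp_negative

# Stand-in log-probabilities (summed over response tokens) for two candidates.
response_a = {"pos": -42.0, "neg": -55.0}   # looks much better under the positive prompt
response_b = {"pos": -40.0, "neg": -41.0}   # nearly prompt-independent

score_a = self_reward(response_a["pos"], response_a["neg"])
score_b = self_reward(response_b["pos"], response_b["neg"])
chosen, rejected = ("a", "b") if score_a > score_b else ("b", "a")
print(f"scores: a={score_a}, b={score_b} -> prefer response {chosen} over {rejected}")
# The resulting (chosen, rejected) pairs would then be fed to DPO for alignment.
```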
Semantic Textual Similarity Assessment in Chest X-ray Reports Using a
Domain-Specific Cosine-Based Metric | Medical language processing and deep learning techniques have emerged as
critical tools for improving healthcare, particularly in the analysis of
medical imaging and medical text data. These multimodal data fusion techniques
help to improve the interpretation of medical imaging and lead to increased
diagnostic accuracy, informed clinical decisions, and improved patient
outcomes. The success of these models relies on the ability to extract and
consolidate semantic information from clinical text. This paper addresses the
need for more robust methods to evaluate the semantic content of medical
reports. Conventional natural language processing approaches and metrics were
originally designed to consider semantic context in the general natural language
domain and in machine translation, and they often fail to capture the complex
semantic meanings inherent in medical content. In this study, we introduce a novel
approach designed specifically for assessing the semantic similarity between
generated medical reports and the ground truth. Our approach is validated,
demonstrating its efficiency in assessing domain-specific semantic similarity
within medical contexts. By applying our metric to state-of-the-art Chest X-ray
report generation models, we obtain results that not only align with
conventional metrics but also provide more contextually meaningful scores in
the considered medical domain.
| 2,024 | Computation and Language |
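A minimal sketch of a cosine-based report-similarity score in the spirit of the entry above: embed the generated and reference reports and report their cosine similarity. The TF-IDF embedding used here is only a generic stand-in, not the paper's metric; a radiology-specific text encoder would be substituted to capture domain semantics such as negations, anatomy, and findings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def report_similarity(generated: str, reference: str, vectorizer) -> float:
    vecs = vectorizer.transform([generated, reference])
    return float(cosine_similarity(vecs[0], vecs[1])[0, 0])

# Fit the stand-in embedding on a tiny report corpus (illustrative sentences only).
corpus = [
    "No acute cardiopulmonary abnormality.",
    "Mild cardiomegaly without pleural effusion.",
    "There is a right lower lobe opacity concerning for pneumonia.",
]
vectorizer = TfidfVectorizer().fit(corpus)

generated = "Mild cardiomegaly. No pleural effusion is seen."
reference = "Mild cardiomegaly without pleural effusion."
print(f"cosine similarity: {report_similarity(generated, reference, vectorizer):.3f}")
```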
MRKE: The Multi-hop Reasoning Evaluation of LLMs by Knowledge Edition | Although Large Language Models (LLMs) have shown strong performance in
Multi-hop Question Answering (MHQA) tasks, their real reasoning ability remains
under-explored. Current LLM QA evaluation benchmarks have shown limitations,
including 1) data contamination, as the evaluation data are potentially exposed
to LLMs during the pretraining stage; and 2) neglect of reasoning chain
evaluation. Thus we introduce an LLM MHQA evaluation benchmark, the first QA
benchmark based on new, previously unseen knowledge obtained by editing the
off-the-shelf HotpotQA dataset. Besides, we also annotate and evaluate the
reasoning chain in the form of sub-questions and intermediate answers
corresponding to the multi-hop questions. Specifically, we observe that 1) LLMs
show a performance gap between the original HotpotQA and our edited data,
suggesting that current MHQA benchmarks carry a potential risk of data
contamination that makes it hard to evaluate LLMs' performance objectively and
scientifically; and 2) LLMs produce the correct reasoning chain only in a small
percentage of cases, e.g., GPT-4 gets the right reasoning chain for only 36.3\%
of questions. We believe this new
Multi-hop QA evaluation benchmark and novel evaluation methods will facilitate
the development of trustworthy LLM evaluation on the MHQA task.
| 2,024 | Computation and Language |
Team QUST at SemEval-2024 Task 8: A Comprehensive Study of Monolingual
and Multilingual Approaches for Detecting AI-generated Text | This paper presents the participation of team QUST in Task 8 SemEval 2024. We
first performed data augmentation and cleaning on the dataset to enhance model
training efficiency and accuracy. In the monolingual task, we evaluated
traditional deep-learning methods, multiscale positive-unlabeled framework
(MPU), fine-tuning, adapters and ensemble methods. Then, we selected the
top-performing models based on their accuracy from the monolingual models and
evaluated them in subtasks A and B. The final model construction employed a
stacking ensemble that combined fine-tuning with MPU. Our system achieved 8th
place in terms of accuracy (officially ranked 13th) on the official test set in
the multilingual setting of subtask A. We release our system code at:
https://github.com/warmth27/SemEval2024_QUST
| 2,024 | Computation and Language |
Comprehensive Cognitive LLM Agent for Smartphone GUI Automation | Large language models (LLMs) have shown remarkable potential as human-like
autonomous language agents to interact with real-world environments, especially
for graphical user interface (GUI) automation. However, those GUI agents
require comprehensive cognition ability including exhaustive perception and
reliable action response. We propose \underline{Co}mprehensive
\underline{Co}gnitive LLM \underline{Agent}, CoCo-Agent, with two novel
approaches, comprehensive environment perception (CEP) and conditional action
prediction (CAP), to systematically improve the GUI automation performance.
First, CEP facilitates the GUI perception through different aspects and
granularity, including screenshots and complementary detailed layouts for the
visual channel and historical actions for the textual channel. Second, CAP
decomposes the action prediction into sub-problems: action type prediction and
action target conditioned on the action type. With our technical design, our
agent achieves new state-of-the-art performance on AITW and META-GUI
benchmarks, showing promising abilities in realistic scenarios.
| 2,024 | Computation and Language |
LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with
External Knowledge Augmentation | The rise of multimodal misinformation on social platforms poses significant
challenges for individuals and societies. Its increased credibility and broader
impact compared to textual misinformation make detection complex, requiring
robust reasoning across diverse media types and profound knowledge for accurate
verification. The emergence of Large Vision Language Model (LVLM) offers a
potential solution to this problem. Leveraging their proficiency in processing
visual and textual information, LVLM demonstrates promising capabilities in
recognizing complex information and exhibiting strong reasoning skills. In this
paper, we first investigate the potential of LVLM on multimodal misinformation
detection. We find that even though the LVLM performs better than LLMs, its
sophisticated reasoning offers limited power when supporting evidence is lacking.
Based on these observations, we propose LEMMA: LVLM-Enhanced Multimodal
Misinformation Detection with External Knowledge Augmentation. LEMMA leverages
LVLM intuition and reasoning capabilities while augmenting them with external
knowledge to enhance the accuracy of misinformation detection. Our method
improves the accuracy over the top baseline LVLM by 7% and 13% on Twitter and
Fakeddit datasets respectively.
| 2,024 | Computation and Language |
Analysis of Multidomain Abstractive Summarization Using Salience
Allocation | This paper explores the realm of abstractive text summarization through the
lens of the SEASON (Salience Allocation as Guidance for Abstractive
SummarizatiON) technique, a model designed to enhance summarization by
leveraging salience allocation techniques. The study evaluates SEASON's
efficacy by comparing it with prominent models like BART, PEGASUS, and
ProphetNet, all fine-tuned for various text summarization tasks. The assessment
is conducted using diverse datasets including CNN/Dailymail, SAMSum, and
Financial-news based Event-Driven Trading (EDT), with a specific focus on a
financial dataset containing a substantial volume of news articles from
2020/03/01 to 2021/05/06. This paper employs various evaluation metrics such as
ROUGE, METEOR, BERTScore, and MoverScore to evaluate the performance of these
models fine-tuned for generating abstractive summaries. The analysis of these
metrics offers a thorough insight into the strengths and weaknesses
demonstrated by each model in summarizing news dataset, dialogue dataset and
financial text dataset. The results presented in this paper not only contribute
to the evaluation of the SEASON model's effectiveness but also illuminate the
intricacies of salience allocation techniques across various types of datasets.
| 2,024 | Computation and Language |
Automatic Evaluation for Mental Health Counseling using LLMs | High-quality psychological counseling is crucial for mental health worldwide,
and timely evaluation is vital for ensuring its effectiveness. However,
obtaining professional evaluation for each counseling session is expensive and
challenging. Existing methods that rely on self or third-party manual reports
to assess the quality of counseling suffer from subjective biases and
limitations of time-consuming.
To address above challenges, this paper proposes an innovative and efficient
automatic approach using large language models (LLMs) to evaluate the working
alliance in counseling conversations. We collected a comprehensive counseling
dataset and conducted multiple third-party evaluations based on therapeutic
relationship theory. Our LLM-based evaluation, combined with our guidelines,
shows high agreement with human evaluations and provides valuable insights into
counseling scripts. This highlights the potential of LLMs as supervisory tools
for psychotherapists. By integrating LLMs into the evaluation process, our
approach offers a cost-effective and dependable means of assessing counseling
quality, enhancing overall effectiveness.
| 2,024 | Computation and Language |
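An illustrative sketch of the kind of LLM-as-evaluator setup the abstract describes: prompt an LLM to rate the working alliance in a transcript and parse its scores. The rubric wording, the 1-5 scale, and the `llm` callable are placeholders, not the paper's actual guidelines.

```python
# Hypothetical working-alliance rating via an LLM; `llm` is any chat-completion callable.
import re
from typing import Callable, Dict

RUBRIC = (
    "Rate the working alliance in the following counseling transcript on a 1-5 scale "
    "for each dimension: (a) agreement on goals, (b) agreement on tasks, "
    "(c) emotional bond. Reply as 'goals=<n> tasks=<n> bond=<n>'."
)

def rate_session(transcript: str, llm: Callable[[str], str]) -> Dict[str, int]:
    reply = llm(f"{RUBRIC}\n\nTranscript:\n{transcript}")
    scores = dict(re.findall(r"(goals|tasks|bond)=(\d)", reply))
    return {k: int(v) for k, v in scores.items()}
```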
What Do Dialect Speakers Want? A Survey of Attitudes Towards Language
Technology for German Dialects | Natural language processing (NLP) has largely focused on modelling
standardized languages. More recently, attention has increasingly shifted to
local, non-standardized languages and dialects. However, the relevant speaker
populations' needs and wishes with respect to NLP tools are largely unknown. In
this paper, we focus on dialects and regional languages related to German -- a
group of varieties that is heterogeneous in terms of prestige and
standardization. We survey speakers of these varieties (N=327) and present
their opinions on hypothetical language technologies for their dialects.
Although attitudes vary among subgroups of our respondents, we find that
respondents are especially in favour of potential NLP tools that work with
dialectal input (especially audio input) such as virtual assistants, and less
so for applications that produce dialectal output such as machine translation
or spellcheckers.
| 2,024 | Computation and Language |
Compress to Impress: Unleashing the Potential of Compressive Memory in
Real-World Long-Term Conversations | Existing retrieval-based methods have made significant strides in maintaining
long-term conversations. However, these approaches face challenges in memory
database management and accurate memory retrieval, hindering their efficacy in
dynamic, real-world interactions. This study introduces a novel framework,
COmpressive Memory-Enhanced Dialogue sYstems (COMEDY), which eschews
traditional retrieval modules and memory databases. Instead, COMEDY adopts a
''One-for-All'' approach, utilizing a single language model to manage memory
generation, compression, and response generation. Central to this framework is
the concept of compressive memory, which integrates session-specific
summaries, user-bot dynamics, and past events into a concise memory format. To
support COMEDY, we curated a large-scale Chinese instruction-tuning dataset,
Dolphin, derived from real user-chatbot interactions. Comparative evaluations
demonstrate COMEDY's superiority over traditional retrieval-based methods in
producing more nuanced and human-like conversational experiences. Our codes are
available at https://github.com/nuochenpku/COMEDY.
| 2,024 | Computation and Language |
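A sketch of the "One-for-All" compressive-memory loop described above: a single model both compresses each finished session into a short memory and answers new turns conditioned on it. The prompts and the `generate` callable are illustrative placeholders, not COMEDY's released prompts.

```python
# Compressive-memory dialogue loop, simplified.
from typing import Callable, List

def compress_session(session: List[str], memory: str,
                     generate: Callable[[str], str]) -> str:
    dialogue = "\n".join(session)
    return generate(
        f"Previous memory:\n{memory}\n\nNew session:\n{dialogue}\n\n"
        "Merge both into a concise memory covering key events, user traits, "
        "and the user-bot relationship."
    )

def respond(user_turn: str, memory: str, generate: Callable[[str], str]) -> str:
    # Response generation is conditioned only on the compressed memory, not a retrieval database.
    return generate(f"Memory of the user:\n{memory}\n\nUser: {user_turn}\nAssistant:")
```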
Remember This Event That Year? Assessing Temporal Information and
Reasoning in Large Language Models | Large Language Models (LLMs) are increasingly becoming ubiquitous, yet their
ability to reason about and retain temporal information remains limited. This
hinders their application in real-world scenarios where understanding the
sequential nature of events is crucial. This paper experiments with
state-of-the-art models on a novel, large-scale temporal dataset,
\textbf{TempUN}, to reveal significant limitations in temporal retention and
reasoning abilities. Interestingly, closed-source models indicate knowledge
gaps more frequently, potentially suggesting a trade-off between uncertainty
awareness and incorrect responses. Further, exploring various fine-tuning
approaches yielded no major performance improvements. The associated dataset
and code are available at the following URL
(https://github.com/lingoiitgn/TempUN).
| 2,024 | Computation and Language |
A Systematic Comparison of Contextualized Word Embeddings for Lexical
Semantic Change | Contextualized embeddings are the preferred tool for modeling Lexical
Semantic Change (LSC). Current evaluations typically focus on a specific task
known as Graded Change Detection (GCD). However, performance comparisons across
works are often misleading due to their reliance on diverse settings. In this
paper, we evaluate state-of-the-art models and approaches for GCD under equal
conditions. We further break the LSC problem into Word-in-Context (WiC) and
Word Sense Induction (WSI) tasks, and compare models across these different
levels. Our evaluation is performed across different languages on eight
available benchmarks for LSC, and shows that (i) APD outperforms other
approaches for GCD; (ii) XL-LEXEME outperforms other contextualized models for
WiC, WSI, and GCD, while being comparable to GPT-4; (iii) there is a clear need
for improving the modeling of word meanings, as well as a focus on how, when, and
why these meanings change, rather than solely focusing on the extent of
semantic change.
| 2,024 | Computation and Language |
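For reference, the APD measure mentioned in the abstract (Average Pairwise Distance) scores a word's semantic change as the mean cosine distance between all pairs of its contextualized embeddings drawn from two time periods. A minimal NumPy/SciPy sketch, with random vectors standing in for real embeddings:

```python
# Average Pairwise Distance (APD) for Graded Change Detection.
import numpy as np
from scipy.spatial.distance import cdist

def apd(embeddings_t1: np.ndarray, embeddings_t2: np.ndarray) -> float:
    """embeddings_t1: (n1, d) usages from period 1; embeddings_t2: (n2, d) from period 2."""
    distances = cdist(embeddings_t1, embeddings_t2, metric="cosine")  # (n1, n2) pairwise distances
    return float(distances.mean())

# Toy usage with random vectors standing in for e.g. XL-LEXEME embeddings.
rng = np.random.default_rng(0)
print(apd(rng.normal(size=(50, 768)), rng.normal(size=(60, 768))))
```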
Distilling Large Language Models for Text-Attributed Graph Learning | Text-Attributed Graphs (TAGs) are graphs of connected textual documents.
Graph models can efficiently learn TAGs, but their training heavily relies on
human-annotated labels, which are scarce or even unavailable in many
applications. Large language models (LLMs) have recently demonstrated
remarkable capabilities in few-shot and zero-shot TAG learning, but they suffer
from scalability, cost, and privacy issues. Therefore, in this work, we focus
on synergizing LLMs and graph models with their complementary strengths by
distilling the power of LLMs to a local graph model on TAG learning. To address
the inherent gaps between LLMs (generative models for texts) and graph models
(discriminative models for graphs), we propose first to let LLMs teach an
interpreter with rich textual rationale and then let a student model mimic the
interpreter's reasoning without LLMs' textual rationale. Extensive experiments
validate the efficacy of our proposed framework.
| 2,024 | Computation and Language |
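A simplified sketch of the second distillation stage sketched in the abstract: the student graph model mimics the interpreter's label distribution without seeing the LLM rationales. The temperature and loss weighting are illustrative choices, not the paper's settings.

```python
# Knowledge-distillation objective: KL to interpreter soft labels plus supervised cross-entropy.
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor,
                 interpreter_logits: torch.Tensor,
                 labels: torch.Tensor,
                 temperature: float = 2.0,
                 alpha: float = 0.5) -> torch.Tensor:
    soft_targets = F.softmax(interpreter_logits / temperature, dim=-1)
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```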
Speech Translation with Speech Foundation Models and Large Language
Models: What is There and What is Missing? | The field of natural language processing (NLP) has recently witnessed a
transformative shift with the emergence of foundation models, particularly
Large Language Models (LLMs) that have revolutionized text-based NLP. This
paradigm has extended to other modalities, including speech, where researchers
are actively exploring the combination of Speech Foundation Models (SFMs) and
LLMs into single, unified models capable of addressing multimodal tasks. Among
such tasks, this paper focuses on speech-to-text translation (ST). By examining
the published papers on the topic, we propose a unified view of the
architectural solutions and training strategies presented so far, highlighting
similarities and differences among them. Based on this examination, we not only
organize the lessons learned but also show how diverse settings and evaluation
approaches hinder the identification of the best-performing solution for each
architectural building block and training choice. Lastly, we outline
recommendations for future works on the topic aimed at better understanding the
strengths and weaknesses of the SFM+LLM solutions for ST.
| 2,024 | Computation and Language |
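One architectural building block recurring in the SFM+LLM combinations surveyed above is an adapter that length-compresses speech encoder outputs and projects them into the LLM embedding space before they are prepended to the text prompt. The module below is a generic sketch of that pattern, not any specific published system; dimensions and stride are placeholders.

```python
# Generic speech-to-LLM adapter: downsample frames, then project to the LLM hidden size.
import torch
import torch.nn as nn

class SpeechToLLMAdapter(nn.Module):
    def __init__(self, sfm_dim: int = 1024, llm_dim: int = 4096, stride: int = 4):
        super().__init__()
        self.downsample = nn.Conv1d(sfm_dim, sfm_dim, kernel_size=stride, stride=stride)
        self.proj = nn.Linear(sfm_dim, llm_dim)

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, frames, sfm_dim)
        x = self.downsample(speech_feats.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)  # (batch, frames // stride, llm_dim)
```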
Acquiring Clean Language Models from Backdoor Poisoned Datasets by
Downscaling Frequency Space | Despite the notable success of language models (LMs) in various natural
language processing (NLP) tasks, the reliability of LMs is susceptible to
backdoor attacks. Prior research attempts to mitigate backdoor learning while
training the LMs on the poisoned dataset, yet struggles against complex
backdoor attacks in real-world scenarios. In this paper, we investigate the
learning mechanisms of backdoor LMs in the frequency space by Fourier analysis.
Our findings indicate that the backdoor mapping learned from the poisoned
datasets is more inclined towards lower frequencies than the clean mapping,
resulting in faster convergence of the backdoor mapping. To alleviate this
dilemma, we propose Multi-Scale Low-Rank Adaptation
(MuScleLoRA), which deploys multiple radial scalings in the frequency space
with low-rank adaptation to the target model and further aligns the gradients
when updating parameters. Through downscaling in the frequency space,
MuScleLoRA encourages the model to prioritize the learning of relatively
high-frequency clean mapping, consequently mitigating backdoor learning.
Experimental results demonstrate that MuScleLoRA outperforms baselines
significantly. Notably, MuScleLoRA reduces the average success rate of diverse
backdoor attacks to below 15\% across multiple datasets and generalizes to
various backbone LMs, including BERT, RoBERTa, and Llama2. The codes are
available at https://github.com/ZrW00/MuScleLoRA.
| 2,024 | Computation and Language |
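A rough sketch of a low-rank adapter with multiple scalings, in the spirit of the abstract's description. The released method operates on radial scalings in the frequency space and additionally aligns gradients during updates, which this simplification omits; ranks and scale values are placeholders, and the official code is linked above.

```python
# Frozen base linear layer plus several low-rank branches applied at different scales.
import torch
import torch.nn as nn

class MultiScaleLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8,
                 scales: tuple = (1.0, 0.5, 0.25)):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the backbone frozen
        self.downs = nn.ModuleList([nn.Linear(base.in_features, rank, bias=False)
                                    for _ in scales])
        self.ups = nn.ModuleList([nn.Linear(rank, base.out_features, bias=False)
                                  for _ in scales])
        self.scales = scales

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        for s, down, up in zip(self.scales, self.downs, self.ups):
            out = out + s * up(down(x))  # each branch contributes at a different scale
        return out
```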
Towards Cross-Tokenizer Distillation: the Universal Logit Distillation
Loss for LLMs | Deploying large language models (LLMs) of several billion parameters can be
impractical in most industrial use cases due to constraints such as cost,
latency limitations, and hardware accessibility. Knowledge distillation (KD)
offers a solution by compressing knowledge from resource-intensive large models
to smaller ones. Various strategies exist, some relying on the text generated
by the teacher model and optionally utilizing its logits to enhance learning.
However, these methods based on logits often require both teacher and student
models to share the same tokenizer, limiting their applicability across
different LLM families. In this paper, we introduce Universal Logit
Distillation (ULD) loss, grounded in optimal transport, to address this
limitation. Our experimental results demonstrate the effectiveness of ULD loss
in enabling distillation across models with different architectures and
tokenizers, paving the way to a more widespread use of distillation techniques.
| 2,024 | Computation and Language |
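A simplified sketch of a cross-tokenizer logit-distillation loss in the spirit of ULD: teacher and student next-token distributions (over different vocabularies) are sorted in decreasing order, zero-padded to a common length, and compared element-wise, a simple one-dimensional reduction of the optimal-transport idea. This is not the authors' exact formulation.

```python
# Cross-tokenizer distillation on sorted, padded probability vectors.
import torch
import torch.nn.functional as F

def universal_logit_distillation(student_logits: torch.Tensor,
                                 teacher_logits: torch.Tensor) -> torch.Tensor:
    # logits: (batch, vocab_student) and (batch, vocab_teacher); vocab sizes may differ.
    p_s = F.softmax(student_logits, dim=-1).sort(dim=-1, descending=True).values
    p_t = F.softmax(teacher_logits, dim=-1).sort(dim=-1, descending=True).values
    vocab = max(p_s.size(-1), p_t.size(-1))
    p_s = F.pad(p_s, (0, vocab - p_s.size(-1)))
    p_t = F.pad(p_t, (0, vocab - p_t.size(-1)))
    return (p_s - p_t).abs().sum(dim=-1).mean()
```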
Language Model Adaptation to Specialized Domains through Selective
Masking based on Genre and Topical Characteristics | Recent advances in pre-trained language modeling have facilitated significant
progress across various natural language processing (NLP) tasks. Word masking
during model training constitutes a pivotal component of language modeling in
architectures like BERT. However, the prevalent method of word masking relies
on random selection, potentially disregarding domain-specific linguistic
attributes. In this article, we introduce an innovative masking approach
leveraging genre and topicality information to tailor language models to
specialized domains. Our method incorporates a ranking process that prioritizes
words based on their significance, subsequently guiding the masking procedure.
Experiments conducted using continual pre-training within the legal domain have
underscored the efficacy of our approach on the LegalGLUE benchmark in the
English language. Pre-trained language models and code are freely available for
use.
| 2,024 | Computation and Language |
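A sketch of significance-guided masking as described above: instead of sampling 15% of tokens uniformly at random, rank candidate positions by a domain-relevance score and preferentially mask the top-ranked ones. The score dictionary and tie-breaking are illustrative, not the paper's exact genre/topicality ranking.

```python
# Selective masking guided by per-word significance scores.
import random
from typing import Dict, List

def selective_mask(tokens: List[str], significance: Dict[str, float],
                   mask_ratio: float = 0.15, mask_token: str = "[MASK]") -> List[str]:
    n_mask = max(1, int(len(tokens) * mask_ratio))
    # Rank positions by the significance of their word; break ties randomly.
    ranked = sorted(range(len(tokens)),
                    key=lambda i: (significance.get(tokens[i].lower(), 0.0), random.random()),
                    reverse=True)
    to_mask = set(ranked[:n_mask])
    return [mask_token if i in to_mask else tok for i, tok in enumerate(tokens)]

tokens = "The plaintiff filed an appeal against the judgment".split()
weights = {"plaintiff": 0.9, "appeal": 0.8, "judgment": 0.85}  # hypothetical legal-domain scores
print(selective_mask(tokens, weights))
```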
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large
Language Models | Catastrophic forgetting emerges as a critical challenge when fine-tuning
multi-modal large language models (MLLMs), where improving performance on
unseen tasks often leads to a significant performance drop on the original
tasks. This paper presents a comprehensive analysis of catastrophic forgetting
in MLLMs and introduces a post-training adjustment method called Model Tailor.
Our method primarily preserves the pre-trained parameters while replacing a
small number ($\leq$ 10\%) of fine-tuned parameters, maintaining $\sim$ 99\%
effectiveness on original tasks versus pre-training, and achieving $\sim$ 97\%
on new tasks compared to standard fine-tuning. Specifically, we derive a sparse
mask to identify the "model patch", based on a fusion strategy that integrates
salience and sensitivity analysis. Subsequently, a compensation mechanism is
introduced to "decorate the patch", enhancing the model's performance on both
target and original tasks. Additionally, our method is adaptable to multi-task
scenarios. Through extensive experiments on InstructBLIP and LLaVA-1.5 in both
image captioning and visual question answering tasks, our approach demonstrates
significant task adaptability while preserving inherent pre-trained
capabilities.
| 2,024 | Computation and Language |
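A sketch of the "model patch" idea: keep the pre-trained weights and graft back only the small fraction of fine-tuned parameters with the largest change. The actual method fuses salience and sensitivity analyses and adds a compensation step; the plain |delta| selection below is a simplification for illustration.

```python
# Build a patched state dict: pre-trained weights everywhere except the top-k%, by
# absolute change, of fine-tuned parameters.
import torch
from typing import Dict

def build_patched_state(pretrained: Dict[str, torch.Tensor],
                        finetuned: Dict[str, torch.Tensor],
                        keep_ratio: float = 0.10) -> Dict[str, torch.Tensor]:
    patched = {}
    for name, w_pre in pretrained.items():
        w_ft = finetuned[name]
        delta = (w_ft - w_pre).abs().flatten()
        k = max(1, int(delta.numel() * keep_ratio))
        threshold = delta.kthvalue(delta.numel() - k + 1).values  # k-th largest change
        mask = (w_ft - w_pre).abs() >= threshold
        patched[name] = torch.where(mask, w_ft, w_pre)
    return patched
```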
Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When
and What to Retrieve for LLMs | The integration of large language models (LLMs) and search engines represents
a significant evolution in knowledge acquisition methodologies. However,
determining the knowledge that an LLM already possesses and the knowledge that
requires the help of a search engine remains an unresolved issue. Most existing
methods solve this problem through the results of preliminary answers or
reasoning done by the LLM itself, but this incurs excessively high
computational costs. This paper introduces a novel collaborative approach,
namely SlimPLM, that detects missing knowledge in LLMs with a slim proxy model,
to enhance the LLM's knowledge acquisition process. We employ a proxy model
which has far fewer parameters, and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer
the user question, as well as the known and unknown knowledge within the LLM.
We only conduct retrieval for the missing knowledge in questions that the LLM
does not know. Extensive experimental results on five datasets with two LLMs
demonstrate a notable improvement in the end-to-end performance of LLMs in
question-answering tasks, achieving or surpassing current state-of-the-art
models with lower LLM inference costs.
| 2,024 | Computation and Language |
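An illustrative sketch of proxy-guided retrieval as described above: a small proxy model drafts a heuristic answer, a lightweight judgment decides whether retrieval is needed, and the large LLM only sees retrieved context for questions it does not already know. The callables and the decision heuristic are placeholders, not the released SlimPLM components.

```python
# Proxy-guided retrieval-augmented answering, simplified.
from typing import Callable, List

def answer_with_slim_proxy(question: str,
                           proxy: Callable[[str], str],
                           needs_retrieval: Callable[[str, str], bool],
                           retrieve: Callable[[str], List[str]],
                           llm: Callable[[str], str]) -> str:
    heuristic = proxy(question)  # cheap draft answer from the small proxy model
    if needs_retrieval(question, heuristic):
        docs = retrieve(heuristic + " " + question)  # query informed by the heuristic answer
        context = "\n".join(docs[:3])
        return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm(f"Question: {question}\nAnswer:")  # skip retrieval when the LLM's knowledge suffices
```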
Are LLM-based Evaluators Confusing NLG Quality Criteria? | Some prior work has shown that LLMs perform well in NLG evaluation for
different tasks. However, we discover that LLMs seem to confuse different
evaluation criteria, which reduces their reliability. For further verification,
we first address the inconsistent conceptualization and vague expression in
existing NLG quality criteria by summarizing a clear hierarchical classification
system for 11 common aspects, with corresponding criteria drawn from previous
studies. Inspired by behavioral
testing, we elaborately design 18 types of aspect-targeted perturbation attacks
for fine-grained analysis of the evaluation behaviors of different LLMs. We
also conduct human annotations beyond the guidance of the classification system
to validate the impact of the perturbations. Our experimental results reveal
confusion issues inherent in LLMs, as well as other noteworthy phenomena, and
necessitate further research and improvements for LLM-based evaluation.
| 2,024 | Computation and Language |
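A toy example of an aspect-targeted perturbation of the kind the abstract describes: shuffling sentence order should hurt coherence while leaving fluency largely intact, so an evaluator that also lowers its fluency score under this perturbation is likely confusing the two criteria. The paper's 18 perturbation types are more fine-grained; this is purely illustrative.

```python
# Coherence-targeted perturbation: shuffle sentence order, keep sentences intact.
import random

def perturb_coherence(text: str, seed: int = 0) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rng = random.Random(seed)
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

original = ("She boarded the train at dawn. The carriage was nearly empty. "
            "By noon she had reached the coast.")
print(perturb_coherence(original))
```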