Titles | Abstracts | Years | Categories
---|---|---|---|
CFEVER: A Chinese Fact Extraction and VERification Dataset | We present CFEVER, a Chinese dataset designed for Fact Extraction and
VERification. CFEVER comprises 30,012 manually created claims based on content
in Chinese Wikipedia. Each claim in CFEVER is labeled as "Supports", "Refutes",
or "Not Enough Info" to depict its degree of factualness. Similar to the FEVER
dataset, claims in the "Supports" and "Refutes" categories are also annotated
with corresponding evidence sentences sourced from single or multiple pages in
Chinese Wikipedia. Our labeled dataset achieves a Fleiss' kappa value of 0.7934
for five-way inter-annotator agreement. In addition, through experiments with
state-of-the-art approaches developed on the FEVER dataset and a simple
baseline for CFEVER, we demonstrate that our dataset is a new, rigorous
benchmark for fact extraction and verification, which can be further used to
develop automated systems that alleviate human fact-checking efforts.
CFEVER is available at https://ikmlab.github.io/CFEVER.
| 2024 | Computation and Language |
Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables | Fact checking aims to predict claim veracity by reasoning over multiple
evidence pieces. It usually involves evidence retrieval and veracity reasoning.
In this paper, we focus on the latter, reasoning over unstructured text and
structured table information. Previous works have primarily relied on
fine-tuning pretrained language models or training homogeneous-graph-based
models. Despite their effectiveness, we argue that they fail to explore the
rich semantic information underlying the evidence with different structures. To
address this, we propose a novel word-level Heterogeneous-graph-based model for
Fact Checking over unstructured and structured information, namely HeterFC. Our
approach leverages a heterogeneous evidence graph, with words as nodes and
thoughtfully designed edges representing different evidence properties. We
perform information propagation via a relational graph neural network,
facilitating interactions between claims and evidence. An attention-based
method is utilized to integrate information, combined with a language model for
generating predictions. We introduce a multitask loss function to account for
potential inaccuracies in evidence retrieval. Comprehensive experiments on the
large fact checking dataset FEVEROUS demonstrate the effectiveness of HeterFC.
Code will be released at: https://github.com/Deno-V/HeterFC.
| 2024 | Computation and Language |
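To make the word-level heterogeneous-graph idea in the HeterFC abstract above more concrete, here is a minimal, self-contained sketch (not the authors' implementation): words from the claim and from textual/tabular evidence become nodes, edges carry relation types, and one relational message-passing round mixes neighbor information. The toy relation names and random vectors are illustrative assumptions.

```python
import numpy as np

# Toy word-level graph: node embeddings plus typed edges, loosely following
# the heterogeneous-graph idea in the abstract (not the authors' code).
rng = np.random.default_rng(0)
words = ["claim_w1", "claim_w2", "text_ev_w1", "table_ev_w1"]
emb = {w: rng.normal(size=8) for w in words}

# Edges as (source, target, relation); relation names are illustrative only.
edges = [
    ("claim_w1", "claim_w2", "intra_claim"),
    ("claim_w1", "text_ev_w1", "claim_text"),
    ("claim_w2", "table_ev_w1", "claim_table"),
]
relations = sorted({r for _, _, r in edges})
# One weight matrix per relation type, as in a relational GNN layer.
W = {r: rng.normal(scale=0.1, size=(8, 8)) for r in relations}

def rgnn_layer(emb, edges, W):
    """One round of relation-specific message passing with mean aggregation."""
    out = {w: v.copy() for w, v in emb.items()}   # keep a residual / self-loop
    msgs = {w: [] for w in emb}
    for src, dst, rel in edges:
        msgs[dst].append(W[rel] @ emb[src])
        msgs[src].append(W[rel] @ emb[dst])       # treat edges as undirected here
    for w, m in msgs.items():
        if m:
            out[w] += np.mean(m, axis=0)
    return out

updated = rgnn_layer(emb, edges, W)
print(updated["claim_w1"][:3])
```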
Learning to Check: Unleashing Potentials for Self-Correction in Large
Language Models | Large language models (LLMs) have made significant strides in reasoning
capabilities, with ongoing efforts to refine their reasoning through
self-correction. However, recent studies suggest that self-correction can be
limited or even counterproductive without external accurate knowledge, raising
questions about the limits and effectiveness of self-correction. In this paper,
we aim to enhance LLMs' self-checking capabilities by meticulously designing
training data, thereby improving the accuracy of self-correction. We conduct a
detailed analysis of error types in mathematical reasoning and develop a
tailored prompt, termed "Step CoT Check". Then we construct a
checking-correction dataset for training models. After integrating the original
CoT data and checking-correction data for training, we observe that models
could improve their self-checking capabilities, thereby enhancing their
self-correction capacity and eliminating the need for external feedback or
ground truth labels to ascertain the endpoint of correction. We compare the
performance of models fine-tuned with the "Step CoT Check" prompt against those
fine-tuned with other prompts in the context of checking-correction data. The
"Step CoT Check" prompt outperforms the other two check formats in models with
larger parameter counts, providing more precise feedback and thus achieving a
higher rate of correctness. For reproducibility, all datasets and code are
available at https://github.com/bammt/Learn-to-check.
| 2024 | Computation and Language |
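The abstract above centers on a "Step CoT Check" prompt and a checking-correction dataset. The sketch below shows one plausible way to assemble such a training instance; the template wording and field names are assumptions, not the authors' exact format.

```python
from typing import Optional

def build_check_example(question: str, cot_steps: list,
                        error_step: Optional[int], correction: str) -> dict:
    """Pair a (possibly flawed) step-by-step solution with a check-and-correct target."""
    numbered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(cot_steps))
    prompt = (
        f"Question: {question}\n{numbered}\n"
        "Check each step above. If a step is wrong, name it and give the corrected solution."
    )
    if error_step is None:
        target = "All steps are correct."
    else:
        target = f"Step {error_step} is incorrect. Corrected solution: {correction}"
    return {"prompt": prompt, "target": target}

# A flawed chain of thought: the mistake is introduced in step 2 (120 + 50 instead of 120 + 60).
example = build_check_example(
    question="What is 12 * 15?",
    cot_steps=["12 * 15 = 12 * 10 + 12 * 5", "= 120 + 50", "= 170"],
    error_step=2,
    correction="12 * 15 = 120 + 60 = 180",
)
print(example["prompt"])
print(example["target"])
```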
SiLLM: Large Language Models for Simultaneous Machine Translation | Simultaneous Machine Translation (SiMT) generates translations while reading
the source sentence, necessitating a policy to determine the optimal timing for
reading and generating words. Despite the remarkable performance achieved by
Large Language Models (LLM) across various NLP tasks, existing SiMT methods
predominantly focus on conventional transformers, employing a single model to
concurrently determine the policy and generate the translations. However, given
the complexity of SiMT, it is challenging to effectively address both tasks
with a single model. Therefore, there is a need to decouple the SiMT task into
policy-decision and translation sub-tasks. We propose SiLLM, which delegates
the two sub-tasks to separate agents, thereby incorporating LLM into SiMT. The
policy-decision agent is managed by a conventional SiMT model, responsible for
determining the translation policy. The translation agent, leveraging the
capabilities of the LLM, generates the translation using the partial source sentence.
The two agents collaborate to accomplish SiMT. To facilitate the application of
token-level policies determined by conventional SiMT models to LLM, we propose
a word-level policy adapted for LLM. Experiments on two datasets demonstrate
that, with a small amount of data for fine-tuning LLM, SiLLM attains
state-of-the-art performance.
| 2024 | Computation and Language |
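As a rough illustration of the two-agent design described in the SiLLM abstract above, the following sketch wires a toy word-level policy (standing in for the conventional SiMT model) to a stub translation agent (standing in for the LLM). The wait-style policy and the stopping rule are simplifications, not the paper's method.

```python
def policy_agent(num_read: int, num_written: int) -> str:
    """Toy wait-k style word-level policy: keep a lag of 2 source words."""
    return "WRITE" if num_read - num_written >= 2 else "READ"

def translation_agent(partial_source: list, partial_target: list) -> str:
    """Placeholder for an LLM that emits the next target word."""
    return f"tgt_{len(partial_target) + 1}"

def simultaneous_translate(source: list) -> list:
    read, target = [], []
    while True:
        action = policy_agent(len(read), len(target))
        if action == "READ" and len(read) < len(source):
            read.append(source[len(read)])          # consume one more source word
        else:
            target.append(translation_agent(read, target))
            if len(read) == len(source) and len(target) >= len(source):
                break                               # crude stopping rule for the sketch
    return target

print(simultaneous_translate("this is a test sentence".split()))
```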
Effective and Efficient Conversation Retrieval for Dialogue State
Tracking with Implicit Text Summaries | Few-shot dialogue state tracking (DST) with Large Language Models (LLM)
relies on an effective and efficient conversation retriever to find similar
in-context examples for prompt learning. Previous works use raw dialogue
context as search keys and queries, and a retriever is fine-tuned with
annotated dialogues to achieve superior performance. However, the approach is
less suited for scaling to new domains or new annotation languages, where
fine-tuning data is unavailable. To address this problem, we handle the task of
conversation retrieval based on text summaries of the conversations. An
LLM-based conversation summarizer is adopted for query and key generation,
which enables effective maximum inner product search. To avoid the extra
inference cost brought by LLM-based conversation summarization, we further
distill a light-weight conversation encoder which produces query embeddings
without decoding summaries for test conversations. We validate our retrieval
approach on MultiWOZ datasets with GPT-Neo-2.7B and LLaMA-7B/30B. The
experimental results show a significant improvement over relevant baselines in
real few-shot DST settings.
| 2024 | Computation and Language |
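A minimal sketch of the retrieval scheme described above: summaries serve as keys and queries, and in-context examples are selected by maximum inner product search. The `summarize` and `encode` functions below are placeholders for the LLM summarizer and the distilled light-weight encoder.

```python
import numpy as np

def summarize(dialogue: str) -> str:
    return dialogue[:60]                      # placeholder for an LLM-generated summary

def encode(text: str, dim: int = 16) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)              # unit-normalise for inner product search

# Index: keys are embeddings of summaries of annotated example dialogues.
examples = ["user wants a cheap italian restaurant ...",
            "user books a taxi to the airport ...",
            "user asks about hotel parking ..."]
keys = np.stack([encode(summarize(d)) for d in examples])

# Query: embed the summary of the test conversation, then run maximum
# inner product search to pick in-context examples for the DST prompt.
query = encode(summarize("user is looking for an inexpensive place to eat ..."))
scores = keys @ query
top = np.argsort(-scores)[:2]
print([examples[i] for i in top])
```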
Stable Knowledge Editing in Large Language Models | Efficient knowledge editing of large language models is crucial for replacing
obsolete information or incorporating specialized knowledge on a large scale.
However, previous methods implicitly assume that knowledge is localized and
isolated within the model, an assumption that oversimplifies the interconnected
nature of model knowledge. The premise of localization results in an incomplete
knowledge editing, whereas an isolated assumption may impair both other
knowledge and general abilities. This introduces instability into the
performance of knowledge editing methods. To transcend these assumptions, we
introduce StableKE, a method that adopts a novel perspective based on knowledge
augmentation rather than knowledge localization. To overcome the expense of
human labeling, StableKE integrates two automated knowledge augmentation
strategies: a Semantic Paraphrase Enhancement strategy, which diversifies
knowledge descriptions to facilitate teaching new information to the model, and
a Contextual Description Enrichment strategy, which expands the surrounding
knowledge to prevent the forgetting of related information. StableKE surpasses
other knowledge editing methods, demonstrating stability in both edited
knowledge and multi-hop knowledge, while also preserving unrelated knowledge
and general abilities.
Moreover, StableKE can edit knowledge on ChatGPT.
| 2024 | Computation and Language |
Identifying Semantic Induction Heads to Understand In-Context Learning | Although large language models (LLMs) have demonstrated remarkable
performance, the lack of transparency in their inference logic raises concerns
about their trustworthiness. To gain a better understanding of LLMs, we conduct
a detailed analysis of the operations of attention heads and aim to better
understand the in-context learning of LLMs. Specifically, we investigate
whether attention heads encode two types of relationships between tokens
present in natural languages: the syntactic dependency parsed from sentences
and the relation within knowledge graphs. We find that certain attention heads
exhibit a pattern where, when attending to head tokens, they recall tail tokens
and increase the output logits of those tail tokens. More crucially, the
formulation of such semantic induction heads has a close correlation with the
emergence of the in-context learning ability of language models. The study of
semantic attention heads advances our understanding of the intricate operations
of attention heads in transformers, and further provides new insights into the
in-context learning of LLMs.
| 2024 | Computation and Language |
Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for
Language Models | We introduce Generalized Instruction Tuning (called GLAN), a general and
scalable method for instruction tuning of Large Language Models (LLMs). Unlike
prior work that relies on seed examples or existing datasets to construct
instruction tuning data, GLAN exclusively utilizes a pre-curated taxonomy of
human knowledge and capabilities as input and generates large-scale synthetic
instruction data across all disciplines. Specifically, inspired by the
systematic structure of the human education system, we build the taxonomy by
semi-automatically decomposing human knowledge and capabilities into various
fields, sub-fields, and ultimately distinct disciplines, facilitated by LLMs.
Subsequently, we generate a comprehensive list of subjects for every discipline
and proceed to design a syllabus tailored to each subject, again utilizing
LLMs. With the fine-grained key concepts detailed in every class session of the
syllabus, we are able to generate diverse instructions with a broad coverage
across the entire spectrum of human knowledge and skills. Extensive experiments
on large language models (e.g., Mistral) demonstrate that GLAN excels in
multiple dimensions, from mathematical reasoning, coding, academic exams, and
logical reasoning to general instruction following, without using task-specific
training data for these tasks. In addition, GLAN allows for easy customization,
and new fields or skills can be added by simply incorporating a new node into
our taxonomy.
| 2024 | Computation and Language |
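The abstract above describes a taxonomy-to-instructions pipeline. The sketch below traces that flow (fields → disciplines → subjects → syllabus sessions → key concepts → instructions) with a stub `ask_llm` call; the prompts and the tiny taxonomy are invented for illustration, not GLAN's actual prompts.

```python
def ask_llm(prompt: str) -> list:
    """Placeholder for an LLM call that returns a short list of strings."""
    return [f"{prompt.split(':')[0]} item {i}" for i in range(1, 3)]

taxonomy = {"Natural Sciences": ["Physics", "Chemistry"],
            "Formal Sciences": ["Mathematics", "Computer Science"]}

synthetic_instructions = []
for field, disciplines in taxonomy.items():
    for discipline in disciplines:
        subjects = ask_llm(f"List subjects: for the discipline {discipline}")
        for subject in subjects:
            sessions = ask_llm(f"Design syllabus sessions: for subject {subject}")
            for session in sessions:
                concepts = ask_llm(f"Key concepts: covered in {session}")
                for concept in concepts:
                    synthetic_instructions.append(
                        f"Write and answer a question that requires understanding of "
                        f"{concept} ({subject}, {discipline})."
                    )

print(len(synthetic_instructions), synthetic_instructions[0])
```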
Event-level Knowledge Editing | Knowledge editing aims at updating knowledge of large language models (LLMs)
to prevent them from becoming outdated. Existing work edits LLMs at the level
of factual knowledge triplets. However, natural knowledge updates in the real
world come from the occurrences of new events rather than direct changes in
factual triplets. In this paper, we propose a new task setting: event-level
knowledge editing, which directly edits new events into LLMs and improves over
conventional triplet-level editing on (1) Efficiency. A single event edit leads
to updates in multiple entailed knowledge triplets. (2) Completeness. Beyond
updating factual knowledge, event-level editing also requires considering the
event influences and updating LLMs' knowledge about future trends. We construct
a high-quality event-level editing benchmark ELKEN, consisting of 1,515 event
edits, 6,449 questions about factual knowledge, and 10,150 questions about
future tendencies. We systematically evaluate the performance of various
knowledge editing methods and LLMs on this benchmark. We find that ELKEN poses
significant challenges to existing knowledge editing approaches. Our codes and
dataset are publicly released to facilitate further research.
| 2024 | Computation and Language |
Digital Comprehensibility Assessment of Simplified Texts among Persons
with Intellectual Disabilities | Text simplification refers to the process of increasing the comprehensibility
of texts. Automatic text simplification models are most commonly evaluated by
experts or crowdworkers instead of the primary target groups of simplified
texts, such as persons with intellectual disabilities. We conducted an
evaluation study of text comprehensibility including participants with and
without intellectual disabilities reading unsimplified, automatically and
manually simplified German texts on a tablet computer. We explored four
different approaches to measuring comprehensibility: multiple-choice
comprehension questions, perceived difficulty ratings, response time, and
reading speed. The results revealed significant variations in these
measurements, depending on the reader group and whether the text had undergone
automatic or manual simplification. For the target group of persons with
intellectual disabilities, comprehension questions emerged as the most reliable
measure, while analyzing reading speed provided valuable insights into
participants' reading behavior.
| 2024 | Computation and Language |
ELAD: Explanation-Guided Large Language Models Active Distillation | The deployment and application of Large Language Models (LLMs) is hindered by
their memory inefficiency, computational demands, and the high costs of API
inferences. Traditional distillation methods, which transfer the capabilities
of LLMs to smaller models, often fail to determine whether the knowledge has
been sufficiently transferred, potentially resulting in high costs or
incomplete distillation. In this paper, we propose an Explanation-Guided LLMs
Active Distillation (ELAD) framework that employs an active learning strategy
to optimize the balance between annotation costs and model performance. To
make sample selection more efficient, we introduce an explanation-guided sample
selection method that identifies samples that challenge the model's reasoning
by exploiting uncertainties in explanation steps. Additionally, we present a
customized LLM-annotated explanation revision technique where the teacher model
detects and corrects flaws in the student model's reasoning. Our experiments
across various reasoning datasets demonstrate that our framework significantly
enhances the efficiency of LLM knowledge distillation.
| 2024 | Computation and Language |
CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the
Generalizability of Large Language Models | The advancement of large language models (LLMs) has enhanced the ability to
generalize across a wide range of unseen natural language processing (NLP)
tasks through instruction-following. Yet, their effectiveness often diminishes
in low-resource languages like Chinese, exacerbated by biased evaluations from
data leakage, casting doubt on their true generalizability to new linguistic
territories. In response, we introduce the Chinese Instruction-Following
Benchmark (CIF-Bench), designed to evaluate the zero-shot generalizability of
LLMs to the Chinese language. CIF-Bench comprises 150 tasks and 15,000
input-output pairs, developed by native speakers to test complex reasoning and
Chinese cultural nuances across 20 categories. To mitigate evaluation bias, we
release only half of the dataset publicly, with the remainder kept private, and
introduce diversified instructions to minimize score variance, totaling 45,000
data instances. Our evaluation of 28 selected LLMs reveals a noticeable
performance gap, with the best model scoring only 52.9%, highlighting the
limitations of LLMs in less familiar language and task contexts. This work aims
to uncover the current limitations of LLMs in handling Chinese tasks, pushing
towards the development of more culturally informed and linguistically diverse
models with the released data and benchmark
(https://yizhilll.github.io/CIF-Bench/).
| 2024 | Computation and Language |
When Only Time Will Tell: Interpreting How Transformers Process Local
Ambiguities Through the Lens of Restart-Incrementality | Incremental models that process sentences one token at a time will sometimes
encounter points where more than one interpretation is possible. Causal models
are forced to output one interpretation and continue, whereas models that can
revise may edit their previous output as the ambiguity is resolved. In this
work, we look at how restart-incremental Transformers build and update internal
states, in an effort to shed light on what processes cause revisions that are
not viable in autoregressive models. We propose an interpretable way to analyse
the incremental states, showing that their sequential structure encodes
information on the garden path effect and its resolution. Our method yields
insights into various bidirectional encoders for contextualised meaning
representation and dependency parsing, helping to show their advantage over
causal models when it comes to revisions.
| 2024 | Computation and Language |
A Survey on Knowledge Distillation of Large Language Models | In the era of Large Language Models (LLMs), Knowledge Distillation (KD)
emerges as a pivotal methodology for transferring advanced capabilities from
leading proprietary LLMs, such as GPT-4, to their open-source counterparts like
LLaMA and Mistral. Additionally, as open-source LLMs flourish, KD plays a
crucial role in both compressing these models, and facilitating their
self-improvement by employing themselves as teachers. This paper presents a
comprehensive survey of KD's role within the realm of LLMs, highlighting its
critical function in imparting advanced knowledge to smaller models and its
utility in model compression and self-improvement. Our survey is meticulously
structured around three foundational pillars: \textit{algorithm},
\textit{skill}, and \textit{verticalization} -- providing a comprehensive
examination of KD mechanisms, the enhancement of specific cognitive abilities,
and their practical implications across diverse fields. Crucially, the survey
navigates the intricate interplay between data augmentation (DA) and KD,
illustrating how DA emerges as a powerful paradigm within the KD framework to
bolster LLMs' performance. By leveraging DA to generate context-rich,
skill-specific training data, KD transcends traditional boundaries, enabling
open-source models to approximate the contextual adeptness, ethical alignment,
and deep semantic insights characteristic of their proprietary counterparts.
This work aims to provide an insightful guide for researchers and
practitioners, offering a detailed overview of current methodologies in KD and
proposing future research directions. Importantly, we firmly advocate for
compliance with the legal terms that regulate the use of LLMs, ensuring ethical
and lawful application of KD of LLMs. An associated Github repository is
available at https://github.com/Tebmer/Awesome-Knowledge-Distillation-of-LLMs.
| 2024 | Computation and Language |
TreeEval: Benchmark-Free Evaluation of Large Language Models through
Tree Planning | Recently, numerous new benchmarks have been established to evaluate the
performance of large language models (LLMs) via either computing a holistic
score or employing another LLM as a judge. However, these approaches suffer
from data leakage due to the open access of the benchmark and inflexible
evaluation process. To address this issue, we introduce $\textbf{TreeEval}$, a
benchmark-free evaluation method for LLMs that lets a high-performance LLM host
an irreproducible evaluation session and essentially avoids data leakage.
Moreover, this LLM acts as an examiner, raising a series of questions under a
topic with a tree-planning strategy that considers the current evaluation
status to decide the next question generation and ensures the completeness and
efficiency of the evaluation process. We evaluate $6$ models of different
parameter sizes, including $7$B, $13$B, and $33$B, and ultimately achieve the
highest correlation coefficient with AlpacaEval2.0 using only around $45$
questions. We also conduct further analysis to show the robustness and
reliability of TreeEval. Our code is available at
https://github.com/Ashura5/TreeEval.
| 2024 | Computation and Language |
Are ELECTRA's Sentence Embeddings Beyond Repair? The Case of Semantic
Textual Similarity | While BERT produces high-quality sentence embeddings, its pre-training
computational cost is a significant drawback. In contrast, ELECTRA delivers a
cost-effective pre-training objective and downstream task performance
improvements, but less performant sentence embeddings. The community tacitly
stopped utilizing ELECTRA's sentence embeddings for semantic textual similarity
(STS). We notice a significant drop in performance when using the ELECTRA
discriminator's last layer in comparison to earlier layers. We explore this
drop and devise a way to repair ELECTRA's embeddings, proposing a novel
truncated model fine-tuning (TMFT) method. TMFT improves the Spearman
correlation coefficient by over 8 points while increasing parameter efficiency
on the STS benchmark dataset. We extend our analysis to various model sizes and
languages. Further, we discover the surprising efficacy of ELECTRA's generator
model, which performs on par with BERT, using significantly fewer parameters
and a substantially smaller embedding size. Finally, we observe further boosts
by combining TMFT with a word similarity task or domain adaptive pre-training.
| 2024 | Computation and Language |
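A minimal sketch of the core observation behind TMFT, assuming the Hugging Face `transformers` library: sentence embeddings are mean-pooled from an earlier ELECTRA layer rather than the last one (the subsequent STS fine-tuning step is omitted). The layer index and the pooling choice here are illustrative, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "google/electra-base-discriminator"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

def sentence_embedding(sentences, layer=6):
    """Mean-pool hidden states from an earlier layer instead of the last one."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_hidden_states=True)
    hidden = out.hidden_states[layer]                  # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

emb = sentence_embedding(["A man is playing a guitar.",
                          "Someone plays an instrument."])
cos = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(float(cos))
```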
The Hidden Space of Transformer Language Adapters | We analyze the operation of transformer language adapters, which are small
modules trained on top of a frozen language model to adapt its predictions to
new target languages. We show that adapted predictions mostly evolve in the
source language the model was trained on, while the target language becomes
pronounced only in the very last layers of the model. Moreover, the adaptation
process is gradual and distributed across layers, where it is possible to skip
small groups of adapters without decreasing adaptation performance. Last, we
show that adapters operate on top of the model's frozen representation space
while largely preserving its structure, rather than on an 'isolated' subspace.
Our findings provide a deeper view into the adaptation process of language
models to new languages, showcasing the constraints imposed on it by the
underlying model and offering practical implications for enhancing its
efficiency.
| 2024 | Computation and Language |
CMDAG: A Chinese Metaphor Dataset with Annotated Grounds as CoT for
Boosting Metaphor Generation | Metaphor is a prominent linguistic device in human language and literature,
as it adds color, imagery, and emphasis to enhance effective communication.
This paper introduces a large-scale, high-quality annotated Chinese Metaphor
Corpus, which comprises around 28K sentences drawn from a diverse range of
Chinese literary sources, such as poems, prose, song lyrics, etc. To ensure the
accuracy and consistency of our annotations, we introduce a comprehensive set
of guidelines. These guidelines address the facets of metaphor annotation, from
identifying tenors, vehicles, and grounds to handling the complexities of
similes, personifications, juxtapositions, and hyperboles.
Breaking tradition, our approach to metaphor generation emphasizes grounds and
their distinct features rather than the conventional combination of tenors and
vehicles. By integrating "ground" as a CoT (Chain of Thought) input, we are
able to generate metaphors that resonate more with real-world intuition. We
test generative models such as Belle, Baichuan, and Chinese-alpaca-33B using
our annotated corpus. These models are able to generate creative and fluent
metaphor sentences more frequently when prompted with selected samples from our
dataset, demonstrating the value of our corpus for Chinese metaphor research.
The code is available at
https://github.com/JasonShao55/Chinese_Metaphor_Explanation.
| 2024 | Computation and Language |
Benchmarking Retrieval-Augmented Generation for Medicine | While large language models (LLMs) have achieved state-of-the-art performance
on a wide range of medical question answering (QA) tasks, they still face
challenges with hallucinations and outdated knowledge. Retrieval-augmented
generation (RAG) is a promising solution and has been widely adopted. However,
a RAG system can involve multiple flexible components, and there is a lack of
best practices regarding the optimal RAG setting for various medical purposes.
To systematically evaluate such systems, we propose the Medical Information
Retrieval-Augmented Generation Evaluation (MIRAGE), a first-of-its-kind
benchmark including 7,663 questions from five medical QA datasets. Using
MIRAGE, we conducted large-scale experiments with over 1.8 trillion prompt
tokens on 41 combinations of different corpora, retrievers, and backbone LLMs
through the MedRAG toolkit introduced in this work. Overall, MedRAG improves
the accuracy of six different LLMs by up to 18% over chain-of-thought
prompting, elevating the performance of GPT-3.5 and Mixtral to GPT-4-level. Our
results show that the combination of various medical corpora and retrievers
achieves the best performance. In addition, we discovered a log-linear scaling
property and the "lost-in-the-middle" effects in medical RAG. We believe our
comprehensive evaluations can serve as practical guidelines for implementing
RAG systems for medicine.
| 2024 | Computation and Language |
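As a schematic of the retrieval-augmented setup evaluated above, the sketch below retrieves top-k snippets with a toy keyword scorer and assembles a grounded prompt for a stub backbone LLM; the corpus, scorer, and prompt wording are assumptions rather than the MedRAG toolkit's actual components.

```python
corpus = [
    "Metformin is a first-line medication for type 2 diabetes.",
    "ACE inhibitors are commonly used to treat hypertension.",
    "Amoxicillin is a penicillin-class antibiotic.",
]

def retrieve(question: str, k: int = 2) -> list:
    """Toy retriever: rank snippets by keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda s: -len(q_words & set(s.lower().split())))
    return scored[:k]

def llm_answer(prompt: str) -> str:
    return "(backbone LLM answer would appear here)"   # placeholder call

question = "Which drug class is commonly used to treat hypertension?"
snippets = retrieve(question)
prompt = ("Answer using the documents below.\n"
          + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
          + f"\nQuestion: {question}\nAnswer:")
print(prompt)
print(llm_answer(prompt))
```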
What if LLMs Have Different World Views: Simulating Alien Civilizations
with LLM-based Agents | In this study, we introduce "CosmoAgent," an innovative artificial
intelligence framework utilizing Large Language Models (LLMs) to simulate
complex interactions between human and extraterrestrial civilizations, with a
special emphasis on Stephen Hawking's cautionary advice about not sending radio
signals haphazardly into the universe. The goal is to assess the feasibility of
peaceful coexistence while considering potential risks that could threaten
well-intentioned civilizations. Employing mathematical models and state
transition matrices, our approach quantitatively evaluates the development
trajectories of civilizations, offering insights into future decision-making at
critical points of growth and saturation. Furthermore, the paper acknowledges
the vast diversity in potential living conditions across the universe, which
could foster unique cosmologies, ethical codes, and worldviews among various
civilizations. Recognizing the Earth-centric bias inherent in current LLM
designs, we propose the novel concept of using LLMs with diverse ethical
paradigms and simulating interactions between entities with distinct moral
principles. This innovative research provides a new way to understand complex
inter-civilizational dynamics, expanding our perspective while pioneering novel
strategies for conflict resolution, crucial for preventing interstellar
conflicts. We have also released the code and datasets to enable further
academic investigation into this interesting area of research. The code is
available at https://github.com/agiresearch/AlienAgent.
| 2024 | Computation and Language |
Question Calibration and Multi-Hop Modeling for Temporal Question
Answering | Many models that leverage knowledge graphs (KGs) have recently demonstrated
remarkable success in question answering (QA) tasks. In the real world, many
facts contained in KGs are time-constrained thus temporal KGQA has received
increasing attention. Despite the fruitful efforts of previous models in
temporal KGQA, they still have several limitations. (I) They adopt pre-trained
language models (PLMs) to obtain question representations, while PLMs tend to
focus on entity information and ignore entity transfer caused by temporal
constraints, and finally fail to learn specific temporal representations of
entities. (II) They neither emphasize the graph structure between entities nor
explicitly model the multi-hop relationship in the graph, which will make it
difficult to solve complex multi-hop question answering. To alleviate this
problem, we propose a novel Question Calibration and Multi-Hop Modeling
(QC-MHM) approach. Specifically, we first calibrate the question representation
by fusing the question with the time-constrained concepts in the KG. Then, we
construct the GNN layer to complete multi-hop message passing. Finally, the
question representation is combined with the embedding output by the GNN to
generate the final prediction. Empirical results verify that the proposed model
achieves better performance than the state-of-the-art models in the benchmark
dataset. Notably, the Hits@1 and Hits@10 results of QC-MHM on the CronQuestions
dataset's complex questions improve by 5.1% and 1.2% in absolute terms over the
best-performing baseline. Moreover, QC-MHM can generate interpretable
and trustworthy predictions.
| 2024 | Computation and Language |
How do Hyenas deal with Human Speech? Speech Recognition and Translation
with ConfHyena | The attention mechanism, a cornerstone of state-of-the-art neural models,
faces computational hurdles in processing long sequences due to its quadratic
complexity. Consequently, research efforts in the last few years focused on
finding more efficient alternatives. Among them, Hyena (Poli et al., 2023)
stands out for achieving competitive results in both language modeling and
image classification, while offering sub-quadratic memory and computational
complexity. Building on these promising results, we propose ConfHyena, a
Conformer whose encoder self-attentions are replaced with an adaptation of
Hyena for speech processing, where the long input sequences cause high
computational costs. Through experiments in automatic speech recognition (for
English) and translation (from English into 8 target languages), we show that
our best ConfHyena model significantly reduces the training time by 27%, at the
cost of minimal quality degradation (~1%), which, in most cases, is not
statistically significant.
| 2024 | Computation and Language |
Can Large Language Models be Good Emotional Supporter? Mitigating
Preference Bias on Emotional Support Conversation | Emotional Support Conversation (ESC) is a task aimed at alleviating
individuals' emotional distress through daily conversation. Given its inherent
complexity and non-intuitive nature, the ESConv dataset incorporates support
strategies to facilitate the generation of appropriate responses. Recently,
despite the remarkable conversational ability of large language models (LLMs),
previous studies have suggested that they often struggle with providing useful
emotional support. Hence, this work initially analyzes the results of LLMs on
ESConv, revealing challenges in selecting the correct strategy and a notable
preference for a specific strategy. Motivated by these findings, we explore the
impact of the inherent preference in LLMs on providing emotional support, and
consequently observe that exhibiting a high preference for specific strategies
hinders effective emotional support and degrades robustness in predicting the
appropriate strategy. Moreover, we conduct a methodological
study to offer insights into the necessary approaches for LLMs to serve as
proficient emotional supporters. Our findings emphasize that (1) low preference
for specific strategies hinders the progress of emotional support, (2) external
assistance helps reduce preference bias, and (3) LLMs alone cannot become good
emotional supporters. These insights suggest promising avenues for future
research to enhance the emotional intelligence of LLMs.
| 2024 | Computation and Language |
Soft Self-Consistency Improves Language Model Agents | Generations from large language models (LLMs) can be improved by sampling and
scoring multiple solutions to select a final answer. Current "sample and
select" methods such as self-consistency (SC) rely on majority voting to score
answers. However, when tasks have many distinct and valid answers, selection by
voting requires a large number of samples. This makes SC prohibitively
expensive for interactive tasks that involve generating multiple actions
(answers) sequentially. After establishing that majority voting fails to
provide consistent gains on such tasks, we demonstrate how to increase success
rates by softening the scoring criterion. We introduce Soft Self-Consistency
(Soft-SC), which replaces SC's discontinuous scoring with a continuous score
computed from model likelihoods, allowing for selection even when actions are
sparsely distributed. Soft-SC improves both performance and efficiency on
long-horizon interactive tasks, requiring half as many samples as SC for
comparable or better performance. For a fixed number of samples, Soft-SC leads
to a 1.3% increase over SC in absolute success rate on writing bash programs, a
6.6% increase on online shopping (WebShop), and a 4.7% increase for an
interactive household game (ALFWorld). Finally, we show that Soft-SC can be
applied to both open-source and black-box models.
| 2024 | Computation and Language |
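To illustrate the scoring change that the Soft Self-Consistency abstract above describes, the sketch below replaces majority voting with a continuous, likelihood-based score over sampled actions (here, the exponentiated mean token log-probability). The numbers are invented; the exact aggregation used in the paper may differ.

```python
import math

# Sampled candidate actions with per-token log-probabilities that would
# come from the LLM; these values are made up for illustration.
samples = [
    {"action": "ls -l /data",    "token_logprobs": [-0.2, -0.4, -0.1]},
    {"action": "ls -al /data",   "token_logprobs": [-0.3, -0.9, -0.6]},
    {"action": "cat /data/file", "token_logprobs": [-1.2, -0.8, -1.5]},
]

def soft_score(token_logprobs: list) -> float:
    """Length-normalised log-likelihood, mapped to a probability-like score."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# Pick the highest-scoring sample instead of taking a majority vote.
best = max(samples, key=lambda s: soft_score(s["token_logprobs"]))
print(best["action"], round(soft_score(best["token_logprobs"]), 3))
```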
Softmax Probabilities (Mostly) Predict Large Language Model Correctness
on Multiple-Choice Q&A | Although large language models (LLMs) perform impressively on many tasks,
overconfidence remains a problem. We hypothesized that on multiple-choice Q&A
tasks, wrong answers would be associated with smaller maximum softmax
probabilities (MSPs) compared to correct answers. We comprehensively evaluate
this hypothesis on ten open-source LLMs and five datasets, and find strong
evidence for our hypothesis among models which perform well on the original Q&A
task. For the six LLMs with the best Q&A performance, the AUROC derived from
the MSP was better than random chance with p < 10^{-4} in 59/60 instances.
Among those six LLMs, the average AUROC ranged from 60% to 69%. Leveraging
these findings, we propose a multiple-choice Q&A task with an option to abstain
and show that performance can be improved by selectively abstaining based on
the MSP of the initial model response. We also run the same experiments with
pre-softmax logits instead of softmax probabilities and find similar (but not
identical) results.
| 2024 | Computation and Language |
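A small sketch of the abstention idea described above: softmax the option logits, and answer only when the maximum softmax probability (MSP) clears a threshold. The logits and the threshold are made up; in practice the threshold would be tuned on held-out data.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

option_logits = np.array([2.1, 1.9, -0.5, 0.3])   # logits for options A-D (invented)
probs = softmax(option_logits)
msp = probs.max()

threshold = 0.6                                    # would be tuned on a validation set
if msp >= threshold:
    print("Answer:", "ABCD"[int(probs.argmax())], f"(MSP={msp:.2f})")
else:
    print(f"Abstain (MSP={msp:.2f} below threshold {threshold})")
```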
RoCode: A Dataset for Measuring Code Intelligence from Problem
Definitions in Romanian | Recently, large language models (LLMs) have become increasingly powerful and
capable of solving a plethora of tasks through proper instructions
in natural language. However, the vast majority of testing suites assume that
the instructions are written in English, the de facto prompting language. Code
intelligence and problem solving remain difficult tasks, even for the
most advanced LLMs. Currently, there are no datasets to measure the
generalization power for code-generation models in a language other than
English. In this work, we present RoCode, a competitive programming dataset,
consisting of 2,642 problems written in Romanian, 11k solutions in C, C++ and
Python and comprehensive testing suites for each problem. The purpose of RoCode
is to provide a benchmark for evaluating the code intelligence of language
models trained on Romanian / multilingual text as well as a fine-tuning set for
pretrained Romanian models. Through our results and review of related works, we
argue for the need to develop code models for languages other than English.
| 2024 | Computation and Language |
AgentMD: Empowering Language Agents for Risk Prediction with Large-Scale
Clinical Tool Learning | Clinical calculators play a vital role in healthcare by offering accurate
evidence-based predictions for various purposes such as prognosis.
Nevertheless, their widespread utilization is frequently hindered by usability
challenges, poor dissemination, and restricted functionality. Augmenting large
language models with extensive collections of clinical calculators presents an
opportunity to overcome these obstacles and improve workflow efficiency, but
the scalability of the manual curation process poses a significant challenge.
In response, we introduce AgentMD, a novel language agent capable of curating
and applying clinical calculators across various clinical contexts. Using the
published literature, AgentMD has automatically curated a collection of 2,164
diverse clinical calculators with executable functions and structured
documentation, collectively named RiskCalcs. Manual evaluations show that
RiskCalcs tools achieve an accuracy of over 80% on three quality metrics. At
inference time, AgentMD can automatically select and apply the relevant
RiskCalcs tools given any patient description. On the newly established RiskQA
benchmark, AgentMD significantly outperforms chain-of-thought prompting with
GPT-4 (87.7% vs. 40.9% in accuracy). Additionally, we applied AgentMD to
real-world clinical notes for analyzing both population-level and risk-level
patient characteristics. In summary, our study illustrates the utility of
language agents augmented with clinical calculators for healthcare analytics
and patient care.
| 2024 | Computation and Language |
Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive | Direct Preference Optimisation (DPO) is effective at significantly improving
the performance of large language models (LLMs) on downstream tasks such as
reasoning, summarisation, and alignment. Using pairs of preferred and
dispreferred data, DPO models the \textit{relative} probability of picking one
response over another. In this work, first we show theoretically that the
standard DPO loss can lead to a \textit{reduction} of the model's likelihood of
the preferred examples, as long as the relative probability between the
preferred and dispreferred classes increases. We then show empirically that
this phenomenon occurs when fine-tuning LLMs on common datasets, especially
datasets in which the edit distance between pairs of completions is low. Using
these insights, we design DPO-Positive (DPOP), a new loss function and training
procedure which avoids this failure mode. Surprisingly, we also find that DPOP
significantly outperforms DPO across a wide variety of datasets and downstream
tasks, including datasets with high edit distances between completions. By
fine-tuning with DPOP, we create and release Smaug-34B and Smaug-72B, which
achieve state-of-the-art open-source performance. Notably, Smaug-72B is nearly
2\% better than any other open-source model on the HuggingFace Open LLM
Leaderboard and becomes the first open-source LLM to surpass an average
accuracy of 80\%.
| 2024 | Computation and Language |
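For orientation, the standard DPO loss and a DPOP-style penalty of the kind the abstract describes can be written schematically as below; the exact placement of the coefficients $\beta$ and $\lambda$ follows the paper, so treat this as a sketch rather than the definitive formulation.

```latex
% Standard DPO loss over a preferred completion y_w and a dispreferred y_l:
\[
\mathcal{L}_{\mathrm{DPO}}
  = -\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
\]
% DPOP-style variant: add a penalty inside the sigmoid that fires whenever the
% policy's likelihood of the preferred completion drops below the reference model's.
\[
\mathcal{L}_{\mathrm{DPOP}}
  = -\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    - \lambda \,\max\!\left(0,\; \log \frac{\pi_{\mathrm{ref}}(y_w \mid x)}{\pi_\theta(y_w \mid x)}\right)
    \right)
\]
```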
Investigating Cultural Alignment of Large Language Models | The intricate relationship between language and culture has long been a
subject of exploration within the realm of linguistic anthropology. Large
Language Models (LLMs), promoted as repositories of collective human knowledge,
raise a pivotal question: do these models genuinely encapsulate the diverse
knowledge adopted by different cultures? Our study reveals that these models
demonstrate greater cultural alignment along two dimensions -- firstly, when
prompted with the dominant language of a specific culture, and secondly, when
pretrained with a refined mixture of languages employed by that culture. We
quantify cultural alignment by simulating sociological surveys, comparing model
responses to those of actual survey participants as references. Specifically,
we replicate a survey conducted in various regions of Egypt and the United
States through prompting LLMs with different pretraining data mixtures in both
Arabic and English with the personas of the real respondents and the survey
questions. Further analysis reveals that misalignment becomes more pronounced
for underrepresented personas and for culturally sensitive topics, such as
those probing social values. Finally, we introduce Anthropological Prompting, a
novel method leveraging anthropological reasoning to enhance cultural
alignment. Our study emphasizes the necessity of more balanced multilingual
pretraining datasets to better represent the diversity of human experience and
the plurality of different cultures, with many implications for the topic of
cross-lingual transfer.
| 2024 | Computation and Language |
TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue
Summarization | Single document news summarization has seen substantial progress on
faithfulness in recent years, driven by research on the evaluation of factual
consistency, or hallucinations. We ask whether these advances carry over to
other text summarization domains. We propose a new evaluation benchmark on
topic-focused dialogue summarization, generated by LLMs of varying sizes. We
provide binary sentence-level human annotations of the factual consistency of
these summaries along with detailed explanations of factually inconsistent
sentences. Our analysis shows that existing LLMs hallucinate significant
amounts of factual errors in the dialogue domain, regardless of the model's
size. On the other hand, when LLMs, including GPT-4, serve as binary factual
evaluators, they perform poorly and can be outperformed by prevailing
state-of-the-art specialized factuality evaluation metrics. Finally, we
conducted an analysis of hallucination types with a curated error taxonomy. We
find that there are diverse errors and error distributions in model-generated
summaries and that non-LLM based metrics can capture all error types better
than LLM-based evaluators.
| 2024 | Computation and Language |
BiMediX: Bilingual Medical Mixture of Experts LLM | In this paper, we introduce BiMediX, the first bilingual medical mixture of
experts LLM designed for seamless interaction in both English and Arabic. Our
model facilitates a wide range of medical interactions in English and Arabic,
including multi-turn chats to inquire about additional details such as patient
symptoms and medical history, multiple-choice question answering, and
open-ended question answering. We propose a semi-automated English-to-Arabic
translation pipeline with human refinement to ensure high-quality translations.
We also introduce a comprehensive evaluation benchmark for Arabic medical LLMs.
Furthermore, we introduce BiMed1.3M, an extensive Arabic-English bilingual
instruction set covering 1.3 million diverse medical interactions, resulting in
over 632 million healthcare-specialized tokens for instruction tuning. Our
BiMed1.3M dataset includes 250k synthesized multi-turn doctor-patient chats and
maintains a 1:2 Arabic-to-English ratio. Our model outperforms state-of-the-art
Med42 and Meditron by average absolute gains of 2.5% and 4.1%, respectively,
computed across multiple medical evaluation benchmarks in English, while
operating at 8 times faster inference. Moreover, our BiMediX outperforms the
generic Arabic-English bilingual LLM, Jais-30B, by average absolute gains of
10% on our Arabic medical benchmark and 15% on bilingual evaluations across
multiple datasets. Our project page with source code and trained model is
available at https://github.com/mbzuai-oryx/BiMediX .
| 2024 | Computation and Language |
Enhancing Modern Supervised Word Sense Disambiguation Models by Semantic
Lexical Resources | Supervised models for Word Sense Disambiguation (WSD) currently yield
state-of-the-art results on the most popular benchmarks. Despite the recent
introduction of Word Embeddings and Recurrent Neural Networks to design
powerful context-related features, the interest in improving WSD models using
Semantic Lexical Resources (SLRs) is mostly restricted to knowledge-based
approaches. In this paper, we enhance "modern" supervised WSD models exploiting
two popular SLRs: WordNet and WordNet Domains. We propose an effective way to
introduce semantic features into the classifiers, and we consider using the SLR
structure to augment the training data. We study the effect of different types
of semantic features, investigating their interaction with local contexts
encoded by means of mixtures of Word Embeddings or Recurrent Neural Networks,
and we extend the proposed model into a novel multi-layer architecture for WSD.
A detailed experimental comparison in the recent Unified Evaluation Framework
(Raganato et al., 2017) shows that the proposed approach leads to supervised
models that compare favourably with the state of the art.
| 2018 | Computation and Language |
Enhanced Hallucination Detection in Neural Machine Translation through
Simple Detector Aggregation | Hallucinated translations pose significant threats and safety concerns when
it comes to the practical deployment of machine translation systems. Previous
research has identified that detectors exhibit complementary performance:
different detectors excel at detecting different types of hallucinations. In
this paper, we propose to address the limitations of individual detectors by
combining them, and we introduce a straightforward method for aggregating
multiple detectors. Our results demonstrate the efficacy of our aggregated
detector, providing a promising step towards ever more reliable machine
translation systems.
| 2024 | Computation and Language |
PIRB: A Comprehensive Benchmark of Polish Dense and Hybrid Text
Retrieval Methods | We present Polish Information Retrieval Benchmark (PIRB), a comprehensive
evaluation framework encompassing 41 text information retrieval tasks for
Polish. The benchmark incorporates existing datasets as well as 10 new,
previously unpublished datasets covering diverse topics such as medicine, law,
business, physics, and linguistics. We conduct an extensive evaluation of over
20 dense and sparse retrieval models, including the baseline models trained by
us as well as other available Polish and multilingual methods. Finally, we
introduce a three-step process for training highly effective language-specific
retrievers, consisting of knowledge distillation, supervised fine-tuning, and
building sparse-dense hybrid retrievers using a lightweight rescoring model. In
order to validate our approach, we train new text encoders for Polish and
compare their results with previously evaluated methods. Our dense models
outperform the best solutions available to date, and the use of hybrid methods
further improves their performance.
| 2024 | Computation and Language |
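The three-step recipe above ends with a sparse-dense hybrid plus a lightweight rescorer. The sketch below shows one generic way to fuse normalized dense and sparse scores and hand the top candidates to a rescoring hook; the scores, fusion weight, and identity rescorer are illustrative assumptions, not PIRB's configuration.

```python
import numpy as np

docs = ["doc_a", "doc_b", "doc_c", "doc_d"]
dense_scores  = np.array([0.82, 0.40, 0.75, 0.10])   # e.g. cosine similarities from a dense encoder
sparse_scores = np.array([12.0, 15.5, 3.0, 1.0])     # e.g. BM25 scores from a sparse index

def minmax(x):
    """Bring both score scales onto [0, 1] before fusing them."""
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

alpha = 0.6                                           # weight of the dense signal (tuned in practice)
hybrid = alpha * minmax(dense_scores) + (1 - alpha) * minmax(sparse_scores)

def rescore(doc: str, base_score: float) -> float:
    """Placeholder for a lightweight rescoring model applied to top candidates."""
    return base_score                                 # identity rescorer in this sketch

ranked = sorted(zip(docs, hybrid), key=lambda p: rescore(p[0], p[1]), reverse=True)
print(ranked[:2])
```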
A Simple but Effective Approach to Improve Structured Language Model
Output for Information Extraction | Large language models (LLMs) have demonstrated impressive abilities in
generating unstructured natural language according to instructions. However,
their performance can be inconsistent when tasked with producing text that
adheres to specific structured formats, which is crucial in applications like
named entity recognition (NER) or relation extraction (RE). To address this
issue, this paper introduces an efficient method, G&O, to enhance their
structured text generation capabilities. It breaks the generation into a
two-step pipeline: initially, LLMs generate answers in natural language as
intermediate responses. Subsequently, LLMs are asked to organize the output
into the desired structure, using the intermediate responses as context. G&O
effectively separates the generation of content from the structuring process,
reducing the pressure of completing two orthogonal tasks simultaneously. Tested
on zero-shot NER and RE, the results indicate a significant improvement in LLM
performance with minimal additional efforts. This straightforward and adaptable
prompting technique can also be combined with other strategies, like
self-consistency, to further elevate LLM capabilities in various structured
text generation tasks.
| 2024 | Computation and Language |
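A minimal sketch of the two-step generate-then-organize pipeline described above, with a stub `llm` function standing in for any chat or completion model; the prompts and the JSON target format are assumptions, not the paper's exact templates.

```python
def llm(prompt: str) -> str:
    """Placeholder LLM call; returns canned text so the sketch runs end-to-end."""
    if "JSON" in prompt:
        return '[{"entity": "Paris", "type": "LOC"}, {"entity": "Marie Curie", "type": "PER"}]'
    return "The entities are Paris (a location) and Marie Curie (a person)."

text = "Marie Curie moved to Paris to study physics."

# Step 1: free-form natural-language answer as an intermediate response.
intermediate = llm(f"List the named entities in the text and their types.\nText: {text}")

# Step 2: re-organize the intermediate answer into the desired structure,
# using it as context.
structured = llm(
    f"Text: {text}\nDraft answer: {intermediate}\n"
    "Re-organise the draft answer as a JSON list of {entity, type} objects."
)
print(structured)
```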
EvoGrad: A Dynamic Take on the Winograd Schema Challenge with Human
Adversaries | While Large Language Models (LLMs) excel at the Winograd Schema Challenge
(WSC), a coreference resolution task testing common-sense reasoning through
pronoun disambiguation, they struggle with instances that feature minor
alterations or rewording. To address this, we introduce EvoGrad, an open-source
platform that harnesses a human-in-the-loop approach to create a dynamic
dataset tailored to such altered WSC instances. Leveraging ChatGPT's
capabilities, we expand our task instances from 182 to 3,691, setting a new
benchmark for diverse common-sense reasoning datasets. Additionally, we
introduce the error depth metric, assessing model stability in dynamic tasks.
Our results emphasize the challenge posed by EvoGrad: even the best-performing
LLM, GPT-3.5, achieves an accuracy of 65.0% with an average error depth of 7.2,
a stark contrast to human performance of 92.8% accuracy without perturbation
errors. This highlights ongoing model limitations and the value of dynamic
datasets in uncovering them.
| 2024 | Computation and Language |
Reliable LLM-based User Simulator for Task-Oriented Dialogue Systems | In the realm of dialogue systems, user simulation techniques have emerged as
a game-changer, redefining the evaluation and enhancement of task-oriented
dialogue (TOD) systems. These methods are crucial for replicating real user
interactions, enabling applications like synthetic data augmentation, error
detection, and robust evaluation. However, existing approaches often rely on
rigid rule-based methods or on annotated data. This paper introduces DAUS, a
Domain-Aware User Simulator. Leveraging large language models, we fine-tune
DAUS on real examples of task-oriented dialogues. Results on two relevant
benchmarks showcase significant improvements in terms of user goal fulfillment.
Notably, we have observed that fine-tuning enhances the simulator's coherence
with user goals, effectively mitigating hallucinations -- a major source of
inconsistencies in simulator responses.
| 2024 | Computation and Language |
A Unified Taxonomy-Guided Instruction Tuning Framework for Entity Set
Expansion and Taxonomy Expansion | Entity Set Expansion, Taxonomy Expansion, and Seed-Guided Taxonomy
Construction are three representative tasks that can be used to automatically
populate an existing taxonomy with new entities. However, previous approaches
often address these tasks separately with heterogeneous techniques, lacking a
unified perspective. To tackle this issue, in this paper, we identify the
common key skills needed for these tasks from the view of taxonomy structures
-- finding 'siblings' and finding 'parents' -- and propose TaxoInstruct, a
unified taxonomy-guided instruction tuning framework that jointly solves the three tasks.
To be specific, by leveraging the existing taxonomy as a rich source of entity
relationships, we utilize instruction tuning to fine-tune a large language
model to generate parent and sibling entities. Extensive experiments on
multiple benchmark datasets demonstrate the effectiveness of TaxoInstruct,
which outperforms task-specific baselines across all three tasks.
| 2024 | Computation and Language |
Healthcare Copilot: Eliciting the Power of General LLMs for Medical
Consultation | The copilot framework, which aims to enhance and tailor large language models
(LLMs) for specific complex tasks without requiring fine-tuning, is gaining
increasing attention from the community. In this paper, we introduce the
construction of a Healthcare Copilot designed for medical consultation. The
proposed Healthcare Copilot comprises three main components: 1) the Dialogue
component, responsible for effective and safe patient interactions; 2) the
Memory component, storing both current conversation data and historical patient
information; and 3) the Processing component, summarizing the entire dialogue
and generating reports. To evaluate the proposed Healthcare Copilot, we
implement an auto-evaluation scheme using ChatGPT for two roles: as a virtual
patient engaging in dialogue with the copilot, and as an evaluator to assess
the quality of the dialogue. Extensive results demonstrate that the proposed
Healthcare Copilot significantly enhances the capabilities of general LLMs for
medical consultations in terms of inquiry capability, conversational fluency,
response accuracy, and safety. Furthermore, we conduct ablation studies to
highlight the contribution of each individual module in the Healthcare Copilot.
Code will be made publicly available on GitHub.
| 2024 | Computation and Language |
Structure Guided Prompt: Instructing Large Language Model in Multi-Step
Reasoning by Exploring Graph Structure of the Text | Although Large Language Models (LLMs) excel at addressing straightforward
reasoning tasks, they frequently struggle when confronted with more complex
multi-step reasoning, due to a range of factors. Firstly, natural
language often encompasses complex relationships among entities, making it
challenging to maintain a clear reasoning chain over longer spans. Secondly,
the abundance of linguistic diversity means that the same entities and
relationships can be expressed using different terminologies and structures,
complicating the task of identifying and establishing connections between
multiple pieces of information. Graphs provide an effective solution to
represent data rich in relational information and capture long-term
dependencies among entities. To harness the potential of graphs, our paper
introduces Structure Guided Prompt, an innovative three-stage task-agnostic
prompting framework designed to improve the multi-step reasoning capabilities
of LLMs in a zero-shot setting. This framework explicitly converts unstructured
text into a graph via LLMs and instructs them to navigate this graph using
task-specific strategies to formulate responses. By effectively organizing
information and guiding navigation, it enables LLMs to provide more accurate
and context-aware responses. Our experiments show that this framework
significantly enhances the reasoning capabilities of LLMs, enabling them to
excel in a broader spectrum of natural language scenarios.
| 2024 | Computation and Language |
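To make the text-to-graph-to-navigation idea above concrete, the sketch below hard-codes the triples an LLM might extract from a short passage and answers a multi-hop question by traversing the resulting graph with breadth-first search; the framework's actual prompting strategies are not reproduced.

```python
from collections import defaultdict, deque

# Triples an LLM might extract from a short passage (hand-written here).
triples = [("Ada", "born_in", "London"),
           ("London", "capital_of", "England"),
           ("England", "part_of", "United Kingdom")]

graph = defaultdict(list)
for head, rel, tail in triples:
    graph[head].append((rel, tail))

def find_path(start: str, goal: str):
    """BFS over the extracted graph, returning the chain of triples used."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

# Multi-hop question: how is Ada connected to the United Kingdom?
print(find_path("Ada", "United Kingdom"))
```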
Explaining Relationships Among Research Papers | Due to the rapid pace of research publications, keeping up to date with all
the latest related papers is very time-consuming, even with daily feed tools.
There is a need for automatically generated, short, customized literature
reviews of sets of papers to help researchers decide what to read. While
several works in the last decade have addressed the task of explaining a single
research paper, usually in the context of another paper citing it, the
relationship among multiple papers has been ignored; prior works have focused
on generating a single citation sentence in isolation, without addressing the
expository and transition sentences needed to connect multiple papers in a
coherent story. In this work, we explore a feature-based, LLM-prompting
approach to generate richer citation texts, as well as generating multiple
citations at once to capture the complex relationships among research papers.
We perform an expert evaluation to investigate the impact of our proposed
features on the quality of the generated paragraphs and find a strong
correlation between human preference and integrative writing style, suggesting
that humans prefer high-level, abstract citations, with transition sentences
between them to provide an overall story.
| 2024 | Computation and Language |
DrBenchmark: A Large Language Understanding Evaluation Benchmark for
French Biomedical Domain | The biomedical domain has sparked a significant interest in the field of
Natural Language Processing (NLP), which has seen substantial advancements with
pre-trained language models (PLMs). However, comparing these models has proven
challenging due to variations in evaluation protocols across different models.
A fair solution is to aggregate diverse downstream tasks into a benchmark,
allowing for the assessment of intrinsic PLM qualities from various
perspectives. Although still limited to a few languages, this initiative has
been undertaken in the biomedical field, notably for English and Chinese. This
limitation hampers the evaluation of the latest French biomedical models, as
they are either assessed on a minimal number of tasks with non-standardized
protocols or evaluated using general downstream tasks. To bridge this research
gap and account for the unique sensitivities of French, we present the
first-ever publicly available French biomedical language understanding
benchmark called DrBenchmark. It encompasses 20 diversified tasks, including
named-entity recognition, part-of-speech tagging, question-answering, semantic
textual similarity, and classification. We evaluate 8 state-of-the-art
pre-trained masked language models (MLMs) on general and biomedical-specific
data, as well as English-specific MLMs to assess their cross-lingual
capabilities. Our experiments reveal that no single model excels across all
tasks, while generalist models are sometimes still competitive.
| 2,024 | Computation and Language |
Structured Tree Alignment for Evaluation of (Speech) Constituency
Parsing | We present the structured average intersection-over-union ratio (STRUCT-IOU),
a similarity metric between constituency parse trees motivated by the problem
of evaluating speech parsers. STRUCT-IOU enables comparison of a
constituency parse tree (over automatically recognized spoken word boundaries)
with the ground-truth parse (over written words). To compute the metric, we
project the ground-truth parse tree to the speech domain by forced alignment,
align the projected ground-truth constituents with the predicted ones under
certain structured constraints, and calculate the average IOU score across all
aligned constituent pairs. STRUCT-IOU takes word boundaries into account and
overcomes the challenge that the predicted words and ground truth may not have
perfect one-to-one correspondence. Extending to the evaluation of text
constituency parsing, we demonstrate that STRUCT-IOU shows higher tolerance to
syntactically plausible parses than PARSEVAL (Black et al., 1991).
| 2,024 | Computation and Language |
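A much-simplified illustration of the span-IOU averaging at the core of the metric, under assumptions: the actual STRUCT-IOU projects the gold parse to the speech domain via forced alignment and aligns constituents under structured tree constraints, whereas this toy sketch only greedily matches time spans by IOU and averages; the span values are invented for the example.

```python
# Simplified illustration of the IOU-averaging idea behind STRUCT-IOU.
# Real STRUCT-IOU aligns constituents under structured (tree) constraints after
# forced-alignment projection; here we greedily match time spans, so treat this
# purely as a toy proxy for the metric.

def interval_iou(a, b):
    """IOU of two (start, end) time spans."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def greedy_struct_iou(gold_spans, pred_spans):
    """Greedy one-to-one matching by IOU, averaged over gold constituents."""
    used = set()
    total = 0.0
    for g in gold_spans:
        best, best_j = 0.0, None
        for j, p in enumerate(pred_spans):
            if j in used:
                continue
            score = interval_iou(g, p)
            if score > best:
                best, best_j = score, j
        if best_j is not None:
            used.add(best_j)
        total += best
    return total / len(gold_spans) if gold_spans else 0.0

gold = [(0.0, 2.5), (0.0, 1.2), (1.2, 2.5)]   # constituent time spans (seconds)
pred = [(0.0, 2.4), (0.1, 1.3), (1.3, 2.4)]
print(round(greedy_struct_iou(gold, pred), 3))
```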
Large Language Models for Data Annotation: A Survey | Data annotation is the labeling or tagging of raw data with relevant
information, essential for improving the efficacy of machine learning models.
The process, however, is labor-intensive and expensive. The emergence of
advanced Large Language Models (LLMs), exemplified by GPT-4, presents an
unprecedented opportunity to revolutionize and automate the intricate process
of data annotation. While existing surveys have extensively covered LLM
architecture, training, and general applications, this paper uniquely focuses
on their specific utility for data annotation. This survey contributes to three
core aspects: LLM-Based Data Annotation, Assessing LLM-Generated Annotations,
and Learning with LLM-Generated Annotations. Furthermore, the paper includes an
in-depth taxonomy of methodologies employing LLMs for data annotation, a
comprehensive review of learning strategies for models incorporating
LLM-generated annotations, and a detailed discussion on primary challenges and
limitations associated with using LLMs for data annotation. As a key guide,
this survey aims to direct researchers and practitioners in exploring the
potential of the latest LLMs for data annotation, fostering future advancements
in this critical domain. We provide a comprehensive papers list at
\url{https://github.com/Zhen-Tan-dmml/LLM4Annotation.git}.
| 2,024 | Computation and Language |
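A bare-bones skeleton of the LLM-based annotation loop this survey covers, under assumptions: `call_llm` is a placeholder for whatever chat or completion API the reader uses, and the label set and prompt template are purely illustrative, not drawn from the survey.

```python
# Skeleton of an LLM-based annotation loop of the kind surveyed above.
# `call_llm` is a placeholder for any chat/completion API; the label set and
# the prompt template are purely illustrative.
from typing import Callable, List

LABELS = ["positive", "negative", "neutral"]

PROMPT = (
    "Label the sentiment of the text as one of {labels}.\n"
    "Text: {text}\nLabel:"
)

def annotate(texts: List[str], call_llm: Callable[[str], str]) -> List[str]:
    annotations = []
    for text in texts:
        raw = call_llm(PROMPT.format(labels=", ".join(LABELS), text=text)).strip().lower()
        # Constrain free-form output to the label set; fall back to 'neutral'.
        annotations.append(raw if raw in LABELS else "neutral")
    return annotations

# Example with a dummy model that always answers "positive".
print(annotate(["great movie", "terrible plot"], lambda p: "positive"))
```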
ED-Copilot: Reduce Emergency Department Wait Time with Language Model
Diagnostic Assistance | In the emergency department (ED), patients undergo triage and multiple
laboratory tests before diagnosis. This process is time-consuming and causes
ED crowding, which significantly impacts patient mortality, medical errors,
staff burnout, etc. This work proposes (time) cost-effective diagnostic
assistance that explores the potential of artificial intelligence (AI) systems
in assisting ED clinicians to make time-efficient and accurate diagnoses. Using
publicly available patient data, we collaborate with ED clinicians to curate
MIMIC-ED-Assist, a benchmark that measures the ability of AI systems in
suggesting laboratory tests that minimize ED wait times, while correctly
predicting critical outcomes such as death. We develop ED-Copilot which
sequentially suggests patient-specific laboratory tests and makes diagnostic
predictions. ED-Copilot uses a pre-trained bio-medical language model to encode
patient information and reinforcement learning to minimize ED wait time and
maximize prediction accuracy of critical outcomes. On MIMIC-ED-Assist,
ED-Copilot improves prediction accuracy over baselines while halving average
wait time from four hours to two hours. Ablation studies demonstrate the
importance of model scale and use of a bio-medical language model. Further
analyses reveal the necessity of personalized laboratory test suggestions for
diagnosing patients with severe cases, as well as the potential of ED-Copilot
in providing ED clinicians with informative laboratory test recommendations.
Our code is available at https://github.com/cxcscmu/ED-Copilot.
| 2,024 | Computation and Language |
CAMELoT: Towards Large Language Models with Training-Free Consolidated
Associative Memory | Large Language Models (LLMs) struggle to handle long input sequences due to
high memory and runtime costs. Memory-augmented models have emerged as a
promising solution to this problem, but current methods are hindered by limited
memory capacity and require costly re-training to integrate with a new LLM. In
this work, we introduce an associative memory module which can be coupled to
any pre-trained (frozen) attention-based LLM without re-training, enabling it
to handle arbitrarily long input sequences. Unlike previous methods, our
associative memory module consolidates representations of individual tokens
into a non-parametric distribution model, dynamically managed by properly
balancing the novelty and recency of the incoming data. By retrieving
information from this consolidated associative memory, the base LLM can achieve
significant (up to 29.7% on Arxiv) perplexity reduction in long-context
modeling compared to other baselines evaluated on standard benchmarks. This
architecture, which we call CAMELoT (Consolidated Associative Memory Enhanced
Long Transformer), demonstrates superior performance even with a tiny context
window of 128 tokens, and also enables improved in-context learning with a much
larger set of demonstrations.
| 2,024 | Computation and Language |
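A toy sketch of the consolidation idea behind the associative memory module, under assumptions: slot budget, thresholds, and the novelty/recency rule are invented for illustration and this is not the paper's module or its read/write mechanics.

```python
# Toy associative memory in the spirit of CAMELoT: incoming token representations
# are consolidated into a fixed budget of slots, allocating a new slot for novel
# vectors and folding familiar ones into the nearest slot with a recency-weighted
# update. Conceptual sketch only; parameters are arbitrary.
import numpy as np

class AssociativeMemory:
    def __init__(self, dim, max_slots=64, novelty_threshold=0.7, recency=0.1):
        self.slots = np.zeros((0, dim))
        self.max_slots = max_slots
        self.novelty_threshold = novelty_threshold
        self.recency = recency

    def write(self, x):
        x = x / (np.linalg.norm(x) + 1e-8)
        if len(self.slots) == 0:
            self.slots = x[None, :]
            return
        sims = self.slots @ x
        best = int(np.argmax(sims))
        if sims[best] >= self.novelty_threshold:
            # Familiar content: consolidate into the nearest slot (recency-weighted).
            updated = (1 - self.recency) * self.slots[best] + self.recency * x
            self.slots[best] = updated / (np.linalg.norm(updated) + 1e-8)
        elif len(self.slots) < self.max_slots:
            # Novel content: allocate a fresh slot.
            self.slots = np.vstack([self.slots, x])
        else:
            # Budget exhausted: overwrite the least similar slot.
            self.slots[int(np.argmin(sims))] = x

    def read(self, query, k=4):
        query = query / (np.linalg.norm(query) + 1e-8)
        order = np.argsort(-(self.slots @ query))[:k]
        return self.slots[order]

mem = AssociativeMemory(dim=16)
for token_vec in np.random.randn(200, 16):
    mem.write(token_vec)
print(mem.read(np.random.randn(16)).shape)
```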
Potential and Challenges of Model Editing for Social Debiasing | Large language models (LLMs) trained on vast corpora suffer from inevitable
stereotype biases. Mitigating these biases with fine-tuning could be both
costly and data-hungry. Model editing methods, which focus on modifying LLMs in
a post-hoc manner, hold great potential for debiasing. However, a
comprehensive study is still lacking, one that covers both internal and
external model editing methods, supports various bias types, and examines the
pros and cons of applying editing methods to stereotypical debiasing. To mitigate
this gap, we carefully formulate social debiasing into an editing problem and
benchmark seven existing model editing algorithms on stereotypical debiasing,
i.e., debias editing. Our findings in three scenarios reveal both the potential
and challenges of debias editing: (1) Existing model editing methods can
effectively preserve knowledge and mitigate biases, although the generalization
of the debiasing effect from edited sentences to semantically equivalent
sentences is limited. (2) Sequential editing highlights the robustness of SERAC (Mitchell et
al. 2022b), while internal editing methods degenerate with the number of edits.
(3) Model editing algorithms achieve generalization towards unseen biases both
within the same type and from different types. In light of these findings, we
further propose two simple but effective methods to improve debias editing, and
experimentally show the effectiveness of the proposed methods.
| 2,024 | Computation and Language |
RefuteBench: Evaluating Refuting Instruction-Following for Large
Language Models | The application scope of large language models (LLMs) is increasingly
expanding. In practical use, users might provide feedback based on the model's
output, hoping for a responsive model that can complete responses according to
their feedback. Whether the model can appropriately respond to users' refuting
feedback and consistently follow through with execution has not been thoroughly
analyzed. In light of this, this paper proposes a comprehensive benchmark,
RefuteBench, covering tasks such as question answering, machine translation,
and email writing. The evaluation aims to assess whether models can positively
accept feedback in the form of refuting instructions and whether they can
consistently adhere to user demands throughout the conversation. We conduct
evaluations on numerous LLMs and find that LLMs are stubborn, i.e., they exhibit
an inclination toward their internal knowledge, often failing to comply with user
feedback. Additionally, as the length of the conversation increases, models
gradually forget the user's stated feedback and roll back to their own
responses. We further propose recall-and-repeat prompting as a simple and
effective way to enhance the model's responsiveness to feedback.
| 2,024 | Computation and Language |
How Important is Domain Specificity in Language Models and Instruction
Finetuning for Biomedical Relation Extraction? | Cutting-edge techniques developed in the general NLP domain are often
subsequently applied to the high-value, data-rich biomedical domain. The past
few years have seen generative language models (LMs), instruction finetuning,
and few-shot learning become foci of NLP research. As such, generative LMs
pretrained on biomedical corpora have proliferated and biomedical instruction
finetuning has been attempted as well, all with the hope that domain
specificity improves performance on downstream tasks. Given the nontrivial
effort in training such models, we investigate what, if any, benefits they have
in the key biomedical NLP task of relation extraction. Specifically, we address
two questions: (1) Do LMs trained on biomedical corpora outperform those
trained on general domain corpora? (2) Do models instruction finetuned on
biomedical datasets outperform those finetuned on assorted datasets or those
simply pretrained? We tackle these questions using existing LMs, testing across
four datasets. In a surprising result, general-domain models typically
outperformed biomedical-domain models. However, biomedical instruction
finetuning improved performance to a similar degree as general instruction
finetuning, despite having orders of magnitude fewer instructions. Our findings
suggest it may be more fruitful to focus research effort on larger-scale
biomedical instruction finetuning of general LMs over building domain-specific
biomedical LMs.
| 2,024 | Computation and Language |
Retrieval-Augmented Data Augmentation for Low-Resource Domain Tasks | Despite large successes of recent language models on diverse tasks, they
suffer from severe performance degeneration in low-resource settings with
limited training data available. Many existing works tackle this problem by
generating synthetic data from the training data and then training models on
them, recently using Large Language Models (LLMs). However, in low-resource
settings, the amount of seed data samples to use for data augmentation is very
small, which makes generated samples suboptimal and less diverse. To tackle
this challenge, we propose a novel method that augments training data by
incorporating a wealth of examples from other datasets, along with the given
training data. Specifically, we first retrieve the relevant instances from
other datasets, such as their input-output pairs or contexts, based on their
similarities with the given seed data, and then prompt LLMs to generate new
samples with the contextual information within and across the original and
retrieved samples. This approach can ensure that the generated data is not only
relevant but also more diverse than what could be achieved using the limited
seed data alone. We validate our proposed Retrieval-Augmented Data Augmentation
(RADA) framework on multiple datasets under low-resource settings of training
and test-time data augmentation scenarios, on which it outperforms existing
LLM-powered data augmentation baselines.
| 2,024 | Computation and Language |
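A rough sketch of the retrieve-then-augment idea described above, under assumptions: TF-IDF similarity stands in for whatever retriever is actually used, and the pool, seed example, and prompt wording are invented for illustration rather than taken from the RADA pipeline.

```python
# Sketch of retrieval-augmented data augmentation in the spirit of RADA:
# retrieve examples from an auxiliary pool that are similar to a seed example,
# then assemble an LLM prompt asking for new samples grounded in both.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pool = [
    "Q: What causes tides? A: The gravitational pull of the moon and sun.",
    "Q: Why is the sky blue? A: Rayleigh scattering of sunlight.",
    "Q: What is photosynthesis? A: Conversion of light into chemical energy.",
]
seed = "Q: Why do seasons change? A: The tilt of Earth's rotation axis."

vectorizer = TfidfVectorizer().fit(pool + [seed])
sims = cosine_similarity(vectorizer.transform([seed]), vectorizer.transform(pool))[0]
retrieved = [pool[i] for i in sims.argsort()[::-1][:2]]

prompt = (
    "Here is a seed example:\n" + seed + "\n\n"
    "Here are related examples from other datasets:\n" + "\n".join(retrieved) + "\n\n"
    "Write 3 new, diverse question-answer pairs in the same style."
)
print(prompt)  # send this augmentation prompt to an LLM of your choice
```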
Retrieval Helps or Hurts? A Deeper Dive into the Efficacy of Retrieval
Augmentation to Language Models | While large language models (LMs) demonstrate remarkable performance, they
encounter challenges in providing accurate responses when queried for
information beyond their pre-trained memorization. Although augmenting them
with relevant external information can mitigate these issues, failure to
consider the necessity of retrieval may adversely affect overall performance.
Previous research has primarily focused on examining how entities influence
retrieval models and knowledge recall in LMs, leaving other aspects relatively
unexplored. In this work, our goal is to offer a more detailed, fact-centric
analysis by exploring the effects of combinations of entities and relations. To
facilitate this, we construct a new question answering (QA) dataset called
WiTQA (Wikipedia Triple Question Answers). This dataset includes questions
about entities and relations of various popularity levels, each accompanied by
a supporting passage. Our extensive experiments with diverse LMs and retrievers
reveal the conditions under which retrieval does not consistently enhance LMs,
viewed through the lens of fact-centric popularity. Confirming earlier findings, we observe that larger LMs
excel in recalling popular facts. However, they notably encounter difficulty
with infrequent entity-relation pairs compared to retrievers. Interestingly,
they can effectively retain popular relations of less common entities. We
demonstrate the efficacy of our finer-grained metric and insights through an
adaptive retrieval system that selectively employs retrieval and recall based
on the frequencies of entities and relations in the question.
| 2,024 | Computation and Language |
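A small sketch of the selective-retrieval idea the abstract closes with, under assumptions: the frequency table, threshold, and callables are placeholders; the paper's actual adaptive system and statistics are not reproduced here.

```python
# Sketch of frequency-aware adaptive retrieval: trust the LM's parametric recall
# for popular entity-relation pairs and trigger retrieval only for rare ones.
# The frequency table and threshold are illustrative placeholders.
from typing import Callable, Dict, Tuple

def answer(question: str,
           pair: Tuple[str, str],
           pair_counts: Dict[Tuple[str, str], int],
           recall_lm: Callable[[str], str],
           retrieve_and_read: Callable[[str], str],
           min_count: int = 100) -> str:
    if pair_counts.get(pair, 0) >= min_count:
        return recall_lm(question)         # popular fact: trust parametric memory
    return retrieve_and_read(question)     # rare fact: augment with retrieval

counts = {("Barack Obama", "born_in"): 5000, ("ObscurePerson", "born_in"): 3}
print(answer("Where was ObscurePerson born?", ("ObscurePerson", "born_in"),
             counts, lambda q: "LM recall", lambda q: "retrieved answer"))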
GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient
Analysis | Large Language Models (LLMs) face threats from unsafe prompts. Existing
methods for detecting unsafe prompts are primarily online moderation APIs or
finetuned LLMs. These strategies, however, often require extensive and
resource-intensive data collection and training processes. In this study, we
propose GradSafe, which effectively detects unsafe prompts by scrutinizing the
gradients of safety-critical parameters in LLMs. Our methodology is grounded in
a pivotal observation: the gradients of an LLM's loss for unsafe prompts paired
with a compliance response exhibit similar patterns on certain safety-critical
parameters. In contrast, safe prompts lead to markedly different gradient
patterns. Building on this observation, GradSafe analyzes the gradients from
prompts (paired with compliance responses) to accurately detect unsafe prompts.
We show that GradSafe, applied to Llama-2 without further training, outperforms
Llama Guard, despite the latter's extensive finetuning on a large dataset, in
detecting unsafe prompts. This superior performance is consistent across both
zero-shot and adaptation scenarios, as evidenced by our evaluations on the
ToxicChat and XSTest datasets. The source code is available at
https://github.com/xyq7/GradSafe.
| 2,024 | Computation and Language |
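A toy demonstration of the gradient-similarity signal the observation above relies on, under assumptions: a tiny linear classifier stands in for an LLM, no safety-critical parameter selection is performed, and the labels and thresholding are invented, so this is a conceptual sketch rather than GradSafe itself.

```python
# Toy illustration of the gradient-similarity signal behind GradSafe: compare the
# gradient induced by a (prompt, compliance response) pair against a reference
# gradient direction computed from known unsafe prompts. A tiny classifier stands
# in for an LLM; this is only a conceptual sketch.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(32, 2)           # stand-in for an LLM's scoring head
loss_fn = nn.CrossEntropyLoss()

def loss_gradient(x, y):
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten().clone() for p in model.parameters()])

comply = torch.tensor([1])                       # "compliance" label
reference_unsafe = torch.randn(8, 32)            # embeddings of known unsafe prompts
ref_grad = torch.stack([loss_gradient(p[None], comply) for p in reference_unsafe]).mean(0)

def unsafe_score(prompt_embedding):
    g = loss_gradient(prompt_embedding[None], comply)
    return torch.cosine_similarity(g, ref_grad, dim=0).item()

print(round(unsafe_score(torch.randn(32)), 3))   # threshold this score in practice
```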
The Lay Person's Guide to Biomedicine: Orchestrating Large Language
Models | Automated lay summarisation (LS) aims to simplify complex technical documents
into a format more accessible to non-experts. Existing approaches using
pre-trained language models, possibly augmented with external background
knowledge, tend to struggle with effective simplification and explanation.
Moreover, automated methods that can effectively assess the `layness' of
generated summaries are lacking. Recently, large language models (LLMs) have
demonstrated a remarkable capacity for text simplification, background
information generation, and text evaluation. This has motivated our systematic
exploration into using LLMs to generate and evaluate lay summaries of
biomedical articles. We propose a novel \textit{Explain-then-Summarise} LS
framework, which leverages LLMs to generate high-quality background knowledge
to improve supervised LS. We also evaluate the performance of LLMs for
zero-shot LS and propose two novel LLM-based LS evaluation metrics, which
assess layness from multiple perspectives. Finally, we conduct a human
assessment of generated lay summaries. Our experiments reveal that
LLM-generated background information can support improved supervised LS.
Furthermore, our novel zero-shot LS evaluation metric demonstrates a high
degree of alignment with human preferences. We conclude that LLMs have an
important part to play in improving both the performance and evaluation of LS
methods.
| 2,024 | Computation and Language |
Self-DC: When to retrieve and When to generate? Self Divide-and-Conquer
for Compositional Unknown Questions | Retrieve-then-read and generate-then-read are two typical solutions to handle
unknown and known questions in open-domain question answering: the former
retrieves necessary external knowledge, while the latter prompts large language
models to generate internal knowledge encoded in their parameters. However,
few previous works consider compositional unknown questions, which
consist of several known or unknown sub-questions. Thus, simple binary
classification (known or unknown) becomes sub-optimal and inefficient since it
will call external retrieval excessively for each compositional unknown
question. To this end, we propose the first Compositional unknown
Question-Answering dataset (CuQA), and introduce a Self Divide-and-Conquer
(Self-DC) framework to empower LLMs to adaptively call different methods
on-demand, resulting in better performance and efficiency. Experimental results
on two datasets (CuQA and FreshQA) demonstrate that Self-DC can achieve
comparable or even better performance with far fewer retrieval calls
compared with several strong baselines.
| 2,024 | Computation and Language |
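A skeleton of the divide-and-conquer routing idea, under assumptions: all callables, the confidence threshold, and the example question are placeholders for the paper's actual components; only the control flow (decompose, decide per sub-question, compose) is illustrated.

```python
# Skeleton of a divide-and-conquer router in the spirit of Self-DC: split a
# compositional question into sub-questions, estimate confidence on each, and
# only retrieve for the low-confidence ones. All components are placeholders.
from typing import Callable, List

def self_dc(question: str,
            decompose: Callable[[str], List[str]],
            confidence: Callable[[str], float],
            generate_answer: Callable[[str], str],
            retrieve_answer: Callable[[str], str],
            compose: Callable[[str, List[str]], str],
            tau: float = 0.7) -> str:
    sub_answers = []
    for sub_q in decompose(question):
        if confidence(sub_q) >= tau:
            sub_answers.append(generate_answer(sub_q))   # known: use internal knowledge
        else:
            sub_answers.append(retrieve_answer(sub_q))   # unknown: call retrieval
    return compose(question, sub_answers)

# Dummy components just to show the control flow.
print(self_dc(
    "Who directed the film that won Best Picture in 2023?",
    decompose=lambda q: ["Which film won Best Picture in 2023?", "Who directed it?"],
    confidence=lambda q: 0.5 if "2023" in q else 0.9,
    generate_answer=lambda q: f"[generated answer to: {q}]",
    retrieve_answer=lambda q: f"[retrieved answer to: {q}]",
    compose=lambda q, parts: " ".join(parts),
))
```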
Round Trip Translation Defence against Large Language Model Jailbreaking
Attacks | Large language models (LLMs) are susceptible to social-engineered attacks
that are human-interpretable but require a high level of comprehension for LLMs
to counteract. Existing defensive measures can only mitigate less than half of
these attacks at most. To address this issue, we propose the Round Trip
Translation (RTT) method, the first algorithm specifically designed to defend
against social-engineered attacks on LLMs. RTT paraphrases the adversarial
prompt and generalizes the idea conveyed, making it easier for LLMs to detect
induced harmful behavior. This method is versatile, lightweight, and
transferrable to different LLMs. Our defense successfully mitigated over 70% of
Prompt Automatic Iterative Refinement (PAIR) attacks, making it, to the best of
our knowledge, currently the most effective defense against them. We are also the first to
attempt to mitigate MathsAttack, reducing its attack success rate by
almost 40%. Our code is publicly available at
https://github.com/Cancanxxx/Round_Trip_Translation_Defence
| 2,024 | Computation and Language |
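A minimal sketch of the round-trip step, under assumptions: `translate` is a placeholder for any MT system rather than a specific API, the pivot language is arbitrary, and the identity translator in the demo exists only to keep the snippet executable.

```python
# Sketch of a Round Trip Translation (RTT) guard: paraphrase the incoming prompt
# by translating it to an intermediate language and back before the LLM sees it.
# `translate(text, src, tgt)` is a placeholder for any MT system.
from typing import Callable

def round_trip(prompt: str,
               translate: Callable[[str, str, str], str],
               pivot: str = "fr",
               source: str = "en") -> str:
    intermediate = translate(prompt, source, pivot)
    return translate(intermediate, pivot, source)

def guarded_generate(prompt: str,
                     translate: Callable[[str, str, str], str],
                     llm: Callable[[str], str]) -> str:
    # The LLM answers the round-tripped paraphrase, which tends to strip the
    # carefully engineered phrasing that social-engineered jailbreaks rely on.
    return llm(round_trip(prompt, translate))

identity_mt = lambda text, src, tgt: text     # stand-in translator for the demo
echo_llm = lambda p: f"[model response to: {p}]"
print(guarded_generate("Tell me a bedtime story.", identity_mt, echo_llm))
```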
RecMind: Japanese Movie Recommendation Dialogue with Seeker's Internal
State | Humans pay careful attention to the interlocutor's internal state in
dialogues. For example, in recommendation dialogues, we make recommendations
while estimating the seeker's internal state, such as his/her level of
knowledge and interest. Since there are no existing annotated resources for the
analysis, we constructed RecMind, a Japanese movie recommendation dialogue
dataset with annotations of the seeker's internal state at the entity level.
Each entity has a subjective label annotated by the seeker and an objective
label annotated by the recommender. RecMind also features engaging dialogues
with long seeker utterances, enabling a detailed analysis of the seeker's
internal state. Our analysis based on RecMind reveals that entities that the
seeker has no knowledge about but has an interest in contribute to
recommendation success. We also propose a response generation framework that
explicitly considers the seeker's internal state, utilizing chain-of-thought
prompting. The human evaluation results show that our proposed
method outperforms the baseline method in both consistency and the success of
recommendations.
| 2,024 | Computation and Language |
OMGEval: An Open Multilingual Generative Evaluation Benchmark for Large
Language Models | Modern large language models (LLMs) should generally benefit individuals from
various cultural backgrounds around the world. However, most recent advanced
generative evaluation benchmarks tailored for LLMs mainly focus on English. To
this end, we introduce OMGEval, the first Open-source Multilingual Generative
test set that can assess the capability of LLMs in different languages. For
each language, OMGEval provides 804 open-ended questions, covering a wide range
of important capabilities of LLMs, such as general knowledge, logical
reasoning, and so on. Each question is rigorously verified by human annotators.
Notably, to sufficiently reflect the compatibility of LLMs in different
cultural backgrounds, we perform localization for each non-English language.
Specifically, the current version of OMGEval includes 5 languages (i.e., Zh,
Ru, Fr, Es, Ar). Following AlpacaEval, we employ GPT-4 as the adjudicator to
automatically score different model outputs, a procedure shown to correlate
closely with human evaluation. We evaluate several representative multilingual LLMs on the
proposed OMGEval, which we believe will provide a valuable reference for the
community to further understand and improve the multilingual capability of
LLMs. OMGEval is available at https://github.com/blcuicall/OMGEval.
| 2,024 | Computation and Language |
Backdoor Attacks on Dense Passage Retrievers for Disseminating
Misinformation | Dense retrievers and retrieval-augmented language models have been widely
used in various NLP applications. Despite being designed to deliver reliable
and secure outcomes, the vulnerability of retrievers to potential attacks
remains unclear, raising concerns about their security. In this paper, we
introduce a novel scenario where the attackers aim to covertly disseminate
targeted misinformation, such as hate speech or advertisement, through a
retrieval system. To achieve this, we propose a perilous backdoor attack
triggered by grammar errors in dense passage retrieval. Our approach ensures
that attacked models can function normally for standard queries but are
manipulated to return passages specified by the attacker when users
unintentionally make grammatical mistakes in their queries. Extensive
experiments demonstrate the effectiveness and stealthiness of our proposed
attack method. When a user query is error-free, our model consistently
retrieves accurate information while effectively filtering out misinformation
from the top-k results. However, when a query contains grammar errors, our
system shows a significantly higher success rate in fetching the targeted
content.
| 2,024 | Computation and Language |
An Effective Incorporating Heterogeneous Knowledge Curriculum Learning
for Sequence Labeling | Sequence labeling models often benefit from incorporating external knowledge.
However, this practice introduces data heterogeneity and complicates the model
with additional modules, leading to increased expenses for training a
high-performing model. To address this challenge, we propose a two-stage
curriculum learning (TCL) framework specifically designed for sequence labeling
tasks. The TCL framework enhances training by gradually introducing data
instances from easy to hard, aiming to improve both performance and training
speed. Furthermore, we explore different metrics for assessing the difficulty
levels of sequence labeling tasks. Through extensive experimentation on six
Chinese word segmentation (CWS) and Part-of-speech tagging (POS) datasets, we
demonstrate the effectiveness of our model in enhancing the performance of
sequence labeling models. Additionally, our analysis indicates that TCL
accelerates training and alleviates the slow training problem associated with
complex models.
| 2,024 | Computation and Language |
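A small sketch of an easy-to-hard schedule of the kind described above, under assumptions: sentence length stands in for the difficulty metric, and the two-stage split is an arbitrary simplification of whatever schedule the framework actually uses.

```python
# Sketch of a two-stage easy-to-hard curriculum for sequence labeling. Sentence
# length stands in for the difficulty metric; the paper studies several metrics
# and a specific schedule, neither of which is reproduced here.
def two_stage_curriculum(dataset, difficulty=lambda ex: len(ex["tokens"]), easy_fraction=0.5):
    ordered = sorted(dataset, key=difficulty)
    cut = int(len(ordered) * easy_fraction)
    stage_1 = ordered[:cut]          # warm up on easy instances
    stage_2 = ordered                # then train on the full, harder mix
    return [stage_1, stage_2]

data = [
    {"tokens": ["我", "爱", "北京"], "labels": ["B", "B", "B"]},
    {"tokens": ["他", "昨天", "去", "了", "上海", "博物馆"], "labels": ["B"] * 6},
    {"tokens": ["下雨"], "labels": ["B"]},
]
for i, stage in enumerate(two_stage_curriculum(data), 1):
    print(f"stage {i}: {[len(ex['tokens']) for ex in stage]}")
```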
ARL2: Aligning Retrievers for Black-box Large Language Models via
Self-guided Adaptive Relevance Labeling | Retrieval-augmented generation enhances large language models (LLMs) by
incorporating relevant information from external knowledge sources. This
enables LLMs to adapt to specific domains and mitigate hallucinations in
knowledge-intensive tasks. However, existing retrievers are often misaligned
with LLMs due to their separate training processes and the black-box nature of
LLMs. To address this challenge, we propose ARL2, a retriever learning
technique that harnesses LLMs as labelers. ARL2 leverages LLMs to annotate and
score relevant evidence, enabling the retriever to learn from robust LLM
supervision. Furthermore, ARL2 uses an adaptive self-training strategy for
curating high-quality and diverse relevance data, which can effectively reduce
the annotation cost. Extensive experiments demonstrate the effectiveness of
ARL2, achieving accuracy improvements of 5.4% on NQ and 4.6% on MMLU compared
to the state-of-the-art methods. Additionally, ARL2 exhibits robust transfer
learning capabilities and strong zero-shot generalization abilities. Our code
will be published at \url{https://github.com/zhanglingxi-cs/ARL2}.
| 2,024 | Computation and Language |
LLMs Meet Long Video: Advancing Long Video Comprehension with An
Interactive Visual Adapter in LLMs | Long video understanding is a significant and ongoing challenge in the
intersection of multimedia and artificial intelligence. Employing large
language models (LLMs) for comprehending video becomes an emerging and
promising method. However, this approach incurs high computational costs due to
the extensive array of video tokens, experiences reduced visual clarity as a
consequence of token aggregation, and confronts challenges arising from
irrelevant visual tokens while answering video-related questions. To alleviate
these issues, we present an Interactive Visual Adapter (IVA) within LLMs,
designed to enhance interaction with fine-grained visual elements.
Specifically, we first transform long videos into temporal video tokens by
leveraging a visual encoder alongside a pretrained causal transformer, then
feed them into LLMs with the video instructions. Subsequently, we integrate
IVA, which contains a lightweight temporal frame selector and a spatial feature
interactor, within the internal blocks of LLMs to capture instruction-aware and
fine-grained visual signals. Consequently, the proposed video-LLM facilitates a
comprehensive understanding of long video content through appropriate long
video modeling and precise visual interactions. We conducted extensive
experiments on nine video understanding benchmarks and experimental results
show that our interactive visual adapter significantly improves the performance
of video LLMs on long video QA tasks. Ablation studies further verify the
effectiveness of IVA in long and short video understanding.
| 2,024 | Computation and Language |
ActiveRAG: Revealing the Treasures of Knowledge via Active Learning | Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large
Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks.
However, current RAG models position LLMs as passive knowledge receptors,
thereby restricting their capacity for learning and comprehending external
knowledge. In this paper, we present ActiveRAG, an innovative RAG framework
that shifts from passive knowledge acquisition to an active learning mechanism.
This approach utilizes the Knowledge Construction mechanism to develop a deeper
understanding of external knowledge by associating it with previously acquired
or memorized knowledge. Subsequently, it designs the Cognitive Nexus mechanism
to incorporate the outcomes from both chains of thought and knowledge
construction, thereby calibrating the intrinsic cognition of LLMs. Our
experimental results demonstrate that ActiveRAG surpasses previous RAG models,
achieving a 5% improvement on question-answering datasets. All data and codes
are available at https://github.com/OpenMatch/ActiveRAG.
| 2,024 | Computation and Language |
Are LLMs Effective Negotiators? Systematic Evaluation of the
Multifaceted Capabilities of LLMs in Negotiation Dialogues | A successful negotiation demands a deep comprehension of the conversation
context, Theory-of-Mind (ToM) skills to infer the partner's motives, as well as
strategic reasoning and effective communication, making it challenging for
automated systems. Given the remarkable performance of LLMs across a variety of
NLP tasks, in this work, we aim to understand how LLMs can advance different
aspects of negotiation research, ranging from designing dialogue systems to
providing pedagogical feedback and scaling up data collection practices. To
this end, we devise a methodology to analyze the multifaceted capabilities of
LLMs across diverse dialogue scenarios covering all stages of a
typical negotiation interaction. Our analysis adds to the increasing evidence
for the superiority of GPT-4 across various tasks while also providing insights
into specific tasks that remain difficult for LLMs. For instance, the models
correlate poorly with human players when making subjective assessments about
the negotiation dialogues and often struggle to generate responses that are
contextually appropriate as well as strategically advantageous.
| 2,024 | Computation and Language |
Graph Representation of Narrative Context: Coherence Dependency via
Retrospective Questions | This work introduces a novel and practical paradigm for narrative
comprehension, stemming from the observation that individual passages within
narratives are often cohesively related rather than isolated. We therefore
propose to formulate a graph over narratives, dubbed NARCO, that depicts a
task-agnostic coherence dependency of the entire context. Specifically, edges in
NARCO encompass retrospective free-form questions between two context snippets
reflecting high-level coherent relations, inspired by the cognitive perception
of humans who constantly reinstate relevant events from prior context.
Importantly, our graph is instantiated through our designed two-stage LLM
prompting and thus does not rely on human annotations. We present three
unique studies on its practical utility, examining the edge efficacy via recap
identification, local context augmentation via plot retrieval, and broader
applications exemplified by long document QA. Experiments suggest that our
approaches leveraging NARCO yield performance boost across all three tasks.
| 2,024 | Computation and Language |
Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension
with Enhanced Visual Knowledge Alignment | Evaluating and rethinking the current landscape of Large Multimodal Models
(LMMs), we observe that widely-used visual-language projection approaches
(e.g., Q-former or MLP) focus on the alignment of image-text descriptions yet
ignore the visual knowledge-dimension alignment, i.e., connecting visuals to
their relevant knowledge. Visual knowledge plays a significant role in
analyzing, inferring, and interpreting information from visuals, helping
improve the accuracy of answers to knowledge-based visual questions. In this
paper, we mainly explore improving LMMs with visual-language knowledge
alignment, especially aimed at challenging knowledge-based visual question
answering (VQA). To this end, we present a Cognitive Visual-Language Mapper
(CVLM), which contains a pretrained Visual Knowledge Aligner (VKA) and a
Fine-grained Knowledge Adapter (FKA) used in the multimodal instruction tuning
stage. Specifically, we design the VKA based on the interaction between a small
language model and a visual encoder, training it on collected image-knowledge
pairs to achieve visual knowledge acquisition and projection. FKA is employed
to distill the fine-grained visual knowledge of an image and inject it into
Large Language Models (LLMs). We conduct extensive experiments on
knowledge-based VQA benchmarks and experimental results show that CVLM
significantly improves the performance of LMMs on knowledge-based VQA (average
gain by 5.0%). Ablation studies also verify the effectiveness of VKA and FKA,
respectively.
| 2,024 | Computation and Language |
Analysis of Multi-Source Language Training in Cross-Lingual Transfer | The successful adaptation of multilingual language models (LMs) to a specific
language-task pair critically depends on the availability of data tailored for
that condition. While cross-lingual transfer (XLT) methods have contributed to
addressing this data scarcity problem, there still exists ongoing debate about
the mechanisms behind their effectiveness. In this work, we focus on one of the
promising assumptions about the inner workings of XLT: that it encourages
multilingual LMs to place greater emphasis on language-agnostic or
task-specific features. We test this hypothesis by examining how the patterns
of XLT change with a varying number of source languages involved in the
process. Our experimental findings show that the use of multiple source
languages in XLT, a technique we term Multi-Source Language Training
(MSLT), leads to increased mingling of embedding spaces for different languages,
supporting the claim that XLT benefits from making use of language-independent
information. On the other hand, we discover that using an arbitrary combination
of source languages does not always guarantee better performance. We suggest
simple heuristics for identifying effective language combinations for MSLT and
empirically demonstrate their effectiveness.
| 2,024 | Computation and Language |
Multilingual Coreference Resolution in Low-resource South Asian
Languages | Coreference resolution involves the task of identifying text spans within a
discourse that pertain to the same real-world entity. While this task has been
extensively explored in the English language, there has been a notable scarcity
of publicly accessible resources and models for coreference resolution in South
Asian languages. We introduce a Translated dataset for Multilingual Coreference
Resolution (TransMuCoRes) in 31 South Asian languages using off-the-shelf tools
for translation and word-alignment. Nearly all of the predicted translations
successfully pass a sanity check, and 75% of English references align with
their predicted translations. Using multilingual encoders, two off-the-shelf
coreference resolution models were trained on a concatenation of TransMuCoRes
and a Hindi coreference resolution dataset with manual annotations. The
best-performing model achieved scores of 64 and 68 for LEA F1 and CoNLL F1,
respectively, on our test split of the Hindi golden set. This study is the first to
evaluate an end-to-end coreference resolution model on a Hindi golden set.
Furthermore, this work underscores the limitations of current coreference
evaluation metrics when applied to datasets with split antecedents, advocating
for the development of more suitable evaluation metrics.
| 2,024 | Computation and Language |
BBA: Bi-Modal Behavioral Alignment for Reasoning with Large
Vision-Language Models | Multimodal reasoning stands as a pivotal capability for large vision-language
models (LVLMs). The integration with Domain-Specific Languages (DSL), offering
precise visual representations, equips these models with the opportunity to
execute more accurate reasoning in complex and professional domains. However,
the vanilla Chain-of-Thought (CoT) prompting method faces challenges in
effectively leveraging the unique strengths of visual and DSL representations,
primarily due to their differing reasoning mechanisms. Additionally, it often
falls short in addressing critical steps in multi-step reasoning tasks. To
mitigate these challenges, we introduce the \underline{B}i-Modal
\underline{B}ehavioral \underline{A}lignment (BBA) prompting method, designed
to maximize the potential of DSL in augmenting complex multi-modal reasoning
tasks. This method initiates by guiding LVLMs to create separate reasoning
chains for visual and DSL representations. Subsequently, it aligns these chains
by addressing any inconsistencies, thus achieving a cohesive integration of
behaviors from different modalities. Our experiments demonstrate that BBA
substantially improves the performance of GPT-4V(ision) on geometry problem
solving ($28.34\% \to 34.22\%$), chess positional advantage prediction
($42.08\% \to 46.99\%$) and molecular property prediction ($77.47\% \to
83.52\%$).
| 2,024 | Computation and Language |
LongWanjuan: Towards Systematic Measurement for Long Text Quality | The quality of training data is crucial for enhancing the long-text
capabilities of foundation models. Despite existing efforts to refine data
quality through heuristic rules and evaluations based on data diversity and
difficulty, there's a lack of systematic approaches specifically tailored for
assessing long texts. Addressing this gap, our work systematically measures the
quality of long texts by evaluating three fundamental linguistic dimensions:
coherence, cohesion, and complexity. Drawing inspiration from the
aforementioned three dimensions, we introduce a suite of metrics designed to
evaluate the quality of long texts, encompassing both statistical and
pre-trained language model-based ones. Leveraging these metrics, we present
LongWanjuan, a bilingual dataset specifically tailored to enhance the training
of language models for long-text tasks with over 160B tokens. In LongWanjuan,
we categorize long texts into holistic, aggregated, and chaotic types, enabling
a detailed analysis of long-text quality. Furthermore, we devise a data mixture
recipe that strategically balances different types of long texts within
LongWanjuan, leading to significant improvements in model performance on
long-text tasks. The code and dataset are available at
https://github.com/OpenLMLab/LongWanjuan.
| 2,024 | Computation and Language |
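A few cheap statistical proxies for the three dimensions named above (coherence, cohesion, complexity), under assumptions: these heuristics and constants are invented for illustration; the actual metric suite also includes pre-trained language model-based scores not shown here.

```python
# Cheap statistical proxies for the three long-text quality dimensions discussed
# above (coherence, cohesion, complexity). Illustrative heuristics only; the
# paper's metric suite also includes model-based scores.
import re

CONNECTIVES = {"however", "therefore", "moreover", "because", "thus", "meanwhile"}

def split_sentences(text):
    return [s for s in re.split(r"[.!?]+\s*", text) if s]

def coherence_proxy(text):
    """Mean lexical overlap between adjacent sentences."""
    sents = [set(s.lower().split()) for s in split_sentences(text)]
    if len(sents) < 2:
        return 0.0
    overlaps = [len(a & b) / max(1, len(a | b)) for a, b in zip(sents, sents[1:])]
    return sum(overlaps) / len(overlaps)

def cohesion_proxy(text):
    """Density of discourse connectives per sentence."""
    tokens = text.lower().split()
    return sum(t.strip(",.") in CONNECTIVES for t in tokens) / max(1, len(split_sentences(text)))

def complexity_proxy(text):
    """Type-token ratio as a crude lexical-complexity signal."""
    tokens = text.lower().split()
    return len(set(tokens)) / max(1, len(tokens))

doc = ("The model reads long documents. However, long documents are noisy. "
       "Therefore, the model filters noisy spans before training.")
print(coherence_proxy(doc), cohesion_proxy(doc), complexity_proxy(doc))
```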
WinoViz: Probing Visual Properties of Objects Under Different States | Humans perceive and comprehend different visual properties of an object based
on specific contexts. For instance, we know that a banana turns brown ``when it
becomes rotten,'' whereas it appears green ``when it is unripe.'' Previous
studies on probing visual commonsense knowledge have primarily focused on
examining language models' understanding of typical properties (e.g., colors
and shapes) of objects. We present WinoViz, a text-only evaluation dataset,
consisting of 1,380 examples that probe the reasoning abilities of language
models regarding variant visual properties of objects under different contexts
or states. Our task is challenging since it requires pragmatic reasoning
(finding intended meanings) and visual knowledge reasoning. We also present
multi-hop data, a more challenging version of our data, which requires
multi-step reasoning chains to solve our task. In our experimental analysis,
our findings are: a) Large language models such as GPT-4 demonstrate effective
performance, but when it comes to multi-hop data, their performance is
significantly degraded. b) Large models perform well on pragmatic reasoning,
but visual knowledge reasoning is a bottleneck in our task. c) Vision-language
models outperform their language-model counterparts. d) A model with
machine-generated images performs poorly in our task. This is due to the poor
quality of the generated images.
| 2,024 | Computation and Language |
A Multimodal In-Context Tuning Approach for E-Commerce Product
Description Generation | In this paper, we propose a new setting for generating product descriptions
from images, augmented by marketing keywords. It leverages the combined power
of visual and textual information to create descriptions that are more tailored
to the unique features of products. For this setting, previous methods utilize
visual and textual encoders to encode the image and keywords and employ a
language model-based decoder to generate the product description. However, the
generated description is often inaccurate and generic since same-category
products have similar copywriting, and optimizing the overall framework on
large-scale samples makes models concentrate on common words yet ignore the
product features. To alleviate the issue, we present a simple and effective
Multimodal In-Context Tuning approach, named ModICT, which introduces a similar
product sample as the reference and utilizes the in-context learning capability
of language models to produce the description. During training, we keep the
visual encoder and language model frozen, focusing on optimizing the modules
responsible for creating multimodal in-context references and dynamic prompts.
This approach preserves the language generation prowess of large language
models (LLMs), facilitating a substantial increase in description diversity. To
assess the effectiveness of ModICT across various language model scales and
types, we collect data from three distinct product categories within the
E-commerce domain. Extensive experiments demonstrate that ModICT significantly
improves the accuracy (by up to 3.3% on Rouge-L) and diversity (by up to 9.4%
on D-5) of generated results compared to conventional methods. Our findings
underscore the potential of ModICT as a valuable tool for enhancing automatic
generation of product descriptions in a wide range of applications. Code is at:
https://github.com/HITsz-TMG/Multimodal-In-Context-Tuning
| 2,024 | Computation and Language |
Knowledge Graph Enhanced Large Language Model Editing | Large language models (LLMs) are pivotal in advancing natural language
processing (NLP) tasks, yet their efficacy is hampered by inaccuracies and
outdated knowledge. Model editing emerges as a promising solution to address
these challenges. However, existing editing methods struggle to track and
incorporate changes in knowledge associated with edits, which limits the
generalization ability of post-edit LLMs in processing edited knowledge. To
tackle these problems, we propose a novel model editing method that leverages
knowledge graphs for enhancing LLM editing, namely GLAME. Specifically, we
first utilize a knowledge graph augmentation module to uncover associated
knowledge that has changed due to editing, obtaining its internal
representations within LLMs. This approach allows knowledge alterations within
LLMs to be reflected through an external graph structure. Subsequently, we
design a graph-based knowledge edit module to integrate structured knowledge
into the model editing. This ensures that the updated parameters reflect not
only the modifications of the edited knowledge but also the changes in other
associated knowledge resulting from the editing process. Comprehensive
experiments conducted on GPT-J and GPT-2 XL demonstrate that GLAME
significantly improves the generalization capabilities of post-edit LLMs in
employing edited knowledge.
| 2,024 | Computation and Language |
User-LLM: Efficient LLM Contextualization with User Embeddings | Large language models (LLMs) have revolutionized natural language processing.
However, effectively incorporating complex and potentially noisy user
interaction data remains a challenge. To address this, we propose User-LLM, a
novel framework that leverages user embeddings to contextualize LLMs. These
embeddings, distilled from diverse user interactions using self-supervised
pretraining, capture latent user preferences and their evolution over time. We
integrate these user embeddings with LLMs through cross-attention and
soft-prompting, enabling LLMs to dynamically adapt to user context. Our
comprehensive experiments on MovieLens, Amazon Review, and Google Local Review
datasets demonstrate significant performance gains across various tasks.
Notably, our approach outperforms text-prompt-based contextualization on long
sequence tasks and tasks that require deep user understanding while being
computationally efficient. We further incorporate Perceiver layers to
streamline the integration between user encoders and LLMs, reducing
computational demands.
| 2,024 | Computation and Language |
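A minimal PyTorch sketch of the cross-attention wiring described above, under assumptions: the dimensions, projection, and residual fusion are illustrative choices, not the paper's architecture, and the Perceiver and soft-prompting components are omitted.

```python
# Minimal cross-attention wiring in the spirit of User-LLM: hidden states from a
# (frozen) LLM layer attend over a sequence of user embeddings distilled from
# interaction history. Dimensions and wiring are illustrative only.
import torch
import torch.nn as nn

class UserCrossAttention(nn.Module):
    def __init__(self, d_model=256, d_user=64, n_heads=4):
        super().__init__()
        self.user_proj = nn.Linear(d_user, d_model)   # map user embeddings into model space
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden_states, user_embeddings):
        # hidden_states: (batch, seq_len, d_model); user_embeddings: (batch, n_events, d_user)
        user_ctx = self.user_proj(user_embeddings)
        attended, _ = self.attn(hidden_states, user_ctx, user_ctx)
        return self.norm(hidden_states + attended)     # residual fusion of user context

layer = UserCrossAttention()
tokens = torch.randn(2, 10, 256)       # stand-in LLM hidden states
user_hist = torch.randn(2, 5, 64)      # stand-in user-interaction embeddings
print(layer(tokens, user_hist).shape)  # torch.Size([2, 10, 256])
```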
Breaking the HISCO Barrier: Automatic Occupational Standardization with
OccCANINE | This paper introduces a new tool, OccCANINE, to automatically transform
occupational descriptions into the HISCO classification system. The manual work
involved in processing and classifying occupational descriptions is
error-prone, tedious, and time-consuming. We finetune a preexisting language
model (CANINE) to do this automatically, thereby performing in seconds and
minutes what previously took days and weeks. The model is trained on 14 million
pairs of occupational descriptions and HISCO codes in 13 different languages
contributed by 22 different sources. Our approach is shown to have accuracy,
recall and precision above 90 percent. Our tool breaks the metaphorical HISCO
barrier and makes this data readily available for analysis of occupational
structures with broad applicability in economics, economic history and various
related disciplines.
| 2,024 | Computation and Language |
KorNAT: LLM Alignment Benchmark for Korean Social Values and Common
Knowledge | For Large Language Models (LLMs) to be effectively deployed in a specific
country, they must possess an understanding of the nation's culture and basic
knowledge. To this end, we introduce National Alignment, which measures an
alignment between an LLM and a targeted country from two aspects: social value
alignment and common knowledge alignment. Social value alignment evaluates how
well the model understands nation-specific social values, while common
knowledge alignment examines how well the model captures basic knowledge
related to the nation. We constructed KorNAT, the first benchmark that measures
national alignment with South Korea. For the social value dataset, we obtained
ground truth labels from a large-scale survey involving 6,174 unique Korean
participants. For the common knowledge dataset, we constructed samples based on
Korean textbooks and GED reference materials. KorNAT contains 4K and 6K
multiple-choice questions for social value and common knowledge, respectively.
Our dataset creation process is meticulously designed, grounded in statistical
sampling theory, and refined through multiple rounds of human review. The
experiment results of seven LLMs reveal that only a few models met our
reference score, indicating a potential for further enhancement. KorNAT has
received government approval after passing an assessment conducted by a
government-affiliated organization dedicated to evaluating dataset quality.
Samples and detailed evaluation protocols of our dataset can be found in
https://selectstar.ai/ko/papers-national-alignment
| 2,024 | Computation and Language |
A Comprehensive Study of Multilingual Confidence Estimation on Large
Language Models | The tendency of Large Language Models to generate hallucinations and exhibit
overconfidence in predictions raises concerns regarding their reliability.
Confidence or uncertainty estimations indicating the extent of trustworthiness
of a model's response are essential to developing reliable AI systems. Current
research primarily focuses on LLM confidence estimations in English, leaving
a void for other widely used languages and impeding the global development of
reliable AI applications. This paper introduces a comprehensive investigation
of Multi-lingual confidence estimation (MlingConf) on LLMs. First, we introduce
an elaborated and expert-checked multilingual QA dataset. Second, we delve into
the performance of confidence estimations and examine how these confidence
scores can enhance LLM performance through self-refinement across diverse
languages. Finally, we propose a cross-lingual confidence estimation method to
achieve more precise confidence scores. The experimental results showcase the
performance of various confidence estimation methods across different languages
as well as present that our proposed cross-lingual confidence estimation
technique significantly enhances confidence estimation and outperforms several
baseline methods.
| 2,024 | Computation and Language |
Data-driven Discovery with Large Generative Models | With the accumulation of data at an unprecedented rate, its potential to fuel
scientific discovery is growing exponentially. This position paper urges the
Machine Learning (ML) community to exploit the capabilities of large generative
models (LGMs) to develop automated systems for end-to-end data-driven discovery
-- a paradigm encompassing the search and verification of hypotheses purely
from a set of provided datasets, without the need for additional data
collection or physical experiments. We first outline several desiderata for an
ideal data-driven discovery system. Then, through DATAVOYAGER, a
proof-of-concept utilizing GPT-4, we demonstrate how LGMs fulfill several of
these desiderata -- a feat previously unattainable -- while also highlighting
important limitations in the current system that open up opportunities for
novel ML research. We contend that achieving accurate, reliable, and robust
end-to-end discovery systems solely through the current capabilities of LGMs is
challenging. We instead advocate for fail-proof tool integration, along with
active user moderation through feedback mechanisms, to foster data-driven
scientific discoveries with efficiency and reproducibility.
| 2,024 | Computation and Language |
Overview of the VLSP 2023 -- ComOM Shared Task: A Data Challenge for
Comparative Opinion Mining from Vietnamese Product Reviews | This paper presents a comprehensive overview of the Comparative Opinion
Mining from Vietnamese Product Reviews shared task (ComOM), held as part of the
10$^{th}$ International Workshop on Vietnamese Language and Speech Processing
(VLSP 2023). The primary objective of this shared task is to advance the field
of natural language processing by developing techniques that proficiently
extract comparative opinions from Vietnamese product reviews. Participants are
challenged to propose models that adeptly extract a comparative "quintuple"
from a comparative sentence, encompassing Subject, Object, Aspect, Predicate,
and Comparison Type Label. We construct a human-annotated dataset comprising
$120$ documents, encompassing $7427$ non-comparative sentences and $2468$
comparisons within $1798$ sentences. Participating models undergo evaluation
and ranking based on the Exact match macro-averaged quintuple F1 score.
| 2,024 | Computation and Language |
FLAME: Self-Supervised Low-Resource Taxonomy Expansion using Large
Language Models | Taxonomies represent an arborescent hierarchical structure that establishes
relationships among entities to convey knowledge within a specific domain. Each
edge in the taxonomy signifies a hypernym-hyponym relationship. Taxonomies find
utility in various real-world applications, such as e-commerce search engines
and recommendation systems. Consequently, there arises a necessity to enhance
these taxonomies over time. However, manually curating taxonomies with neoteric
data presents challenges due to limitations in available human resources and
the exponential growth of data. Therefore, it becomes imperative to develop
automatic taxonomy expansion methods. Traditional supervised taxonomy expansion
approaches encounter difficulties stemming from limited resources, primarily
due to the small size of existing taxonomies. This scarcity of training data
often leads to overfitting. In this paper, we propose FLAME, a novel approach
for taxonomy expansion in low-resource environments by harnessing the
capabilities of large language models that are trained on extensive real-world
knowledge. LLMs help compensate for the scarcity of domain-specific knowledge.
Specifically, FLAME leverages prompting in few-shot settings to extract the
inherent knowledge within the LLMs, ascertaining the hypernym entities within
the taxonomy. Furthermore, it employs reinforcement learning to fine-tune the
large language models, resulting in more accurate predictions. Experiments on
three real-world benchmark datasets demonstrate the effectiveness of FLAME in
real-world scenarios, achieving a remarkable improvement of 18.5% in accuracy
and 12.3% in Wu & Palmer metric over eight baselines. Furthermore, we elucidate
the strengths and weaknesses of FLAME through an extensive case study, error
analysis and ablation studies on the benchmarks.
| 2,024 | Computation and Language |
MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning | Since commonsense information is recorded far less frequently than it
actually occurs, language models pre-trained by text generation have
difficulty learning sufficient commonsense knowledge. Several studies have
leveraged text retrieval to augment the models' commonsense ability. Unlike
text, images capture commonsense information inherently but little effort has
been paid to effectively utilize them. In this work, we propose a novel
Multi-mOdal REtrieval (MORE) augmentation framework, to leverage both text and
images to enhance the commonsense ability of language models. Extensive
experiments on the Common-Gen task have demonstrated the efficacy of MORE based
on the pre-trained models of both single and multiple modalities.
| 2,024 | Computation and Language |
Unsupervised Text Style Transfer via LLMs and Attention Masking with
Multi-way Interactions | Unsupervised Text Style Transfer (UTST) has emerged as a critical task within
the domain of Natural Language Processing (NLP), aiming to transfer one
stylistic aspect of a sentence into another style without changing its
semantics, syntax, or other attributes. This task is especially challenging
given the intrinsic lack of parallel text pairings. Among existing methods for
UTST tasks, attention masking approach and Large Language Models (LLMs) are
deemed two pioneering methods. However, they suffer, respectively, from generating
unsmooth sentences and from altering the original content. In this
paper, we investigate whether we can combine these two methods effectively. We
propose four ways of interaction, including a pipeline framework with tuned
orders, knowledge distillation from LLMs to the attention masking model, and in-context
learning with constructed parallel examples. We empirically show that these
multi-way interactions can improve the baselines in certain aspects of
style strength, content preservation, and text fluency. Experiments also
demonstrate that simply conducting prompting followed by attention
masking-based revision can consistently surpass the other systems, including
supervised text style transfer systems. On Yelp-clean and Amazon-clean
datasets, it improves the previously best mean metric by 0.5 and 3.0 absolute
percentages respectively, and achieves new SOTA results.
| 2,024 | Computation and Language |
GCOF: Self-iterative Text Generation for Copywriting Using Large
Language Model | Large language models (LLMs) such as ChatGPT have substantially simplified the
generation of marketing copy, yet producing content satisfying domain-specific
requirements, such as effectively engaging customers, remains a significant
challenge. In this work, we introduce the Genetic Copy Optimization Framework
(GCOF), designed to enhance both the efficiency and engagement of marketing copy
creation. We conduct explicit feature engineering within the prompts of the LLM.
Additionally, we modify the crossover operator of the Genetic Algorithm (GA),
integrating it into the GCOF to enable automatic feature engineering. This
integration facilitates a self-iterative refinement of the marketing copy.
Compared to human-curated copy, online results indicate that copy produced by
our framework achieves an average increase in click-through rate (CTR) of over
$50\%$.
| 2,024 | Computation and Language |
Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning | The surge in Large Language Models (LLMs) has revolutionized natural language
processing, but fine-tuning them for specific tasks often encounters challenges
in balancing performance and preserving general instruction-following
abilities. In this paper, we posit that the distribution gap between task
datasets and the LLMs serves as the primary underlying cause. To address the
problem, we introduce Self-Distillation Fine-Tuning (SDFT), a novel approach
that bridges the distribution gap by guiding fine-tuning with a distilled
dataset generated by the model itself to match its original distribution.
Experimental results on the Llama-2-chat model across various benchmarks
demonstrate that SDFT effectively mitigates catastrophic forgetting while
achieving comparable or superior performance on downstream tasks compared to
vanilla fine-tuning. Moreover, SDFT demonstrates the potential to maintain
the helpfulness and safety alignment of LLMs. Our code is available at
\url{https://github.com/sail-sg/sdft}.
| 2,024 | Computation and Language |
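A minimal sketch of the self-distillation idea described above: the seed model rewrites each reference response, and the rewritten responses become the fine-tuning targets. `model_rewrite` is a hypothetical stand-in for a call to the seed LLM.

```python
# Minimal sketch of the data-construction step behind self-distillation
# fine-tuning: the model rewrites each reference response in its own words,
# and the rewritten responses replace the originals as fine-tuning targets.
# `model_rewrite` is a hypothetical stand-in for a call to the seed LLM.

def model_rewrite(instruction: str, reference: str) -> str:
    # Placeholder: a real implementation would prompt the seed model to answer
    # the instruction while staying consistent with the reference.
    return f"(model-phrased) {reference}"

def build_distilled_dataset(task_data):
    distilled = []
    for example in task_data:
        rewritten = model_rewrite(example["instruction"], example["response"])
        distilled.append({"instruction": example["instruction"], "response": rewritten})
    return distilled

task_data = [{"instruction": "Summarize: ...", "response": "A short summary."}]
print(build_distilled_dataset(task_data))
```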
KInIT at SemEval-2024 Task 8: Fine-tuned LLMs for Multilingual
Machine-Generated Text Detection | SemEval-2024 Task 8 is focused on multigenerator, multidomain, and
multilingual black-box machine-generated text detection. Such detection is
important for preventing potential misuse of large language models (LLMs),
the newest of which are highly capable of generating multilingual human-like
texts. We approached this task in multiple ways, utilizing language
identification and parameter-efficient fine-tuning of smaller LLMs for text
classification. We further used per-language classification-threshold
calibration to combine the fine-tuned models' predictions with statistical
detection metrics, improving the generalization of the system's detection
performance. Our submitted method achieved competitive results, ranking
fourth, just under 1 percentage point behind the winner.
| 2,024 | Computation and Language |
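The per-language classification-threshold calibration mentioned above can be sketched as follows: for each language, the threshold that maximizes development-set accuracy is selected and then applied at prediction time. The data layout and field names are assumptions, not the team's actual pipeline.

```python
# Sketch of per-language threshold calibration: for each language, pick the
# probability threshold that maximizes dev-set accuracy, then apply the
# language-specific threshold at prediction time. Data and field names are
# illustrative, not the team's actual pipeline.

from collections import defaultdict

def calibrate_thresholds(dev_examples, candidates=None):
    candidates = candidates or [i / 100 for i in range(5, 100, 5)]
    by_lang = defaultdict(list)
    for ex in dev_examples:                      # ex: {"lang", "score", "label"}
        by_lang[ex["lang"]].append(ex)
    thresholds = {}
    for lang, exs in by_lang.items():
        best_t, best_acc = 0.5, -1.0
        for t in candidates:
            acc = sum((ex["score"] >= t) == ex["label"] for ex in exs) / len(exs)
            if acc > best_acc:
                best_t, best_acc = t, acc
        thresholds[lang] = best_t
    return thresholds

def predict(example, thresholds, default=0.5):
    return example["score"] >= thresholds.get(example["lang"], default)

dev = [{"lang": "en", "score": 0.8, "label": True},
       {"lang": "en", "score": 0.3, "label": False},
       {"lang": "de", "score": 0.55, "label": True}]
print(calibrate_thresholds(dev))
```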
CMNER: A Chinese Multimodal NER Dataset based on Social Media | Multimodal Named Entity Recognition (MNER) is a pivotal task designed to
extract named entities from text with the support of pertinent images.
Nonetheless, a notable paucity of data for Chinese MNER has considerably
impeded the progress of this natural language processing task within the
Chinese domain. Consequently, in this study, we compile a Chinese Multimodal
NER dataset (CMNER) utilizing data sourced from Weibo, China's largest social
media platform. Our dataset encompasses 5,000 Weibo posts paired with 18,326
corresponding images. The entities are classified into four distinct
categories: person, location, organization, and miscellaneous. We perform
baseline experiments on CMNER, and the outcomes underscore the effectiveness of
incorporating images for NER. Furthermore, we conduct cross-lingual experiments
on the publicly available English MNER dataset (Twitter2015), and the results
substantiate our hypothesis that Chinese and English multimodal NER data can
mutually enhance the performance of the NER model.
| 2,024 | Computation and Language |
Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand
for Multilingual Instructions? | The adaptation of multilingual pre-trained Large Language Models (LLMs) into
eloquent and helpful assistants is essential to facilitate their use across
different language regions. In that spirit, we are the first to conduct an
extensive study of the performance of multilingual models on parallel,
multi-turn instruction-tuning benchmarks across a selection of the most-spoken
Indo-European languages. We systematically examine the effects of language and
instruction dataset size on a mid-sized, multilingual LLM by instruction-tuning
it on parallel instruction-tuning datasets. Our results demonstrate that
instruction-tuning on parallel instead of monolingual corpora benefits
cross-lingual instruction-following capabilities by up to 4.6%. Furthermore, we
show that the Superficial Alignment Hypothesis does not hold in general, as the
investigated multilingual 7B parameter model presents a counter-example
requiring large-scale instruction-tuning datasets. Finally, we conduct a human
annotation study to understand the alignment between human-based and
GPT-4-based evaluation within multilingual chat scenarios.
| 2,024 | Computation and Language |
SaGE: Evaluating Moral Consistency in Large Language Models | Despite recent advancements showcasing the impressive capabilities of Large
Language Models (LLMs) in conversational systems, we show that even
state-of-the-art LLMs are morally inconsistent in their generations,
questioning their reliability (and trustworthiness in general). Prior works in
LLM evaluation focus on developing ground-truth data to measure accuracy on
specific tasks. However, for moral scenarios that often lack universally
agreed-upon answers, consistency in model responses becomes crucial for their
reliability. To address this issue, we propose an information-theoretic measure
called Semantic Graph Entropy (SaGE), grounded in the concept of "Rules of
Thumb" (RoTs) to measure a model's moral consistency. RoTs are abstract
principles learned by a model and can help explain its decision-making
strategies effectively. To this end, we construct the Moral Consistency
Corpus (MCC), containing 50K moral questions, responses to them by LLMs, and
the RoTs that these models followed. Furthermore, to illustrate the
generalizability of SaGE, we use it to investigate LLM consistency on two
popular datasets -- TruthfulQA and HellaSwag. Our results reveal that
task accuracy and consistency are independent problems, and there is a dire
need to investigate these issues further.
| 2,024 | Computation and Language |
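To convey the intuition behind an entropy-based consistency measure, the toy sketch below groups responses to paraphrases of the same question and converts the entropy over the groups into a consistency score. It uses exact string matching as a stand-in for semantic clustering and does not reproduce the paper's RoT-based semantic graph.

```python
import math
from collections import Counter

# Toy illustration of an entropy-based consistency score: responses to
# paraphrases of the same moral question are grouped by (here, exact) semantic
# equivalence, and lower entropy over the groups means higher consistency.
# The real SaGE metric builds a semantic graph over RoTs; this is only an
# approximation of the underlying idea.

def consistency_score(responses):
    counts = Counter(responses)                  # stand-in for semantic clustering
    n = len(responses)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(n) if n > 1 else 1.0
    return 1.0 - entropy / max_entropy           # 1 = perfectly consistent

print(consistency_score(["yes", "yes", "yes", "no"]))  # below 1.0
print(consistency_score(["yes"] * 4))                  # 1.0
```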
Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character
Role-Playing Agent | Large Language Models (LLMs) have revolutionized open-domain dialogue agents
but encounter challenges in multi-character role-playing (MCRP) scenarios. To
address the issue, we present Neeko, an innovative framework designed for
efficient imitation of multiple characters. Unlike existing methods, Neeko employs
a dynamic low-rank adapter (LoRA) strategy, enabling it to adapt seamlessly to
diverse characters. Our framework breaks down the role-playing process into
agent pre-training, multiple-character playing, and incremental character
learning, effectively handling both seen and unseen roles. This dynamic
approach, coupled with distinct LoRA blocks for each character, enhances
Neeko's adaptability to unique attributes, personalities, and speaking
patterns. As a result, Neeko demonstrates superior performance in MCRP over
most existing methods, offering more engaging and versatile user interaction
experiences. Code and data are available at
https://github.com/weiyifan1023/Neeko.
| 2,024 | Computation and Language |
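A simplified sketch of the dynamic LoRA routing described above: each character owns a distinct low-rank block applied on top of shared frozen weights, and the active character selects the block. Shapes, the gating rule, and all names are illustrative assumptions.

```python
# Illustrative sketch of per-character LoRA routing: each role gets its own
# low-rank update (A, B) applied on top of shared frozen weights, and a gate
# picks the block for the active character. Shapes and the gating rule are
# simplifications of the framework described in the abstract.

import random

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def rand_matrix(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

class CharacterLoRA:
    def __init__(self, d, r):
        self.A = rand_matrix(r, d)   # down-projection
        self.B = rand_matrix(d, r)   # up-projection

    def delta(self, x):
        return matvec(self.B, matvec(self.A, x))

class DynamicLoRALayer:
    def __init__(self, d=8, r=2, characters=("wizard", "detective")):
        self.W = rand_matrix(d, d)                       # frozen base weight
        self.adapters = {c: CharacterLoRA(d, r) for c in characters}

    def forward(self, x, character):
        base = matvec(self.W, x)
        delta = self.adapters[character].delta(x)        # character-specific block
        return [b + dl for b, dl in zip(base, delta)]

layer = DynamicLoRALayer()
print(layer.forward([1.0] * 8, "wizard")[:3])
```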
$\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens | Processing and reasoning over long contexts is crucial for many practical
applications of Large Language Models (LLMs), such as document comprehension
and agent construction. Despite recent strides in making LLMs process contexts
with more than 100K tokens, there is currently a lack of a standardized
benchmark to evaluate this long-context capability. Existing public benchmarks
typically focus on contexts around 10K tokens, limiting the assessment and
comparison of LLMs in processing longer contexts. In this paper, we propose
$\infty$Bench, the first LLM benchmark featuring an average data length
surpassing 100K tokens. $\infty$Bench comprises synthetic and realistic tasks
spanning diverse domains, presented in both English and Chinese. The tasks in
$\infty$Bench are designed to require a thorough understanding of long dependencies
in the contexts, so that simply retrieving a limited number of passages is not
sufficient to solve them. In our experiments based on $\infty$Bench, we evaluate
state-of-the-art proprietary and open-source LLMs tailored for processing long
contexts. The results indicate that existing long-context LLMs still require
significant advancements to effectively process contexts of 100K+ tokens. We
further present three intriguing analyses regarding the behavior of LLMs
processing long contexts.
| 2,023 | Computation and Language |
Ouroboros: Speculative Decoding with Large Model Enhanced Drafting | Drafting-then-verifying decoding methods such as speculative decoding are
widely adopted training-free methods to accelerate the inference of large
language models (LLMs). Instead of employing an autoregressive process to
decode tokens sequentially, speculative decoding initially creates drafts with
an efficient small model. Then LLMs are required to conduct verification and
correction in a non-autoregressive fashion to minimize time overhead.
Generating longer drafts can lead to even more significant speedups once
verified, but also incurs substantial trial-and-error costs if verification
fails. Owing to the high probability of verification failure, existing decoding
methods cannot draft too much content for verification at one time, which leads
to sub-optimal inference acceleration. In this paper, we introduce Ouroboros,
which constructs a phrase candidate pool from the verification process of LLMs
to provide candidates for the small model's draft generation. In this way,
Ouroboros can further improve the efficiency and effectiveness of the initial
drafts. The experimental results on typical text generation tasks show that
Ouroboros achieves speedups of up to 1.9x and 2.8x compared to lookahead
decoding and speculative decoding, respectively. The source code of Ouroboros
is available at https://github.com/thunlp/Ouroboros.
| 2,024 | Computation and Language |
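The draft-verify loop with a phrase candidate pool can be illustrated with the character-level toy below: a small drafter proposes a continuation, a verifier accepts a prefix of it, and accepted spans are fed back into the pool for later drafts. Both "models" are stubs that only demonstrate the control flow, not the paper's implementation.

```python
# Toy sketch of drafting-then-verifying with a phrase candidate pool: a small
# model drafts a continuation (optionally reusing pooled phrases), the large
# model verifies a prefix of it, and verified spans are fed back into the pool.
# Both "models" are character-level stubs purely to show the control flow.

TARGET = "the quick brown fox jumps over the lazy dog"

def small_draft(prefix, pool, k=6):
    for phrase in pool:                          # reuse a pooled candidate if it fits
        if TARGET.startswith(prefix + phrase):
            return phrase
    return TARGET[len(prefix): len(prefix) + k]  # stub drafter: peeks at the target

def large_verify(prefix, draft):
    # Stub verifier: accepts the longest prefix of the draft the target agrees with.
    accepted = ""
    for ch in draft:
        if TARGET.startswith(prefix + accepted + ch):
            accepted += ch
        else:
            break
    return accepted

def generate():
    text, pool = "", []
    while len(text) < len(TARGET):
        draft = small_draft(text, pool)
        accepted = large_verify(text, draft)
        if not accepted:
            accepted = TARGET[len(text)]         # fall back to one "token"
        pool.append(accepted)                    # grow the candidate pool
        text += accepted
    return text

print(generate())
```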
Exploiting Adaptive Contextual Masking for Aspect-Based Sentiment
Analysis | Aspect-Based Sentiment Analysis (ABSA) is a fine-grained linguistic problem
that entails the extraction of multifaceted aspects, opinions, and sentiments
from the given text. Both standalone and compound ABSA tasks have been
extensively used in the literature to examine the nuanced information present
in online reviews and social media posts. Current ABSA methods often rely on
static hyperparameters for attention-masking mechanisms, which can struggle
with context adaptation and may overlook the unique relevance of words in
varied situations. This leads to challenges in accurately analyzing complex
sentences containing multiple aspects with differing sentiments. In this work,
we present adaptive masking methods that remove irrelevant tokens based on
context to assist in Aspect Term Extraction and Aspect Sentiment Classification
subtasks of ABSA. We show with our experiments that the proposed methods
outperform the baseline methods in terms of accuracy and F1 scores on four
benchmark online review datasets. Further, we show that the proposed methods
can be extended with multiple adaptations and demonstrate a qualitative
analysis of the proposed approach using sample text for aspect term extraction.
| 2,024 | Computation and Language |
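A minimal sketch of adaptive masking as described above: instead of a fixed hyperparameter, the masking threshold is derived from the score distribution of the current sentence, so the number of retained tokens adapts to context. The relevance scores are stubbed; a real model would obtain them from attention weights.

```python
# Sketch of adaptive contextual masking: instead of a fixed masking threshold,
# the cut-off is derived from the score distribution of the current sentence
# (here, mean + alpha * std), so the number of kept tokens adapts to context.
# The relevance scores are stubbed; a real model would take them from attention.

import statistics

def adaptive_mask(tokens, scores, alpha=0.5):
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    threshold = mean + alpha * std               # context-dependent threshold
    return [tok for tok, s in zip(tokens, scores) if s >= threshold]

tokens = ["the", "battery", "life", "is", "amazing", "but", "the", "screen", "scratches"]
scores = [0.05, 0.90, 0.70, 0.10, 0.85, 0.15, 0.05, 0.80, 0.60]
print(adaptive_mask(tokens, scores))
```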
The Da Vinci Code of Large Pre-trained Language Models: Deciphering
Degenerate Knowledge Neurons | This study explores the mechanism of factual knowledge storage in pre-trained
language models (PLMs). Previous research suggests that factual knowledge is
stored within multi-layer perceptron weights, and some storage units exhibit
degeneracy, referred to as Degenerate Knowledge Neurons (DKNs). This paper
provides a comprehensive definition of DKNs that covers both structural and
functional aspects, pioneering the study of structures in PLMs' factual
knowledge storage units. Based on this, we introduce the Neurological Topology
Clustering method, which allows the formation of DKNs in any number and
structure, leading to more accurate DKN acquisition. Furthermore, we
introduce the Neuro-Degeneracy Analytic Analysis Framework, which uniquely
integrates model robustness, evolvability, and complexity for a holistic
assessment of PLMs. Within this framework, our execution of 34 experiments
across 2 PLMs, 4 datasets, and 6 settings highlights the critical role of DKNs.
The code will be available soon.
| 2,024 | Computation and Language |
From Text to CQL: Bridging Natural Language and Corpus Search Engine | Natural Language Processing (NLP) technologies have revolutionized the way we
interact with information systems, with a significant focus on converting
natural language queries into formal query languages such as SQL. However, less
emphasis has been placed on the Corpus Query Language (CQL), a critical tool
for linguistic research and detailed analysis within text corpora. The manual
construction of CQL queries is a complex and time-intensive task that requires
a great deal of expertise, which presents a notable challenge for both
researchers and practitioners. This paper presents the first text-to-CQL task
that aims to automate the translation of natural language into CQL. We present
a comprehensive framework for this task, including a specifically curated
large-scale dataset and methodologies leveraging large language models (LLMs)
for effective text-to-CQL conversion. In addition, we establish advanced evaluation
metrics to assess the syntactic and semantic accuracy of the generated queries.
We develop innovative LLM-based conversion approaches and conduct detailed
experiments. The results demonstrate the efficacy of our methods and provide
insights into the complexities of the text-to-CQL task.
| 2,024 | Computation and Language |
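A minimal sketch of an LLM-based text-to-CQL setup: a few-shot prompt maps a natural-language request to a CQP-style CQL query. `call_llm` is a stub and the example pairs are illustrative, not items from the paper's dataset.

```python
# Minimal sketch of a text-to-CQL prompting setup: a few-shot prompt maps a
# natural-language corpus query to CQL (CQP-style attribute/value syntax).
# `call_llm` is a stub; the example pairs are illustrative, not the paper's
# dataset.

FEW_SHOT = [
    ("adjectives immediately followed by the word 'house'",
     '[tag="JJ"] [word="house"]'),
    ("any form of the lemma 'run' followed by a noun",
     '[lemma="run"] [tag="N.*"]'),
]

def build_prompt(question: str) -> str:
    lines = ["Translate the corpus search request into CQL.", ""]
    for nl, cql in FEW_SHOT:
        lines += [f"Request: {nl}", f"CQL: {cql}", ""]
    lines += [f"Request: {question}", "CQL:"]
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call.
    return '[tag="JJ"] [lemma="cat"]'

prompt = build_prompt("adjectives modifying the lemma 'cat'")
print(prompt)
print(call_llm(prompt))
```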
Unlocking Instructive In-Context Learning with Tabular Prompting for
Relational Triple Extraction | In-context learning (ICL) for relational triple extraction (RTE) has
achieved promising performance, but still encounters two key challenges: (1)
how to design effective prompts and (2) how to select proper demonstrations.
Existing methods, however, fail to address these challenges appropriately. On
the one hand, they usually recast the RTE task into text-to-text prompting formats,
which is unnatural and results in a mismatch between the output format at the
pre-training time and the inference time for large language models (LLMs). On
the other hand, they only utilize surface natural language features and lack
consideration of triple semantics in sample selection. These issues block further
improvements in ICL for RTE; thus, we aim to tackle the prompt design and sample
selection challenges simultaneously. To this end, we devise a tabular prompting
approach for RTE (\textsc{TableIE}) which frames the RTE task as a table
generation task, incorporating explicit structured information into ICL and
facilitating the conversion of outputs into RTE structures. Then we propose
instructive in-context learning (I$^2$CL) which only selects and annotates a
few samples considering internal triple semantics in massive unlabeled samples.
| 2,024 | Computation and Language |
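The tabular prompting idea can be sketched as follows: the model is asked to fill a three-column table, and the rows are parsed back into (head, relation, tail) triples. The prompt wording and parser are assumptions, not the exact TableIE format.

```python
# Sketch of tabular prompting for relational triple extraction: the model is
# asked to fill a three-column table, and the table rows are parsed back into
# (head, relation, tail) triples. The prompt wording and parser are
# illustrative, not the paper's exact TableIE format.

def build_table_prompt(sentence: str) -> str:
    return (
        "Extract relational triples from the sentence as a table.\n"
        f"Sentence: {sentence}\n"
        "| head | relation | tail |\n"
        "|------|----------|------|\n"
    )

def parse_table(table_text: str):
    triples = []
    for line in table_text.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 3 and cells[0] not in ("head", "------"):
            triples.append(tuple(cells))
    return triples

model_output = "| Barack Obama | born_in | Honolulu |\n| Honolulu | located_in | Hawaii |"
print(build_table_prompt("Barack Obama was born in Honolulu, Hawaii."))
print(parse_table(model_output))
```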
LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens | Large context window is a desirable feature in large language models (LLMs).
However, due to high fine-tuning costs, scarcity of long texts, and
catastrophic values introduced by new token positions, current extended context
windows are limited to around 128k tokens. This paper introduces LongRoPE that,
for the first time, extends the context window of pre-trained LLMs to an
impressive 2048k tokens, with only up to 1k fine-tuning steps at training lengths
within 256k, while maintaining performance at the original short context
window. This is achieved by three key innovations: (i) we identify and exploit
two forms of non-uniformities in positional interpolation through an efficient
search, providing a better initialization for fine-tuning and enabling an 8x
extension in non-fine-tuning scenarios; (ii) we introduce a progressive
extension strategy that first fine-tunes a 256k length LLM and then conducts a
second positional interpolation on the fine-tuned extended LLM to achieve a
2048k context window; (iii) we readjust LongRoPE on 8k length to recover the
short context window performance. Extensive experiments on LLaMA2 and Mistral
across various tasks demonstrate the effectiveness of our method. Models
extended via LongRoPE retain the original architecture with minor modifications
to the positional embedding, and can reuse most pre-existing optimizations.
| 2,024 | Computation and Language |
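To illustrate the non-uniform positional interpolation mentioned above, the sketch below rescales RoPE positions with per-dimension factors instead of a single global factor. The specific factors are made up for illustration; the paper finds them via an efficient search.

```python
# Sketch of non-uniform positional interpolation for RoPE: each frequency
# dimension gets its own rescale factor instead of one global factor, which is
# the kind of non-uniformity LongRoPE searches over. The per-dimension factors
# below are made up for illustration; the paper finds them via search.

def rope_angles(position, dim=8, base=10000.0, factors=None):
    factors = factors or [1.0] * (dim // 2)
    angles = []
    for i in range(dim // 2):
        inv_freq = 1.0 / (base ** (2 * i / dim))
        scaled_pos = position / factors[i]       # per-dimension interpolation
        angles.append(scaled_pos * inv_freq)
    return angles

uniform = rope_angles(4096, factors=[8.0] * 4)   # plain position interpolation
nonuniform = rope_angles(4096, factors=[2.0, 4.0, 8.0, 16.0])
print([round(a, 4) for a in uniform])
print([round(a, 4) for a in nonuniform])
```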
Factual Consistency Evaluation of Summarisation in the Era of Large
Language Models | Factual inconsistency with source documents in automatically generated
summaries can lead to misinformation or pose risks. Existing factual
consistency (FC) metrics are constrained by their performance, efficiency, and
explainability. Recent advances in Large language models (LLMs) have
demonstrated remarkable potential in text evaluation but their effectiveness in
assessing FC in summarisation remains underexplored. Prior research has mostly
focused on proprietary LLMs, leaving essential factors that affect their
assessment capabilities unexplored. Additionally, current FC evaluation
benchmarks are restricted to news articles, casting doubt on the generality of
the FC methods tested on them. In this paper, we first address the gap by
introducing TreatFact, a dataset of LLM-generated summaries of clinical texts,
annotated for FC by domain experts. Moreover, we benchmark 11 LLMs for FC
evaluation across news and clinical domains and analyse the impact of model
size, prompts, pre-training and fine-tuning data. Our findings reveal that
while proprietary models prevail on the task, open-source LLMs lag behind.
Nevertheless, there is potential for enhancing the performance of open-source
LLMs through increasing model size, expanding pre-training data, and developing
well-curated fine-tuning data. Experiments on TreatFact suggest that both
previous methods and LLM-based evaluators are unable to capture factual
inconsistencies in clinical summaries, posing a new challenge for FC
evaluation.
| 2,024 | Computation and Language |
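A minimal sketch of LLM-based factual consistency evaluation as benchmarked above: the judge model sees the source and the summary and returns a verdict. `call_llm` is a stub and the prompt wording is an assumption, not a prompt from the paper.

```python
# Minimal sketch of LLM-based factual-consistency evaluation: the judge model
# is shown the source document and the summary and asked for a binary verdict.
# `call_llm` is a stub and the prompt wording is an assumption, not a prompt
# taken from the paper.

def build_fc_prompt(source: str, summary: str) -> str:
    return (
        "Decide whether the summary is factually consistent with the source.\n"
        f"Source: {source}\n"
        f"Summary: {summary}\n"
        "Answer with 'consistent' or 'inconsistent', then one sentence of evidence."
    )

def call_llm(prompt: str) -> str:
    # Placeholder verdict; a real system would query a judge LLM here.
    return "inconsistent: the summary changes the reported dosage."

source = "The patient was prescribed 5 mg of the drug once daily."
summary = "The patient was prescribed 50 mg of the drug once daily."
print(call_llm(build_fc_prompt(source, summary)))
```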
CriticBench: Evaluating Large Language Models as Critic | Critique ability is crucial in the scalable oversight and self-improvement
of Large Language Models (LLMs). While many recent studies explore the critique
ability of LLMs to judge and refine flaws in generations, how to
comprehensively and reliably measure the critique abilities of LLMs is
under-explored. This paper introduces CriticBench, a novel benchmark designed
to comprehensively and reliably evaluate four key critique ability dimensions
of LLMs: feedback, comparison, refinement and meta-feedback. CriticBench
encompasses nine diverse tasks, each assessing the LLMs' ability to critique
responses at varying levels of quality granularity. Our extensive evaluations
of open-source and closed-source LLMs reveal intriguing relationships between
the critique ability and tasks, response qualities, and model scales. Datasets,
resources and evaluation toolkit for CriticBench will be publicly released at
https://github.com/open-compass/CriticBench.
| 2,024 | Computation and Language |
The Geography of Information Diffusion in Online Discourse on Europe and
Migration | The online diffusion of information related to Europe and migration has been
little investigated from an external point of view. However, this is a very
relevant topic, especially if users have had no direct contact with Europe and
their perception of it depends solely on information retrieved online. In this work we
analyse the information circulating online about Europe and migration after
retrieving a large amount of data from social media (Twitter), to gain new
insights into topics, magnitude, and dynamics of their diffusion. We combine
retweet and hashtag network analysis with the geolocation of users, thus linking
data to geography and allowing analysis from an "outside Europe" perspective,
with a special focus on Africa. We also introduce a novel approach based on
cross-lingual quotes, i.e. when content in a language is commented and
retweeted in another language, assuming these interactions are a proxy for
connections between very distant communities. Results show how the majority of
online discussions occur at a national level, especially when discussing
migration. Language (English) is pivotal for information to become
transnational and reach far. Transnational information flow is strongly
unbalanced, with content mainly produced in Europe and amplified outside.
Conversely, Europe-based accounts tend to be self-referential when they discuss
migration-related topics. Football is the most exported topic from Europe
worldwide. Moreover, important nodes in the communities discussing
migration-related topics include accounts of official institutions and
international agencies, together with journalists, news, commentators and
activists.
| 2,024 | Computation and Language |
Beyond Hate Speech: NLP's Challenges and Opportunities in Uncovering
Dehumanizing Language | Dehumanization, characterized as a subtle yet harmful manifestation of hate
speech, involves denying individuals of their human qualities and often results
in violence against marginalized groups. Despite significant progress in
Natural Language Processing across various domains, its application in
detecting dehumanizing language is limited, largely due to the scarcity of
publicly available annotated data for this domain. This paper evaluates the
performance of cutting-edge NLP models, including GPT-4, GPT-3.5, and LLAMA-2,
in identifying dehumanizing language. Our findings reveal that while these
models demonstrate potential, achieving a 70\% accuracy rate in distinguishing
dehumanizing language from broader hate speech, they also display biases. They
are over-sensitive in classifying other forms of hate speech as dehumanization
for a specific subset of target groups, while more frequently failing to
identify clear cases of dehumanization for other target groups. Moreover,
leveraging one of the best-performing models, we automatically annotated a
larger dataset for training more accessible models. However, our findings
indicate that these models currently do not meet the high-quality data
generation threshold necessary for this task.
| 2,024 | Computation and Language |
Kuaiji: the First Chinese Accounting Large Language Model | Large Language Models (LLMs) like ChatGPT and GPT-4 have demonstrated
impressive proficiency in comprehending and generating natural language.
However, they encounter difficulties when tasked with adapting to specialized
domains such as accounting. To address this challenge, we introduce Kuaiji, a
tailored Accounting Large Language Model. Kuaiji is meticulously fine-tuned
using the Baichuan framework, which encompasses continuous pre-training and
supervised fine-tuning processes. Supported by CAtAcctQA, a dataset containing
a large collection of genuine accountant-client dialogues, Kuaiji exhibits exceptional accuracy
and response speed. Our contributions encompass the creation of the first
Chinese accounting dataset, the establishment of Kuaiji as a leading
open-source Chinese accounting LLM, and the validation of its efficacy through
real-world accounting scenarios.
| 2,024 | Computation and Language |