Titles | Abstracts | Years | Categories |
---|---|---|---|
Rethinking Human-like Translation Strategy: Integrating Drift-Diffusion
Model with Large Language Models for Machine Translation | Large language models (LLMs) have demonstrated promising potential in various
downstream tasks, including machine translation. However, prior work on
LLM-based machine translation has mainly focused on better utilizing training
data, demonstrations, or pre-defined and universal knowledge to improve
performance, with little consideration of the decision-making processes of human
translators. In this paper, we incorporate Thinker with the Drift-Diffusion
Model (Thinker-DDM) to address this issue. We then redefine the Drift-Diffusion
process to emulate human translators' dynamic decision-making under constrained
resources. We conduct extensive experiments under the high-resource,
low-resource, and commonsense translation settings using the WMT22 and CommonMT
datasets, in which Thinker-DDM outperforms baselines in the first two
scenarios. We also perform additional analysis and evaluation on commonsense
translation to illustrate the effectiveness of the proposed method.
| 2024 | Computation and Language |
An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient
Generative LLM Inference | The development of state-of-the-art generative large language models (LLMs)
disproportionately relies on English-centric tokenizers, vocabulary and
pre-training data. Despite the fact that some LLMs have multilingual
capabilities, recent studies have shown that their inference efficiency
deteriorates when generating text in languages other than English. This results
in increased inference time and costs. Cross-lingual vocabulary adaptation
methods have been proposed for adapting models to a target language, aiming to
improve downstream performance. However, the effectiveness of these methods in
increasing the inference efficiency of generative LLMs has yet to be explored. In
this paper, we perform an empirical study of various cross-lingual vocabulary
adaptation methods on five generative LLMs (including monolingual and
multilingual models) across four typologically diverse languages and four
natural language understanding tasks. We find that cross-lingual vocabulary
adaptation substantially contributes to LLM inference speedups of up to 271.5%.
We also show that adapting LLMs that have been pre-trained on more balanced
multilingual data results in downstream performance comparable to the original
models.
| 2024 | Computation and Language |
Assessing the Reasoning Abilities of ChatGPT in the Context of Claim
Verification | The reasoning capabilities of LLMs are currently hotly debated. We examine
the issue from the perspective of claim/rumour verification. We propose the
first logical reasoning framework designed to break down any claim or rumour
paired with evidence into the atomic reasoning steps necessary for
verification. Based on our framework, we curate two annotated collections of
such claim/evidence pairs: a synthetic dataset from Wikipedia and a real-world
set stemming from rumours circulating on Twitter. We use them to evaluate the
reasoning capabilities of GPT-3.5-Turbo and GPT-4 (hereinafter referred to as
ChatGPT) within the context of our framework, providing a thorough analysis.
Our results show that ChatGPT struggles with abductive reasoning, although this
can be somewhat mitigated by using manual Chain of Thought (CoT) as opposed to
Zero Shot (ZS) and ZS CoT approaches. Our study contributes to the growing body
of research suggesting that ChatGPT's reasoning processes are unlikely to
mirror human-like reasoning, and that LLMs need to be more rigorously evaluated
in order to distinguish between hype and actual capabilities, especially in
high-stakes real-world tasks such as claim verification.
| 2024 | Computation and Language |
Let's Learn Step by Step: Enhancing In-Context Learning Ability with
Curriculum Learning | Demonstration ordering, which is an important strategy for in-context
learning (ICL), can significantly affect the performance of large language
models (LLMs). However, most current ordering approaches require additional
knowledge and similarity calculations. We advocate few-shot in-context
curriculum learning (ICCL), a simple but effective demonstration ordering
method for ICL, which entails gradually increasing the complexity of
prompt demonstrations during the inference process. Then we design three
experiments to discuss the effectiveness of ICCL, the formation mechanism of
LLM's ICCL capability, and the impact of ordering subjects. Experimental
results demonstrate that ICCL, developed during the instruction-tuning stage,
is effective for open-source LLMs. Moreover, LLMs exhibit a weaker capacity
compared to humans in discerning the difficulty levels of demonstrations. We
release our code at https://github.com/61peng/curri_learning.
| 2024 | Computation and Language |
Construction of a Syntactic Analysis Map for Yi Shui School through Text
Mining and Natural Language Processing Research | Entity and relationship extraction is a crucial component in natural language
processing tasks such as knowledge graph construction, question answering
system design, and semantic analysis. Most information about the Yishui school of
traditional Chinese medicine (TCM) is stored as unstructured classical Chinese
text, so extracting key information from TCM texts plays an important role in
mining and studying TCM academic schools. To address these problems efficiently
with artificial intelligence methods, this study constructs a word segmentation
and entity-relationship extraction model based on conditional random fields to
identify and extract entity relationships from TCM texts, and uses TF-IDF, a
common weighting technique in information retrieval and data mining, to extract
important key entities from different ancient books. A neural-network-based
dependency parser is then used to analyze the grammatical relationships between
entities in each ancient text and to represent them as tree-structure
visualizations, laying the foundation for the subsequent construction of a
knowledge graph of the Yishui school and for applying artificial intelligence
methods to the study of TCM academic schools.
| 2024 | Computation and Language |
GenRES: Rethinking Evaluation for Generative Relation Extraction in the
Era of Large Language Models | The field of relation extraction (RE) is experiencing a notable shift towards
generative relation extraction (GRE), leveraging the capabilities of large
language models (LLMs). However, we discovered that traditional relation
extraction (RE) metrics like precision and recall fall short in evaluating GRE
methods. This shortfall arises because these metrics rely on exact matching
with human-annotated reference relations, while GRE methods often produce
diverse and semantically accurate relations that differ from the references. To
fill this gap, we introduce GenRES for a multi-dimensional assessment in terms
of the topic similarity, uniqueness, granularity, factualness, and completeness
of the GRE results. With GenRES, we empirically identified that (1)
precision/recall fails to justify the performance of GRE methods; (2)
human-annotated referential relations can be incomplete; (3) prompting LLMs
with a fixed set of relations or entities can cause hallucinations. Next, we
conducted a human evaluation of GRE methods that shows GenRES is consistent
with human preferences for RE quality. Finally, we conducted a comprehensive
evaluation of fourteen leading LLMs using GenRES across document-, bag-, and
sentence-level RE datasets to set the benchmark for future research in GRE.
| 2024 | Computation and Language |
ToolSword: Unveiling Safety Issues of Large Language Models in Tool
Learning Across Three Stages | Tool learning is widely acknowledged as a foundational approach for deploying
large language models (LLMs) in real-world scenarios. While current research
primarily emphasizes leveraging tools to augment LLMs, it frequently neglects
emerging safety considerations tied to their application. To fill this gap, we
present $ToolSword$, a comprehensive framework dedicated to meticulously
investigating safety issues linked to LLMs in tool learning. Specifically,
ToolSword delineates six safety scenarios for LLMs in tool learning,
encompassing $malicious$ $queries$ and $jailbreak$ $attacks$ in the input
stage, $noisy$ $misdirection$ and $risky$ $cues$ in the execution stage, and
$harmful$ $feedback$ and $error$ $conflicts$ in the output stage. Experiments
conducted on 11 open-source and closed-source LLMs reveal enduring safety
challenges in tool learning, such as handling harmful queries, employing risky
tools, and delivering detrimental feedback, which even GPT-4 is susceptible to.
Moreover, we conduct further studies with the aim of fostering research on tool
learning safety. The data is released at
https://github.com/Junjie-Ye/ToolSword.
| 2024 | Computation and Language |
Inference to the Best Explanation in Large Language Models | While Large Language Models (LLMs) have found success in real-world
applications, their underlying explanatory process is still poorly understood.
This paper proposes IBE-Eval, a framework inspired by philosophical accounts of
Inference to the Best Explanation (IBE) to advance the interpretation and
evaluation of LLMs' explanations. IBE-Eval estimates the plausibility of
natural language explanations through a combination of explicit logical and
linguistic features, including consistency, parsimony, coherence, and
uncertainty. Extensive experiments are conducted on Causal Question Answering
(CQA), where \textit{IBE-Eval} is tasked to select the most plausible causal
explanation amongst competing ones generated by LLMs (i.e., GPT 3.5 and Llama
2). The experiments reveal that IBE-Eval can successfully identify the best
explanation with up to 77\% accuracy ($\approx 27\%$ above random), improving
upon a GPT 3.5-as-a-Judge baseline ($\approx+17\%$) while being intrinsically
more efficient and interpretable. Additional analyses suggest that, despite
model-specific variances, LLM-generated explanations tend to conform to IBE
criteria and that IBE-Eval is significantly correlated with human judgment,
opening up opportunities for future development of automated explanation
verification tools.
| 2024 | Computation and Language |
Distillation Enhanced Generative Retrieval | Generative retrieval is a promising new paradigm in text retrieval that
generates identifier strings of relevant passages as the retrieval target. This
paradigm leverages powerful generative language models, distinct from
traditional sparse or dense retrieval methods. In this work, we identify a
viable direction to further enhance generative retrieval via distillation and
propose a feasible framework, named DGR. DGR utilizes sophisticated ranking
models, such as the cross-encoder, in a teacher role to supply a passage rank
list, which captures the varying relevance degrees of passages instead of
binary hard labels; subsequently, DGR employs a specially designed distilled
RankNet loss to optimize the generative retrieval model, considering the
passage rank order provided by the teacher model as labels. This framework only
requires an additional distillation step to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conduct experiments on four public datasets, and the results indicate that DGR
achieves state-of-the-art performance among the generative retrieval methods.
Additionally, DGR demonstrates exceptional robustness and generalizability with
various teacher models and distillation losses.
| 2024 | Computation and Language |
How Reliable Are Automatic Evaluation Methods for Instruction-Tuned
LLMs? | Work on instruction-tuned Large Language Models (LLMs) has used automatic
methods based on text overlap and LLM judgments as cost-effective alternatives
to human evaluation. In this paper, we study the reliability of such methods
across a broad range of tasks and in a cross-lingual setting. In contrast to
previous findings, we observe considerable variability in correlations between
automatic methods and human evaluators when scores are differentiated by task
type. Specifically, the widely-used ROUGE-L metric strongly correlates with
human judgments for short-answer English tasks but is unreliable in free-form
generation tasks and cross-lingual transfer. The effectiveness of GPT-4 as an
evaluator depends on including reference answers when prompting for
assessments, which can lead to overly strict evaluations in free-form
generation tasks. In summary, we find that, while automatic evaluation methods
can approximate human judgments under specific conditions, their reliability
is highly context-dependent. Our findings enhance the understanding of how
automatic methods should be applied and interpreted when developing and
evaluating instruction-tuned LLMs.
| 2024 | Computation and Language |
Enhancing ESG Impact Type Identification through Early Fusion and
Multilingual Models | In the evolving landscape of Environmental, Social, and Corporate Governance
(ESG) impact assessment, the ML-ESG-2 shared task proposes identifying ESG
impact types. To address this challenge, we present a comprehensive system
leveraging ensemble learning techniques, capitalizing on early and late fusion
approaches. Our approach employs four distinct models: mBERT, FlauBERT-base,
ALBERT-base-v2, and a Multi-Layer Perceptron (MLP) incorporating Latent
Semantic Analysis (LSA) and Term Frequency-Inverse Document Frequency (TF-IDF)
features. Through extensive experimentation, we find that our early fusion
ensemble approach, featuring the integration of LSA, TF-IDF, mBERT,
FlauBERT-base, and ALBERT-base-v2, delivers the best performance. Our system
offers a comprehensive ESG impact type identification solution, contributing to
the responsible and sustainable decision-making processes vital in today's
financial and corporate governance landscape.
| 2024 | Computation and Language |
A Condensed Transition Graph Framework for Zero-shot Link Prediction
with Large Language Models | Zero-shot link prediction (ZSLP) on knowledge graphs aims at automatically
identifying relations between given entities. Existing methods primarily employ
auxiliary information to predict the tail entity given the head entity and its
relation, yet face challenges due to the occasional unavailability of such
detailed information and the inherent simplicity of predicting tail entities
based on semantic similarities. Even though Large Language Models (LLMs) offer
a promising solution to predict unobserved relations between the head and tail
entity in a zero-shot manner, their performance is still restricted due to the
inability to leverage all the (exponentially many) paths' information between
two entities, which are critical in collectively indicating their relation
types. To address this, in this work, we introduce a Condensed Transition Graph
Framework for Zero-Shot Link Prediction (CTLP), which encodes all the paths'
information in linear time complexity to predict unseen relations between
entities, attaining both efficiency and information preservation. Specifically,
we design a condensed transition graph encoder with theoretical guarantees on
its coverage, expressiveness, and efficiency. It is learned by a transition
graph contrastive learning strategy. Subsequently, we design a soft
instruction-tuning mechanism to learn and map the all-path embedding to the input of LLMs.
Experimental results show that our proposed CTLP method achieves
state-of-the-art performance on three standard ZSLP datasets.
| 2024 | Computation and Language |
In Search of Needles in a 11M Haystack: Recurrent Memory Finds What LLMs
Miss | This paper addresses the challenge of processing long documents using
generative transformer models. To evaluate different approaches, we introduce
BABILong, a new benchmark designed to assess model capabilities in extracting
and processing distributed facts within extensive texts. Our evaluation, which
includes benchmarks for GPT-4 and RAG, reveals that common methods are
effective only for sequences up to $10^4$ elements. In contrast, fine-tuning
GPT-2 with recurrent memory augmentations enables it to handle tasks involving
up to $11\times 10^6$ elements. This achievement marks a substantial leap, as
it is by far the longest input processed by any neural network model to date,
demonstrating a significant improvement in the processing capabilities for long
sequences.
| 2024 | Computation and Language |
Quantifying the Persona Effect in LLM Simulations | Large language models (LLMs) have shown remarkable promise in simulating
human language use and behavior. In this study, we delve into the intersection
of persona variables and the capability of LLMs to simulate different
perspectives. We find that persona variables can explain <10\% variance in
annotations in existing subjective NLP datasets. Nonetheless, incorporating
them via prompting in LLMs provides modest improvement. Persona prompting is
most effective on data samples where disagreements among annotators are
frequent yet confined to a limited range. A linear correlation exists: the more
persona variables influence human annotations, the better LLMs' predictions are
when using persona prompting. However, when the utility of persona variables is low
(i.e., explaining <10\% of human annotations), persona prompting has little
effect. Most subjective NLP datasets fall into this category, casting doubt on
simulating diverse perspectives in the current NLP landscape.
| 2024 | Computation and Language |
Exploring Hybrid Question Answering via Program-based Prompting | Question answering over heterogeneous data requires reasoning over diverse
sources of data, which is challenging due to the large scale of information and
organic coupling of heterogeneous data. Various approaches have been proposed
to address these challenges. One approach involves training specialized
retrievers to select relevant information, thereby reducing the input length.
Another approach is to transform diverse modalities of data into a single
modality, simplifying the task difficulty and enabling more straightforward
processing. In this paper, we propose HProPro, a novel program-based prompting
framework for the hybrid question answering task. HProPro follows the code
generation and execution paradigm. In addition, HProPro integrates various
functions to tackle the hybrid reasoning scenario. Specifically, HProPro
contains function declaration and function implementation to perform hybrid
information-seeking over data from various sources and modalities, which
enables reasoning over such data without training specialized retrievers or
performing modal transformations. Experimental results on two typical hybrid
question answering benchmarks HybridQA and MultiModalQA demonstrate the
effectiveness of HProPro: it surpasses all baseline systems and achieves the
best performances in the few-shot settings on both datasets.
| 2024 | Computation and Language |
Time Series Forecasting with LLMs: Understanding and Enhancing Model
Capabilities | Large language models (LLMs) have been applied in many fields with rapid
development in recent years. As a classic machine learning task, time series
forecasting has recently received a boost from LLMs. However, there is a
research gap concerning LLMs' preferences in this field. In this paper, by
comparing LLMs with traditional models, we identify many properties of LLMs in
time series prediction. For example, our study shows that LLMs excel at
predicting time series with clear patterns and trends but face challenges with
datasets lacking periodicity. We explain our findings by designing prompts that
ask LLMs to identify the period of the datasets. In addition, the input
strategy is investigated, and it is found that incorporating external knowledge
and adopting natural language paraphrases positively affects the predictive
performance of LLMs for time series. Overall, this study contributes to insight
into the advantages and limitations of LLMs in time series forecasting under
different conditions.
| 2024 | Computation and Language |
EcoRank: Budget-Constrained Text Re-ranking Using Large Language Models | Large Language Models (LLMs) have achieved state-of-the-art performance in
text re-ranking. This process includes queries and candidate passages in the
prompts, utilizing pointwise, listwise, and pairwise prompting strategies. A
limitation of these ranking strategies with LLMs is their cost: the process can
become expensive due to API charges, which are based on the number of input and
output tokens. We study how to maximize the re-ranking performance given a
budget, by navigating the vast search spaces of prompt choices, LLM APIs, and
budget splits. We propose a suite of budget-constrained methods to perform text
re-ranking using a set of LLM APIs. Our most efficient method, called EcoRank,
is a two-layered pipeline that jointly optimizes decisions regarding budget
allocation across prompt strategies and LLM APIs. Our experimental results on
four popular QA and passage reranking datasets show that EcoRank outperforms
other budget-aware supervised and unsupervised baselines.
| 2024 | Computation and Language |
Multi-modal preference alignment remedies regression of visual
instruction tuning on language model | In production, multi-modal large language models (MLLMs) are expected to
support multi-turn queries of interchanging image and text modalities. However,
the current MLLMs trained with visual-question-answering (VQA) datasets could
suffer from degradation, as VQA datasets lack the diversity and complexity of
the original text instruction datasets with which the underlying language model
had been trained. To address this degradation, we first collect a
lightweight (6k entries) VQA preference dataset where answers were annotated by
Gemini for 5 quality metrics in a granular fashion, and investigate standard
Supervised Fine-tuning, rejection sampling, Direct Preference Optimization
(DPO), and SteerLM. Our findings indicate that with DPO we are able to
surpass instruction-following capabilities of the language model, achieving a
6.73 score on MT-Bench, compared to Vicuna's 6.57 and LLaVA's 5.99 despite
small data scale. This enhancement in textual instruction proficiency
correlates with boosted visual instruction performance (+4.9\% on MM-Vet, +6\%
on LLaVA-Bench), with minimal alignment tax on visual knowledge benchmarks
compared to the previous RLHF approach. In conclusion, we propose a
distillation-based multi-modal alignment model with fine-grained annotations on
a small dataset that reconciles the textual and visual performance of MLLMs,
restoring and boosting language capability after visual instruction tuning.
| 2024 | Computation and Language |
Reviewer2: Optimizing Review Generation Through Prompt Generation | Recent developments in LLMs offer new opportunities for assisting authors in
improving their work. In this paper, we envision a use case where authors can
receive LLM-generated reviews that uncover weak points in the current draft.
While initial methods for automated review generation already exist, these
methods tend to produce reviews that lack detail, and they do not cover the
range of opinions that human reviewers produce. To address this shortcoming, we
propose an efficient two-stage review generation framework called Reviewer2.
Unlike prior work, this approach explicitly models the distribution of possible
aspects that the review may address. We show that this leads to more detailed
reviews that better cover the range of aspects that human reviewers identify in
the draft. As part of the research, we generate a large-scale review dataset of
27k papers and 99k reviews that we annotate with aspect prompts, which we make
available as a resource for future research.
| 2024 | Computation and Language |
When is Tree Search Useful for LLM Planning? It Depends on the
Discriminator | In this paper, we examine how large language models (LLMs) solve multi-step
problems under a language agent framework with three components: a generator, a
discriminator, and a planning method. We investigate the practical utility of
two advanced planning methods, iterative correction and tree search. We present
a comprehensive analysis of how discrimination accuracy affects the overall
performance of agents when using these two methods or a simpler method,
re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical
reasoning, show that: (1) advanced planning methods demand discriminators with
at least 90% accuracy to achieve significant improvements over re-ranking; (2)
current LLMs' discrimination abilities have not met the needs of advanced
planning methods to achieve such improvements; (3) with LLM-based
discriminators, advanced planning methods may not adequately balance accuracy
and efficiency. For example, compared to the other two methods, tree search is
at least 10--20 times slower but leads to negligible performance gains, which
hinders its real-world applications. Code and data will be released at
https://github.com/OSU-NLP-Group/llm-planning-eval.
| 2024 | Computation and Language |
Instruction Diversity Drives Generalization To Unseen Tasks | Instruction tuning -- fine-tuning a large language model (LLM) on pairs of
instructions and desired outcomes -- is an approach that enables pre-trained
language models to perform real-world tasks and follow human instructions. Its
practical success depends on the model learning a broader set of instructions
than those it was trained on. Yet the factors that determine model
generalization to such \emph{unseen tasks} are not well understood. In this paper, we experiment
with string rewrites, a symbolic task that serves as a building block for
Turing complete Markov algorithms while allowing experimental control of
"inputs" and "instructions". We investigate the trade-off between the number of
instructions the model is trained on and the number of training samples
provided for each instruction and observe that the diversity of the instruction
set determines generalization. Generalization emerges once a diverse enough set
of tasks is provided, even though very few examples are provided for each task.
Instruction diversity also ensures robustness with respect to non-uniform
distributions of instructions in the training set.
| 2024 | Computation and Language |
Taxonomy-based CheckList for Large Language Model Evaluation | As large language models (LLMs) have been used in many downstream tasks, the
internal stereotypical representations they encode may affect the fairness of their outputs.
In this work, we introduce human knowledge into natural language interventions
and study pre-trained language models' (LMs) behaviors within the context of
gender bias. Inspired by CheckList behavioral testing, we present a
checklist-style task that aims to probe and quantify LMs' unethical behaviors
through question-answering (QA). We design three comparison studies to evaluate
LMs from four aspects: consistency, biased tendency, model preference, and
gender preference switch. We probe one transformer-based QA model trained on
the SQuAD-v2 dataset and one autoregressive large language model. Our results
indicate that the transformer-based QA model's biased tendency positively
correlates with its consistency, whereas the LLM shows the opposite relation. Our
proposed task provides the first dataset that involves human knowledge for LLM
bias evaluation.
| 2024 | Computation and Language |
LLM-Assisted Crisis Management: Building Advanced LLM Platforms for
Effective Emergency Response and Public Collaboration | Emergencies and critical incidents often unfold rapidly, necessitating a
swift and effective response. In this research, we introduce a novel approach
to identify and classify emergency situations from social media posts and
direct emergency messages using an open source Large Language Model, LLAMA2.
The goal is to harness the power of natural language processing and machine
learning to assist public safety telecommunicators and large numbers of people
during countrywide emergencies. Our research focuses on developing a language
model that can understand how users describe their situation in a 911 call,
enabling LLAMA2 to analyze the content and offer relevant instructions to the
telecommunicator, while also creating workflows to notify government agencies
with the caller's information when necessary. Another benefit this language
model provides is its ability to assist people during a significant emergency
incident when the 911 system is overwhelmed, by giving users simple
instructions and informing authorities of their location and emergency
information.
| 2024 | Computation and Language |
News Source Credibility Assessment: A Reddit Case Study | In the era of social media platforms, identifying the credibility of online
content is crucial to combat misinformation. As our main contribution, we
present CREDiBERT (CREDibility assessment using Bi-directional Encoder
Representations from Transformers), a source credibility assessment model
fine-tuned for Reddit submissions focusing on political discourse. We adopt
a semi-supervised training approach for CREDiBERT, leveraging Reddit's
community-based structure. By encoding submission content using CREDiBERT and
integrating it into a Siamese neural network, we significantly improve the
binary classification of submission credibility, achieving a 9% increase in F1
score compared to existing methods. Additionally, we introduce a new version of
the post-to-post network in Reddit that efficiently encodes user interactions
to enhance the binary classification task by nearly 8% in F1 score. Finally, we
employ CREDiBERT to evaluate the susceptibility of subreddits with respect to
different topics.
| 2024 | Computation and Language |
Neural machine translation of clinical procedure codes for medical
diagnosis and uncertainty quantification | A Clinical Decision Support System (CDSS) is designed to enhance clinician
decision-making by combining system-generated recommendations with medical
expertise. Given the high costs, intensive labor, and time-sensitive nature of
medical treatments, there is a pressing need for efficient decision support,
especially in complex emergency scenarios. In these scenarios, where
information can be limited, an advanced CDSS framework that leverages AI
(artificial intelligence) models to effectively reduce diagnostic uncertainty
has utility. Such an AI-enabled CDSS framework with quantified uncertainty
promises to be practical and beneficial in the demanding context of real-world
medical care. In this study, we introduce the concept of Medical Entropy,
quantifying uncertainties in patient outcomes predicted by neural machine
translation based on the ICD-9 code of procedures. Our experimental results not
only show strong correlations between procedure and diagnosis sequences based
on the simple ICD-9 code but also demonstrate the promising capacity to model
trends of uncertainties during hospitalizations through a data-driven approach.
| 2024 | Computation and Language |
Text2Data: Low-Resource Data Generation with Textual Control | Natural language serves as a common and straightforward control signal for
humans to interact seamlessly with machines. Recognizing the importance of this
interface, the machine learning community is investing considerable effort in
generating data that is semantically coherent with textual instructions. While
strides have been made in text-to-data generation spanning image editing, audio
synthesis, video creation, and beyond, low-resource areas characterized by
expensive annotations or complex data structures, such as molecules, motion
dynamics, and time series, often lack textual labels. This deficiency impedes
supervised learning, thereby constraining the application of advanced
generative models for text-to-data tasks. In response to these challenges in
the low-resource scenario, we propose Text2Data, a novel approach that utilizes
unlabeled data to understand the underlying data distribution through an
unsupervised diffusion model. Subsequently, it undergoes controllable
finetuning via a novel constraint optimization-based learning objective that
ensures controllability and effectively counteracts catastrophic forgetting.
Comprehensive experiments demonstrate that Text2Data is able to achieve
enhanced performance regarding controllability across various modalities,
including molecules, motions and time series, when compared to existing
baselines.
| 2024 | Computation and Language |
Advances and Limitations in Open Source Arabic-Script OCR: A Case Study | This work presents an accuracy study of the open source OCR engine, Kraken,
on the leading Arabic scholarly journal, al-Abhath. In contrast with other
commercially available OCR engines, Kraken is shown to be capable of producing
highly accurate Arabic-script OCR. The study also assesses the relative
accuracy of typeface-specific and generalized models on the al-Abhath data and
provides a microanalysis of the "error instances" and the contextual features
that may have contributed to OCR misrecognition. Building on this analysis, the
paper argues that Arabic-script OCR can be significantly improved through (1) a
more systematic approach to training data production, and (2) the development
of key technological components, especially multi-language models and improved
line segmentation and layout analysis.
| 2021 | Computation and Language |
CultureLLM: Incorporating Cultural Differences into Large Language
Models | Large language models (LLMs) are reported to be partial to certain cultures
owing to the dominance of English corpora in their training data. Since
multilingual cultural data are often expensive to collect, existing efforts
handle this through prompt engineering or culture-specific pre-training. However,
these approaches might overlook the knowledge deficiencies of low-resource cultures and
require extensive computing resources. In this paper, we propose CultureLLM, a
cost-effective solution to incorporate cultural differences into LLMs.
CultureLLM adopts the World Values Survey (WVS) as seed data and generates
semantically equivalent training data via the proposed semantic data
augmentation. Using only 50 seed samples from WVS with augmented data, we
fine-tune culture-specific LLMs and one unified model (CultureLLM-One) for 9
cultures covering rich and low-resource languages. Extensive experiments on 60
culture-related datasets demonstrate that CultureLLM significantly outperforms
various counterparts such as GPT-3.5 (by 8.1%) and Gemini Pro (by 9.5%), with
performance comparable to or even better than GPT-4. Our human study shows that the
generated samples are semantically equivalent to the original samples,
providing an effective solution for LLM augmentation.
| 2024 | Computation and Language |
Zero-shot Explainable Mental Health Analysis on Social Media by
incorporating Mental Scales | Traditional discriminative approaches in mental health analysis are known for
their strong capacity but lack interpretability and demand large-scale
annotated data. On the other hand, generative approaches, such as those based
on large language models (LLMs), have the potential to dispense with heavy
annotation and provide explanations. However, their capabilities still fall
short of discriminative approaches, and their explanations may be unreliable
because explanation generation is a black-box process. Inspired by the
psychological assessment practice of using scales to
evaluate mental states, our method incorporates two procedures via LLMs. First,
the patient completes mental health questionnaires, and second, the
psychologist interprets the collected information from the mental health
questions and makes informed decisions. Experimental results show that our
method outperforms other zero-shot methods. Our method can generate more
rigorous explanations based on the outputs of the mental health questionnaires.
| 2024 | Computation and Language |
The Unreasonable Effectiveness of Eccentric Automatic Prompts | Large Language Models (LLMs) have demonstrated remarkable problem-solving and
basic mathematics abilities. However, their efficacy is highly contingent on
the formulation of the prompt. This study endeavors to quantify the influence
of incorporating "positive thinking" into the system message of the prompt,
then compare that to systematic prompt optimization. We assess the performance
of 60 combinations of system message snippets, tested with and without Chain of
Thought prompting, across three models with parameters ranging from 7 to 70
billion on the GSM8K dataset. Our findings reveal that results do not
universally generalize across models. In most instances, the inclusion of
"positive thinking" prompts positively affected model performance. Notably,
however, Llama2-70B exhibited an exception when not utilizing Chain of Thought,
as the optimal system message was found to be none at all. Given the
combinatorial complexity, and thus computation time, of experimenting with
hand-tuning prompts for large black-box models, we then compared the
performance of the best "positive thinking" prompt against the output of
systematic prompt optimization. We show that employing an automated prompt
optimizer emerges as the most effective method for enhancing performance, even
when working with smaller open-source models. Additionally, our findings reveal
that the highest-scoring, automatically-optimized prompt exhibits a degree of
peculiarity far beyond expectations.
| 2024 | Computation and Language |
DAEDRA: A language model for predicting outcomes in passive
pharmacovigilance reporting | Over the recent years, the emergence of large language models (LLMs) has
given rise to a proliferation of domain-specific models that are intended to
reflect the particularities of linguistic context and content as a correlate of
the originating domain. This paper details the conception, design, training and
evaluation of DAEDRA, an LLM designed to detect regulatory-relevant outcomes
(mortality, ER attendance and hospitalisation) in adverse event reports
elicited through passive reporting (PR). While PR is a highly cost-efficient
way of eliciting information from a wide and diverse audience -- typically
including not only physicians and healthcare providers but also patients,
family members and other lay stakeholders -- this diversity makes PR corpora
difficult to analyse. Generic language models may not capture the complex
clinical dimensions while specific clinical or biomedical models may not
perform well on lay reports. To evaluate the utility of a subdomain-specific
language model, an adaptive training approach was adopted, wherein base
language model candidates were evaluated on a subset of the corpus, and the
best performer was trained on the entire corpus. This yielded a small but
significant improvement in $F_1$ (+1%), precision (+2.5%) and recall (+3.8%),
at a relatively low training cost and a single-day training time.
Subdomain-specific LLMs continue to be viable options for better results when
analysing highly specialised corpora.
| 2024 | Computation and Language |
Relative Preference Optimization: Enhancing LLM Alignment through
Contrasting Responses across Identical and Diverse Prompts | In the field of large language models (LLMs), aligning models with the
diverse preferences of users is a critical challenge. Direct Preference
Optimization (DPO) has played a key role in this area. It works by using pairs
of preferences derived from the same prompts, and it functions without needing
an additional reward model. However, DPO does not fully reflect the complex
nature of human learning, which often involves understanding contrasting
responses to not only identical but also similar questions. To overcome this
shortfall, we propose Relative Preference Optimization (RPO). RPO is designed
to discern between more and less preferred responses derived from both
identical and related prompts. It introduces a contrastive weighting mechanism,
enabling the tuning of LLMs using a broader range of preference data, including
both paired and unpaired sets. This approach expands the learning capabilities
of the model, allowing it to leverage insights from a more varied set of
prompts. Through empirical tests, including dialogue and summarization tasks,
and evaluations using the AlpacaEval2.0 leaderboard, RPO has demonstrated a
superior ability to align LLMs with user preferences and to improve their
adaptability during the training process. The PyTorch code necessary to
reproduce the results presented in the paper will be made available on GitHub
for public access.
| 2024 | Computation and Language |
Measuring and Controlling Persona Drift in Language Model Dialogs | Prompting is a standard tool for customizing language-model chatbots,
enabling them to take on a specific "persona". An implicit assumption in the
use of prompts is that they will be stable, so the chatbot will continue to
generate text according to the stipulated persona for the duration of a
conversation. We propose a quantitative benchmark to test this assumption,
evaluating persona stability via self-chats between two personalized chatbots.
Testing popular models like LLaMA2-chat-70B, we reveal a significant persona
drift within eight rounds of conversations. An empirical and theoretical
analysis of this phenomenon suggests the transformer attention mechanism plays
a role, due to attention decay over long exchanges. To combat attention decay
and persona drift, we propose a lightweight method called split-softmax, which
compares favorably against two strong baselines.
| 2024 | Computation and Language |
GLoRe: When, Where, and How to Improve LLM Reasoning via Global and
Local Refinements | State-of-the-art language models can exhibit impressive reasoning refinement
capabilities on math, science or coding tasks. However, recent work
demonstrates that even the best models struggle to identify \textit{when and
where to refine} without access to external feedback. Outcome-based Reward
Models (\textbf{ORMs}), trained to predict the correctness of the final answer,
offer one convenient solution for deciding when to refine.
Process-Based Reward Models (\textbf{PRMs}), trained to predict
correctness of intermediate steps, can then be used to indicate where to
refine. But they are expensive to train, requiring extensive human annotations.
In this paper, we propose Stepwise ORMs (\textbf{SORMs}) which are trained,
only on synthetic data, to approximate the expected future reward of the
optimal policy or $V^{\star}$. More specifically, SORMs are trained to predict
the correctness of the final answer when sampling the current policy many times
(rather than only once as in the case of ORMs). Our experiments show that SORMs
can more accurately detect incorrect reasoning steps compared to ORMs, thus
improving downstream accuracy when doing refinements. We then train
\textit{global} refinement models, which take only the question and a draft
solution as input and predict a corrected solution, and \textit{local}
refinement models which also take as input a critique indicating the location
of the first reasoning error. We generate training data for both models
synthetically by reusing data used to train the SORM. We find combining global
and local refinements, using the ORM as a reranker, significantly outperforms
either one individually, as well as a best-of-three-samples baseline. With this
strategy we can improve the accuracy of a LLaMA-2 13B model (already fine-tuned
with RL) on GSM8K from 53\% to 65\% when greedily sampled.
| 2024 | Computation and Language |
Generalization in Healthcare AI: Evaluation of a Clinical Large Language
Model | Advances in large language models (LLMs) provide new opportunities in
healthcare for improved patient care, clinical decision-making, and enhancement
of physician and administrator workflows. However, the potential of these
models depends critically on their ability to generalize effectively across
clinical environments and populations, a challenge often underestimated in
early development. To better understand reasons for these challenges and inform
mitigation approaches, we evaluated ClinicLLM, an LLM trained on [HOSPITAL]'s
clinical notes, analyzing its performance on 30-day all-cause readmission
prediction focusing on variability across hospitals and patient
characteristics. We found poorer generalization particularly in hospitals with
fewer samples, among patients with government and unspecified insurance, the
elderly, and those with high comorbidities. To understand reasons for lack of
generalization, we investigated sample sizes for fine-tuning, note content
(number of words per note), patient characteristics (comorbidity level, age,
insurance type, borough), and health system aspects (hospital, all-cause 30-day
readmission, and mortality rates). We used descriptive statistics and
supervised classification to identify features. We found that, along with
sample size, patient age, number of comorbidities, and the number of words in
notes are all important factors related to generalization. Finally, we compared
local fine-tuning (hospital specific), instance-based augmented fine-tuning and
cluster-based fine-tuning for improving generalization. Among these, local
fine-tuning proved most effective, increasing AUC by 0.25% to 11.74% (most
helpful in settings with limited data). Overall, this study provides new
insights for enhancing the deployment of large language models in the
societally important domain of healthcare, and improving their performance for
broader populations.
| 2024 | Computation and Language |
SportsMetrics: Blending Text and Numerical Data to Understand
Information Fusion in LLMs | Large language models hold significant potential for integrating various data
types, such as text documents and database records, for advanced analytics.
However, blending text and numerical data presents substantial challenges. LLMs
need to process and cross-reference entities and numbers, handle data
inconsistencies and redundancies, and develop planning capabilities such as
building a working memory for managing complex data queries. In this paper, we
introduce four novel tasks centered around sports data analytics to evaluate
the numerical reasoning and information fusion capabilities of LLMs. These
tasks involve providing LLMs with detailed, play-by-play sports game
descriptions, then challenging them with adversarial scenarios such as new game
rules, longer durations, scrambled narratives, and analyzing key statistics in
game summaries. We conduct extensive experiments on NBA and NFL games to assess
the performance of LLMs on these tasks. Our benchmark, SportsMetrics,
introduces a new mechanism for assessing LLMs' numerical reasoning and fusion
skills.
| 2024 | Computation and Language |
FinTral: A Family of GPT-4 Level Multimodal Financial Large Language
Models | We introduce FinTral, a suite of state-of-the-art multimodal large language
models (LLMs) built upon the Mistral-7b model and tailored for financial
analysis. FinTral integrates textual, numerical, tabular, and image data. We
enhance FinTral with domain-specific pretraining, instruction fine-tuning, and
RLAIF training by exploiting a large collection of textual and visual datasets
we curate for this work. We also introduce an extensive benchmark featuring
nine tasks and 25 datasets for evaluation, including hallucinations in the
financial domain. Our FinTral model trained with direct preference optimization
and employing advanced Tools and Retrieval methods, dubbed FinTral-DPO-T&R,
demonstrates exceptional zero-shot performance. It outperforms ChatGPT-3.5
in all tasks and surpasses GPT-4 in five out of nine tasks, marking a
significant advancement in AI-driven financial technology. We also demonstrate
that FinTral has the potential to excel in real-time analysis and
decision-making in diverse financial contexts.
| 2024 | Computation and Language |
WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing | Knowledge editing aims to rectify inaccuracies in large language models
(LLMs) without costly retraining for outdated or erroneous knowledge. However,
current knowledge editing methods primarily focus on single editing, failing to
meet the requirements for lifelong editing. In this paper, lifelong editing is
synonymous with lifelong knowledge editing. This study reveals a performance
degradation encountered by knowledge editing in lifelong editing, characterized
by toxicity buildup and toxicity flash, with the primary cause identified as
pattern mismatch. We introduce a knowledge editing approach named WilKE, which
selects the editing layer based on the pattern-matching degree of the editing
knowledge across different layers. Experimental results demonstrate that, in
lifelong editing, WilKE achieves average improvements of 46.2\% and 67.8\% when
editing GPT2-XL and GPT-J, respectively, relative to state-of-the-art knowledge editing methods.
| 2024 | Computation and Language |
"Understanding AI": Semantic Grounding in Large Language Models | Do LLMs understand the meaning of the texts they generate? Do they possess a
semantic grounding? And how could we understand whether and what they
understand? I start the paper with the observation that we have recently
witnessed a generative turn in AI, since generative models, including LLMs, are
key for self-supervised learning. To assess the question of semantic grounding,
I distinguish and discuss five methodological ways. The most promising way is
to apply core assumptions of theories of meaning in philosophy of mind and
language to LLMs. Grounding proves to be a gradual affair with a
three-dimensional distinction between functional, social and causal grounding.
LLMs show basic evidence in all three dimensions. A strong argument is that
LLMs develop world models. Hence, LLMs are neither stochastic parrots nor
semantic zombies, but already understand the language they generate, at least
in an elementary sense.
| 2024 | Computation and Language |
ASGEA: Exploiting Logic Rules from Align-Subgraphs for Entity Alignment | Entity alignment (EA) aims to identify entities across different knowledge
graphs that represent the same real-world objects. Recent embedding-based EA
methods have achieved state-of-the-art performance in EA yet face
interpretability challenges, as they rely purely on the embedding distance and
neglect the logic rules behind a pair of aligned entities. In this paper, we
propose the Align-Subgraph Entity Alignment (ASGEA) framework to exploit logic
rules from Align-Subgraphs. ASGEA uses anchor links as bridges to construct
Align-Subgraphs and spreads along the paths across KGs, which distinguishes it
from the embedding-based methods. Furthermore, we design an interpretable
Path-based Graph Neural Network, ASGNN, to effectively identify and integrate
the logic rules across KGs. We also introduce a node-level multi-modal
attention mechanism coupled with multi-modal enriched anchors to augment the
Align-Subgraph. Our experimental results demonstrate the superior performance
of ASGEA over the existing embedding-based methods in both EA and Multi-Modal
EA (MMEA) tasks.
| 2024 | Computation and Language |
Exploring Value Biases: How LLMs Deviate Towards the Ideal | Large-Language-Models (LLMs) are deployed in a wide range of applications,
and their responses have an increasing social impact. Understanding the
non-deliberate(ive) mechanism of LLMs in giving responses is essential in
explaining their performance and discerning their biases in real-world
applications. This is analogous to human studies, where such inadvertent
responses are referred to as sampling. We study this sampling of LLMs in light
of value bias and show that it tends to favour high-value
options. Value bias corresponds to this shift of response from the most likely
towards an ideal value represented in the LLM. In fact, this effect can be
reproduced even with new entities learnt via in-context prompting. We show that
this bias manifests in unexpected places and has implications for relevant
application scenarios, like choosing exemplars. The results show that value
bias is strong in LLMs across different categories, similar to the results
found in human studies.
| 2024 | Computation and Language |
PAT-Questions: A Self-Updating Benchmark for Present-Anchored Temporal
Question-Answering | Existing work on Temporal Question Answering (TQA) has predominantly focused
on questions anchored to specific timestamps or events (e.g. "Who was the US
president in 1970?"). Little work has studied questions whose temporal context
is relative to the present time (e.g. "Who was the previous US president?"). We
refer to this problem as Present-Anchored Temporal QA (PATQA). PATQA poses
unique challenges: (1) large language models (LLMs) may have outdated
knowledge, (2) complex temporal relationships (e.g. 'before', 'previous') are
hard to reason about, (3) multi-hop reasoning may be required, and (4) the gold
answers of benchmarks must be continuously updated. To address these
challenges, we introduce the PAT-Questions benchmark, which includes single and
multi-hop temporal questions. The answers in PAT-Questions can be automatically
refreshed by re-running SPARQL queries on a knowledge graph, if available. We
evaluate several state-of-the-art LLMs and a SOTA temporal reasoning model
(TEMPREASON-T5) on PAT-Questions through direct prompting and
retrieval-augmented generation (RAG). The results highlight the limitations of
existing solutions in PATQA and motivate the need for new methods to improve
PATQA reasoning capabilities.
| 2024 | Computation and Language |
Retrieval-Augmented Generation: Is Dense Passage Retrieval Retrieving? | Dense passage retrieval (DPR) is the first step in the retrieval augmented
generation (RAG) paradigm for improving the performance of large language
models (LLMs). DPR fine-tunes pre-trained networks to enhance the alignment of
the embeddings between queries and relevant textual data. A deeper
understanding of DPR fine-tuning will be required to fundamentally unlock the
full potential of this approach. In this work, we explore DPR-trained models
mechanistically by using a combination of probing, layer activation analysis,
and model editing. Our experiments show that DPR training decentralizes how
knowledge is stored in the network, creating multiple access pathways to the
same information. We also uncover a limitation in this training style: the
internal knowledge of the pre-trained model bounds what the retrieval model can
retrieve. These findings suggest a few possible directions for dense retrieval:
(1) expose the DPR training process to more knowledge so more can be
decentralized, (2) inject facts as decentralized representations, (3) model and
incorporate knowledge uncertainty in the retrieval process, and (4) directly
map internal model knowledge to a knowledge base.
| 2,024 | Computation and Language |
Large Language Models Fall Short: Understanding Complex Relationships in
Detective Narratives | Existing datasets for narrative understanding often fail to represent the
complexity and uncertainty of relationships in real-life social scenarios. To
address this gap, we introduce a new benchmark, Conan, designed for extracting
and analysing intricate character relation graphs from detective narratives.
Specifically, we designed hierarchical relationship categories and manually
extracted and annotated role-oriented relationships from the perspectives of
various characters, incorporating both public relationships known to most
characters and secret ones known to only a few. Our experiments with advanced
Large Language Models (LLMs) like GPT-3.5, GPT-4, and Llama2 reveal their
limitations in inferring complex relationships and handling longer
narratives. The combination of the Conan dataset and our pipeline strategy is
geared towards understanding the ability of LLMs to comprehend nuanced
relational dynamics in narrative contexts.
| 2,024 | Computation and Language |
Persona-DB: Efficient Large Language Model Personalization for Response
Prediction with Collaborative Data Refinement | The increasing demand for personalized interactions with large language
models (LLMs) calls for the development of methodologies capable of accurately
and efficiently identifying user opinions and preferences. Retrieval
augmentation emerges as an effective strategy, as it can accommodate a vast
number of users without the cost of fine-tuning. Existing research, however,
has largely focused on enhancing the retrieval stage and devoted limited
exploration toward optimizing the representation of the database, a crucial
aspect for tasks such as personalization. In this work, we examine the problem
from a novel angle, focusing on how data can be better represented for more
efficient retrieval in the context of LLM customization. To tackle this
challenge, we introduce Persona-DB, a simple yet effective framework consisting
of a hierarchical construction process to improve generalization across task
contexts and collaborative refinement to effectively bridge knowledge gaps
among users. In the task of response forecasting, Persona-DB demonstrates
superior efficiency in maintaining accuracy with a significantly reduced
retrieval size, a critical advantage in scenarios with extensive histories or
limited context windows. Our experiments also indicate a marked improvement of
over 15% under cold-start scenarios, when users have extremely sparse data.
Furthermore, our analysis reveals the increasing importance of collaborative
knowledge as the retrieval capacity expands.
| 2,024 | Computation and Language |
Bridging Causal Discovery and Large Language Models: A Comprehensive
Survey of Integrative Approaches and Future Directions | Causal discovery (CD) and Large Language Models (LLMs) represent two emerging
fields of study with significant implications for artificial intelligence.
Despite their distinct origins, with CD focusing on uncovering cause-effect
relationships from data and LLMs on processing and generating human-like text,
the convergence of these domains offers novel insights and methodologies for
understanding complex systems. This paper presents a comprehensive survey of
the integration of LLMs, such as GPT-4, into CD tasks. We systematically review
and compare existing approaches that leverage LLMs for various CD tasks and
highlight their innovative use of metadata and natural language to infer causal
structures. Our analysis reveals the strengths and potential of LLMs both in
enhancing traditional CD methods and in serving as an imperfect expert, alongside the
challenges and limitations inherent in current practices. Furthermore, we
identify gaps in the literature and propose future research directions aimed at
harnessing the full potential of LLMs in causality research. To our knowledge,
this is the first survey to offer a unified and detailed examination of the
synergy between LLMs and CD, setting the stage for future advancements in the
field.
| 2,024 | Computation and Language |
AFaCTA: Assisting the Annotation of Factual Claim Detection with
Reliable LLM Annotators | With the rise of generative AI, automated fact-checking methods to combat
misinformation are becoming more and more important. However, factual claim
detection, the first step in a fact-checking pipeline, suffers from two key
issues that limit its scalability and generalizability: (1) inconsistency in
definitions of the task and what a claim is, and (2) the high cost of manual
annotation. To address (1), we review the definitions in related work and
propose a unifying definition of factual claims that focuses on verifiability.
To address (2), we introduce AFaCTA (Automatic Factual Claim deTection
Annotator), a novel framework that assists in the annotation of factual claims
with the help of large language models (LLMs). AFaCTA calibrates its annotation
confidence with consistency along three predefined reasoning paths. Extensive
evaluation and experiments in the domain of political speech reveal that AFaCTA
can efficiently assist experts in annotating factual claims and training
high-quality classifiers, and can work with or without expert supervision. Our
analyses also result in PoliClaim, a comprehensive claim detection dataset
spanning diverse political topics.
| 2,024 | Computation and Language |
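For illustration, the following is a minimal sketch of the consistency-based confidence idea described in the AFaCTA abstract above: an LLM is queried along several reasoning paths and the agreement rate is used as the annotation confidence. The `ask_llm` stub and the three prompts are hypothetical placeholders, not the paper's actual prompts or pipeline.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical stub for an LLM call; replace with a real API client."""
    raise NotImplementedError

# Illustrative reasoning paths (not the paper's exact prompts).
REASONING_PATHS = [
    "Is the following sentence a verifiable factual claim? Answer YES or NO.\n{sentence}",
    "Does this sentence state something that could be checked against evidence? YES or NO.\n{sentence}",
    "Rewrite the sentence as a checkable statement if possible, then answer YES if you could, NO otherwise.\n{sentence}",
]

def annotate_with_confidence(sentence: str):
    """Label a sentence as a factual claim and attach a consistency-based confidence."""
    votes = []
    for template in REASONING_PATHS:
        answer = ask_llm(template.format(sentence=sentence)).strip().upper()
        votes.append("YES" if answer.startswith("YES") else "NO")
    label, count = Counter(votes).most_common(1)[0]
    confidence = count / len(votes)   # 1.0 = all paths agree, ~0.67 = 2 of 3 agree
    return {"sentence": sentence, "label": label, "confidence": confidence}
```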
Word Embeddings Revisited: Do LLMs Offer Something New? | Learning meaningful word embeddings is key to training a robust language
model. The recent rise of Large Language Models (LLMs) has provided us with
many new word/sentence/document embedding models. Although LLMs have shown
remarkable advancement in various NLP tasks, it is still unclear whether the
performance improvement is merely a matter of scale or whether the underlying
embeddings they produce differ significantly from those of classical encoding models
like Sentence-BERT (SBERT) or Universal Sentence Encoder (USE). This paper
systematically investigates this issue by comparing classical word embedding
techniques against LLM-based word embeddings in terms of their latent vector
semantics. Our results show that LLMs tend to cluster semantically related
words more tightly than classical models. LLMs also yield higher average
accuracy on the Bigger Analogy Test Set (BATS) than classical methods. Finally,
some LLMs tend to produce word embeddings similar to SBERT, a relatively
lighter classical model.
| 2,024 | Computation and Language |
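As a rough illustration of the comparison described in the abstract above, the sketch below contrasts how tightly a classical SBERT-style encoder and an LLM-derived embedding model cluster related word pairs. The two checkpoints and the related/unrelated word pairs are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

words = ["king", "queen", "apple", "banana", "car", "truck"]

# Illustrative checkpoints: a classical SBERT-style encoder and an LLM-derived embedder.
classical = SentenceTransformer("all-MiniLM-L6-v2")
llm_based = SentenceTransformer("intfloat/e5-mistral-7b-instruct")  # assumption: any LLM-based embedder could stand in here

def pairwise_cosine(model):
    emb = model.encode(words, normalize_embeddings=True)
    return emb @ emb.T

for name, model in [("classical", classical), ("llm_based", llm_based)]:
    sims = pairwise_cosine(model)
    # Mean similarity of semantically related pairs vs unrelated pairs,
    # a rough proxy for how tightly related words cluster.
    related = np.mean([sims[0, 1], sims[2, 3], sims[4, 5]])
    unrelated = np.mean([sims[0, 2], sims[1, 4], sims[3, 5]])
    print(f"{name}: related={related:.3f} unrelated={unrelated:.3f}")
```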
When LLMs Meet Cunning Questions: A Fallacy Understanding Benchmark for
Large Language Models | Recently, Large Language Models (LLMs) have made remarkable advances in
language understanding and generation. Following this, various benchmarks for
measuring all kinds of capabilities of LLMs have sprung up. In this paper, we
challenge the reasoning and understanding abilities of LLMs by proposing a
FaLlacy Understanding Benchmark (FLUB) containing cunning questions that are
easy for humans to understand but difficult for models to grasp. Specifically,
the cunning questions that FLUB focuses on mainly consist of the tricky,
humorous, and misleading questions collected from the real internet
environment. We design three tasks of increasing difficulty in the FLUB
benchmark to evaluate the fallacy understanding ability of LLMs. Based on FLUB,
we investigate the performance of multiple representative and advanced LLMs,
showing that FLUB is challenging and worthy of further study. Our extensive
experiments and detailed analyses yield interesting discoveries and valuable
insights. We hope that our benchmark can encourage the community to
improve LLMs' ability to understand fallacies.
| 2,024 | Computation and Language |
Language Models as Science Tutors | NLP has recently made exciting progress toward training language models (LMs)
with strong scientific problem-solving skills. However, model development has
not focused on real-life use-cases of LMs for science, including applications
in education that require processing long scientific documents. To address
this, we introduce TutorEval and TutorChat. TutorEval is a diverse
question-answering benchmark consisting of questions about long chapters from
STEM textbooks, written by experts. TutorEval helps measure real-life usability
of LMs as scientific assistants, and it is the first benchmark combining long
contexts, free-form generation, and multi-disciplinary scientific knowledge.
Moreover, we show that fine-tuning base models with existing dialogue datasets
leads to poor performance on TutorEval. Therefore, we create TutorChat, a
dataset of 80,000 long synthetic dialogues about textbooks. We use TutorChat to
fine-tune Llemma models with 7B and 34B parameters. These LM tutors specialized
in math have a 32K-token context window, and they excel at TutorEval while
performing strongly on GSM8K and MATH. Our datasets build on open-source
materials, and we release our models, data, and evaluations.
| 2,024 | Computation and Language |
Whose Emotions and Moral Sentiments Do Language Models Reflect? | Language models (LMs) are known to represent the perspectives of some social
groups better than others, which may impact their performance, especially on
subjective tasks such as content moderation and hate speech detection. To
explore how LMs represent different perspectives, existing research has focused on
positional alignment, i.e., how closely the models mimic the opinions and
stances of different groups, e.g., liberals or conservatives. However, human
communication also encompasses emotional and moral dimensions. We define the
problem of affective alignment, which measures how LMs' emotional and moral
tone represents those of different groups. By comparing the affect of responses
generated by 36 LMs to the affect of Twitter messages, we observe significant
misalignment of LMs with both ideological groups. This misalignment is larger
than the partisan divide in the U.S. Even after steering the LMs towards
specific ideological perspectives, the misalignment and liberal tendencies of
the model persist, suggesting a systemic bias within LMs.
| 2,024 | Computation and Language |
Navigating the Dual Facets: A Comprehensive Evaluation of Sequential
Memory Editing in Large Language Models | Memory Editing (ME) has emerged as an efficient method to modify erroneous
facts or inject new facts into Large Language Models (LLMs). Two mainstream ME
methods exist: parameter-modifying ME and parameter-preserving ME (integrating
extra modules while preserving original parameters). Regrettably, previous
studies on ME evaluation have two critical limitations: (i) evaluating LLMs
with single edit only, neglecting the need for continuous editing, and (ii)
evaluations focusing solely on basic factual triples, overlooking broader LLM
capabilities like logical reasoning and reading comprehension. This study
addresses these limitations with three contributions: (i) We explore how ME
affects a wide range of fundamental capabilities of LLMs under sequential
editing. Experimental results reveal an intriguing phenomenon: most
parameter-modifying ME methods consistently degrade performance across all tasks after
a few sequential edits. In contrast, parameter-preserving ME methods effectively
maintain LLMs' fundamental capabilities but struggle to accurately recall
edited knowledge presented in a different format. (ii) We extend our evaluation
to different editing settings, such as layers to edit, model size, instruction
tuning, etc. Experimental findings indicate several strategies that can
potentially mitigate the adverse effects of ME. (iii) We further explain why
parameter-modifying ME damages LLMs along three dimensions: parameter changes
after editing, language modeling capability, and the in-context learning
capability. Our in-depth study advocates more careful use of ME in real-world
scenarios.
| 2,024 | Computation and Language |
BlendFilter: Advancing Retrieval-Augmented Large Language Models via
Query Generation Blending and Knowledge Filtering | Retrieval-augmented Large Language Models (LLMs) offer substantial benefits
in enhancing performance across knowledge-intensive scenarios. However, these
methods often face challenges with complex inputs and encounter difficulties
due to noisy knowledge retrieval, notably hindering model effectiveness. To
address this issue, we introduce BlendFilter, a novel approach that elevates
retrieval-augmented LLMs by integrating query generation blending with
knowledge filtering. BlendFilter implements the blending process through its
query generation method, which integrates both external and internal knowledge
augmentation with the original query, ensuring comprehensive information
gathering. Additionally, our distinctive knowledge filtering module capitalizes
on the intrinsic capabilities of the LLM, effectively eliminating extraneous
data. We conduct extensive experiments on three open-domain question answering
benchmarks, and the findings clearly indicate that our innovative BlendFilter
surpasses state-of-the-art baselines significantly.
| 2,024 | Computation and Language |
Speculative Streaming: Fast LLM Inference without Auxiliary Models | Speculative decoding is a prominent technique to speed up the inference of a
large target language model based on predictions of an auxiliary draft model.
While effective, in application-specific settings, it often involves
fine-tuning both draft and target models to achieve high acceptance rates. As
the number of downstream tasks grows, these draft models add significant
complexity to inference systems. We propose Speculative Streaming, a
single-model speculative decoding method that fuses drafting into the target
model by changing the fine-tuning objective from next token prediction to
future n-gram prediction. Speculative Streaming speeds up decoding by 1.8 -
3.1X in a diverse set of tasks, such as Summarization, Structured Queries, and
Meaning Representation, without sacrificing generation quality. Additionally,
Speculative Streaming is parameter-efficient. It achieves on-par/higher
speed-ups than Medusa-style architectures while using ~10000X fewer extra
parameters, making it well-suited for resource-constrained devices.
| 2,024 | Computation and Language |
Contrastive Instruction Tuning | Instruction tuning has been used as a promising approach to improve the
performance of large language models (LLMs) on unseen tasks. However, current
LLMs exhibit limited robustness to unseen instructions, generating inconsistent
outputs when the same instruction is phrased with slightly varied forms or
language styles. This behavior indicates LLMs' lack of robustness to textual
variations and generalizability to unseen instructions, potentially leading to
trustworthiness issues. Accordingly, we propose Contrastive Instruction Tuning (CoIN),
which maximizes the similarity between the hidden representations of
semantically equivalent instruction-instance pairs while minimizing the
similarity between semantically different ones. To facilitate this approach, we
augment the existing FLAN collection by paraphrasing task instructions.
Experiments on the PromptBench benchmark show that CoIN consistently improves
LLMs' robustness to unseen instructions with variations across character, word,
sentence, and semantic levels by an average of +2.5% in accuracy.
| 2,024 | Computation and Language |
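The contrastive objective sketched in the abstract above can be illustrated with a small InfoNCE-style loss over pooled hidden states, where paraphrased instruction-instance pairs serve as positives and other batch elements as in-batch negatives. The pooling, temperature, and toy tensors below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def contrastive_instruction_loss(anchor_hidden, positive_hidden, temperature=0.05):
    """InfoNCE-style loss over pooled hidden states.

    anchor_hidden, positive_hidden: [batch, hidden] pooled representations of an
    instruction-instance pair and its paraphrased (semantically equivalent) version.
    Other examples in the batch act as in-batch negatives.
    """
    a = F.normalize(anchor_hidden, dim=-1)
    p = F.normalize(positive_hidden, dim=-1)
    logits = a @ p.T / temperature                  # [batch, batch] similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)          # diagonal entries are the positives

# Toy usage with random vectors standing in for pooled LLM hidden states.
anchor = torch.randn(8, 4096)
positive = anchor + 0.1 * torch.randn(8, 4096)      # paraphrase representation, close to its anchor
loss = contrastive_instruction_loss(anchor, positive)
print(loss.item())
```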
Boosting of Thoughts: Trial-and-Error Problem Solving with Large
Language Models | The reasoning performance of Large Language Models (LLMs) on a wide range of
problems critically relies on chain-of-thought prompting, which involves
providing a few chain of thought demonstrations as exemplars in prompts. Recent
work, e.g., Tree of Thoughts, has pointed out the importance of exploration and
self-evaluation in reasoning step selection for complex problem solving. In
this paper, we present Boosting of Thoughts (BoT), an automated prompting
framework for problem solving with LLMs by iteratively exploring and
self-evaluating many trees of thoughts in order to acquire an ensemble of
trial-and-error reasoning experiences, which will serve as a new form of
prompting to solve the complex problem. Starting from a simple prompt without
requiring examples, BoT iteratively explores and evaluates a large collection
of reasoning steps, and more importantly, uses error analysis obtained from the
LLM on them to explicitly revise prompting, which in turn enhances reasoning
step generation, until a final answer is attained. Our experiments with GPT-4
and Llama2 across extensive complex mathematical problems demonstrate that BoT
consistently achieves higher or comparable problem-solving rates than other
advanced prompting approaches.
| 2,024 | Computation and Language |
Grasping the Essentials: Tailoring Large Language Models for Zero-Shot
Relation Extraction | Relation extraction (RE), a crucial task in NLP, aims to identify semantic
relationships between entities mentioned in texts. Despite significant
advancements in this field, existing models typically rely on extensive
annotated data for training, which can be both costly and time-consuming to
acquire. Moreover, these models often struggle to adapt to new or unseen
relationships. In contrast, few-shot learning settings, which aim to reduce
annotation requirements, may offer incomplete and biased supervision for
understanding target relation semantics, leading to degraded and unstable
performance. To provide the model with accurate and explicit descriptions of
the relation types while minimizing the annotation requirements, we
study the definition-only zero-shot RE setting where only relation definitions
expressed in natural language are used to train a RE model. Motivated by the
strong synthetic data generation power of LLMs, we propose a framework REPaL
which consists of three stages: (1) We utilize LLMs to generate initial seed
instances based on relation definitions and an unlabeled corpus. (2) We
fine-tune a bidirectional Small Language Model (SLM) using these initial seeds
to learn the relations for the target domain. (3) We enhance pattern coverage
and mitigate bias resulting from the limited number of initial seeds by
incorporating feedback acquired from SLM's predictions on unlabeled corpora. To
accomplish this, we leverage the multi-turn conversation ability of LLMs to
generate new instances in follow-up dialogues. Experiments on two datasets show
REPaL achieves better zero-shot performance by large margins over baseline
methods.
| 2,024 | Computation and Language |
Understanding News Thumbnail Representativeness by Counterfactual
Text-Guided Contrastive Language-Image Pretraining | This paper delves into the critical challenge of understanding the
representativeness of news thumbnail images, which often serve as the first
visual engagement for readers when an article is disseminated on social media.
We focus on whether a news image represents the main subject discussed in the
news text. To address this challenge, we introduce NewsTT, a manually annotated
dataset of news thumbnail image and text pairs. We found that pretrained vision
and language models, such as CLIP and BLIP-2, struggle with this task. Since
news subjects frequently involve named entities or proper nouns, a pretrained
model may lack the ability to match their visual and textual appearances.
To fill the gap, we propose CFT-CLIP, a counterfactual text-guided contrastive
language-image pretraining framework. We hypothesize that learning to contrast
news text with its counterfactual, in which named entities are replaced, can
enhance the cross-modal matching ability in the target task. Evaluation
experiments using NewsTT show that CFT-CLIP outperforms the pretrained models,
such as CLIP and BLIP-2. Our code and data will be made accessible to the
public after the paper is accepted.
| 2,024 | Computation and Language |
PANDA (Pedantic ANswer-correctness Determination and
Adjudication): Improving Automatic Evaluation for Question Answering and Text
Generation | Question answering (QA) can only make progress if we know if an answer is
correct, but for many of the most challenging and interesting QA examples,
current answer correctness (AC) metrics do not align with human judgments,
particularly verbose, free form answers from large language models (LLM). There
are two challenges: a lack of data and that models are too big. LLM based
scorers correlate better with humans, but this expensive task has only been
tested on limited QA datasets. We rectify these issues by providing clear
guidelines for evaluating machine QA adopted from human QA contests. We also
introduce Precise ANswer correctness Determination and Adjudication (PANDA), a
small, efficient, deterministic AC classifier (812 KB) that more accurately
evaluates answer correctness.
| 2,024 | Computation and Language |
KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning
over Knowledge Graph | In this paper, we aim to improve the reasoning ability of large language
models (LLMs) over knowledge graphs (KGs) to answer complex questions. Inspired
by existing methods that design the interaction strategy between LLMs and KG,
we propose an autonomous LLM-based agent framework, called KG-Agent, which
enables a small LLM to actively make decisions until finishing the reasoning
process over KGs. In KG-Agent, we integrate the LLM, multifunctional toolbox,
KG-based executor, and knowledge memory, and develop an iteration mechanism
that autonomously selects a tool and then updates the memory for reasoning over
the KG. To guarantee effectiveness, we leverage a programming language to formulate
the multi-hop reasoning process over the KG, and synthesize a code-based
instruction dataset to fine-tune the base LLM. Extensive experiments
demonstrate that only using 10K samples for tuning LLaMA-7B can outperform
state-of-the-art methods using larger LLMs or more data, on both in-domain and
out-of-domain datasets. Our code and data will be publicly released.
| 2,024 | Computation and Language |
GenDec: A robust generative Question-decomposition method for Multi-hop
reasoning | Multi-hop QA (MHQA) involves step-by-step reasoning to answer complex
questions and find multiple relevant supporting facts. However, existing large
language models' (LLMs) reasoning ability in multi-hop question answering
remains under-explored and is often inadequate for answering multi-hop questions.
Moreover, it is unclear whether LLMs follow a desired reasoning chain to reach
the right final answer. In this paper, we propose a \textbf{gen}erative
question \textbf{dec}omposition method (GenDec) from the perspective of
explainable QA by generating independent and complete sub-questions based on
incorporating additional extracted evidence for enhancing LLMs' reasoning
ability in RAG. To demonstrate the impact, generalization, and robustness of
GenDec, we conduct two experiments: the first combines GenDec with small QA
systems on paragraph retrieval and QA tasks; the second examines the reasoning
capabilities of various state-of-the-art LLMs, including GPT-4 and GPT-3.5,
combined with GenDec. We experiment on the HotpotQA, 2WikihopMultiHopQA,
MuSiQue, and PokeMQA datasets.
| 2,024 | Computation and Language |
Token-Ensemble Text Generation: On Attacking the Automatic AI-Generated
Text Detection | The robustness of AI-content detection models against cultivated attacks
(e.g., paraphrasing or word switching) remains a significant concern. This
study proposes a novel token-ensemble generation strategy to challenge the
robustness of current AI-content detection approaches. We explore the ensemble
attack strategy by completing the prompt with the next token generated from
random candidate LLMs. We find that the token-ensemble approach significantly degrades
the performance of AI-content detection models (the code and test sets will be
released). Our findings reveal that token-ensemble generation poses a vital
challenge to current detection models and underlines the need for advancing
detection technologies to counter sophisticated adversarial strategies.
| 2,024 | Computation and Language |
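A minimal sketch of token-ensemble generation in the spirit described above: at each decoding step, the next token is sampled from a randomly chosen model in a candidate pool. The two GPT-2 checkpoints are illustrative stand-ins (chosen because they share a tokenizer), not the models used in the paper.

```python
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative candidate pool: two models that share the GPT-2 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
pool = [AutoModelForCausalLM.from_pretrained(name).eval()
        for name in ("gpt2", "distilgpt2")]

@torch.no_grad()
def token_ensemble_generate(prompt, max_new_tokens=50, temperature=0.9):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        model = random.choice(pool)                     # pick a random candidate LLM per step
        logits = model(ids).logits[:, -1, :] / temperature
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(token_ensemble_generate("The history of machine translation"))
```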
M4GT-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text
Detection | The advent of Large Language Models (LLMs) has brought an unprecedented surge
in machine-generated text (MGT) across diverse channels. This raises legitimate
concerns about its potential misuse and societal implications. The need to
identify and differentiate such content from genuine human-generated text is
critical in combating disinformation, preserving the integrity of education and
scientific fields, and maintaining trust in communication. In this work, we
address this problem by introducing M4GT-Bench, a new multilingual,
multi-domain, and multi-generator benchmark for MGT detection. It is
collected for three task formulations: (1) mono-lingual and multi-lingual
binary MGT detection; (2) multi-way detection, which identifies which particular model
generated the text; and (3) human-machine mixed text detection, where a word
boundary delimiting MGT from human-written content should be determined. Human
evaluation for Task 2 shows performance below random guessing, demonstrating
the difficulty of distinguishing individual LLMs. Promising results consistently occur when
training and test data come from the same domain or generator.
| 2,024 | Computation and Language |
KnowTuning: Knowledge-aware Fine-tuning for Large Language Models | Despite their success at many natural language processing (NLP) tasks, large
language models (LLMs) still struggle to effectively leverage knowledge for
knowledge-intensive tasks, manifesting limitations such as generating
incomplete, non-factual, or illogical answers. These limitations stem from
inadequate knowledge awareness of LLMs during vanilla fine-tuning. To address
these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to
explicitly and implicitly improve the knowledge awareness of LLMs. We devise an
explicit knowledge-aware generation stage to train LLMs to explicitly identify
knowledge triples in answers. We also propose an implicit knowledge-aware
comparison stage to train LLMs to implicitly distinguish between reliable and
unreliable knowledge, in three aspects: completeness, factuality, and
logicality. Extensive experiments on both generic and medical question
answering (QA) datasets confirm the effectiveness of KnowTuning, through
automatic and human evaluations, across various sizes of LLMs. Finally, we
demonstrate that the improvements of KnowTuning generalize to unseen QA
datasets.
| 2,024 | Computation and Language |
A Question Answering Based Pipeline for Comprehensive Chinese EHR
Information Extraction | Electronic health records (EHRs) hold significant value for research and
applications. As a new way of information extraction, question answering (QA)
can extract more flexible information than conventional methods and is more
accessible to clinical researchers, but its progress is impeded by the scarcity
of annotated data. In this paper, we propose a novel approach that
automatically generates training data for transfer learning of QA models. Our
pipeline incorporates a preprocessing module to handle challenges posed by
extraction types that are not readily compatible with extractive QA frameworks,
including cases with discontinuous answers and many-to-one relationships. The
obtained QA model exhibits excellent performance on subtasks of information
extraction in EHRs, and it can effectively handle few-shot or zero-shot
settings involving yes-no questions. Case studies and ablation studies
demonstrate the necessity of each component in our design, and the resulting
model is deemed suitable for practical use.
| 2,024 | Computation and Language |
RENOVI: A Benchmark Towards Remediating Norm Violations in
Socio-Cultural Conversations | Norm violations occur when individuals fail to conform to culturally accepted
behaviors, which may lead to potential conflicts. Remediating norm violations
requires social awareness and cultural sensitivity of the nuances at play. To
equip interactive AI systems with a remediation ability, we offer ReNoVi - a
large-scale corpus of 9,258 multi-turn dialogues annotated with social norms,
as well as define a sequence of tasks to help understand and remediate norm
violations step by step. ReNoVi consists of two parts: 512 human-authored
dialogues (real data), and 8,746 synthetic conversations generated by ChatGPT
through prompt learning. While collecting sufficient human-authored data is
costly, synthetic conversations provide suitable amounts of data to help
mitigate the scarcity of training data, as well as the chance to assess the
alignment between LLMs and humans in the awareness of social norms. We thus
harness the power of ChatGPT to generate synthetic training data for our task.
To ensure the quality of both human-authored and synthetic data, we follow a
quality control protocol during data collection. Our experimental results
demonstrate the importance of remediating norm violations in socio-cultural
conversations, as well as the improvement in performance obtained from
synthetic data.
| 2,024 | Computation and Language |
LaCo: Large Language Model Pruning via Layer Collapse | Transformer-based large language models (LLMs) are witnessing a notable
trend of size expansion, which brings considerable costs to both model training
and inference. However, existing methods such as model quantization, knowledge
distillation, and model pruning are constrained by various issues, including
hardware support limitations, the need for extensive training, and alterations
to the internal structure of the model. In this paper, we propose a concise
layer-wise pruning method called \textit{Layer Collapse (LaCo)}, in which rear
model layers collapse into a prior layer, enabling a rapid reduction in model
size while preserving the model structure. Comprehensive experiments show that
our method maintains an average task performance of over 80\% at pruning ratios
of 25-30\%, significantly outperforming existing state-of-the-art structured
pruning methods. We also conduct post-training experiments to confirm that the
proposed pruning method effectively inherits the parameters of the original
model. Finally, we discuss our motivation from the perspective of layer-wise
similarity and evaluate the performance of the pruned LLMs across various
pruning ratios.
| 2,024 | Computation and Language |
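A simplified sketch of the layer-collapse idea from the abstract above: the parameter differences of several rear layers are accumulated onto a prior layer, which then replaces them. The paper additionally uses layer-similarity checks to decide where to collapse; those are omitted here, and the toy layers are purely illustrative.

```python
import torch

def collapse_layers(layers, start, num_merge):
    """Merge layers[start+1 : start+1+num_merge] into layers[start] and drop them.

    `layers` is a list of per-layer state dicts with identical keys (e.g. the
    transformer blocks of an LLM). The merge adds the parameter differences of the
    rear layers onto the prior layer, then removes the rear layers, shrinking the model.
    """
    base = layers[start]
    merged = {k: v.clone() for k, v in base.items()}
    for k in range(1, num_merge + 1):
        rear = layers[start + k]
        for name in merged:
            merged[name] += rear[name] - base[name]    # accumulate differences onto the prior layer
    return layers[:start] + [merged] + layers[start + 1 + num_merge:]

# Toy example: 8 "layers", each a single weight matrix; collapse layers 5-6 into layer 4.
toy_layers = [{"w": torch.randn(4, 4)} for _ in range(8)]
pruned = collapse_layers(toy_layers, start=4, num_merge=2)
print(len(toy_layers), "->", len(pruned))              # 8 -> 6 layers after collapse
```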
Disclosure and Mitigation of Gender Bias in LLMs | Large Language Models (LLMs) can generate biased responses. Yet previous
direct probing techniques contain either gender mentions or predefined gender
stereotypes, which are challenging to comprehensively collect. Hence, we
propose an indirect probing framework based on conditional generation. This
approach aims to induce LLMs to disclose their gender bias even without
explicit gender or stereotype mentions. We explore three distinct strategies to
disclose explicit and implicit gender bias in LLMs. Our experiments demonstrate
that all tested LLMs exhibit explicit and/or implicit gender bias, even when
gender stereotypes are not present in the inputs. In addition, an increased
model size or model alignment amplifies bias in most cases. Furthermore, we
investigate three methods to mitigate bias in LLMs via Hyperparameter Tuning,
Instruction Guiding, and Debias Tuning. Remarkably, these methods prove
effective even in the absence of explicit genders or stereotypes.
| 2,024 | Computation and Language |
Knowledge Graph Assisted Automatic Sports News Writing | In this paper, we present a novel method for automatically generating sports
news, which employs a unique algorithm that extracts pivotal moments from live
text broadcasts and uses them to create an initial draft of the news. This
draft is further refined by incorporating key details and background
information from a specially designed sports knowledge graph. This graph
contains 5,893 entities, which are classified into three distinct conceptual
categories, interconnected through four relationship types, and characterized
by 27 unique attributes. In addition, we create a multi-stage learning model by
combining convolutional neural networks and a transformer encoder. This model
expresses entity-task interactions using convolutional neural networks and
enriches entity representations in the query set with the transformer encoder.
It also includes a processor to compute matching scores for incomplete triples,
addressing the few-shot knowledge graph completion problem. The efficiency of this
approach has been confirmed through both subjective and objective evaluations
of 50 selected test cases, demonstrating its capability in revolutionizing the
creation of sports news.
| 2,024 | Computation and Language |
I Learn Better If You Speak My Language: Enhancing Large Language Model
Fine-Tuning with Style-Aligned Response Adjustments | Fine-tuning large language models (LLMs) with a small data set for particular
tasks is a widely encountered yet complex challenge. The potential for
overfitting on a limited number of examples can negatively impact the model's
ability to generalize and retain its original skills. Our research explores the
impact of the style of ground-truth responses during the fine-tuning process.
We found that matching the ground-truth response style with the LLM's inherent
style results in better learning outcomes. Building on this insight, we
developed a method that minimally alters the LLM's pre-existing responses to
correct errors, using these adjusted responses as training targets. This
technique enables precise corrections in line with the model's native response
style, safeguarding the model's core capabilities and thus avoiding overfitting.
Our findings show that this approach not only improves the LLM's task-specific
accuracy but also crucially maintains its original competencies and
effectiveness.
| 2,024 | Computation and Language |
Evaluating LLMs' Mathematical Reasoning in Financial Document Question
Answering | Large Language Models (LLMs) excel in natural language understanding, but
their capability for complex mathematical reasoning with an amalgamation of
structured tables and unstructured text is uncertain. This study explores LLMs'
mathematical reasoning on four financial tabular question-answering datasets:
TATQA, FinQA, ConvFinQA, and Multihiertt. Through extensive experiments with
various models and prompting techniques, we assess how LLMs adapt to complex
tables and mathematical tasks. We focus on sensitivity to table complexity and
performance variations with an increasing number of arithmetic reasoning steps.
The results provide insights into LLMs' capabilities and limitations in
handling complex mathematical scenarios for semi-structured tables. Ultimately,
we introduce a novel prompting technique tailored to semi-structured documents,
matching or outperforming other baselines in performance while providing a
nuanced understanding of LLMs' abilities for such a task.
| 2,024 | Computation and Language |
Centroid-Based Efficient Minimum Bayes Risk Decoding | Minimum Bayes risk (MBR) decoding has achieved state-of-the-art translation
performance by using COMET, a neural metric that has a high correlation with
human evaluation. However, MBR decoding requires quadratic time since it
computes the expected score between a translation hypothesis and all reference
translations. We propose centroid-based MBR (CBMBR) decoding to improve the
speed of MBR decoding. Our method clusters the reference translations in the
feature space, and then calculates the score using the centroids of each
cluster. The experimental results show that our CBMBR not only improved the
speed of the expected score calculation by a factor of 6.9, but also
outperformed vanilla MBR decoding in translation quality by up to 0.5 COMET in
the WMT'22 En$\leftrightarrow$Ja, En$\leftrightarrow$De, En$\leftrightarrow$Zh,
and WMT'23 En$\leftrightarrow$Ja translation tasks.
| 2,024 | Computation and Language |
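A minimal sketch contrasting vanilla MBR with the centroid-based approximation described above. Cosine similarity over toy embeddings stands in for the COMET utility, and the clustering setup is illustrative rather than the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-ins: each row is an embedding of a candidate translation.
# In practice these would come from a neural metric's encoder (e.g. COMET).
hyps = rng.normal(size=(64, 128))
refs = hyps + 0.05 * rng.normal(size=hyps.shape)      # pseudo-references (sampled candidates)

def utility(a, b):
    """Cosine similarity as a cheap stand-in for a learned utility like COMET."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def vanilla_mbr(hyps, refs):
    # Quadratic: every hypothesis is scored against every reference.
    scores = [np.mean([utility(h, r) for r in refs]) for h in hyps]
    return int(np.argmax(scores))

def centroid_mbr(hyps, refs, k=8):
    # Cluster the references and score each hypothesis only against the k centroids.
    centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(refs).cluster_centers_
    scores = [np.mean([utility(h, c) for c in centroids]) for h in hyps]
    return int(np.argmax(scores))

print("vanilla MBR pick:", vanilla_mbr(hyps, refs))
print("centroid MBR pick:", centroid_mbr(hyps, refs))
```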
Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with
Knowledge Graphs | Large language models (LLMs) demonstrate strong reasoning abilities when
prompted to generate chain-of-thought (CoT) explanations alongside answers.
However, previous research on evaluating LLMs has solely focused on answer
accuracy, neglecting the correctness of the generated CoT. In this paper, we
delve deeper into the CoT reasoning capabilities of LLMs in multi-hop question
answering by utilizing knowledge graphs (KGs). We propose a novel
discriminative and generative CoT evaluation paradigm to assess LLMs' knowledge
of reasoning and the accuracy of the generated CoT. Through experiments
conducted on 5 different families of LLMs across 2 multi-hop question-answering
datasets, we find that LLMs possess sufficient knowledge to perform reasoning.
However, there exists a significant disparity between answer accuracy and
faithfulness of the CoT reasoning generated by LLMs, indicating that they often
arrive at correct answers through incorrect reasoning.
| 2,024 | Computation and Language |
Asclepius: A Spectrum Evaluation Benchmark for Medical Multi-Modal Large
Language Models | The significant breakthroughs of Medical Multi-Modal Large Language Models
(Med-MLLMs) renovate modern healthcare with robust information synthesis and
medical decision support. However, these models are often evaluated on
benchmarks that are unsuitable for the Med-MLLMs due to the intricate nature of
the real-world diagnostic frameworks, which encompass diverse medical
specialties and involve complex clinical decisions. Moreover, these benchmarks
are susceptible to data leakage, since Med-MLLMs are trained on large
assemblies of publicly available data. Thus, an isolated and clinically
representative benchmark is highly desirable for credible Med-MLLMs evaluation.
To this end, we introduce Asclepius, a novel Med-MLLM benchmark that rigorously
and comprehensively assesses model capability in terms of: distinct medical
specialties (cardiovascular, gastroenterology, etc.) and different diagnostic
capacities (perception, disease analysis, etc.). Grounded in 3 proposed core
principles, Asclepius ensures a comprehensive evaluation by encompassing 15
medical specialties, stratified into 3 main categories and 8 sub-categories of
clinical tasks, and free from train-validate contamination. We further
provide an in-depth analysis of 6 Med-MLLMs and compare them with 5 human
specialists, providing insights into their competencies and limitations in
various medical contexts. Our work not only advances the understanding of
Med-MLLMs' capabilities but also sets a precedent for future evaluations and
the safe deployment of these models in clinical environments. We launch and
maintain a leaderboard for community assessment of Med-MLLM capabilities
(https://asclepius-med.github.io/).
| 2,024 | Computation and Language |
Controlled Text Generation for Large Language Model with Dynamic
Attribute Graphs | Controlled Text Generation (CTG) aims to produce texts that exhibit specific
desired attributes. In this study, we introduce a pluggable CTG framework for
Large Language Models (LLMs) named Dynamic Attribute Graphs-based controlled
text generation (DATG). This framework utilizes an attribute scorer to evaluate
the attributes of sentences generated by LLMs and constructs dynamic attribute
graphs. DATG modulates the occurrence of key attribute words and key
anti-attribute words, achieving effective attribute control without
compromising the original capabilities of the model. We conduct experiments
across four datasets in two tasks: toxicity mitigation and sentiment
transformation, employing five LLMs as foundational models. Our findings
highlight a remarkable enhancement in control accuracy, achieving a peak
improvement of 19.29% over baseline methods in the most favorable task across
four datasets. Additionally, we observe a significant decrease in perplexity,
markedly improving text fluency.
| 2,024 | Computation and Language |
Can Large Language Models perform Relation-based Argument Mining? | Argument mining (AM) is the process of automatically extracting arguments,
their components and/or relations amongst arguments and components from text.
As the number of platforms supporting online debate increases, the need for AM
becomes ever more urgent, especially in support of downstream tasks.
Relation-based AM (RbAM) is a form of AM focusing on identifying agreement
(support) and disagreement (attack) relations amongst arguments. RbAM is a
challenging classification task, with existing methods failing to perform
satisfactorily. In this paper, we show that general-purpose Large Language
Models (LLMs), appropriately primed and prompted, can significantly outperform
the best performing (RoBERTa-based) baseline. Specifically, we experiment with
two open-source LLMs (Llama-2 and Mistral) with ten datasets.
| 2,024 | Computation and Language |
LLM can Achieve Self-Regulation via Hyperparameter Aware Generation | In the realm of Large Language Models (LLMs), users commonly employ diverse
decoding strategies and adjust hyperparameters to control the generated text.
However, a critical question emerges: Are LLMs conscious of the existence of
these decoding strategies and capable of regulating themselves? The current
decoding generation process often relies on empirical and heuristic manual
adjustments to hyperparameters based on types of tasks and demands. However,
this process is typically cumbersome, and the decoding hyperparameters may not
always be optimal for each sample. To address the aforementioned challenges, we
propose a novel text generation paradigm termed Hyperparameter Aware Generation
(HAG). By leveraging hyperparameter-aware instruction tuning, the LLM
autonomously determines the optimal decoding strategy and configuration based on the
input samples, enabling self-regulation. Our approach eliminates the need for
extensive manual tuning, offering more autonomous, self-regulating model
behavior. Experimental results spanning six datasets across reasoning,
creativity, translation, and mathematics tasks demonstrate that
hyperparameter-aware instruction tuning empowers LLMs to self-regulate the
decoding strategy and hyperparameters. HAG extends the current paradigm in the
text generation process, highlighting the feasibility of endowing LLMs with
self-regulating decoding strategies.
| 2,024 | Computation and Language |
C-ICL: Contrastive In-context Learning for Information Extraction | Recently, there has been increasing interest in exploring the capabilities of
advanced large language models (LLMs) in the field of information extraction
(IE), specifically focusing on tasks related to named entity recognition (NER)
and relation extraction (RE). Although researchers are exploring the use of
few-shot information extraction through in-context learning with LLMs, they
tend to focus only on using correct or positive examples for demonstration,
neglecting the potential value of incorporating incorrect or negative examples
into the learning process. In this paper, we present c-ICL, a novel few-shot
technique that leverages both correct and incorrect sample constructions to
create in-context learning demonstrations. This approach enhances the ability
of LLMs to extract entities and relations by utilizing prompts that incorporate
not only the positive samples but also the reasoning behind them. This method
allows for the identification and correction of potential interface errors.
Specifically, our proposed method taps into the inherent contextual information
and valuable information in hard negative samples and the nearest positive
neighbors to the test and then applies the in-context learning demonstrations
based on LLMs. Our experiments on various datasets indicate that c-ICL
outperforms previous few-shot in-context learning methods, delivering
substantial enhancements in performance across a broad spectrum of related
tasks. These improvements are noteworthy, showcasing the versatility of our
approach in miscellaneous scenarios.
| 2,024 | Computation and Language |
MoRAL: MoE Augmented LoRA for LLMs' Lifelong Learning | Adapting large language models (LLMs) to new domains/tasks and enabling them
to be efficient lifelong learners is a pivotal challenge. In this paper, we
propose MoRAL, i.e., Mixture-of-Experts augmented Low-Rank Adaptation for
Lifelong Learning. MoRAL combines the multi-tasking abilities of MoE with the
fine-tuning abilities of LoRA for effective life-long learning of LLMs. In
contrast to conventional approaches that use factual triplets as inputs,
MoRAL relies on simple question-answer pairs, a more practical and
effective strategy for robust and efficient learning. Owing to the new data
settings, we introduce a new evaluation benchmark, Life Long Learning of
LLM (5L-bench), encompassing a newly curated dataset of question-answer pairs,
and a set of evaluation metrics for rigorous evaluation of MoRAL in open-book
and closed-book settings. Experimental evaluation shows (i) LLMs learn fast in
open-book settings with up to 30.15% improvement in "RA" for Phi-2-2.7B
compared to closed-book (for models fine-tuned with MoRAL); (ii) MoRAL shows
higher performance improvement for models with a greater number of parameters;
(iii) MoRAL is robust to catastrophic forgetting offering better knowledge
retention compared to baselines.
| 2,024 | Computation and Language |
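A minimal PyTorch sketch of the general idea of combining MoE routing with LoRA adapters on a frozen linear layer, as the abstract above describes at a high level. The routing scheme, ranks, and dimensions are illustrative assumptions, not MoRAL's exact architecture.

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """Frozen base linear layer augmented with a mixture of LoRA experts."""

    def __init__(self, base: nn.Linear, num_experts=4, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # only adapters and router are trained
        in_f, out_f = base.in_features, base.out_features
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_f) * 0.01) for _ in range(num_experts)])
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_f, rank)) for _ in range(num_experts)])
        self.router = nn.Linear(in_f, num_experts)
        self.scaling = alpha / rank

    def forward(self, x):
        out = self.base(x)
        gates = torch.softmax(self.router(x), dim=-1)    # [..., num_experts] mixture weights
        for i, (A, B) in enumerate(zip(self.lora_A, self.lora_B)):
            delta = (x @ A.T) @ B.T * self.scaling        # low-rank update from expert i
            out = out + gates[..., i:i + 1] * delta
        return out

# Toy usage: wrap a frozen projection and run a forward pass.
layer = MoELoRALinear(nn.Linear(64, 64), num_experts=4, rank=8)
print(layer(torch.randn(2, 10, 64)).shape)               # torch.Size([2, 10, 64])
```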
Human-AI Interactions in the Communication Era: Autophagy Makes Large
Models Achieving Local Optima | The increasing significance of large language and multimodal models in
societal information processing has ignited debates on social safety and
ethics. However, few studies have approached the analysis of these limitations
from the comprehensive perspective of human and artificial intelligence system
interactions. This study investigates biases and preferences when humans and
large models are used as key links in communication. To achieve this, we design
a multimodal dataset and three different experiments to evaluate generative
models in their roles as producers and disseminators of information. Our main
findings highlight that synthesized information is more likely to be
incorporated into model training datasets and messaging than human-generated
information. Additionally, large models, when acting as transmitters of
information, tend to modify and lose specific content selectively.
Conceptually, we present two realistic models of autophagic
("self-consumption") loops to account for the suppression of human-generated
information in the exchange of information between humans and AI systems. We
generalize the declining diversity of social information and the bottleneck in
model performance caused by the above trends to the local optima of large
models.
| 2,024 | Computation and Language |
Multi-Perspective Consistency Enhances Confidence Estimation in Large
Language Models | In the deployment of large language models (LLMs), accurate confidence
estimation is critical for assessing the credibility of model predictions.
However, existing methods often fail to overcome the issue of overconfidence on
incorrect answers. In this work, we focus on improving the confidence
estimation of large language models. Considering the fragility of
self-awareness in language models, we introduce a Multi-Perspective Consistency
(MPC) method. We leverage complementary insights from different perspectives
within models (MPC-Internal) and across different models (MPC-Across) to
mitigate the issue of overconfidence arising from a singular viewpoint. The
experimental results on eight publicly available datasets show that our MPC
achieves state-of-the-art performance. Further analyses indicate that MPC can
mitigate the problem of overconfidence and is effectively scalable to other
models.
| 2,024 | Computation and Language |
Can Large Multimodal Models Uncover Deep Semantics Behind Images? | Understanding the deep semantics of images is essential in the era dominated
by social media. However, current research focuses primarily on superficial
descriptions of images, revealing a notable deficiency in the systematic
investigation of their inherent deep semantics. In this work, we introduce
DEEPEVAL, a comprehensive benchmark to assess Large Multimodal Models' (LMMs)
capacities for visual deep semantics. DEEPEVAL includes a human-annotated dataset
and three progressive subtasks: fine-grained description selection, in-depth
title matching, and deep semantics understanding. Utilizing DEEPEVAL, we
evaluate 9 open-source LMMs and GPT-4V(ision). Our evaluation demonstrates a
substantial gap between the deep semantic comprehension capabilities of
existing LMMs and humans. For example, GPT-4V is 30% behind humans in
understanding deep semantics, even though it achieves human-comparable
performance in image description. Further analysis indicates that the
integration of description texts during the inference process notably enhances
LMMs' ability to perceive deep semantics. Furthermore, our dataset is divided
into multiple categories, and we conducted a more detailed analysis within
these categories.
| 2,024 | Computation and Language |
Grammaticality illusion or ambiguous interpretation? Event-related
potentials reveal the nature of the missing-NP effect in Mandarin
centre-embedded structures | In several languages, omitting a verb phrase (VP) in double centre-embedded
structures creates a grammaticality illusion. A similar illusion is also exhibited
in Mandarin missing-NP double centre-embedded structures. However, there is no
consensus on its very nature. Instead of treating it as grammaticality
illusion, we argue that ambiguous interpretations of verbs can best account for
this phenomenon in Mandarin. To further support this hypothesis, we conducted
two electroencephalography (EEG) experiments on quasi double centre-embedded
structures whose complexity is reduced by placing the self-embedding relative
clauses into the sentence's subject position. Experiment 1 showed that a similar
phenomenon was exhibited even in this structure, evidenced by the absence of a P600
effect and the presence of an N400 effect. In Experiment 2, providing semantic cues
to reduce ambiguity dispelled this illusion, as evidenced by a P600 effect. We
interpret the results under garden-path theory and propose that word-order
difference may account for this cross-linguistic variation.
| 2,024 | Computation and Language |
Puzzle Solving using Reasoning of Large Language Models: A Survey | Exploring the capabilities of Large Language Models (LLMs) in puzzle solving
unveils critical insights into their potential and challenges in artificial
intelligence, marking a significant step towards understanding their
applicability in complex reasoning tasks. This survey leverages a unique
taxonomy -- dividing puzzles into rule-based and rule-less categories -- to
critically assess LLMs through various methodologies, including prompting
techniques, neuro-symbolic approaches, and fine-tuning. Through a critical
review of relevant datasets and benchmarks, we assess LLMs' performance,
identifying significant challenges in complex puzzle scenarios. Our findings
highlight the disparity between LLM capabilities and human-like reasoning,
particularly in those requiring advanced logical inference. The survey
underscores the necessity for novel strategies and richer datasets to advance
LLMs' puzzle-solving proficiency and contribute to AI's logical reasoning and
creative problem-solving advancements.
| 2,024 | Computation and Language |
OneBit: Towards Extremely Low-bit Large Language Models | Model quantization uses low bit-width values to represent the weight
matrices of models, which is a promising approach to reduce both storage and
computational overheads of deploying highly anticipated LLMs. However, existing
quantization methods suffer severe performance degradation when the bit-width
is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to
quantize models. This paper boldly quantizes the weight matrices of LLMs to
1-bit, paving the way for the extremely low bit-width deployment of LLMs. For
this target, we introduce a 1-bit quantization-aware training (QAT) framework
named OneBit, including a novel 1-bit parameter representation method to better
quantize LLMs as well as an effective parameter initialization method based on
matrix decomposition to improve the convergence speed of the QAT framework.
Sufficient experimental results indicate that OneBit achieves good performance
(at least 83% of the non-quantized performance) with robust training processes
when only using 1-bit weight matrices.
| 2,024 | Computation and Language |
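A minimal numpy sketch of a 1-bit style weight representation along the lines the OneBit abstract describes: a weight matrix is approximated by its sign matrix plus two full-precision value vectors obtained from a rank-1 approximation of the magnitudes. This is an illustrative reconstruction, not the paper's exact parameter representation or initialization.

```python
import numpy as np

def onebit_style_decompose(W):
    """Approximate W with a sign matrix and two FP value vectors.

    W ~ sign(W) * (a b^T), where a and b come from a rank-1 SVD of |W|.
    Storing sign(W) needs 1 bit per weight; a and b stay in full precision.
    """
    S = np.sign(W)
    U, s, Vt = np.linalg.svd(np.abs(W), full_matrices=False)
    a = U[:, 0] * np.sqrt(s[0])          # per-output-row value vector
    b = Vt[0, :] * np.sqrt(s[0])         # per-input-column value vector
    return S.astype(np.int8), a, b

def reconstruct(S, a, b):
    return S * np.outer(a, b)

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512)).astype(np.float32)
S, a, b = onebit_style_decompose(W)
W_hat = reconstruct(S, a, b)
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```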
Dissecting Human and LLM Preferences | As a relative quality comparison of model responses, human and Large Language
Model (LLM) preferences serve as common alignment goals in model fine-tuning
and criteria in evaluation. Yet, these preferences merely reflect broad
tendencies, resulting in less explainable and controllable models with
potential safety risks. In this work, we dissect the preferences of human and
32 different LLMs to understand their quantitative composition, using
annotations from real-world user-model conversations for a fine-grained,
scenario-wise analysis. We find that humans are less sensitive to errors, favor
responses that support their stances, and show clear dislike when models admit
their limits. On the contrary, advanced LLMs like GPT-4-Turbo emphasize
correctness, clarity, and harmlessness more. Additionally, LLMs of similar
sizes tend to exhibit similar preferences, regardless of their training
methods, and fine-tuning for alignment does not significantly alter the
preferences of pretrained-only LLMs. Finally, we show that preference-based
evaluation can be intentionally manipulated. In both training-free and
training-based settings, aligning a model with the preferences of judges boosts
scores, while injecting the least preferred properties lowers them. This
results in notable score shifts: up to 0.59 on MT-Bench (1-10 scale) and 31.94
on AlpacaEval 2.0 (0-100 scale), highlighting the significant impact of this
strategic adaptation. Interactive Demo:
https://huggingface.co/spaces/GAIR/Preference-Dissection-Visualization Dataset:
https://huggingface.co/datasets/GAIR/preference-dissection Code:
https://github.com/GAIR-NLP/Preference-Dissection
| 2,024 | Computation and Language |
MMMModal -- Multi-Images Multi-Audio Multi-turn Multi-Modal | Our contribution introduces a groundbreaking multimodal large language model
designed to comprehend multi-images, multi-audio, and multi-images-multi-audio
within a single multiturn session. Leveraging state-of-the-art models, we
utilize the SigLIP encoder for visual inputs and the Whisper Encoder for audio
inputs. Notably, this multimodal large language model is bilingual, proficient
in understanding both English and Malay simultaneously. We proudly unveil two
versions of this model: TinyLlama with 1.1B parameters, and Mistral with 7B
parameters. With its ability to navigate diverse modalities and languages, our
model represents a significant advancement for the Malaysian context and
beyond.
All models released at
https://huggingface.co/collections/mesolitica/multimodal-malaysian-llm-65c6f893e03f78fa9e5c8859
| 2,024 | Computation and Language |
EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries | The dynamic nature of real-world information necessitates efficient knowledge
editing (KE) in large language models (LLMs) for knowledge updating. However,
current KE approaches, which typically operate on (subject, relation, object)
triples, ignore the contextual information and the relation among different
knowledge. Such editing methods could thus encounter an uncertain editing
boundary, leaving a lot of relevant knowledge in ambiguity: Queries that could
be answered pre-edit cannot be reliably answered afterward. In this work, we
analyze this issue by introducing a theoretical framework for KE that
highlights an overlooked set of knowledge that remains unchanged and aids in
knowledge deduction during editing, which we name the deduction anchor. We
further address this issue by proposing a novel task of event-based knowledge
editing that pairs facts with event descriptions. This task manifests not only
a closer simulation of real-world editing scenarios but also a more logically
sound setting, implicitly defining the deduction anchor to address the issue of
indeterminate editing boundaries. We empirically demonstrate the superiority of
event-based editing over the existing setting in resolving uncertainty in
edited models, and curate a new benchmark dataset EvEdit derived from the
CounterFact dataset. Moreover, while we observe that the event-based setting is
significantly challenging for existing approaches, we propose a novel approach,
Self-Edit, that showcases stronger performance, achieving a 55.6% consistency
improvement while maintaining the naturalness of generation.
| 2,024 | Computation and Language |
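As an illustrative aside to the abstract above, the sketch below contrasts a conventional triple-based edit with an event-based edit that carries contextual information and an explicit deduction anchor. Field names and values are hypothetical, not the EvEdit schema.

```python
# Hypothetical data shapes (not the EvEdit format) contrasting the two
# editing settings discussed above.
triple_edit = {
    "subject": "Country X",
    "relation": "head_of_state",
    "object": "Person B",          # replaces the pre-edit object "Person A"
}

event_edit = {
    # The event description supplies context for the fact being updated.
    "event": "Person B won the presidential election in Country X in 2024 "
             "and was inaugurated the following January.",
    "fact": ("Country X", "head_of_state", "Person B"),
    # Knowledge that should remain answerable after the edit; the paper calls
    # such unchanged, deduction-supporting knowledge the deduction anchor.
    "unchanged": [("Person A", "place_of_birth", "City Y")],
}
print(event_edit["event"])
```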
PhaseEvo: Towards Unified In-Context Prompt Optimization for Large
Language Models | Crafting an ideal prompt for Large Language Models (LLMs) is a challenging
task that demands significant resources and expert human input. Existing work
treats the optimization of prompt instruction and in-context learning examples
as distinct problems, leading to sub-optimal prompt performance. This research
addresses this limitation by establishing a unified in-context prompt
optimization framework, which aims to achieve joint optimization of the prompt
instruction and examples. However, formulating such optimization in the
discrete and high-dimensional natural language space introduces challenges in
terms of convergence and computational efficiency. To overcome these issues, we
present PhaseEvo, an efficient automatic prompt optimization framework that
combines the generative capability of LLMs with the global search proficiency
of evolutionary algorithms. Our framework features a multi-phase design
incorporating innovative LLM-based mutation operators to enhance search
efficiency and accelerate convergence. We conduct an extensive evaluation of
our approach across 35 benchmark tasks. The results demonstrate that PhaseEvo
significantly outperforms the state-of-the-art baseline methods by a large
margin whilst maintaining good efficiency.
| 2,024 | Computation and Language |
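To make the idea in the abstract above concrete, here is a minimal evolutionary-search sketch that uses an LLM call as a mutation operator and score-based selection over prompt candidates. The `call_llm` and `evaluate` functions are placeholder assumptions, and this is not the PhaseEvo algorithm itself, which uses multiple phases and several specialized mutation operators.

```python
import random

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would query an LLM API.
    return prompt + " (revised)"

def evaluate(candidate: str) -> float:
    # Placeholder fitness: in practice, accuracy of the candidate prompt
    # (instruction plus in-context examples) on a small dev set.
    return random.random()

def mutate(candidate: str) -> str:
    # LLM-based mutation: ask the model to rewrite the whole prompt.
    return call_llm(
        "Rewrite the following prompt (instruction and examples) to make it "
        "clearer and more effective:\n" + candidate
    )

def evolve(seed_prompts, generations=5, population=8, keep=4):
    pool = list(seed_prompts)
    for _ in range(generations):
        # Expand the pool with LLM-generated mutations of current candidates.
        pool += [mutate(random.choice(pool)) for _ in range(population - len(pool))]
        # Keep the best-scoring candidates for the next generation.
        pool = sorted(pool, key=evaluate, reverse=True)[:keep]
    return pool[0]

best = evolve(["Classify the sentiment of the sentence. Example: ..."])
print(best)
```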
Tasks That Language Models Don't Learn | We argue that there are certain properties of language that our current large
language models (LLMs) don't learn. We present an empirical investigation of
visual-auditory properties of language through a series of tasks, termed
H-TEST. This benchmark highlights a fundamental gap between human linguistic
comprehension, which naturally integrates sensory experiences, and the
sensory-deprived processing capabilities of LLMs. In support of our hypothesis,
we find that (1) deliberate reasoning (Chain-of-Thought), (2) few-shot examples,
and (3) a stronger LLM from the same model family (LLaMA 2 13B -> LLaMA 2 70B) do
not trivially improve H-TEST performance. We therefore draw a
particular connection to the philosophical case of Mary, who learns about the
world in a sensory-deprived environment (Jackson, 1986). Our experiments show
that some of the strongest proprietary LLMs remain near the random-chance baseline
accuracy of 50%, highlighting the limitations of knowledge acquired in the
absence of sensory experience.
| 2,024 | Computation and Language |
What Changed? Converting Representational Interventions to Natural
Language | Interventions targeting the representation space of language models (LMs)
have emerged as effective means to influence model behavior. These methods are
employed, for example, to eliminate or alter the encoding of demographic
information such as gender within the model's representations, creating a
counterfactual representation. However, since the intervention operates within
the representation space, understanding precisely which features it modifies
poses a challenge. We show that representation-space counterfactuals can be
converted into natural language counterfactuals. We demonstrate that this
approach enables us to analyze the linguistic alterations corresponding to a
given representation-space intervention and to interpret the features utilized
for encoding a specific concept. Moreover, the resulting counterfactuals can be
used to mitigate bias in classification.
| 2,024 | Computation and Language |
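As a small illustration of the representation-space interventions described above, the sketch below projects a single "concept direction" (e.g., a gender direction) out of a hidden vector. It shows only the intervention step; converting the resulting counterfactual representation back into natural language is the paper's contribution and is not reproduced here. All vectors are random placeholders.

```python
import numpy as np

def erase_direction(h: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of h along `direction` (unit-normalized)."""
    d = direction / np.linalg.norm(direction)
    return h - np.dot(h, d) * d

h = np.random.randn(768)            # a hypothetical hidden representation
gender_dir = np.random.randn(768)   # a hypothetical learned concept direction
h_cf = erase_direction(h, gender_dir)
# The counterfactual representation has (numerically) no component left along
# the erased direction.
print(np.dot(h_cf, gender_dir / np.linalg.norm(gender_dir)))  # ~0.0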
Reasoning before Comparison: LLM-Enhanced Semantic Similarity Metrics
for Domain Specialized Text Analysis | In this study, we leverage LLMs to enhance semantic analysis and develop
similarity metrics for texts, addressing the limitations of traditional
unsupervised NLP metrics like ROUGE and BLEU. We develop a framework where LLMs
such as GPT-4 are employed for zero-shot text identification and label
generation for radiology reports; the generated labels are then used as
measurements of text similarity. By testing the proposed framework on the
MIMIC data, we find that GPT-4 generated labels can significantly improve the
semantic similarity assessment, with scores more closely aligned with clinical
ground truth than traditional NLP metrics. Our work demonstrates the
possibility of conducting semantic analysis of text data using
semi-quantitative reasoning results from LLMs for highly specialized domains.
While the framework is implemented for radiology report similarity analysis,
its concept can be extended to other specialized domains as well.
| 2,024 | Computation and Language |
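A minimal sketch of the label-based similarity idea in the abstract above: an LLM (stubbed out here with a keyword matcher) extracts finding labels from each report, and similarity is computed over the label sets. The prompts, label schema, and scoring of the actual framework are more involved; the label set and Jaccard overlap below are illustrative assumptions.

```python
def extract_labels(report: str) -> set:
    # Placeholder for a zero-shot LLM call that would return finding labels,
    # e.g. {"pleural effusion", "cardiomegaly"}.
    keywords = ["pleural effusion", "cardiomegaly", "pneumothorax"]
    return {k for k in keywords if k in report.lower()}

def label_similarity(report_a: str, report_b: str) -> float:
    a, b = extract_labels(report_a), extract_labels(report_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)  # Jaccard overlap of the label sets

print(label_similarity(
    "Mild cardiomegaly with small pleural effusion.",
    "Stable cardiomegaly. No pneumothorax.",
))
```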
k-SemStamp: A Clustering-Based Semantic Watermark for Detection of
Machine-Generated Text | Recent watermarked generation algorithms inject detectable signatures during
language generation to facilitate post-hoc detection. While token-level
watermarks are vulnerable to paraphrase attacks, SemStamp (Hou et al., 2023)
applies a watermark to the semantic representation of sentences and demonstrates
promising robustness. SemStamp employs locality-sensitive hashing (LSH) to
partition the semantic space with arbitrary hyperplanes, which results in a
suboptimal tradeoff between robustness and speed. We propose k-SemStamp, a
simple yet effective enhancement of SemStamp, utilizing k-means clustering as
an alternative to LSH to partition the embedding space with awareness of
inherent semantic structure. Experimental results indicate that k-SemStamp
markedly improves robustness and sampling efficiency while preserving the
generation quality, advancing a more effective tool for machine-generated text
detection.
| 2,024 | Computation and Language |
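The sketch below illustrates the cluster-based semantic watermark idea described above in a highly simplified form: sentence embeddings are assigned to k-means clusters, the previous sentence's cluster seeds a pseudo-random set of "valid" clusters, and detection counts how often consecutive sentences land in valid clusters (generation would rejection-sample until a sentence's cluster is valid, which is not shown). The embedder is a deterministic random stub, and the margin constraint and fine-tuned encoder of the real method are omitted.

```python
import hashlib
import random
import numpy as np
from sklearn.cluster import KMeans

K, GAMMA = 8, 0.5  # number of clusters, fraction of clusters that are "valid"

def embed(sentence: str) -> np.ndarray:
    # Placeholder sentence embedding (deterministic random vector).
    seed = int(hashlib.md5(sentence.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).normal(size=32)

# Fit k-means on embeddings of a small stand-in corpus.
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(
    np.stack([embed(f"corpus sentence {i}") for i in range(200)])
)

def valid_clusters(prev_cluster: int) -> set:
    rng = random.Random(prev_cluster)          # seeded by previous cluster id
    return set(rng.sample(range(K), int(GAMMA * K)))

def cluster_of(sentence: str) -> int:
    return int(kmeans.predict(embed(sentence)[None, :])[0])

def detect(sentences: list) -> float:
    # Fraction of sentences falling in the valid clusters; ~GAMMA for
    # unwatermarked text, higher for watermarked text.
    hits, prev = 0, cluster_of(sentences[0])
    for s in sentences[1:]:
        c = cluster_of(s)
        hits += c in valid_clusters(prev)
        prev = c
    return hits / max(len(sentences) - 1, 1)

print(detect([f"example sentence {i}" for i in range(20)]))
```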
Don't Go To Extremes: Revealing the Excessive Sensitivity and
Calibration Limitations of LLMs in Implicit Hate Speech Detection | The fairness and trustworthiness of Large Language Models (LLMs) are
receiving increasing attention. Implicit hate speech, which employs indirect
language to convey hateful intentions, accounts for a significant portion of
hate speech in practice. However, the extent to which LLMs effectively address this issue
remains insufficiently examined. This paper delves into the capability of LLMs
to detect implicit hate speech (Classification Task) and express confidence in
their responses (Calibration Task). Our evaluation meticulously considers
various prompt patterns and mainstream uncertainty estimation methods. Our
findings highlight that LLMs exhibit two extremes: (1) LLMs display excessive
sensitivity towards groups or topics that may cause fairness issues, resulting
in misclassifying benign statements as hate speech. (2) LLMs' confidence scores
for each method are excessively concentrated within a fixed range, remaining unchanged
regardless of the dataset's complexity. Consequently, the calibration
performance is heavily reliant on primary classification accuracy. These
discoveries unveil new limitations of LLMs, underscoring the need for caution
when optimizing models to ensure they do not veer towards extremes. This serves
as a reminder to carefully consider sensitivity and confidence in the pursuit
of model fairness.
| 2,024 | Computation and Language |
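As a concrete companion to the calibration discussion above, the sketch below computes Expected Calibration Error (ECE), one standard way to quantify the gap between confidence and accuracy: confidences are binned and compared with the accuracy inside each bin. The paper evaluates several uncertainty estimation methods; this shows only the metric itself, on made-up numbers.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in the bin
    return ece

# Confidence scores squeezed into a narrow range (as observed for LLMs above)
# can still yield a large ECE when accuracy varies across examples.
print(expected_calibration_error([0.90, 0.92, 0.91, 0.90], [1, 0, 1, 0]))
```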
Multi-dimensional Evaluation of Empathetic Dialog Responses | Empathy is a critical element of effective and satisfactory conversational
communication, yet previous studies in measuring conversational empathy mostly
focus on expressed communicative intents -- in which way empathy is expressed,
ignoring the fact that conversation is also a collaborative practice involving
both speakers and listeners. In contrast, we propose a multi-dimensional
empathy evaluation framework that extends existing work to measure both
expressed intents from the speaker's perspective and perceived empathy from the
listener's perspective. Applying the proposed framework to analyze our
internal customer-service dialogues shows that the two dimensions (expressed
intent types and perceived empathy) are interconnected, and that perceived
empathy correlates strongly with the satisfaction level of dialogue sessions.
This proposed framework still requires subjective assessments from trained
annotators, which can be non-trivial to collect. To scale up evaluation without
excessive reliance on carefully annotated data, we explore different modeling
options to automatically measure conversational empathy with (1) prompting
frozen large language models (LLMs) and (2) training language model-based
classifiers. Extensive experiments on both internal and external dialogue
datasets show that measuring conversational empathy remains a challenging task
for prompting frozen LLMs, reflected in the unsatisfactory performance of GPT-4
and Flan family models. On the other hand, our proposed instruction-finetuned
classifiers based on sequence-to-sequence (Seq2Seq) language models are able to
achieve the best performance compared with prior work and competitive baselines.
Finally, we perform comprehensive ablation studies on the performance of
proposed instruction-finetuned classifiers and give recommendations on
potentially adopting them as automatic conversational empathy evaluation
metrics.
| 2,024 | Computation and Language |
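The sketch below shows the general shape of casting empathy labeling as a sequence-to-sequence task with an instruction-style input, in the spirit of the instruction-finetuned Seq2Seq classifiers described above. The base checkpoint, prompt wording, and label scale here are illustrative assumptions; the authors' fine-tuned models and label taxonomy are not reproduced.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# An off-the-shelf instruction-tuned Seq2Seq model stands in for the
# authors' fine-tuned classifier.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = (
    "Rate the perceived empathy of the agent's reply on a scale of 1 to 5.\n"
    "Customer: My package never arrived and I'm really frustrated.\n"
    "Agent: I'm sorry to hear that, I can imagine how annoying this is. "
    "Let me track it for you right away.\n"
    "Empathy score:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```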
Fine-grained and Explainable Factuality Evaluation for Multimodal
Summarization | Multimodal summarization aims to generate a concise summary based on the
input text and image. However, existing methods are prone to producing
non-factual output. To evaluate the factuality of multimodal summarization
models, we propose two fine-grained and explainable evaluation frameworks
(FALLACIOUS) for different application scenarios, i.e., a reference-based
factuality evaluation framework and a reference-free factuality evaluation
framework. Notably, the reference-free framework does not require ground truth
and hence has broader applicability. To evaluate
the effectiveness of the proposed frameworks, we compute the correlation
between our frameworks and the other metrics. The experimental results show the
effectiveness of our proposed method. We will release our code and dataset via
GitHub.
| 2,024 | Computation and Language |
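A small sketch of the kind of meta-evaluation mentioned above: computing the correlation between an automatic factuality metric's scores and reference judgments (e.g., human annotations or another metric). The numbers below are made up for illustration.

```python
from scipy.stats import spearmanr, pearsonr

metric_scores = [0.9, 0.4, 0.7, 0.2, 0.8]   # scores from a factuality framework
human_scores  = [1.0, 0.5, 0.8, 0.1, 0.7]   # reference factuality judgments

rho, _ = spearmanr(metric_scores, human_scores)   # rank correlation
r, _ = pearsonr(metric_scores, human_scores)      # linear correlation
print("Spearman:", rho, "Pearson:", r)
```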
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for
Ultra-Low-Parameter Fine-Tuning of Large Language Models | Various parameter-efficient fine-tuning (PEFT) techniques have been proposed
to enable computationally efficient fine-tuning while maintaining model
performance. However, existing PEFT methods are still limited by the growing
number of trainable parameters with the rapid deployment of Large Language
Models (LLMs). To address this challenge, we present LoRETTA, an
ultra-parameter-efficient framework that significantly reduces trainable
parameters through tensor-train decomposition. Specifically, we propose two
methods, named {LoRETTA}$_{adp}$ and {LoRETTA}$_{rep}$. The former employs
tensorized adapters, offering a high-performance yet lightweight approach for
the fine-tuning of LLMs. The latter emphasizes fine-tuning via weight
parameterization with a set of small tensor factors. LoRETTA achieves
comparable or better performance than most widely used PEFT methods with up to
$100\times$ fewer parameters on the LLaMA-2-7B models. Furthermore, empirical
results demonstrate that the proposed method effectively improves training
efficiency, achieves better multi-task learning performance, and improves
resistance to overfitting. Plug-and-play code built upon the Huggingface
framework and PEFT library will be released.
| 2,024 | Computation and Language |
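To illustrate the parameter-count intuition behind the tensor-train (TT) factorization mentioned above, the toy sketch below represents a large weight update as the contraction of two small TT cores. The shapes and rank are arbitrary illustrative choices, not LoRETTA's actual configuration, and only the reconstruction step is shown.

```python
import numpy as np

m1, m2, n1, n2, r = 32, 32, 32, 32, 4       # factorize a 1024 x 1024 update
core1 = np.random.randn(m1, n1, r) * 0.01   # core of shape (m1, n1, rank)
core2 = np.random.randn(r, m2, n2) * 0.01   # core of shape (rank, m2, n2)

# Contract the cores back into a full (m1*m2) x (n1*n2) matrix.
delta_w = np.einsum("acr,rbd->abcd", core1, core2).reshape(m1 * m2, n1 * n2)

full_params = (m1 * m2) * (n1 * n2)
tt_params = core1.size + core2.size
print(delta_w.shape, f"{full_params} params -> {tt_params} params")
# During fine-tuning only the small cores would be trained, and the
# reconstructed delta_w added to the frozen pretrained weight.
```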
Rethinking the Roles of Large Language Models in Chinese Grammatical
Error Correction | Recently, Large Language Models (LLMs) have been widely studied by
researchers for their roles in various downstream NLP tasks. As a fundamental
task in the NLP field, Chinese Grammatical Error Correction (CGEC) aims to
correct all potential grammatical errors in the input sentences. Previous
studies have shown that LLMs' performance as correctors on CGEC remains
unsatisfactory due to the challenging nature of the task. To promote the CGEC field to
better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task
so that they can be better utilized and explored in CGEC. Considering the rich
grammatical knowledge stored in LLMs and their powerful semantic understanding
capabilities, we utilize LLMs as explainers to provide explanation information
for the CGEC small models during error correction to enhance performance. We
also use LLMs as evaluators to bring more reasonable CGEC evaluations, thus
alleviating the troubles caused by the subjectivity of the CGEC task. In
particular, our work is also an active exploration of how LLMs and small models
better collaborate in downstream tasks. Extensive experiments and detailed
analyses on widely used datasets verify the effectiveness of our intuitions
and the proposed methods.
| 2,024 | Computation and Language |
Mitigating Catastrophic Forgetting in Multi-domain Chinese Spelling
Correction by Multi-stage Knowledge Transfer Framework | Chinese Spelling Correction (CSC) aims to detect and correct spelling errors
in given sentences. Recently, multi-domain CSC has gradually attracted the
attention of researchers because it is more practical. In this paper, we
focus on the key flaw of the CSC model when adapting to multi-domain scenarios:
the tendency to forget previously acquired knowledge upon learning new
domain-specific knowledge (i.e., catastrophic forgetting). To address this, we
propose a novel model-agnostic Multi-stage Knowledge Transfer (MKT) framework,
which utilizes a continuously evolving teacher model for knowledge transfer in
each domain, rather than focusing solely on new domain knowledge. Notably, we
are the first to apply continual learning methods to the multi-domain CSC task.
Experiments demonstrate the effectiveness of our proposed
method, and further analyses demonstrate the importance of overcoming
catastrophic forgetting for improving the model performance.
| 2,024 | Computation and Language |
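The sketch below shows a generic student/teacher form of the knowledge-transfer idea described above: when training on a new domain, a task loss is combined with a distillation term toward a teacher carried over from earlier domains so previously acquired behavior is preserved. The weighting, temperature, and the exact MKT objective are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def transfer_loss(student_logits, teacher_logits, labels, alpha=0.5, temp=2.0):
    # Standard cross-entropy on the new-domain labels.
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence pulling the student toward the (frozen) teacher,
    # which encodes knowledge from previously seen domains.
    kl = F.kl_div(
        F.log_softmax(student_logits / temp, dim=-1),
        F.softmax(teacher_logits / temp, dim=-1),
        reduction="batchmean",
    ) * (temp ** 2)
    return (1 - alpha) * ce + alpha * kl

logits_s = torch.randn(4, 5000)   # student logits over a toy vocabulary
logits_t = torch.randn(4, 5000)   # teacher logits from the previous stage
labels = torch.randint(0, 5000, (4,))
print(transfer_loss(logits_s, logits_t, labels).item())
```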
EventRL: Enhancing Event Extraction with Outcome Supervision for Large
Language Models | In this study, we present EventRL, a reinforcement learning approach
developed to enhance event extraction for large language models (LLMs). EventRL
utilizes outcome supervision with specific reward functions to tackle prevalent
challenges in LLMs, such as instruction following and hallucination, manifested
as mismatches in event structure and the generation of undefined event types.
We evaluate EventRL against existing methods like Few-Shot Prompting (FSP)
(based on GPT-4) and Supervised Fine-Tuning (SFT) across various LLMs, including
GPT-4, LLaMa, and CodeLLaMa models. Our findings show that EventRL
significantly outperforms these conventional approaches by improving the
performance in identifying and structuring events, particularly in handling
novel event types. The study emphasizes the critical role of reward function
selection and demonstrates the benefits of incorporating code data for better
event extraction. While increasing model size leads to higher accuracy,
maintaining the ability to generalize is essential to avoid overfitting.
| 2,024 | Computation and Language |
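To make the notion of outcome supervision above more tangible, here is an illustrative reward function for event extraction that scores predictions against gold events, penalizing event types outside the schema (hallucinated types) and malformed structures. The field names, schema, and exact penalties are hypothetical; EventRL's actual reward functions differ in detail.

```python
SCHEMA = {"Attack", "Transport", "Meet"}  # hypothetical event-type ontology

def event_reward(predicted: list, gold: list) -> float:
    reward = 0.0
    gold_set = {(e["type"], e["trigger"]) for e in gold}
    for event in predicted:
        if not isinstance(event, dict) or "type" not in event or "trigger" not in event:
            reward -= 1.0                      # structure mismatch
        elif event["type"] not in SCHEMA:
            reward -= 1.0                      # undefined (hallucinated) event type
        elif (event["type"], event["trigger"]) in gold_set:
            reward += 1.0                      # correctly identified event
    # Small penalty for gold events the prediction failed to cover.
    return reward - 0.5 * max(len(gold_set) - len(predicted), 0)

pred = [{"type": "Attack", "trigger": "bombed"},
        {"type": "Celebrate", "trigger": "party"}]
gold = [{"type": "Attack", "trigger": "bombed"}]
print(event_reward(pred, gold))  # +1.0 for the hit, -1.0 for the undefined type
```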