Titles | Abstracts | Years | Categories |
---|---|---|---|
EmoBench: Evaluating the Emotional Intelligence of Large Language Models | Recent advances in Large Language Models (LLMs) have highlighted the need for
robust, comprehensive, and challenging benchmarks. Yet, research on evaluating
their Emotional Intelligence (EI) is considerably limited. Existing benchmarks
have two major shortcomings: first, they mainly focus on emotion recognition,
neglecting essential EI capabilities such as emotion regulation and thought
facilitation through emotion understanding; second, they are primarily
constructed from existing datasets, which include frequent patterns, explicit
information, and annotation errors, leading to unreliable evaluation. We
propose EmoBench, a benchmark that draws upon established psychological
theories and proposes a comprehensive definition for machine EI, including
Emotional Understanding and Emotional Application. EmoBench includes a set of
400 hand-crafted questions in English and Chinese, which are meticulously
designed to require thorough reasoning and understanding. Our findings reveal a
considerable gap between the EI of existing LLMs and the average human,
highlighting a promising direction for future research. Our code and data will
be publicly available from https://github.com/Sahandfer/EmoBench.
| 2024 | Computation and Language |
Can LLMs Compute with Reasons? | Large language models (LLMs) often struggle with complex mathematical tasks,
prone to "hallucinating" incorrect answers due to their reliance on statistical
patterns. This limitation is further amplified in average Small Language Models (SLMs) with
limited context and training data. To address this challenge, we propose an
"Inductive Learning" approach utilizing a distributed network of SLMs. This
network leverages error-based learning and hint incorporation to refine the
reasoning capabilities of SLMs. Our goal is to provide a framework that
empowers SLMs to approach the level of logic-based applications achieved by
high-parameter models, potentially benefiting any language model. Ultimately,
this novel concept paves the way for bridging the logical gap between humans
and LLMs across various fields.
| 2024 | Computation and Language |
Do Large Language Models Understand Logic or Just Mimick Context? | Over the past few years, the abilities of large language models (LLMs) have
received extensive attention; these models have performed exceptionally well in
complicated scenarios such as logical reasoning and symbolic inference. A
significant factor contributing to this progress is the benefit of in-context
learning and few-shot prompting. However, the reasons behind the success of
such models using contextual reasoning have not been fully explored. Do LLMs
genuinely understand logical rules and use them to draw inferences, or do they ``guess'' the
answers by learning a type of probabilistic mapping through context? This paper
investigates the reasoning capabilities of LLMs on two logical reasoning
datasets by using counterfactual methods to replace context text and modify
logical concepts. Based on our analysis, it is found that LLMs do not truly
understand logical rules; rather, in-context learning has simply enhanced the
likelihood of these models arriving at the correct answers. If one alters
certain words in the context text or changes the concepts of logical terms, the
outputs of LLMs can be significantly disrupted, leading to counter-intuitive
responses. This work provides critical insights into the limitations of LLMs,
underscoring the need for more robust mechanisms to ensure reliable logical
reasoning in LLMs.
| 2024 | Computation and Language |
Groot: Adversarial Testing for Generative Text-to-Image Models with
Tree-based Semantic Transformation | With the prevalence of text-to-image generative models, their safety becomes
a critical concern. Adversarial testing techniques have been developed to probe
whether such models can be prompted to produce Not-Safe-For-Work (NSFW)
content. However, existing solutions face several challenges, including low
success rate and inefficiency. We introduce Groot, the first automated
framework leveraging tree-based semantic transformation for adversarial testing
of text-to-image models. Groot employs semantic decomposition and sensitive
element drowning strategies in conjunction with LLMs to systematically refine
adversarial prompts. Our comprehensive evaluation confirms the efficacy of
Groot, which not only exceeds the performance of current state-of-the-art
approaches but also achieves a remarkable success rate (93.66%) on leading
text-to-image models such as DALL-E 3 and Midjourney.
| 2024 | Computation and Language |
Is It a Free Lunch for Removing Outliers during Pretraining? | With the growing size of large language models, the role of quantization
becomes increasingly significant. However, outliers present in weights or
activations notably influence the performance of quantized models. Recently,
\citet{qtransformer} introduced a novel softmax function aimed at pretraining
models in an outlier-free manner, thereby enhancing their suitability for
quantization. Interestingly, we observed that such an approach leads to
performance degradation in full precision. Building on this insight, we enhance
the method by ensuring its normalization is invariant to sequence length, a
crucial factor for bridging the gap between pretraining and fine-tuning.
Moreover, this improved method also facilitates successful pretraining of
causal language models.
| 2024 | Computation and Language |
Evaluating Image Review Ability of Vision Language Models | Large-scale vision language models (LVLMs) are language models that are
capable of processing images and text inputs by a single model. This paper
explores the use of LVLMs to generate review texts for images. The ability of
LVLMs to review images is not fully understood, highlighting the need for a
methodical evaluation of their review abilities. Unlike image captions, review
texts can be written from various perspectives such as image composition and
exposure. This diversity of review perspectives makes it difficult to uniquely
determine a single correct review for an image. To address this challenge, we
introduce an evaluation method based on rank correlation analysis, in which
review texts are ranked by humans and LVLMs, and the correlation between
these rankings is then measured. We further validate this approach by creating a
benchmark dataset aimed at assessing the image review ability of recent LVLMs.
Our experiments with the dataset reveal that LVLMs, particularly those with
proven superiority in other evaluative contexts, excel at distinguishing
between high-quality and substandard image reviews.
| 2024 | Computation and Language |
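
As a rough illustration of the rank-correlation evaluation described in the abstract above, the sketch below computes Spearman's rank correlation between a human ranking and a model ranking of the same review texts. The rankings here are made-up placeholders, not data from the paper.

```python
from scipy.stats import spearmanr

# Hypothetical rankings of five review texts for one image (1 = best).
human_ranks = [1, 2, 3, 4, 5]
lvlm_ranks = [2, 1, 3, 5, 4]  # e.g., obtained by asking an LVLM to order the same reviews

# Spearman's rho measures how closely the model's ordering matches the human one.
rho, p_value = spearmanr(human_ranks, lvlm_ranks)
print(f"Spearman correlation: {rho:.3f} (p = {p_value:.3f})")
```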
Meta Ranking: Less Capable Language Models are Capable for Single
Response Judgement | Although Large Language Models (LLMs) have demonstrated strong performance on
a wide range of tasks, they still face reliability challenges such as
hallucination. Previous studies reveal that highly capable LLMs like GPT-4 are
effective in judging the reliability of individual responses, while less
capable ones are often tuned to evaluate the relative reliability of responses
to the same query. To enable less capable LLMs to effectively judge the
reliability of individual responses, we propose a novel method named
$\textit{Meta}$ $\textit{Ranking}$ (MR). Unlike previous methods, which assess
the response directly, we achieve the judgement by comparing the target
query-response pair with reference query-response pairs. We found its
remarkable effectiveness in error detection for LLM responses on reasoning
tasks, where less capable LLMs could outperform strong baselines, even without
fine-tuning. We further demonstrate that MR can be used to enhance the
performance of LLMs in two practical applications: query routing and iterative
training data filtering. The former achieves GPT-4-turbo comparable performance
with less than half the token consumption, while the latter makes the
instruction-tuned LLaMA-7B and Phi-2, a 2.7B model, significantly surpass
Alpaca-13B with fewer training samples, underscoring the high potential of our
proposed method.
| 2024 | Computation and Language |
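
A minimal sketch of the pairwise-comparison idea behind Meta Ranking, under heavy simplification: the target query-response pair is compared against labeled reference pairs, and the comparisons are aggregated into a reliability score. The function names, the toy comparator, and the voting rule are illustrative assumptions, not the paper's implementation (which would use an LLM as the comparator).

```python
from typing import Callable, List, Tuple

# A reference pair is (query, response, is_correct).
Reference = Tuple[str, str, bool]

def meta_rank(
    target_query: str,
    target_response: str,
    references: List[Reference],
    compare: Callable[[str, str, str, str], bool],
) -> float:
    """Score the target pair by comparing it against labeled reference pairs.

    `compare(q1, r1, q2, r2)` is a stand-in judgment that returns True if
    pair 1 looks at least as reliable as pair 2.
    """
    votes = 0
    for ref_query, ref_response, ref_correct in references:
        better = compare(target_query, target_response, ref_query, ref_response)
        # Assumed voting rule: beating a correct reference counts for the target,
        # losing to an incorrect reference counts against it.
        if better and ref_correct:
            votes += 1
        elif not better and not ref_correct:
            votes -= 1
    return votes / max(len(references), 1)

# Toy comparator: longer responses win (purely illustrative).
toy_compare = lambda q1, r1, q2, r2: len(r1) >= len(r2)
refs = [("2+2?", "4", True), ("Capital of France?", "Berlin", False)]
print(meta_rank("3+3?", "6", refs, toy_compare))
```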
End-to-end multilingual fact-checking at scale | In this article, we describe how you can perform end-to-end fact-checking in
over 100 languages using Factiverse AI models. We also show through an
experimental benchmark that fine-tuned models tailored for fact-checking tasks
outperform Large Language Models such as GPT-4, GPT-3.5-Turbo, and Mistral-7b.
| 2024 | Computation and Language |
Your Large Language Model is Secretly a Fairness Proponent and You
Should Prompt it Like One | The widespread adoption of large language models (LLMs) underscores the
urgent need to ensure their fairness. However, LLMs frequently present dominant
viewpoints while ignoring alternative perspectives from minority parties,
resulting in potential biases. We hypothesize that these fairness-violating
behaviors occur because LLMs express their viewpoints using a human personality
that represents the majority of training data. In response to this, we validate
that prompting LLMs with specific roles can allow LLMs to express diverse
viewpoints. Building on this insight and observation, we develop FairThinking,
a pipeline designed to automatically generate roles that enable LLMs to
articulate diverse perspectives for fair expressions. To evaluate FairThinking,
we create a dataset with a thousand items covering three fairness-related
topics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to
demonstrate its superior performance.
| 2024 | Computation and Language |
Transformer-based Causal Language Models Perform Clustering | Even though large language models (LLMs) have demonstrated remarkable
capability in solving various natural language tasks, the capability of an LLM
to follow human instructions is still a concern. Recent works have shown great
improvements in the instruction-following capability via additional training
for instruction-following tasks. However, the mechanisms responsible for
effective instruction-following capabilities remain inadequately understood.
Here, we introduce a simplified instruction-following task and use synthetic
datasets to analyze a Transformer-based causal language model. Our findings
suggest that the model learns task-specific information by clustering data
within its hidden space, with this clustering process evolving dynamically
during learning. We also demonstrate how this phenomenon assists the model in
handling unseen instances, and validate our results in a more realistic
setting. Furthermore, we present applications inspired by these findings for
pre-training and alignment.
| 2024 | Computation and Language |
Unsupervised LLM Adaptation for Question Answering | Large language models (LLMs) learn diverse knowledge present in their
large-scale training datasets via self-supervised training. Following
instruction tuning, LLMs acquire the ability to return correct information for
diverse questions. However, adapting these pre-trained LLMs to new target
domains, such as different organizations or periods, for the question-answering
(QA) task incurs a substantial annotation cost. To tackle this challenge, we
propose a novel task, unsupervised LLM adaptation for question answering. In
this task, we leverage a pre-trained LLM, a publicly available QA dataset
(source data), and unlabeled documents from the target domain. Our goal is to
learn an LLM that can answer questions about the target domain. We introduce one
synthetic and two real datasets to evaluate models fine-tuned on the source and
target data, and reveal intriguing insights: (i) fine-tuned models exhibit the
ability to provide correct answers for questions about the target domain even
though they do not see any questions about the information described in the
unlabeled documents, but (ii) they have difficulties in accessing information
located in the middle or at the end of documents, and (iii) this challenge can
be partially mitigated by replacing input tokens with random ones during
adaptation.
| 2024 | Computation and Language |
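
Point (iii) above, replacing input tokens with random ones during adaptation, can be sketched as a simple corruption step applied to each training sequence. This is an illustrative sketch only; the replacement probability and the handling of special tokens are assumptions, not details from the paper.

```python
import random
from typing import List

def randomly_replace_tokens(
    token_ids: List[int],
    vocab_size: int,
    replace_prob: float = 0.15,          # assumed rate, not from the paper
    special_ids: frozenset = frozenset(),
) -> List[int]:
    """Return a copy of token_ids in which each non-special token is replaced
    by a uniformly random vocabulary id with probability replace_prob."""
    out = []
    for tid in token_ids:
        if tid not in special_ids and random.random() < replace_prob:
            out.append(random.randrange(vocab_size))
        else:
            out.append(tid)
    return out

# Example: corrupt a toy sequence before feeding it to the adaptation objective.
print(randomly_replace_tokens([101, 2054, 2003, 1996, 102], vocab_size=30522,
                              special_ids=frozenset({101, 102})))
```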
BIDER: Bridging Knowledge Inconsistency for Efficient
Retrieval-Augmented LLMs via Key Supporting Evidence | Retrieval-augmented large language models (LLMs) have demonstrated efficacy
in knowledge-intensive tasks such as open-domain QA, addressing inherent
challenges in knowledge update and factual inadequacy. However, inconsistencies
between the retrieved knowledge and the knowledge LLMs actually need lead to a
decline in LLMs' answer quality. This paper introduces BIDER, an approach that
refines retrieval documents into Key Supporting Evidence (KSE) through
knowledge synthesis, supervised fine-tuning (SFT), and preference alignment. We
train BIDER by learning from crafting KSE, while maximizing its output to align
with LLM's information acquisition preferences through reinforcement learning.
Evaluations across five datasets show BIDER boosts LLMs' answer quality by 7%
while reducing input content length in retrieval documents by 80%,
outperforming existing methods. The proposed KSE simulation effectively equips
LLMs with essential information for accurate question answering.
| 2024 | Computation and Language |
Amplifying Training Data Exposure through Fine-Tuning with
Pseudo-Labeled Memberships | Neural language models (LMs) are vulnerable to training data extraction
attacks due to data memorization. This paper introduces a novel attack scenario
wherein an attacker adversarially fine-tunes pre-trained LMs to amplify the
exposure of the original training data. This strategy differs from prior
studies by aiming to intensify the LM's retention of its pre-training dataset.
To achieve this, the attacker needs to collect generated texts that are closely
aligned with the pre-training data. However, without knowledge of the actual
dataset, quantifying the amount of pre-training data within generated texts is
challenging. To address this, we propose the use of pseudo-labels for these
generated texts, leveraging membership approximations indicated by
machine-generated probabilities from the target LM. We subsequently fine-tune
the LM to favor generations with higher likelihoods of originating from the
pre-training data, based on their membership probabilities. Our empirical
findings indicate a remarkable outcome: LMs with over 1B parameters exhibit a
four to eight-fold increase in training data exposure. We discuss potential
mitigations and suggest future research directions.
| 2024 | Computation and Language |
A Chinese Dataset for Evaluating the Safeguards in Large Language Models | Many studies have demonstrated that large language models (LLMs) can produce
harmful responses, exposing users to unexpected risks when LLMs are deployed.
Previous studies have proposed comprehensive taxonomies of the risks posed by
LLMs, as well as corresponding prompts that can be used to examine the safety
mechanisms of LLMs. However, the focus has been almost exclusively on English,
and little has been explored for other languages. Here we aim to bridge this
gap. We first introduce a dataset for the safety evaluation of Chinese LLMs,
and then extend it to two other scenarios that can be used to better identify
false negative and false positive examples in terms of risky prompt rejections.
We further present a set of fine-grained safety assessment criteria for each
risk type, facilitating both manual annotation and automatic evaluation in
terms of LLM response harmfulness. Our experiments on five LLMs show that
region-specific risks are the prevalent type of risk, presenting the major
issue with all Chinese LLMs we experimented with. Warning: this paper contains
example data that may be offensive, harmful, or biased.
| 2024 | Computation and Language |
Browse and Concentrate: Comprehending Multimodal Content via prior-LLM
Context Fusion | With the bloom of Large Language Models (LLMs), Multimodal Large Language
Models (MLLMs) that incorporate LLMs with pre-trained vision models have
recently demonstrated impressive performance across diverse vision-language
tasks. However, they fall short in comprehending context involving multiple
images. A primary reason for this shortcoming is that the visual features for
each image are encoded individually by frozen encoders before being fed into the
LLM backbone, lacking awareness of other images and the multimodal
instructions. We term this issue prior-LLM modality isolation and propose a
two-phase paradigm, browse-and-concentrate, to enable in-depth multimodal
context fusion prior to feeding the features into LLMs. This paradigm initially
"browses" through the inputs for essential insights, and then revisits the
inputs to "concentrate" on crucial details, guided by these insights, to
achieve a more comprehensive understanding of the multimodal inputs.
Additionally, we develop training strategies specifically to enhance the
understanding of multi-image inputs. Our method markedly boosts the performance
on 7 multi-image scenarios, improving average accuracy by
2.13% and 7.60% over strong MLLM baselines with 3B and 11B LLMs,
respectively.
| 2024 | Computation and Language |
Zero shot VLMs for hate meme detection: Are we there yet? | Multimedia content on social media is rapidly evolving, with memes gaining
prominence as a distinctive form. Unfortunately, some malicious users exploit
memes to target individuals or vulnerable communities, making it imperative to
identify and address such instances of hateful memes. Extensive research has
been conducted to address this issue by developing hate meme detection models.
However, a notable limitation of traditional machine/deep learning models is
the requirement for labeled datasets for accurate classification. Recently, the
research community has witnessed the emergence of several visual language
models that have exhibited outstanding performance across various tasks. In
this study, we aim to investigate the efficacy of these visual language models
in handling intricate tasks such as hate meme detection. We use various prompt
settings to focus on zero-shot classification of hateful/harmful memes. Through
our analysis, we observe that large VLMs remain unreliable for zero-shot
hate meme detection.
| 2024 | Computation and Language |
Enhancing Multilingual Capabilities of Large Language Models through
Self-Distillation from Resource-Rich Languages | While large language models (LLMs) have been pre-trained on multilingual
corpora, their performance still lags behind in most languages compared to a
few resource-rich languages. One common approach to mitigate this issue is to
translate training data from resource-rich languages into other languages and
then continue training. However, relying solely on translated data while
ignoring the original capabilities of LLMs across languages is not always
effective; we show that this limits the performance of
cross-lingual knowledge transfer. In this work, we propose SDRRL, a method
based on Self-Distillation from Resource-Rich Languages that effectively
improves multilingual performance by leveraging the internal capabilities of
LLMs on resource-rich languages. We evaluate SDRRL on different LLMs (LLaMA-2 and
SeaLLM) and source languages across various comprehension and generation tasks.
Experimental results demonstrate that SDRRL can significantly enhance
multilingual capabilities while minimizing the impact on original performance
in resource-rich languages.
| 2024 | Computation and Language |
Polarization of Autonomous Generative AI Agents Under Echo Chambers | Online social networks often create echo chambers where people only hear
opinions reinforcing their beliefs. An echo chamber often generates
polarization, leading to conflicts caused by people with radical opinions, such
as the January 6, 2021, attack on the US Capitol. The echo chamber has been
viewed as a human-specific problem, but this implicit assumption is becoming
less reasonable as large language models, such as ChatGPT, acquire social
abilities. In response to this situation, we investigated the potential for
polarization to occur among a group of autonomous AI agents based on generative
language models in an echo chamber environment. We had AI agents discuss
specific topics and analyzed how the group's opinions changed as the discussion
progressed. As a result, we found that the group of agents based on ChatGPT
tended to become polarized in echo chamber environments. The analysis of
opinion transitions shows that this result is caused by ChatGPT's strong ability
to follow prompts and update its opinion by considering its own and
surrounding agents' opinions. We conducted additional experiments to
investigate under what specific conditions AI agents tended to polarize. As a
result, we identified factors that strongly influence polarization, such as the
agent's persona. These factors should be monitored to prevent the polarization
of AI agents.
| 2024 | Computation and Language |
Reformatted Alignment | The quality of finetuning data is crucial for aligning large language models
(LLMs) with human values. Current methods to improve data quality are either
labor-intensive or prone to factual errors caused by LLM hallucinations. This
paper explores elevating the quality of existing instruction data to better
align with human values, introducing a simple and effective approach named
ReAlign, which reformats the responses of instruction data into a format that
better aligns with pre-established criteria and the collated evidence. This
approach minimizes human annotation, hallucination, and the difficulty in
scaling, remaining orthogonal to existing alignment techniques. Experimentally,
ReAlign significantly boosts the general alignment ability, math reasoning,
factuality, and readability of the LLMs.
Encouragingly, without introducing any additional data or advanced training
techniques, and merely by reformatting the response, LLaMA-2-13B's mathematical
reasoning ability on GSM8K can be improved from 46.77% to 56.63% in accuracy.
Additionally, a mere 5% of ReAlign data yields a 67% boost in general alignment
ability measured by the Alpaca dataset. This work highlights the need for
further research into the science and mechanistic interpretability of LLMs. We
have made the associated code and data publicly accessible to support future
studies at https://github.com/GAIR-NLP/ReAlign.
| 2024 | Computation and Language |
AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling | We introduce AnyGPT, an any-to-any multimodal language model that utilizes
discrete representations for the unified processing of various modalities,
including speech, text, images, and music. AnyGPT can be trained stably without
any alterations to the current large language model (LLM) architecture or
training paradigms. Instead, it relies exclusively on data-level preprocessing,
facilitating the seamless integration of new modalities into LLMs, akin to the
incorporation of new languages. We build a multimodal text-centric dataset for
multimodal alignment pre-training. Utilizing generative models, we synthesize
the first large-scale any-to-any multimodal instruction dataset. It consists of
108k samples of multi-turn conversations that intricately interweave various
modalities, thus equipping the model to handle arbitrary combinations of
multimodal inputs and outputs. Experimental results demonstrate that AnyGPT is
capable of facilitating any-to-any multimodal conversation while achieving
performance comparable to specialized models across all modalities, proving
that discrete representations can effectively and conveniently unify multiple
modalities within a language model. Demos are shown in
https://junzhan2000.github.io/AnyGPT.github.io/
| 2024 | Computation and Language |
Empirical Study on Updating Key-Value Memories in Transformer
Feed-forward Layers | The feed-forward networks (FFNs) in transformers are recognized as a group of
key-value neural memories to restore abstract high-level knowledge. In this
work, we conduct an empirical ablation study on updating keys (the first layer
of the FFN) or values (the second layer of the FFN). We compare these
two methods in various knowledge editing and fine-tuning tasks of large
language models to draw insights to understand FFNs further. Code is available
at $\href{https://github.com/qiuzh20/Tuning-keys-v.s.-values}{this\,repo}$.
| 2024 | Computation and Language |
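
To make the keys-versus-values distinction concrete, the sketch below freezes all parameters except the first (key) or second (value) linear layer of every FFN block. It assumes GPT-2-style parameter names (`mlp.c_fc` for the first layer, `mlp.c_proj` for the second) and is only an illustration, not the code in the linked repository.

```python
from transformers import AutoModelForCausalLM

def tune_ffn_part(model, part: str = "keys"):
    """Freeze all parameters except the chosen half of every FFN.

    part="keys"   -> train only the first FFN linear layer (mlp.c_fc)
    part="values" -> train only the second FFN linear layer (mlp.c_proj)
    Assumes GPT-2-style module names.
    """
    target = "mlp.c_fc" if part == "keys" else "mlp.c_proj"
    for name, param in model.named_parameters():
        param.requires_grad = target in name
    return model

model = AutoModelForCausalLM.from_pretrained("gpt2")
model = tune_ffn_part(model, part="values")
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```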
Task-Oriented Dialogue with In-Context Learning | We describe a system for building task-oriented dialogue systems combining
the in-context learning abilities of large language models (LLMs) with the
deterministic execution of business logic. LLMs are used to translate between
the surface form of the conversation and a domain-specific language (DSL) which
is used to progress the business logic. We compare our approach to the
intent-based NLU approach predominantly used in industry today. Our experiments
show that developing chatbots with our system requires significantly less
effort than established approaches, that these chatbots can successfully
navigate complex dialogues which are extremely challenging for NLU-based
systems, and that our system has desirable properties for scaling task-oriented
dialogue systems to a large number of tasks. We make our implementation
available for use and further study.
| 2024 | Computation and Language |
Understanding the Effects of Noise in Text-to-SQL: An Examination of the
BIRD-Bench Benchmark | Text-to-SQL, which involves translating natural language into Structured
Query Language (SQL), is crucial for enabling broad access to structured
databases without expert knowledge. However, designing models for such tasks is
challenging due to numerous factors, including the presence of 'noise,' such as
ambiguous questions and syntactical errors. This study provides an in-depth
analysis of the distribution and types of noise in the widely used BIRD-Bench
benchmark and the impact of noise on models. While BIRD-Bench was created to
model dirty and noisy database values, it was not created to contain noise and
errors in the questions and gold queries. We found that noise in questions and
gold queries is prevalent in the dataset, with varying amounts across domains,
and with an uneven distribution between noise types. The presence of incorrect
gold SQL queries, which then generate incorrect gold answers, has a significant
impact on the benchmark's reliability. Surprisingly, when evaluating models on
corrected SQL queries, zero-shot baselines surpassed the performance of
state-of-the-art prompting methods. We conclude that informative noise labels
and reliable benchmarks are crucial to developing new Text-to-SQL methods that
can handle varying types of noise.
| 2024 | Computation and Language |
Analysis of Levenshtein Transformer's Decoder and Its Variants | Levenshtein transformer (LevT) is a non-autoregressive machine translation
model with high decoding efficiency and comparable translation quality in terms
of BLEU score, due to its parallel decoding and iterative refinement procedure.
Are there any deficiencies of its translations and what improvements could be
made? In this report, we focus on LevT's decoder and analyse the length of its
decoding results, its subword generation, and the deletion module's capability. We hope
to identify weaknesses of the decoder for future improvements.
We also compare translations of the original LevT, knowledge-distilled LevT,
LevT with translation memory, and the KD-LevT with translation memory to see
how KD and translation memory can help.
| 2024 | Computation and Language |
Shallow Synthesis of Knowledge in GPT-Generated Texts: A Case Study in
Automatic Related Work Composition | Numerous AI-assisted scholarly applications have been developed to aid
different stages of the research process. We present an analysis of AI-assisted
scholarly writing generated with ScholaCite, a tool we built that is designed
for organizing literature and composing Related Work sections for academic
papers. Our evaluation method focuses on the analysis of citation graphs to
assess the structural complexity and inter-connectedness of citations in texts
and involves a three-way comparison between (1) original human-written texts,
(2) purely GPT-generated texts, and (3) human-AI collaborative texts. We find
that GPT-4 can generate reasonable coarse-grained citation groupings to support
human users in brainstorming, but fails to perform detailed synthesis of
related works without human intervention. We suggest that future writing
assistant tools should not be used to draft text independently of the human
author.
| 2024 | Computation and Language |
NEO-BENCH: Evaluating Robustness of Large Language Models with
Neologisms | The performance of Large Language Models (LLMs) degrades due to the temporal
drift between data used for model training and newer text seen during
inference. One understudied avenue of language change causing data drift is the
emergence of neologisms -- new word forms -- over time. We create a diverse
resource of recent English neologisms by using several popular collection
methods. We analyze temporal drift using neologisms by comparing sentences
containing new words with near-identical sentences that replace neologisms with
existing substitute words. Model performance is nearly halved in machine
translation when a single neologism is introduced in a sentence. Motivated by
these results, we construct a benchmark to evaluate LLMs' ability to generalize
to neologisms with various natural language understanding tasks and model
perplexity. Models with later knowledge cutoff dates yield lower perplexities
and perform better in downstream tasks. LLMs are also affected differently
based on the linguistic origins of words, indicating that neologisms are
complex for static LLMs to address. We will release our benchmark and code for
reproducing our experiments.
| 2024 | Computation and Language |
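
A minimal sketch of the perplexity comparison described above: score a sentence containing a neologism against a near-identical sentence with an established substitute word. The model and the sentence pair are placeholders, not items from the benchmark.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (labels = inputs gives the LM loss)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Placeholder sentence pair: neologism vs. an established substitute.
with_neologism = "She spent the weekend doomscrolling on her phone."
with_substitute = "She spent the weekend reading bad news on her phone."
print(perplexity(with_neologism), perplexity(with_substitute))
```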
High-quality Data-to-Text Generation for Severely Under-Resourced
Languages with Out-of-the-box Large Language Models | The performance of NLP methods for severely under-resourced languages cannot
currently hope to match the state of the art in NLP methods for well resourced
languages. We explore the extent to which pretrained large language models
(LLMs) can bridge this gap, via the example of data-to-text generation for
Irish, Welsh, Breton and Maltese. We test LLMs on these under-resourced
languages and English, in a range of scenarios. We find that LLMs easily set
the state of the art for the under-resourced languages by substantial margins,
as measured by both automatic and human evaluations. For all our languages,
human evaluation shows on-a-par performance with humans for our best systems,
but BLEU scores collapse compared to English, casting doubt on the metric's
suitability for evaluating non-task-specific systems. Overall, our results
demonstrate the great potential of LLMs to bridge the performance gap for
under-resourced languages.
| 2024 | Computation and Language |
Key ingredients for effective zero-shot cross-lingual knowledge transfer
in generative tasks | Zero-shot cross-lingual generation implies finetuning a multilingual
pretrained language model on a generation task in one language and then using
it to make predictions for this task in other languages. Previous works note
a frequent problem of generation in the wrong language and propose approaches to
address it, usually using mT5 as a backbone model. In this work we compare
various approaches proposed in the literature in unified settings, also
including alternative backbone models, namely mBART and NLLB-200. We first
underline the importance of tuning the learning rate used for finetuning, which
helps to substantially alleviate the problem of generation in the wrong
language. Then, we show that with careful learning rate tuning, the simple full
finetuning of the model acts as a very strong baseline and alternative
approaches bring only marginal improvements. Finally, we find that mBART
performs similarly to mT5 of the same size, and NLLB-200 can be competitive in
some cases. Our final models reach the performance of the approach based on
data translation which is usually considered as an upper baseline for zero-shot
cross-lingual generation.
| 2024 | Computation and Language |
Adaptive Skeleton Graph Decoding | Large language models (LLMs) have seen significant adoption for natural
language tasks, owing their success to massive numbers of model parameters
(e.g., 70B+); however, LLM inference incurs significant computation and memory
costs. Recent approaches propose parallel decoding strategies, such as
Skeleton-of-Thought (SoT), to improve performance by breaking prompts down into
sub-problems that can be decoded in parallel; however, they often suffer from
reduced response quality. Our key insight is that we can request additional
information, specifically dependencies and difficulty, when generating the
sub-problems to improve both response quality and performance. In this paper,
we propose Skeleton Graph Decoding (SGD), which uses dependencies exposed
between sub-problems to support information forwarding between dependent
sub-problems for improved quality while exposing parallelization opportunities
for decoding independent sub-problems. Additionally, we leverage difficulty
estimates for each sub-problem to select an appropriately-sized model,
improving performance without significantly reducing quality. Compared to
standard autoregressive generation and SoT, SGD achieves a 1.69x speedup while
improving quality by up to 51%.
| 2024 | Computation and Language |
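
The parallelization opportunity described above reduces to scheduling sub-problems by their dependencies: all sub-problems whose prerequisites are complete can be decoded in the same batch. The sketch below shows only that scheduling step (topological layering); it is an illustration, not the SGD implementation, and the toy dependency graph is made up.

```python
from collections import defaultdict, deque
from typing import Dict, List

def dependency_batches(deps: Dict[str, List[str]]) -> List[List[str]]:
    """Group sub-problems into batches that can be decoded in parallel.

    deps maps each sub-problem to the sub-problems it depends on.
    Each returned batch only depends on earlier batches.
    """
    indegree = {node: len(parents) for node, parents in deps.items()}
    children = defaultdict(list)
    for node, parents in deps.items():
        for parent in parents:
            children[parent].append(node)

    ready = deque(n for n, d in indegree.items() if d == 0)
    batches = []
    while ready:
        batch = list(ready)
        ready.clear()
        batches.append(batch)
        for node in batch:
            for child in children[node]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
    return batches

# Toy skeleton: B and C only need A; D needs both B and C.
print(dependency_batches({"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}))
# -> [['A'], ['B', 'C'], ['D']]
```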
Ontology Enhanced Claim Detection | We propose an ontology enhanced model for sentence based claim detection. We
fused ontology embeddings from a knowledge base with BERT sentence embeddings
to perform claim detection for the ClaimBuster and the NewsClaims datasets. Our
ontology enhanced approach showed the best results with these small-sized
unbalanced datasets, compared to other statistical and neural machine learning
models. The experiments demonstrate that adding domain specific features
(either trained word embeddings or knowledge graph metadata) can improve
traditional ML methods. In addition, adding domain knowledge in the form of
ontology embeddings helps avoid the bias encountered in neural network based
models, for example the pure BERT model's bias towards larger classes in our
small corpus.
| 2024 | Computation and Language |
KARL: Knowledge-Aware Retrieval and Representations aid Retention and
Learning in Students | Flashcard schedulers are tools that rely on 1) student models to predict the
flashcards a student knows; and 2) teaching policies to schedule cards based on
these predictions. Existing student models, however, only use flashcard-level
features, like the student's past responses, ignoring the semantic ties of
flashcards. Deep Knowledge Tracing (DKT) models can capture semantic relations
with language models, but are inefficient, lack content-rich datasets for
evaluation, and require robust teaching policies. To address these issues, we
design KARL, a DKT-inspired student model that uses retrieval and BERT
embeddings for efficient and accurate student recall predictions. To test KARL,
we collect a new dataset of diverse study history on trivia questions. KARL
bests existing student models in AUC and calibration error. Finally, we propose
a novel teaching policy that exploits the predictive power of DKT models to
deploy KARL online. Based on 27 learners and 32 6-day study trajectories, KARL
shows the ability to enhance medium-term educational learning, proving its
efficacy for scheduling.
| 2024 | Computation and Language |
Is Open-Source There Yet? A Comparative Study on Commercial and
Open-Source LLMs in Their Ability to Label Chest X-Ray Reports | Introduction: With the rapid advances in large language models (LLMs), there
have been numerous new open source as well as commercial models. While recent
publications have explored GPT-4 in its application to extracting information
of interest from radiology reports, there has not been a real-world comparison
of GPT-4 to different leading open-source models.
Materials and Methods: Two different and independent datasets were used. The
first dataset consists of 540 chest x-ray reports that were created at the
Massachusetts General Hospital between July 2019 and July 2021. The second
dataset consists of 500 chest x-ray reports from the ImaGenome dataset. We then
compared the commercial models GPT-3.5 Turbo and GPT-4 from OpenAI to the
open-source models Mistral-7B, Mixtral-8x7B, Llama2-13B, Llama2-70B,
QWEN1.5-72B and CheXbert and CheXpert-labeler in their ability to accurately
label the presence of multiple findings in x-ray text reports using different
prompting techniques.
Results: On the ImaGenome dataset, the best performing open-source model was
Llama2-70B with micro F1-scores of 0.972 and 0.970 for zero- and few-shot
prompts, respectively. GPT-4 achieved micro F1-scores of 0.975 and 0.984,
respectively. On the institutional dataset, the best performing open-source
model was QWEN1.5-72B with micro F1-scores of 0.952 and 0.965 for zero- and
few-shot prompting, respectively. GPT-4 achieved micro F1-scores of 0.975 and
0.973, respectively.
Conclusion: In this paper, we show that while GPT-4 is superior to
open-source models in zero-shot report labeling, the implementation of few-shot
prompting can bring open-source models on par with GPT-4. This shows that
open-source models could be a performant and privacy preserving alternative to
GPT-4 for the task of radiology report classification.
| 2024 | Computation and Language |
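
The micro F1-scores quoted above pool true positives, false positives, and false negatives across all findings before computing the score. A minimal sketch with scikit-learn, using made-up multi-label predictions rather than the study's data:

```python
import numpy as np
from sklearn.metrics import f1_score

# Rows = reports, columns = findings (e.g., edema, effusion, pneumothorax).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 1, 1]])

# Micro-averaging pools all label decisions before computing F1,
# so frequent findings weigh more than rare ones.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
```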
TILP: Differentiable Learning of Temporal Logical Rules on Knowledge
Graphs | Compared with static knowledge graphs, temporal knowledge graphs (tKG), which
can capture the evolution and change of information over time, are more
realistic and general. However, due to the complexity that the notion of time
introduces to the learning of the rules, accurate graph reasoning, e.g.,
predicting new links between entities, is still a difficult problem. In this
paper, we propose TILP, a differentiable framework for temporal logical rules
learning. By designing a constrained random walk mechanism and introducing
temporal operators, we ensure the efficiency of our model. We model
temporal features in tKGs, e.g., recurrence, temporal order, the interval
between pairs of relations, and duration, and incorporate them into our learning
process. We compare TILP with state-of-the-art methods on two benchmark
datasets. We show that our proposed framework can improve upon the performance
of baseline methods while providing interpretable results. In particular, we
consider various scenarios in which training samples are limited, data is
biased, and the time range between training and inference is different. In all
these cases, TILP works much better than the state-of-the-art methods.
| 2024 | Computation and Language |
ARKS: Active Retrieval in Knowledge Soup for Code Generation | Recently, the retrieval-augmented generation (RAG) paradigm has attracted much
attention for its potential in incorporating external knowledge into large
language models (LLMs) without further training. While widely explored in
natural language applications, its utilization in code generation remains
under-explored. In this paper, we introduce Active Retrieval in Knowledge Soup
(ARKS), an advanced strategy for generalizing large language models for code.
In contrast to relying on a single source, we construct a knowledge soup
integrating web search, documentation, execution feedback, and evolved code
snippets. We employ an active retrieval strategy that iteratively refines the
query and updates the knowledge soup. To assess the performance of ARKS, we
compile a new benchmark comprising realistic coding problems associated with
frequently updated libraries and long-tail programming languages. Experimental
results on ChatGPT and CodeLlama demonstrate a substantial improvement in the
average execution accuracy of ARKS on LLMs. The analysis confirms the
effectiveness of our proposed knowledge soup and active retrieval strategies,
offering rich insights into the construction of effective retrieval-augmented
code generation (RACG) pipelines. Our model, code, and data are available at
https://arks-codegen.github.io.
| 2024 | Computation and Language |
LLM Agents for Psychology: A Study on Gamified Assessments | Psychological measurement is essential for mental health, self-understanding,
and personal development. Traditional methods, such as self-report scales and
psychologist interviews, often face challenges with engagement and
accessibility. While game-based and LLM-based tools have been explored to
improve user interest and automate assessment, they struggle to balance
engagement with generalizability. In this work, we propose PsychoGAT
(Psychological Game AgenTs) to achieve a generic gamification of psychological
assessment. The main insight is that powerful LLMs can function both as adept
psychologists and innovative game designers. By incorporating LLM agents into
designated roles and carefully managing their interactions, PsychoGAT can
transform any standardized scales into personalized and engaging interactive
fiction games. To validate the proposed method, we conduct psychometric
evaluations to assess its effectiveness and employ human evaluators to examine
the generated content across various psychological constructs, including
depression, cognitive distortions, and personality traits. Results demonstrate
that PsychoGAT serves as an effective assessment tool, achieving statistically
significant excellence in psychometric metrics such as reliability, convergent
validity, and discriminant validity. Moreover, human evaluations confirm
PsychoGAT's enhancements in content coherence, interactivity, interest,
immersion, and satisfaction.
| 2024 | Computation and Language |
Query-Based Adversarial Prompt Generation | Recent work has shown it is possible to construct adversarial examples that
cause an aligned language model to emit harmful strings or perform harmful
behavior. Existing attacks work either in the white-box setting (with full
access to the model weights), or through transferability: the phenomenon that
adversarial examples crafted on one model often remain effective on other
models. We improve on prior work with a query-based attack that leverages API
access to a remote language model to construct adversarial examples that cause
the model to emit harmful strings with (much) higher probability than with
transfer-only attacks. We validate our attack on GPT-3.5 and OpenAI's safety
classifier; we can cause GPT-3.5 to emit harmful strings that current transfer
attacks fail at, and we can evade the safety classifier with nearly 100%
probability.
| 2024 | Computation and Language |
Triple-Encoders: Representations That Fire Together, Wire Together | Search-based dialog models typically re-encode the dialog history at every
turn, incurring high cost. Curved Contrastive Learning, a representation
learning method that encodes relative distances between utterances into the
embedding space via a bi-encoder, has recently shown promising results for
dialog modeling at far superior efficiency. While high efficiency is achieved
through independently encoding utterances, this ignores the importance of
contextualization. To overcome this issue, this study introduces
triple-encoders, which efficiently compute distributed utterance mixtures from
these independently encoded utterances through a novel Hebbian-inspired
co-occurrence learning objective without using any weights. Empirically, we
find that triple-encoders lead to a substantial improvement over bi-encoders,
and even to better zero-shot generalization than single-vector representation
models without requiring re-encoding. Our code/model is publicly available.
| 2024 | Computation and Language |
Emulated Disalignment: Safety Alignment for Large Language Models May
Backfire! | Large language models (LLMs) need to undergo safety alignment to ensure safe
conversations with humans. However, in this work, we introduce an
inference-time attack framework, demonstrating that safety alignment can also
unintentionally facilitate harmful outcomes under adversarial manipulation.
This framework, named Emulated Disalignment (ED), adversely combines a pair of
open-source pre-trained and safety-aligned language models in the output space
to produce a harmful language model without additional training. Our
experiments with ED across three datasets and four model families (Llama-1,
Llama-2, Mistral, and Alpaca) show that ED doubles the harmfulness of
pre-trained models and outperforms strong baselines, achieving the highest
harmful rate in 43 out of 48 evaluation subsets by a large margin. Crucially,
our findings highlight the importance of reevaluating the practice of
open-sourcing language models even after safety alignment.
| 2024 | Computation and Language |
GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via
Game-Theoretic Evaluations | As Large Language Models (LLMs) are integrated into critical real-world
applications, their strategic and logical reasoning abilities are increasingly
crucial. This paper evaluates LLMs' reasoning abilities in competitive
environments through game-theoretic tasks, e.g., board and card games that
require pure logic and strategic reasoning to compete with opponents. We first
propose GTBench, a language-driven environment comprising 10 widely recognized
tasks, across a comprehensive game taxonomy: complete versus incomplete
information, dynamic versus static, and probabilistic versus deterministic
scenarios. Then, we investigate two key problems: (1) Characterizing
game-theoretic reasoning of LLMs; (2) LLM-vs-LLM competitions as reasoning
evaluation. We observe that (1) LLMs have distinct behaviors regarding various
gaming scenarios; for example, LLMs fail in complete and deterministic games
yet they are competitive in probabilistic gaming scenarios; (2) Open-source
LLMs, e.g., CodeLlama-34b-Instruct, are less competitive than commercial LLMs,
e.g., GPT-4, in complex games. In addition, code-pretraining greatly benefits
strategic reasoning, while advanced reasoning methods such as Chain-of-Thought
(CoT) and Tree-of-Thought (ToT) do not always help. Detailed error profiles are
also provided for a better understanding of LLMs' behavior.
| 2024 | Computation and Language |
Graph-Based Retriever Captures the Long Tail of Biomedical Knowledge | Large language models (LLMs) are transforming the way information is
retrieved with vast amounts of knowledge being summarized and presented via
natural language conversations. Yet, LLMs are prone to highlight the most
frequently seen pieces of information from the training set and to neglect the
rare ones. In the field of biomedical research, the latest discoveries are key to
academic and industrial actors, yet they are obscured by the abundance of an
ever-increasing literature corpus (the information overload problem). Surfacing
new associations between biomedical entities, e.g., drugs, genes, diseases,
with LLMs becomes a challenge of capturing the long-tail knowledge of the
biomedical scientific production. To overcome this challenge, Retrieval
Augmented Generation (RAG) has been proposed to alleviate some of the
shortcomings of LLMs by augmenting the prompts with context retrieved from
external datasets. RAG methods typically select the context via maximum
similarity search over text embeddings. In this study, we show that RAG methods
leave out a significant proportion of relevant information due to clusters of
over-represented concepts in the biomedical literature. We introduce a novel
information-retrieval method that leverages a knowledge graph to downsample
these clusters and mitigate the information overload problem. Its retrieval
performance is about twice as high as that of embedding-similarity alternatives in
both precision and recall. Finally, we demonstrate that both embedding
similarity and knowledge graph retrieval methods can be advantageously combined
into a hybrid model that outperforms both, enabling potential improvements to
biomedical question-answering models.
| 2024 | Computation and Language |
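
The downsampling idea, capping how many retrieved passages may come from any single over-represented concept before filling the context, can be sketched as a post-processing step on a similarity-ranked candidate list. In the paper the concept assignment comes from a knowledge graph; here it is just a field on each candidate, and the cap and example data are assumptions.

```python
from typing import Dict, List

def downsample_by_concept(
    ranked_candidates: List[Dict],   # sorted by similarity, most similar first
    per_concept_cap: int = 2,        # assumed cap, not from the paper
    top_k: int = 5,
) -> List[Dict]:
    """Keep at most `per_concept_cap` passages per concept while filling top_k."""
    kept, counts = [], {}
    for cand in ranked_candidates:
        concept = cand["concept"]    # e.g., a gene/disease node from a knowledge graph
        if counts.get(concept, 0) < per_concept_cap:
            kept.append(cand)
            counts[concept] = counts.get(concept, 0) + 1
        if len(kept) == top_k:
            break
    return kept

candidates = [
    {"text": "TP53 and cancer ...", "concept": "TP53"},
    {"text": "TP53 mutations ...", "concept": "TP53"},
    {"text": "TP53 review ...", "concept": "TP53"},
    {"text": "Rare gene X finding ...", "concept": "GENE_X"},
    {"text": "Rare gene Y finding ...", "concept": "GENE_Y"},
]
print([c["concept"] for c in downsample_by_concept(candidates)])
# -> ['TP53', 'TP53', 'GENE_X', 'GENE_Y']
```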
Emergent Word Order Universals from Cognitively-Motivated Language
Models | The world's languages exhibit certain so-called typological or implicational
universals; for example, Subject-Object-Verb (SOV) word order typically employs
postpositions. Explaining the source of such biases is a key goal in
linguistics. We study the word-order universals through a computational
simulation with language models (LMs). Our experiments show that typologically
typical word orders tend to have lower perplexity estimated by LMs with
cognitively plausible biases: syntactic biases, specific parsing strategies,
and memory limitations. This suggests that the interplay of these cognitive
biases and predictability (perplexity) can explain many aspects of word-order
universals. This also showcases the advantage of cognitively-motivated LMs,
which are typically employed in cognitive modeling, in the computational
simulation of language universals.
| 2024 | Computation and Language |
A synthetic data approach for domain generalization of NLI models | Natural Language Inference (NLI) remains an important benchmark task for
LLMs. NLI datasets are a springboard for transfer learning to other semantic
tasks, and NLI models are standard tools for identifying the faithfulness of
model-generated text. There are several large scale NLI datasets today, and
models have improved greatly by hill-climbing on these collections. Yet their
realistic performance on out-of-distribution/domain data is less
well-understood. We present an in-depth exploration of the problem of domain
generalization of NLI models. We demonstrate a new approach for generating
synthetic NLI data in diverse domains and lengths, so far not covered by
existing training sets. The resulting examples have meaningful premises, the
hypotheses are formed in creative ways rather than simple edits to a few
premise tokens, and the labels have high accuracy. We show that models trained
on this data ($685$K synthetic examples) have the best generalization to
completely new downstream test settings. On the TRUE benchmark, a T5-small
model trained with our data improves around $7\%$ on average compared to
training on the best alternative dataset. The improvements are more pronounced
for smaller models, while still meaningful on a T5 XXL model. We also
demonstrate gains on test sets when in-domain training data is augmented with
our domain-general synthetic data.
| 2024 | Computation and Language |
AnaloBench: Benchmarking the Identification of Abstract and Long-context
Analogies | Humans regularly engage in analogical thinking, relating personal experiences
to current situations ($X$ is analogous to $Y$ because of $Z$). Analogical
thinking allows humans to solve problems in creative ways, grasp difficult
concepts, and articulate ideas more effectively. Can language models (LMs) do
the same? To answer this question, we propose ANALOBENCH, a benchmark to
determine analogical reasoning ability in LMs. Our benchmarking approach
focuses on aspects of this ability that are common among humans: (i) recalling
related experiences from a large amount of information, and (ii) applying
analogical reasoning to complex and lengthy scenarios. We test a broad
collection of proprietary models (e.g., GPT family, Claude V2) and open source
models such as LLaMA2. As in prior results, scaling up LMs results in some
performance boosts. Surprisingly, scale offers minimal gains when (i)
analogies involve lengthy scenarios, or (ii) relevant scenarios must be recalled from
a large pool of information, a process analogous to finding a needle in a
haystack. We hope these observations encourage further research in this field.
| 2024 | Computation and Language |
HunFlair2 in a cross-corpus evaluation of biomedical named entity
recognition and normalization tools | With the exponential growth of the life science literature, biomedical text
mining (BTM) has become an essential technology for accelerating the extraction
of insights from publications. Identifying named entities (e.g., diseases,
drugs, or genes) in texts and their linkage to reference knowledge bases are
crucial steps in BTM pipelines to enable information aggregation from different
documents. However, tools for these two steps are rarely applied in the same
context in which they were developed. Instead, they are applied in the wild,
i.e., on application-dependent text collections different from those used for
the tools' training, varying, e.g., in focus, genre, style, and text type. This
raises the question of whether the reported performance of BTM tools can be
trusted for downstream applications. Here, we report on the results of a
carefully designed cross-corpus benchmark for named entity extraction, where
tools were applied systematically to corpora not used during their training.
Based on a survey of 28 published systems, we selected five for an in-depth
analysis on three publicly available corpora encompassing four different entity
types. Comparing the tools yields a mixed picture and shows that, in a
cross-corpus setting, performance is significantly lower than that
reported in an in-corpus setting. HunFlair2 showed the best performance on
average, closely followed by PubTator. Our results indicate that users of
BTM tools should expect diminished performance when applying them in the wild
compared to the original publications, and show that further research is necessary
to make BTM tools more robust.
| 2024 | Computation and Language |
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding | As the usage of large language models (LLMs) grows, performing efficient
inference with these models becomes increasingly important. While speculative
decoding has recently emerged as a promising direction for speeding up
inference, existing methods are limited in their ability to scale to larger
speculation budgets, and adapt to different hyperparameters and hardware. This
paper introduces Sequoia, a scalable, robust, and hardware-aware algorithm for
speculative decoding. To attain better scalability, Sequoia introduces a
dynamic programming algorithm to find the optimal tree structure for the
speculated tokens. To achieve robust speculative performance, Sequoia uses a
novel sampling and verification method that outperforms prior work across
different decoding temperatures. Finally, Sequoia introduces a hardware-aware
tree optimizer that maximizes speculative performance by automatically
selecting the token tree size and depth for a given hardware platform.
Evaluation shows that Sequoia improves the decoding speed of Llama2-7B,
Llama2-13B, and Vicuna-33B on an A100 by up to $4.04\times$, $3.73\times$, and
$2.27\times$. In the offloading setting on an L40, Sequoia achieves a latency as low as 0.56
s/token for exact Llama2-70B inference, which is $9.96\times$ faster than our
optimized offloading system (5.6 s/token), $9.7\times$ faster than
DeepSpeed-Zero-Inference, and $19.5\times$ faster than Huggingface Accelerate.
| 2024 | Computation and Language |
Understanding Fine-grained Distortions in Reports of Scientific Findings | Distorted science communication harms individuals and society as it can lead
to unhealthy behavior change and decrease trust in scientific institutions.
Given the rapidly increasing volume of science communication in recent years, a
fine-grained understanding of how findings from scientific publications are
reported to the general public, and methods to detect distortions from the
original work automatically, are crucial. Prior work focused on individual
aspects of distortions or worked with unpaired data. In this work, we make
three foundational contributions towards addressing this problem: (1)
annotating 1,600 instances of scientific findings from academic papers paired
with corresponding findings as reported in news articles and tweets with respect to four
characteristics: causality, certainty, generality, and sensationalism; (2)
establishing baselines for automatically detecting these characteristics; and
(3) analyzing the prevalence of changes in these characteristics in both
human-annotated and large-scale unlabeled data. Our results show that
scientific findings frequently undergo subtle distortions when reported. Tweets
distort findings more often than science news reports. Detecting fine-grained
distortions automatically poses a challenging task. In our experiments,
fine-tuned task-specific models consistently outperform few-shot LLM prompting.
| 2,024 | Computation and Language |
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions
Without the Question? | Multiple-choice question answering (MCQA) is often used to evaluate large
language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if
LLMs can perform MCQA with choices-only prompts, where models must select the
correct answer only from the choices. In three MCQA datasets and four LLMs,
this prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy
gain. To help explain this behavior, we conduct an in-depth, black-box analysis
on memorization, choice dynamics, and question inference. Our key findings are
threefold. First, we find no evidence that the choices-only accuracy stems from
memorization alone. Second, priors over individual choices do not fully explain
choices-only accuracy, hinting that LLMs use the group dynamics of choices.
Third, LLMs have some ability to infer a relevant question from choices, and
surprisingly can sometimes even match the original question. We hope to
motivate the use of stronger baselines in MCQA benchmarks, the design of robust
MCQA datasets, and further efforts to explain LLM decision-making.
| 2,024 | Computation and Language |
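The choices-only probe described in the abstract above can be made concrete with a small sketch. This is an illustrative assumption of how such a prompt and a majority baseline might be built; the prompt wording, the `query_llm` call, and the toy data are hypothetical placeholders, not the authors' setup.

```python
# Sketch: a choices-only MCQA prompt (no question shown) and a majority baseline.

from collections import Counter

def choices_only_prompt(choices):
    """Format a prompt that shows only the answer options, without the question."""
    letters = "ABCD"
    lines = [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    return ("Choose the most likely correct answer from the options below.\n"
            + "\n".join(lines) + "\nAnswer:")

def majority_baseline(labels):
    """Accuracy of always predicting the most frequent gold label."""
    most_common = Counter(labels).most_common(1)[0][1]
    return most_common / len(labels)

# Toy usage:
print(choices_only_prompt(["Paris", "London", "Berlin", "Madrid"]))
print("Majority baseline on toy labels:", majority_baseline(["A", "B", "A", "A"]))
```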
Do Pre-Trained Language Models Detect and Understand Semantic
Underspecification? Ask the DUST! | In everyday language use, speakers frequently utter and interpret sentences
that are semantically underspecified, namely, whose content is insufficient to
fully convey their message or interpret them univocally. For example, to
interpret the underspecified sentence "Don't spend too much", which leaves
implicit what (not) to spend, additional linguistic context or outside
knowledge is needed. In this work, we propose a novel Dataset of semantically
Underspecified Sentences grouped by Type (DUST) and use it to study whether
pre-trained language models (LMs) correctly identify and interpret
underspecified sentences. We find that newer LMs are reasonably able to
identify underspecified sentences when explicitly prompted. However,
correctly interpreting them is much harder for all of the LMs we test. Our experiments show
that when interpreting underspecified sentences, LMs exhibit little
uncertainty, contrary to what theoretical accounts of underspecification would
predict. Overall, our study reveals limitations in current models' processing
of sentence semantics and highlights the importance of using naturalistic data
and communicative scenarios when evaluating LMs' language capabilities.
| 2,024 | Computation and Language |
Your Vision-Language Model Itself Is a Strong Filter: Towards
High-Quality Instruction Tuning with Data Selection | Data selection in instruction tuning emerges as a pivotal process for
acquiring high-quality data and training instruction-following large language
models (LLMs), but it is still a new and unexplored research area for
vision-language models (VLMs). Existing data selection approaches on LLMs
either rely on single unreliable scores, or use downstream tasks for selection,
which is time-consuming and can lead to potential over-fitting on the chosen
evaluation datasets. To address this challenge, we introduce a novel dataset
selection method, Self-Filter, that utilizes the VLM itself as a filter. This
approach is inspired by the observation that VLMs benefit from training with
the most challenging instructions. Self-Filter operates in two stages. In the
first stage, we devise a scoring network to evaluate the difficulty of training
instructions, which is co-trained with the VLM. In the second stage, we use the
trained score net to measure the difficulty of each instruction, select the
most challenging samples, and penalize similar samples to encourage diversity.
Comprehensive experiments on LLaVA and MiniGPT-4 show that Self-Filter can
reach better results compared to full data settings with merely about 15%
samples, and can achieve superior performance against competitive baselines.
| 2,024 | Computation and Language |
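The second (selection) stage described in the Self-Filter abstract, where the hardest samples are chosen and near-duplicates are penalized for diversity, can be sketched as a greedy loop. The scoring values, feature vectors, and the penalty weight below are stand-in assumptions, not the authors' implementation.

```python
# Greedy selection: highest learned difficulty score minus a redundancy penalty.
import numpy as np

def select_hardest(scores, feats, k, lam=0.5):
    """scores: per-sample difficulty; feats: per-sample feature vectors."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    selected, remaining = [], list(range(len(scores)))
    while remaining and len(selected) < k:
        if selected:
            sims = feats[remaining] @ feats[selected].T   # cosine similarity
            penalty = sims.max(axis=1)                    # closest already-picked sample
        else:
            penalty = np.zeros(len(remaining))
        adjusted = scores[remaining] - lam * penalty
        best = remaining[int(np.argmax(adjusted))]
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
print(select_hardest(rng.random(100), rng.normal(size=(100, 16)), k=15)[:5])
```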
Parallel Structures in Pre-training Data Yield In-Context Learning | Pre-trained language models (LMs) are capable of in-context learning (ICL):
they can adapt to a task with only a few examples given in the prompt without
any parameter update. However, it is unclear where this capability comes from
as there is a stark distribution shift between pre-training text and ICL
prompts. In this work, we study what patterns of the pre-training data
contribute to ICL. We find that LMs' ICL ability depends on $\textit{parallel
structures}$ in the pre-training data -- pairs of phrases following similar
templates in the same context window. Specifically, we detect parallel
structures by checking whether training on one phrase improves prediction of
the other, and conduct ablation experiments to study their effect on ICL. We
show that removing parallel structures in the pre-training data reduces LMs'
ICL accuracy by 51% (vs 2% from random ablation). This drop persists even when
excluding common patterns such as n-gram repetitions and long-range dependencies,
showing the diversity and generality of parallel structures. A closer look at
the detected parallel structures indicates that they cover diverse linguistic
tasks and span long distances in the data.
| 2,024 | Computation and Language |
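The detection criterion in the abstract above (a pair of phrases counts as a parallel structure if training on one improves prediction of the other) can be illustrated with a tiny toy model. The bag-of-words next-token model, vocabulary, and learning rate are purely hypothetical, chosen only to show the before/after-loss comparison.

```python
# Sketch: does one gradient step on phrase A lower the LM loss on phrase B?
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "dog": 2, "sat": 3, "ran": 4}

def encode(phrase):
    """Next-token (input, target) pairs from a whitespace-tokenised phrase."""
    ids = [vocab[w] for w in phrase.split()]
    return torch.tensor(ids[:-1]), torch.tensor(ids[1:])

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.5)

def loss_on(phrase):
    x, y = encode(phrase)
    return loss_fn(model(x), y)

phrase_a, phrase_b = "the cat sat", "the dog ran"   # similar templates
before = loss_on(phrase_b).item()
opt.zero_grad(); loss_on(phrase_a).backward(); opt.step()   # train on A only
after = loss_on(phrase_b).item()
print("parallel-structure signal (loss on B decreased):", before - after > 0)
```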
TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness | Large Language Models (LLMs) have demonstrated impressive capabilities across
various domains, prompting a surge in their practical applications. However,
concerns have arisen regarding the trustworthiness of LLMs' outputs,
particularly in closed-book question-answering tasks, where non-experts may
struggle to identify inaccuracies due to the absence of contextual or ground
truth information. This paper introduces TrustScore, a framework based on the
concept of Behavioral Consistency, which evaluates whether an LLM's response
aligns with its intrinsic knowledge. Additionally, TrustScore can seamlessly
integrate with fact-checking methods, which assess alignment with external
knowledge sources. The experimental results show that TrustScore achieves
strong correlations with human judgments, surpassing existing reference-free
metrics, and achieving results on par with reference-based metrics.
| 2,024 | Computation and Language |
Archer: A Human-Labeled Text-to-SQL Dataset with Arithmetic, Commonsense
and Hypothetical Reasoning | We present Archer, a challenging bilingual text-to-SQL dataset specific to
complex reasoning, including arithmetic, commonsense and hypothetical
reasoning. It contains 1,042 English questions and 1,042 Chinese questions,
along with 521 unique SQL queries, covering 20 English databases across 20
domains. Notably, this dataset demonstrates a significantly higher level of
complexity compared to existing publicly available datasets. Our evaluation
shows that Archer challenges the capabilities of current state-of-the-art
models, with a high-ranked model on the Spider leaderboard achieving only 6.73%
execution accuracy on the Archer test set. Thus, Archer presents a significant
challenge for future research in this field.
| 2,024 | Computation and Language |
Creating a Fine Grained Entity Type Taxonomy Using LLMs | In this study, we investigate the potential of GPT-4 and its advanced
iteration, GPT-4 Turbo, in autonomously developing a detailed entity type
taxonomy. Our objective is to construct a comprehensive taxonomy, starting from
a broad classification of entity types - including objects, time, locations,
organizations, events, actions, and subjects - similar to existing manually
curated taxonomies. This classification is then progressively refined through
iterative prompting techniques, leveraging GPT-4's internal knowledge base. The
result is an extensive taxonomy comprising over 5000 nuanced entity types,
which demonstrates remarkable quality upon subjective evaluation.
We employed a straightforward yet effective prompting strategy, enabling the
taxonomy to be dynamically expanded. The practical applications of this
detailed taxonomy are diverse and significant. It facilitates the creation of
new, more intricate branches through pattern-based combinations and notably
enhances information extraction tasks, such as relation extraction and event
argument extraction. Our methodology not only introduces an innovative approach
to taxonomy creation but also opens new avenues for applying such taxonomies in
various computational linguistics and AI-related fields.
| 2,024 | Computation and Language |
CausalGym: Benchmarking causal interpretability methods on linguistic
tasks | Language models (LMs) have proven to be powerful tools for psycholinguistic
research, but most prior work has focused on purely behavioural measures (e.g.,
surprisal comparisons). At the same time, research in model interpretability
has begun to illuminate the abstract causal mechanisms shaping LM behavior. To
help bring these strands of research closer together, we introduce CausalGym.
We adapt and expand the SyntaxGym suite of tasks to benchmark the ability of
interpretability methods to causally affect model behaviour. To illustrate how
CausalGym can be used, we study the pythia models (14M--6.9B) and assess the
causal efficacy of a wide range of interpretability methods, including linear
probing and distributed alignment search (DAS). We find that DAS outperforms
the other methods, and so we use it to study the learning trajectory of two
difficult linguistic phenomena in pythia-1b: negative polarity item licensing
and filler--gap dependencies. Our analysis shows that the mechanism
implementing both of these tasks is learned in discrete stages, not gradually.
| 2,024 | Computation and Language |
Confidence Matters: Revisiting Intrinsic Self-Correction Capabilities of
Large Language Models | The recent success of Large Language Models (LLMs) has catalyzed an
increasing interest in their self-correction capabilities. This paper presents
a comprehensive investigation into the intrinsic self-correction of LLMs,
attempting to address the ongoing debate about its feasibility. Our research
has identified an important latent factor - the "confidence" of LLMs - during
the self-correction process. Overlooking this factor may cause the models to
over-criticize themselves, resulting in unreliable conclusions regarding the
efficacy of self-correction. We have experimentally observed that LLMs possess
the capability to understand the "confidence" in their own responses. It
motivates us to develop an "If-or-Else" (IoE) prompting framework, designed to
guide LLMs in assessing their own "confidence", facilitating intrinsic
self-corrections. We conduct extensive experiments and demonstrate that our
IoE-based prompt consistently improves the accuracy of
self-corrected responses over the initial answers. Our study not only sheds
light on the underlying factors affecting self-correction in LLMs, but also
introduces a practical framework that utilizes the IoE prompting principle to
efficiently improve self-correction capabilities with "confidence". The code is
available at https://github.com/MBZUAI-CLeaR/IoE-Prompting.git.
| 2,024 | Computation and Language |
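The "If-or-Else" idea described in the abstract above can be rendered as a small prompting helper. The template wording and the `query_llm` function are illustrative assumptions only; the authors' exact prompt is available in their repository and may differ.

```python
# Sketch of an If-or-Else (IoE)-style confidence prompt for self-correction.

IOE_TEMPLATE = (
    "Question: {question}\n"
    "Your previous answer: {answer}\n"
    "If you are confident that your previous answer is correct, repeat it.\n"
    "Else, reconsider the question and provide a corrected answer.\n"
    "Final answer:"
)

def ioe_self_correct(question, initial_answer, query_llm):
    prompt = IOE_TEMPLATE.format(question=question, answer=initial_answer)
    return query_llm(prompt).strip()

# Toy usage with a stand-in model that keeps its answer:
print(ioe_self_correct("What is 2 + 2?", "4", lambda p: "4"))
```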
GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence | LLMs can generate factually incorrect statements even when provided access to
reference documents. Such errors can be dangerous in high-stakes applications
(e.g., document-grounded QA for healthcare or finance). We present GenAudit --
a tool intended to assist fact-checking LLM responses for document-grounded
tasks. GenAudit suggests edits to the LLM response by revising or removing
claims that are not supported by the reference document, and also presents
evidence from the reference for facts that do appear to have support. We train
models to execute these tasks, and design an interactive interface to present
suggested edits and evidence to users. Comprehensive evaluation by human raters
shows that GenAudit can detect errors in 8 different LLM outputs when
summarizing documents from diverse domains. To ensure that most errors are
flagged by the system, we propose a method that can increase the error recall
while minimizing impact on precision. We will release our tool (GenAudit) and
fact-checking model for public use.
| 2,024 | Computation and Language |
Evolving AI Collectives to Enhance Human Diversity and Enable
Self-Regulation | Large language models steer their behaviors based on texts generated by
others. This capacity and their increasing prevalence in online settings
portend that they will intentionally or unintentionally "program" one another
and form emergent AI subjectivities, relationships, and collectives. Here, we
call upon the research community to investigate these "society-like" properties
of interacting artificial intelligences to increase their rewards and reduce
their risks for human society and the health of online environments. We use a
simple model and its outputs to illustrate how such emergent, decentralized AI
collectives can expand the bounds of human diversity and reduce the risk of
toxic, anti-social behavior online. Finally, we discuss opportunities for AI
self-moderation and address ethical issues and design challenges associated
with creating and maintaining decentralized AI collectives.
| 2,024 | Computation and Language |
Standardize: Aligning Language Models with Expert-Defined Standards for
Content Generation | Domain experts across engineering, healthcare, and education follow strict
standards for producing quality content such as technical manuals, medication
instructions, and children's reading materials. However, current works in
controllable text generation have yet to explore using these standards as
references for control. Towards this end, we introduce Standardize, a
retrieval-style in-context learning-based framework to guide large language
models to align with expert-defined standards. Focusing on English language
standards in the education domain as a use case, we consider the Common
European Framework of Reference for Languages (CEFR) and Common Core Standards
(CCS) for the task of open-ended content generation. Our findings show that
models can gain a 40% to 100% increase in precise accuracy for Llama2 and GPT-4,
respectively, demonstrating that the use of knowledge artifacts extracted from
standards and integrating them in the generation process can effectively guide
models to produce better standard-aligned content.
| 2,024 | Computation and Language |
What is a word? | In order to design strong paradigms for isolating lexical access and
semantics, we need to know what a word is. Surprisingly few linguists and
philosophers have a clear model of what a word is, even though words impact
basically every aspect of human life. Researchers who regularly publish
academic papers about language often rely on outdated or inaccurate
assumptions about wordhood. This short pedagogical document outlines what the
lexicon is most certainly not (though is often mistakenly taken to be), what it
might be (based on current good theories), and what some implications for
experimental design are.
| 2,024 | Computation and Language |
StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing | Given a script, the challenge in Movie Dubbing (Visual Voice Cloning, V2C) is
to generate speech that aligns well with the video in both time and emotion,
based on the tone of a reference audio track. Existing state-of-the-art V2C
models break the phonemes in the script according to the divisions between
video frames, which solves the temporal alignment problem but leads to
incomplete phoneme pronunciation and poor identity stability. To address this
problem, we propose StyleDubber, which switches dubbing learning from the frame
level to phoneme level. It contains three main components: (1) A multimodal
style adaptor operating at the phoneme level to learn pronunciation style from
the reference audio, and generate intermediate representations informed by the
facial emotion presented in the video; (2) An utterance-level style learning
module, which guides both the mel-spectrogram decoding and the refining
processes from the intermediate embeddings to improve the overall style
expression; And (3) a phoneme-guided lip aligner to maintain lip sync.
Extensive experiments on two of the primary benchmarks, V2C and Grid,
demonstrate the favorable performance of the proposed method as compared to the
current state-of-the-art. The source code and trained models will be released
to the public.
| 2,024 | Computation and Language |
Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation | Bias benchmarks are a popular method for studying the negative impacts of
bias in LLMs, yet there has been little empirical investigation of whether
these benchmarks are actually indicative of how harm may manifest in
real-world use. In this work, we study the correspondence between such
decontextualized "trick tests" and evaluations that are more grounded in
Realistic Use and Tangible Effects (i.e., RUTEd evaluations). We explore this
correlation in the context of gender-occupation bias--a popular genre of bias
evaluation. We compare three de-contextualized evaluations adapted from the
current literature to three analogous RUTEd evaluations applied to long-form
content generation. We conduct each evaluation for seven instruction-tuned
LLMs. For the RUTEd evaluations, we conduct repeated trials of three text
generation tasks: children's bedtime stories, user personas, and English
language learning exercises. We found no correspondence between trick tests and
RUTEd evaluations. Specifically, selecting the least biased model based on the
de-contextualized results coincides with selecting the model with the best
performance on RUTEd evaluations only as often as random chance. We conclude
that evaluations that are not based in realistic use are likely insufficient to
mitigate and assess bias and real-world harms.
| 2,024 | Computation and Language |
OWSM-CTC: An Open Encoder-Only Speech Foundation Model for Speech
Recognition, Translation, and Language Identification | There has been an increasing interest in large speech models that can perform
multiple speech processing tasks in a single model. Such models usually adopt
the encoder-decoder or decoder-only architecture due to their popularity and
good performance in many domains. However, autoregressive models can be slower
during inference compared to non-autoregressive models and also have potential
risks of hallucination. Though prior studies observed promising results of
non-autoregressive models for certain tasks at small scales, it remains unclear
if they can be scaled to speech-to-text generation in diverse languages and
tasks. Inspired by the Open Whisper-style Speech Model (OWSM) project, we
propose OWSM-CTC, a novel encoder-only speech foundation model based on
Connectionist Temporal Classification (CTC). It is trained on 180k hours of
public audio data for multilingual automatic speech recognition (ASR), speech
translation (ST), and language identification (LID). Compared to
encoder-decoder OWSM, our OWSM-CTC achieves competitive results on ASR and up
to 25% relative improvement on ST, while it is more robust and 3 to 4 times
faster for inference. OWSM-CTC also improves the long-form ASR result with 20x
speed-up. We will publicly release our codebase, pre-trained model, and
training logs to promote open science in speech foundation models.
| 2,024 | Computation and Language |
The FinBen: An Holistic Financial Benchmark for Large Language Models | LLMs have transformed NLP and shown promise in various fields, yet their
potential in finance is underexplored due to a lack of thorough evaluations and
the complexity of financial tasks. This, along with the rapid development of
LLMs, highlights the urgent need for a systematic financial evaluation
benchmark for LLMs. In this paper, we introduce FinBen, the first comprehensive
open-sourced evaluation benchmark, specifically designed to thoroughly assess
the capabilities of LLMs in the financial domain. FinBen encompasses 35
datasets across 23 financial tasks, organized into three spectrums of
difficulty inspired by the Cattell-Horn-Carroll theory, to evaluate LLMs'
cognitive abilities in inductive reasoning, associative memory, quantitative
reasoning, crystallized intelligence, and more. Our evaluation of 15
representative LLMs, including GPT-4, ChatGPT, and the latest Gemini, reveals
insights into their strengths and limitations within the financial domain. The
findings indicate that GPT-4 leads in quantification, extraction, numerical
reasoning, and stock trading, while Gemini shines in generation and
forecasting; however, both struggle with complex extraction and forecasting,
showing a clear need for targeted enhancements. Instruction tuning boosts
simple task performance but falls short in improving complex reasoning and
forecasting abilities. FinBen seeks to continuously evaluate LLMs in finance,
fostering AI development with regular updates of tasks and models.
| 2,024 | Computation and Language |
SoftQE: Learned Representations of Queries Expanded by LLMs | We investigate the integration of Large Language Models (LLMs) into query
encoders to improve dense retrieval without increasing latency and cost, by
circumventing the dependency on LLMs at inference time. SoftQE incorporates
knowledge from LLMs by mapping embeddings of input queries to those of the
LLM-expanded queries. While improvements over various strong baselines on
in-domain MS-MARCO metrics are marginal, SoftQE improves performance by 2.83
absolute percentage points on average on five out-of-domain BEIR tasks.
| 2,024 | Computation and Language |
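The SoftQE training idea summarized above, mapping embeddings of raw queries toward those of LLM-expanded queries so the LLM is not needed at inference time, can be sketched as a simple distillation objective. The toy linear encoders, dimensions, and MSE loss are our own assumptions for illustration, not the paper's architecture.

```python
# Sketch: pull the student embedding of the raw query toward the (frozen)
# embedding of the LLM-expanded query.
import torch
import torch.nn as nn

dim = 32
query_encoder = nn.Linear(dim, dim)           # trainable student query encoder
frozen_encoder = nn.Linear(dim, dim).eval()   # stand-in frozen teacher encoder

def softqe_loss(raw_query_feats, expanded_query_feats):
    student = query_encoder(raw_query_feats)
    with torch.no_grad():                      # teacher embeddings are targets only
        teacher = frozen_encoder(expanded_query_feats)
    return nn.functional.mse_loss(student, teacher)

loss = softqe_loss(torch.randn(8, dim), torch.randn(8, dim))
loss.backward()
print(float(loss))
```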
Simpson's Paradox and the Accuracy-Fluency Tradeoff in Translation | A good translation should be faithful to the source and should respect the
norms of the target language. We address a theoretical puzzle about the
relationship between these objectives. On one hand, intuition and some prior
work suggest that accuracy and fluency should trade off against each other, and
that capturing every detail of the source can only be achieved at the cost of
fluency. On the other hand, quality assessment researchers often suggest that
accuracy and fluency are highly correlated and difficult for human raters to
distinguish (Callison-Burch et al. 2007). We show that the tension between
these views is an instance of Simpson's paradox, and that accuracy and fluency
are positively correlated at the level of the corpus but trade off at the level
of individual source segments. We further suggest that the relationship between
accuracy and fluency is best evaluated at the segment (or sentence) level, and
that the trade off between these dimensions has implications both for assessing
translation quality and developing improved MT systems.
| 2,024 | Computation and Language |
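The Simpson's-paradox claim in the abstract above can be reproduced with a toy numerical simulation: both dimensions rise with segment-level "ease" (positive corpus-level correlation), while candidate translations of a single segment trade accuracy against fluency (negative within-segment correlation). The numbers below are synthetic, not the paper's data.

```python
# Toy demonstration: positive corpus-level correlation, negative within-segment.
import numpy as np

rng = np.random.default_rng(0)
acc, flu = [], []
for _ in range(50):                        # 50 source segments
    ease = rng.normal()                    # easy segments score high on both
    trade = rng.uniform(0, 1, 10)          # 10 candidate translations each
    acc.append(ease + trade)               # more literal -> more accurate...
    flu.append(ease + (1 - trade))         # ...but less fluent
acc, flu = np.concatenate(acc), np.concatenate(flu)

corpus_r = np.corrcoef(acc, flu)[0, 1]
segment_r = np.mean([np.corrcoef(acc[i*10:(i+1)*10], flu[i*10:(i+1)*10])[0, 1]
                     for i in range(50)])
print(f"corpus-level r = {corpus_r:.2f}, mean within-segment r = {segment_r:.2f}")
```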
Tree-Planted Transformers: Large Language Models with Implicit Syntactic
Supervision | Large Language Models (LLMs) have achieved remarkable success thanks to
scalability on large text corpora, but have some drawbacks in training
efficiency. In contrast, Syntactic Language Models (SLMs) can be trained
efficiently to reach relatively high performance thanks to syntactic
supervision, but have trouble with scalability. Thus, given these complementary
advantages of LLMs and SLMs, it is necessary to develop an architecture that
integrates the scalability of LLMs with the training efficiency of SLMs, namely
Syntactic Large Language Models (SLLM). In this paper, we propose a novel
method dubbed tree-planting: implicitly "plant" trees into attention weights of
Transformer LMs to reflect syntactic structures of natural language.
Specifically, Transformer LMs trained with tree-planting will be called
Tree-Planted Transformers (TPT), which learn syntax on small treebanks via
tree-planting and then scale on large text corpora via continual learning with
syntactic scaffolding. Targeted syntactic evaluations on the SyntaxGym
benchmark demonstrated that TPTs, despite the lack of explicit syntactic
supervision, significantly outperformed various SLMs with explicit syntactic
supervision that generate hundreds of syntactic structures in parallel,
suggesting that tree-planting and TPTs are a promising foundation for SLLMs.
| 2,024 | Computation and Language |
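One plausible reading of the "tree-planting" mechanism described above is to supervise attention toward a tree-derived target pattern, where tokens closer in the syntactic tree receive more attention. The distance-to-target conversion and KL loss below are our own simplification, not necessarily the operator used in the paper.

```python
# Sketch: supervise an attention matrix toward a syntax-tree-derived target.
import torch

def tree_target(distances):
    """distances[i][j]: tree distance between tokens i and j."""
    d = torch.tensor(distances, dtype=torch.float)
    return torch.softmax(-d, dim=-1)       # nearer nodes get higher target weight

def tree_planting_loss(attn, distances):
    """KL divergence between the model's attention rows and the tree target."""
    target = tree_target(distances)
    return torch.nn.functional.kl_div(attn.log(), target, reduction="batchmean")

attn = torch.softmax(torch.randn(4, 4), dim=-1)          # toy attention matrix
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(float(tree_planting_loss(attn, dist)))
```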
FormulaQA: A Question Answering Dataset for Formula-Based Numerical
Reasoning | The application of formulas is a fundamental ability of humans when
addressing numerical reasoning problems. However, existing numerical reasoning
datasets seldom explicitly indicate the formulas employed during the reasoning
steps. To bridge this gap, we propose a question answering dataset for
formula-based numerical reasoning called FormulaQA, collected from junior high
school physics examinations. We further conduct evaluations on LLMs ranging in
size from 7B to over 100B parameters using zero-shot and few-shot
chain-of-thought methods, and we explore the use of retrieval-augmented LLMs
provided with an external formula database. We also fine-tune smaller models
with sizes not exceeding 2B. Our empirical findings
underscore the significant potential for improvement in existing models when
applied to our complex, formula-driven FormulaQA.
| 2,024 | Computation and Language |
Are Large Language Models Rational Investors? | Large Language Models (LLMs) are progressively being adopted in financial
analysis to harness their extensive knowledge base for interpreting complex
market data and trends. However, their application in the financial domain is
challenged by intrinsic biases (i.e., risk-preference bias) and a superficial
grasp of market intricacies, underscoring the need for a thorough assessment of
their financial insight. This study introduces a novel framework, Financial
Bias Indicators (FBI), to critically evaluate the financial rationality of
LLMs, focusing on their ability to discern and navigate the subtleties of
financial information and to identify any irrational biases that might skew
market analysis.
Our research adopts an innovative methodology to measure financial
rationality, integrating principles of behavioral finance to scrutinize the
biases and decision-making patterns of LLMs. We conduct a comprehensive
evaluation of 19 leading LLMs, considering factors such as model scale,
training datasets, input strategies, etc. The findings reveal varying degrees
of financial irrationality among the models, influenced by their design and
training. Models trained specifically on financial datasets might exhibit
greater irrationality, and it's possible that even larger financial language
models (FinLLMs) could display more biases than smaller, more generalized
models. These outcomes provide profound insights into how these elements affect
the financial rationality of LLMs, indicating that targeted training and
structured input methods could improve model performance. This work enriches
our understanding of LLMs' strengths and weaknesses in financial applications,
laying the groundwork for the development of more dependable and rational
financial analysis tools.
| 2,024 | Computation and Language |
UMBCLU at SemEval-2024 Task 1A and 1C: Semantic Textual Relatedness with
and without machine translation | This paper describes the system we developed for SemEval-2024 Task 1,
"Semantic Textual Relatedness for African and Asian Languages." The aim of the
task is to build a model that can identify semantic textual relatedness (STR)
between two sentences of a target language belonging to a collection of African
and Asian languages. We participated in Subtasks A and C and explored
supervised and cross-lingual training leveraging large language models (LLMs).
Pre-trained large language models have been extensively used for machine
translation and semantic similarity. Using a combination of machine translation
and sentence embedding LLMs, we developed a unified STR model, TranSem, for
subtask A and fine-tuned the T5 family of models on the STR data, FineSem, for
use in subtask C. Our model results for 7 languages in subtask A were better
than the official baseline for 3 languages and on par with the baseline for the
remaining 4 languages. Our model results for the 12 languages in subtask C
resulted in 1st place for Afrikaans, 2nd place for Indonesian, and 3rd place
for English with low performance for the remaining 9 languages.
| 2,024 | Computation and Language |
Can Large Language Models be Used to Provide Psychological Counselling?
An Analysis of GPT-4-Generated Responses Using Role-play Dialogues | Mental health care poses an increasingly serious challenge to modern
societies. In this context, there has been a surge in research that utilizes
information technologies to address mental health problems, including those
aiming to develop counseling dialogue systems. However, there is a need for
more evaluations of the performance of counseling dialogue systems that use
large language models. For this study, we collected counseling dialogue data
via role-playing scenarios involving expert counselors, and the utterances were
annotated with the intentions of the counselors. To determine the feasibility
of a dialogue system in real-world counseling scenarios, third-party counselors
evaluated the appropriateness of responses from human counselors and those
generated by GPT-4 in identical contexts in role-play dialogue data. Analysis
of the evaluation results showed that the responses generated by GPT-4 were
competitive with those of human counselors.
| 2,024 | Computation and Language |
Me LLaMA: Foundation Large Language Models for Medical Applications | Recent large language models (LLMs) like ChatGPT and LLaMA have shown great
promise in many AI applications. However, their performance on medical tasks is
suboptimal and can be further improved by training on large domain-specific
datasets. This study introduces Me LLaMA, a medical LLM family including
foundation models - Me LLaMA 13/70B and their chat-enhanced versions - Me LLaMA
13/70B-chat, developed through the continual pre-training and instruction
tuning of LLaMA2 using large medical data. Our domain-specific data suite for
training and evaluation, includes a large-scale continual pre-training dataset
with 129B tokens, an instruction tuning dataset with 214k samples, and a
medical evaluation benchmark (MIBE) across six tasks with 14 datasets. Our
extensive evaluation using MIBE shows that Me LLaMA models surpass existing
open-source medical LLMs in zero-shot and few-shot learning and outperform
commercial giants like ChatGPT on 6 out of 8 datasets and GPT-4 in 3 out of 8
datasets. In addition, we empirically investigated the catastrophic forgetting
problem, and our results show that Me LLaMA models outperform other medical
LLMs. Me LLaMA is one of the first and largest open-source foundational LLMs
designed for the medical domain, using both biomedical and clinical data. It
exhibits superior performance across both general and medical tasks compared to
other medical LLMs, rendering it an attractive choice for medical AI
applications. All resources are available at:
https://github.com/BIDS-Xu-Lab/Me-LLaMA.
| 2,024 | Computation and Language |
Acknowledgment of Emotional States: Generating Validating Responses for
Empathetic Dialogue | In the realm of human-AI dialogue, the facilitation of empathetic responses
is important. Validation is one of the key communication techniques in
psychology, which entails recognizing, understanding, and acknowledging others'
emotional states, thoughts, and actions. This study introduces the first
framework designed to engender empathetic dialogue with validating responses.
Our approach incorporates a tripartite module system: 1) validation timing
detection, 2) users' emotional state identification, and 3) validating response
generation. Utilizing the Japanese EmpatheticDialogues dataset - a text-based
dialogue dataset consisting of 8 emotional categories from Plutchik's wheel of
emotions - the Task Adaptive Pre-Training (TAPT) BERT-based model outperforms
both a random baseline and ChatGPT, in terms of F1-score, in all
modules. Further validation of our model's efficacy is confirmed in its
application to the TUT Emotional Storytelling Corpus (TESC), a speech-based
dialogue dataset, where it surpasses both the random baseline and ChatGPT. This
consistent performance across both textual and speech-based dialogues
underscores the effectiveness of our framework in fostering empathetic human-AI
communication.
| 2,024 | Computation and Language |
Advancing Large Language Models to Capture Varied Speaking Styles and
Respond Properly in Spoken Conversations | In spoken dialogue, even if two current turns are the same sentence, their
responses might still differ when they are spoken in different styles. The
spoken styles, containing paralinguistic and prosodic information, mark the
most significant difference between text and speech modality. When using
text-only LLMs to model spoken dialogue, text-only LLMs cannot give different
responses based on the speaking style of the current turn. In this paper, we
focus on enabling LLMs to listen to the speaking styles and respond properly.
Our goal is to teach the LLM that "even if the sentences are identical, if they
are spoken in different styles, their corresponding responses might be
different". Since there is no suitable dataset for achieving this goal, we
collect a speech-to-speech dataset, StyleTalk, with the following desired
characteristics: when two current speeches have the same content but are spoken
in different styles, their responses will be different. To teach LLMs to
understand and respond properly to the speaking styles, we propose the
Spoken-LLM framework that can model the linguistic content and the speaking
styles. We train Spoken-LLM using the StyleTalk dataset and devise a two-stage
training pipeline to help the Spoken-LLM better learn the speaking styles.
Based on extensive experiments, we show that Spoken-LLM outperforms text-only
baselines and prior speech LLMs methods.
| 2,024 | Computation and Language |
Few shot clinical entity recognition in three languages: Masked language
models outperform LLM prompting | Large Language Models are becoming the go-to solution for many natural
language processing tasks, including in specialized domains where their
few-shot capacities are expected to yield high performance in low-resource
settings. Herein, we aim to assess the performance of Large Language Models for
few-shot clinical entity recognition in multiple languages. We evaluate named
entity recognition in English, French and Spanish using 8 in-domain (clinical)
and 6 out-domain gold standard corpora. We assess the performance of 10
auto-regressive language models using prompting and 16 masked language models
used for text encoding in a biLSTM-CRF supervised tagger. We create a few-shot
set-up by limiting the amount of annotated data available to 100 sentences. Our
experiments show that although larger prompt-based models tend to achieve
competitive F-measure for named entity recognition outside the clinical domain,
this level of performance does not carry over to the clinical domain where
lighter supervised taggers relying on masked language models perform better,
even with the performance drop incurred from the few-shot set-up. In all
experiments, the CO2 impact of masked language models is lower than that of
auto-regressive models. Results are consistent over the three languages and
suggest that few-shot learning using large language models is not production
ready for named entity recognition in the clinical domain. Instead, models
could be used for speeding-up the production of gold standard annotated data.
| 2,024 | Computation and Language |
SymBa: Symbolic Backward Chaining for Multi-step Natural Language
Reasoning | Large Language Models (LLMs) have recently demonstrated remarkable reasoning
ability as in Chain-of-thought prompting, but faithful multi-step reasoning
remains a challenge. We specifically focus on backward chaining, where the
query is recursively decomposed using logical rules until proven. To address
the limitations of current backward chaining implementations, we propose SymBa
(Symbolic Backward Chaining). In SymBa, the symbolic top-down solver controls
the entire proof process and the LLM is called to generate a single reasoning
step only when the solver encounters a dead end. By this novel solver-LLM
integration, while being able to produce an interpretable, structured proof,
SymBa achieves significant improvement in performance, proof faithfulness, and
efficiency in diverse multi-step reasoning benchmarks (ProofWriter,
Birds-Electricity, GSM8k, CLUTRR-TF, ECtHR Article 6) compared to backward
chaining baselines.
| 2,024 | Computation and Language |
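The solver-LLM division of labour described in the SymBa abstract, a symbolic backward-chaining loop that only calls the LLM when it hits a dead end, can be sketched compactly. The rule format, the recursion limit, and the `llm_single_step` stand-in are illustrative assumptions, not the authors' implementation.

```python
# Sketch: backward chaining over (head, [subgoals]) rules, with an LLM fallback.
def prove(goal, facts, rules, llm_single_step, depth=0, max_depth=5):
    if goal in facts:
        return True
    if depth >= max_depth:
        return False
    for head, body in rules:
        if head == goal and all(
                prove(g, facts, rules, llm_single_step, depth + 1) for g in body):
            return True
    # Dead end: ask the LLM for a single new fact relevant to this goal.
    new_fact = llm_single_step(goal)
    if new_fact and new_fact not in facts:
        facts.add(new_fact)
        return prove(goal, facts, rules, llm_single_step, depth + 1)
    return False

facts = {"bird(tweety)"}
rules = [("can_fly(tweety)", ["bird(tweety)", "has_wings(tweety)"])]
print(prove("can_fly(tweety)", facts, rules,
            lambda g: "has_wings(tweety)" if "fly" in g else None))
```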
On Sensitivity of Learning with Limited Labelled Data to the Effects of
Randomness: Impact of Interactions and Systematic Choices | While learning with limited labelled data can improve performance when the
labels are lacking, it is also sensitive to the effects of uncontrolled
randomness introduced by so-called randomness factors (e.g., varying order of
data). We propose a method to systematically investigate the effects of
randomness factors while taking the interactions between them into
consideration. To measure the true effects of an individual randomness factor,
our method mitigates the effects of other factors and observes how the
performance varies across multiple runs. Applying our method to multiple
randomness factors across in-context learning and fine-tuning approaches on 7
representative text classification tasks and meta-learning on 3 tasks, we show
that: 1) disregarding interactions between randomness factors in existing works
caused inconsistent findings due to incorrect attribution of the effects of
randomness factors, such as disproving the consistent sensitivity of in-context
learning to sample order even with random sample selection; and 2) besides
mutual interactions, the effects of randomness factors, especially sample
order, are also dependent on more systematic choices unexplored in existing
works, such as number of classes, samples per class or choice of prompt format.
| 2,024 | Computation and Language |
Fine-Tuning, Prompting, In-Context Learning and Instruction-Tuning: How
Many Labelled Samples Do We Need? | When solving a task with limited labelled data, researchers can either use a
general large language model without further update, or use the few examples to
tune a specialised smaller model. When enough labels are available, the
specialised models outperform the general ones on many NLP tasks. In this work,
we aim to investigate how many labelled samples are required for the
specialised models to achieve this superior performance, while taking the
results variance into consideration. Observing the behaviour of prompting,
in-context learning, fine-tuning and instruction-tuning, identifying their
break-even points when increasing number of labelled training samples across
three tasks of varying complexity, we find that the specialised models often
need only few samples ($100-1000$) to be on par or better than the general
ones. At the same time, the amount of required labelled data strongly depends
on the task complexity and results variance.
| 2,024 | Computation and Language |
Identifying Factual Inconsistency in Summaries: Towards Effective
Utilization of Large Language Model | Factual inconsistency poses a significant hurdle for the commercial
deployment of abstractive summarizers. Under this Large Language Model (LLM)
era, this work focuses around two important questions: what is the best way to
leverage LLM for factual inconsistency detection, and how could we distill a
smaller LLM with both high efficiency and efficacy? Three zero-shot paradigms
are firstly proposed and evaluated across five diverse datasets: direct
inference on the entire summary or each summary window; entity verification
through question generation and answering. Experiments suggest that an LLM by
itself is capable of resolving this task training-free under the proper
paradigm design, surpassing strong trained baselines by 2.8% on average. To
further promote practical utility, we then propose training strategies aimed at
distilling a smaller open-source LLM that learns to score the entire summary at
once with high accuracy, outperforming the zero-shot approaches of much larger
LLMs and serving as an effective and efficient ready-to-use scorer.
| 2,024 | Computation and Language |
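One of the zero-shot paradigms named above, direct inference on each summary window, can be sketched as follows. The prompt wording, the sentence-based windowing, and the `query_llm` placeholder are our own assumptions for illustration.

```python
# Sketch: check each window of the summary against the source document.
def windows(summary, size=2):
    sents = [s.strip() for s in summary.split(".") if s.strip()]
    return [". ".join(sents[i:i + size]) for i in range(0, len(sents), size)]

def window_consistency(document, summary, query_llm):
    verdicts = []
    for w in windows(summary):
        prompt = (f"Document:\n{document}\n\nClaim:\n{w}\n\n"
                  "Is the claim fully supported by the document? Answer Yes or No.")
        verdicts.append(query_llm(prompt).strip().lower().startswith("yes"))
    return all(verdicts)   # consistent only if every window is supported

print(window_consistency("The cat sat on the mat.",
                         "A cat sat. It was on a mat.",
                         lambda p: "Yes"))
```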
PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of
LLMs | While Large language models (LLMs) have demonstrated considerable
capabilities across various natural language tasks, they often fall short of
the performance achieved by domain-specific state-of-the-art models. One
potential approach to enhance domain-specific capabilities of LLMs involves
fine-tuning them using corresponding datasets. However, this method can be both
resource and time-intensive, and not applicable to closed-source commercial
LLMs. In this paper, we propose Preference Adaptation for Enhancing
Domain-specific Abilities of LLMs (PANDA), a method designed to augment the
domain-specific capabilities of LLMs by leveraging insights from the response
preference of expert models without requiring fine-tuning. Our experimental
results reveal that PANDA significantly enhances the domain-specific ability of
LLMs on text classification and interactive decision tasks. Moreover, an LLM
with PANDA even outperforms the expert model it learned from on 4 tasks of
ScienceWorld. This finding highlights the potential of exploring tuning-free
approaches to achieve weak-to-strong generalization.
| 2,024 | Computation and Language |
ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic | The focus of language model evaluation has transitioned towards reasoning and
knowledge-intensive tasks, driven by advancements in pretraining large models.
While state-of-the-art models are partially trained on large Arabic texts,
evaluating their performance in Arabic remains challenging due to the limited
availability of relevant datasets. To bridge this gap, we present ArabicMMLU,
the first multi-task language understanding benchmark for the Arabic language,
sourced from school exams across diverse educational levels in different
countries spanning North Africa, the Levant, and the Gulf regions. Our data
comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard
Arabic (MSA), and is carefully constructed by collaborating with native
speakers in the region. Our comprehensive evaluations of 35 models reveal
substantial room for improvement, particularly among the best open-source
models. Notably, BLOOMZ, mT0, LLama2, and Falcon struggle to achieve a score of
50%, while even the top-performing Arabic-centric model only achieves a score
of 62.3%.
| 2,024 | Computation and Language |
PromptKD: Distilling Student-Friendly Knowledge for Generative Language
Models via Prompt Tuning | Recent advancements in large language models (LLMs) have raised concerns
about inference costs, increasing the need for research into model compression.
While knowledge distillation (KD) is a prominent method for this, research on
KD for generative language models like LLMs is relatively sparse, and the
approach of distilling student-friendly knowledge, which has shown promising
performance in KD for classification models, remains unexplored in generative
language models. To explore this approach, we propose PromptKD, a simple yet
effective method that utilizes prompt tuning - for the first time in KD - to
enable generative language models to transfer student-friendly knowledge.
Unlike previous works in classification that require fine-tuning the entire
teacher model for extracting student-friendly knowledge, PromptKD achieves
similar effects by adding a small number of prompt tokens and tuning only the
prompt with student guidance. Extensive experiments on instruction-following
datasets using the GPT-2 model family show that PromptKD achieves
state-of-the-art performance while adding only 0.0007% of the teacher's
parameters as prompts. Further analysis suggests that distilling
student-friendly knowledge alleviates exposure bias effectively throughout the
entire training process, leading to performance enhancements.
| 2,024 | Computation and Language |
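The core mechanism described in the PromptKD abstract, prepending a small number of trainable soft-prompt vectors to the frozen teacher and optimizing only those vectors while the student matches the prompted teacher's distribution, can be sketched with toy modules. The single encoder layer, dimensions, and loss form are stand-in assumptions, not the GPT-2 setup used by the authors.

```python
# Sketch: only the soft prompt (teacher side) and the student are trainable.
import torch
import torch.nn as nn

vocab, dim, n_prompt = 100, 32, 4
teacher_body = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
teacher_head = nn.Linear(dim, vocab)
for m in (teacher_body, teacher_head):              # freeze the teacher
    for p in m.parameters():
        p.requires_grad_(False)
student = nn.Linear(dim, vocab)                     # toy student "LM head"
soft_prompt = nn.Parameter(torch.randn(1, n_prompt, dim) * 0.02)

def distill_step(token_feats):                      # token_feats: (1, seq, dim)
    prompted = torch.cat([soft_prompt, token_feats], dim=1)
    t_logits = teacher_head(teacher_body(prompted))[:, n_prompt:]  # drop prompt positions
    s_logits = student(token_feats)                 # the student never sees the prompt
    return nn.functional.kl_div(
        nn.functional.log_softmax(s_logits, -1),
        nn.functional.softmax(t_logits, -1), reduction="batchmean")

opt = torch.optim.Adam([soft_prompt] + list(student.parameters()), lr=1e-3)
loss = distill_step(torch.randn(1, 10, dim))
loss.backward(); opt.step()
print(float(loss))
```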
Instruction-tuned Language Models are Better Knowledge Learners | In order for large language model (LLM)-based assistants to effectively adapt
to evolving information needs, it must be possible to update their factual
knowledge through continued training on new data. The standard recipe for doing
so involves continued pre-training on new documents followed by
instruction-tuning on question-answer (QA) pairs. However, we find that LLMs
trained with this recipe struggle to answer questions, even though the
perplexity of documents is minimized. We found that QA pairs are generally
straightforward, while documents are more complex, weaving many factual
statements together in an intricate manner. Therefore, we hypothesize that it
is beneficial to expose LLMs to QA pairs before continued pre-training on
documents so that the process of encoding knowledge from complex documents
takes into account how this knowledge is accessed through questions. Based on
this, we propose pre-instruction-tuning (PIT), a method that instruction-tunes
on questions prior to training on documents. This contrasts with standard
instruction-tuning, which learns how to extract knowledge after training on
documents. Extensive experiments and ablation studies demonstrate that PIT
significantly enhances the ability of LLMs to absorb knowledge from new
documents, outperforming standard instruction-tuning by 17.8%.
| 2,024 | Computation and Language |
MoELoRA: Contrastive Learning Guided Mixture of Experts on
Parameter-Efficient Fine-Tuning for Large Language Models | Fine-tuning is often necessary to enhance the adaptability of Large Language
Models (LLM) to downstream tasks. Nonetheless, the process of updating billions
of parameters demands significant computational resources and training time,
which poses a substantial obstacle to the widespread application of large-scale
models in various scenarios. To address this issue, Parameter-Efficient
Fine-Tuning (PEFT) has emerged as a prominent paradigm in recent research.
However, current PEFT approaches that employ a limited set of global parameters
(such as LoRA, which adds low-rank approximation matrices to all weights) face
challenges in flexibly combining different computational modules in downstream
tasks. In this work, we introduce a novel PEFT method: MoELoRA. We consider
LoRA as Mixture of Experts (MoE), and to mitigate the random routing phenomenon
observed in MoE, we propose the utilization of contrastive learning to
encourage experts to learn distinct features. We conducted experiments on 11
tasks in math reasoning and common-sense reasoning benchmarks. With the same
number of parameters, our approach outperforms LoRA significantly. In math
reasoning, MoELoRA achieved an average performance that was 4.2% higher than
LoRA, and demonstrated competitive performance compared to the 175B GPT-3.5 on
several benchmarks.
| 2,024 | Computation and Language |
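The two ingredients named in the MoELoRA abstract, several low-rank (LoRA-style) experts combined by a router, plus a contrastive-style term that pushes different experts' outputs apart so routing does not collapse into randomness, can be sketched under our own assumptions as follows. This is not the authors' implementation.

```python
# Sketch: a mixture of LoRA experts with a diversity penalty between experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRA(nn.Module):
    def __init__(self, dim=64, rank=4, n_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.down = nn.ModuleList([nn.Linear(dim, rank, bias=False) for _ in range(n_experts)])
        self.up = nn.ModuleList([nn.Linear(rank, dim, bias=False) for _ in range(n_experts)])

    def forward(self, x):
        gates = F.softmax(self.router(x), dim=-1)                              # (batch, E)
        outs = torch.stack([u(d(x)) for d, u in zip(self.down, self.up)], dim=1)  # (batch, E, dim)
        mixed = (gates.unsqueeze(-1) * outs).sum(dim=1)
        # Diversity term: penalise cosine similarity between different experts'
        # outputs on the same input, encouraging distinct expert features.
        normed = F.normalize(outs, dim=-1)
        sim = torch.einsum("bed,bfd->bef", normed, normed)
        eye = torch.eye(sim.size(1), device=sim.device)
        diversity_loss = ((sim * (1 - eye)) ** 2).mean()
        return mixed, diversity_loss

layer = MoELoRA()
out, div = layer(torch.randn(8, 64))
print(out.shape, float(div))
```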
Handling Ambiguity in Emotion: From Out-of-Domain Detection to
Distribution Estimation | The subjective perception of emotion leads to inconsistent labels from human
annotators. Typically, utterances lacking majority-agreed labels are excluded
when training an emotion classifier, which causes problems when encountering
ambiguous emotional expressions during testing. This paper investigates three
methods to handle ambiguous emotion. First, we show that incorporating
utterances without majority-agreed labels as an additional class in the
classifier reduces the classification performance of the other emotion classes.
Then, we propose detecting utterances with ambiguous emotions as out-of-domain
samples by quantifying the uncertainty in emotion classification using
evidential deep learning. This approach retains the classification accuracy
while effectively detecting ambiguous emotion expressions. Furthermore, to obtain
fine-grained distinctions among ambiguous emotions, we propose representing
emotion as a distribution instead of a single class label. The task is thus
re-framed from classification to distribution estimation where every individual
annotation is taken into account, not just the majority opinion. The evidential
uncertainty measure is extended to quantify the uncertainty in emotion
distribution estimation. Experimental results on the IEMOCAP and CREMA-D
datasets demonstrate the superior capability of the proposed method in terms of
majority class prediction, emotion distribution estimation, and uncertainty
estimation.
| 2,024 | Computation and Language |
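A small sketch of the evidential-uncertainty idea mentioned in the abstract above: non-negative "evidence" from the classifier is treated as Dirichlet parameters, and the total evidence yields an uncertainty score used to flag ambiguous utterances as out-of-domain. The softplus evidence function and the toy logits are our simplification.

```python
# Sketch: Dirichlet-style uncertainty from classifier logits.
import torch

def evidential_outputs(logits):
    evidence = torch.nn.functional.softplus(logits)   # non-negative evidence
    alpha = evidence + 1.0                             # Dirichlet parameters
    probs = alpha / alpha.sum(-1, keepdim=True)        # expected class probabilities
    k = logits.size(-1)
    uncertainty = k / alpha.sum(-1)                     # high = ambiguous / out-of-domain
    return probs, uncertainty

logits = torch.tensor([[4.0, 0.1, 0.1, 0.1],    # confidently one emotion
                       [0.3, 0.2, 0.3, 0.2]])   # ambiguous utterance
probs, unc = evidential_outputs(logits)
print(unc)   # the second row gets markedly higher uncertainty
```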
Backward Lens: Projecting Language Model Gradients into the Vocabulary
Space | Understanding how Transformer-based Language Models (LMs) learn and recall
information is a key goal of the deep learning community. Recent
interpretability methods project weights and hidden states obtained from the
forward pass to the models' vocabularies, helping to uncover how information
flows within LMs. In this work, we extend this methodology to LMs' backward
pass and gradients. We first prove that a gradient matrix can be cast as a
low-rank linear combination of its forward and backward passes' inputs. We then
develop methods to project these gradients into vocabulary items and explore
the mechanics of how new information is stored in the LMs' neurons.
| 2,024 | Computation and Language |
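The projection idea in the abstract above can be illustrated on a toy linear map: for a single token, a linear layer's gradient is the outer product of the backward signal and the forward input, and either factor can be projected into the vocabulary with the unembedding matrix to read it as tokens. The dimensions and the stand-in "model" below are illustrative assumptions only.

```python
# Sketch: rank-1 gradient structure and projection into the vocabulary space.
import torch

d_model, vocab = 16, 50
unembed = torch.randn(vocab, d_model)                  # stand-in LM head / unembedding
W = torch.randn(d_model, d_model, requires_grad=True)  # some intermediate weight

x = torch.randn(d_model)                               # forward input to W
logits = unembed @ (W @ x)
loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))
loss.backward()

# For a single token, W.grad equals outer(backward_signal, x), hence rank 1.
u, s, vh = torch.linalg.svd(W.grad)
print("effective rank of the gradient:", int((s > s[0] * 1e-4).sum()))  # typically 1

# Project the dominant left factor (the "new information") into the vocabulary:
top_tokens = torch.topk(unembed @ u[:, 0], k=5).indices
print(top_tokens)
```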
Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based
Question Answering with Domain Hybrid Data | Augmenting Large Language Models (LLMs) for Question Answering (QA) with
domain specific data has attracted wide attention. However, domain data often
exists in a hybrid format, including text and semi-structured tables, posing
challenges for the seamless integration of information. Table-to-Text
Generation is a promising solution by facilitating the transformation of hybrid
data into a uniformly text-formatted corpus. Although this technique has been
widely studied by the NLP community, there is currently no comparative analysis
on how corpora generated by different table-to-text methods affect the
performance of QA systems. In this paper, we address this research gap in two
steps. First, we innovatively integrate table-to-text generation into the
framework of enhancing LLM-based QA systems with domain hybrid data. Then, we
utilize this framework in real-world industrial data to conduct extensive
experiments on two types of QA systems (DSFT and RAG frameworks) with four
representative methods: Markdown format, Template serialization, TPLM-based
method, and LLM-based method. Based on the experimental results, we draw some
empirical findings and explore the underlying reasons behind the success of
some methods. We hope the findings of this work will provide a valuable
reference for the academic and industrial communities in developing robust QA
systems.
| 2,024 | Computation and Language |
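Two of the table-to-text variants named in the abstract above, Markdown format and template serialization, can be shown on a toy table. The template wording is our own illustrative choice, not a prescribed one from the paper.

```python
# Sketch: serialising a table row as Markdown vs. as template sentences.
def to_markdown(header, rows):
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)

def to_template(header, rows, template="The {0} of {1} is {2}."):
    # Each cell becomes a sentence: attribute, entity (first column), value.
    sents = []
    for row in rows:
        for attr, value in zip(header[1:], row[1:]):
            sents.append(template.format(attr, row[0], value))
    return " ".join(sents)

header = ["product", "price", "stock"]
rows = [["Widget A", "3.50", "120"]]
print(to_markdown(header, rows))
print(to_template(header, rows))
```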
Autism Detection in Speech -- A Survey | There has been a range of studies of how autism is displayed in voice,
speech, and language. We analyse studies from the biomedical, as well as the
psychological domain, but also from the NLP domain in order to find linguistic,
prosodic and acoustic cues that could indicate autism. Our survey looks at all
three domains. We define autism and which comorbidities might influence the
correct detection of the disorder. We especially look at observations such as
verbal and semantic fluency, prosodic features, but also disfluencies and
speaking rate. We also show word-based approaches and describe machine learning
and transformer-based approaches both on the audio data as well as the
transcripts. Lastly, we conclude, while there already is a lot of research,
female patients seem to be severely under-researched. Also, most NLP research
focuses on traditional machine learning methods instead of transformers, which
could be beneficial in this context. Additionally, we were unable to find
research combining both features from audio and transcripts.
| 2,024 | Computation and Language |
GRAFFORD: A Benchmark Dataset for Testing the Knowledge of Object
Affordances of Language and Vision Models | We investigate the knowledge of object affordances in pre-trained language
models (LMs) and pre-trained Vision-Language models (VLMs). Transformers-based
large pre-trained language models (PTLM) learn contextual representation from
massive amounts of unlabeled text and are shown to perform impressively in
downstream NLU tasks. In parallel, a growing body of literature shows that
PTLMs fail inconsistently and non-intuitively, showing a lack of reasoning and
grounding. To take a first step toward quantifying the effect of grounding (or
lack thereof), we curate a novel and comprehensive dataset of object
affordances -- GrAFFORD, characterized by 15 affordance classes. Unlike
affordance datasets collected in vision and language domains, we annotate
in-the-wild sentences with objects and affordances. Experimental results reveal
that PTLMs exhibit limited reasoning abilities when it comes to uncommon object
affordances. We also observe that pre-trained VLMs do not necessarily capture
object affordances effectively. Through few-shot fine-tuning, we demonstrate
improvement in affordance knowledge in PTLMs and VLMs. Our research contributes
a novel dataset for language grounding tasks, and presents insights into LM
capabilities, advancing the understanding of object affordances. Codes and data
are available at https://github.com/sayantan11995/Affordance
| 2,024 | Computation and Language |
More Discriminative Sentence Embeddings via Semantic Graph Smoothing | This paper explores an empirical approach to learning more discriminative
sentence representations in an unsupervised fashion. Leveraging semantic graph
smoothing, we enhance sentence embeddings obtained from pretrained models to
improve results for the text clustering and classification tasks. Our method,
validated on eight benchmarks, demonstrates consistent improvements, showcasing
the potential of semantic graph smoothing in improving sentence embeddings for
the supervised and unsupervised document categorization tasks.
| 2,024 | Computation and Language |
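A minimal sketch of semantic graph smoothing as we read the abstract above: build a cosine k-nearest-neighbour graph over sentence embeddings and propagate each embedding toward its neighbours' average. The propagation rule below is a common graph-smoothing choice and not necessarily the exact operator used in the paper.

```python
# Sketch: k-NN graph smoothing of sentence embeddings.
import numpy as np

def smooth_embeddings(X, k=5, alpha=0.5, steps=2):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    np.fill_diagonal(sims, -np.inf)                     # exclude self-edges
    A = np.zeros_like(sims)
    idx = np.argpartition(-sims, k, axis=1)[:, :k]      # k nearest neighbours per row
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, idx.ravel()] = 1.0
    A = np.maximum(A, A.T)                              # symmetrise the graph
    D_inv = 1.0 / np.maximum(A.sum(1, keepdims=True), 1)
    for _ in range(steps):
        X = (1 - alpha) * X + alpha * (D_inv * (A @ X))  # mix with neighbour mean
    return X

emb = np.random.default_rng(0).normal(size=(50, 64))
print(smooth_embeddings(emb).shape)
```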
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination
Detection with Weakly Supervised Data | This paper mainly describes a unified system for hallucination detection of
LLMs, which wins the second prize in the model-agnostic track of the
SemEval-2024 Task 6, and also achieves considerable results in the model-aware
track. This task aims to detect hallucination with LLMs for three different
text-generation tasks without labeled training data. We utilize prompt
engineering and few-shot learning to verify the performance of different LLMs
on the validation data. Then we select the LLMs with better performance to
generate high-quality weakly supervised training data, which not only satisfies
the consistency of different LLMs, but also satisfies the consistency of the
optimal LLM with different sampling parameters. Furthermore, we finetune
different LLMs using the constructed training data, and find that a
relatively small LLM can achieve a competitive level of performance in
hallucination detection, when compared to the large LLMs and the prompt-based
approaches using GPT-4.
| 2,024 | Computation and Language |
Large Language Model-based Human-Agent Collaboration for Complex Task
Solving | In recent developments within the research community, the integration of
Large Language Models (LLMs) in creating fully autonomous agents has garnered
significant interest. Despite this, LLM-based agents frequently demonstrate
notable shortcomings in adjusting to dynamic environments and fully grasping
human needs. In this work, we introduce the problem of LLM-based human-agent
collaboration for complex task-solving, exploring their synergistic potential.
In addition, we propose a Reinforcement Learning-based Human-Agent
Collaboration method, ReHAC. This approach includes a policy model designed to
determine the most opportune stages for human intervention within the
task-solving process. We construct a human-agent collaboration dataset to train
this policy model in an offline reinforcement learning environment. Our
validation tests confirm the model's effectiveness. The results demonstrate
that the synergistic efforts of humans and LLM-based agents significantly
improve performance in complex tasks, primarily through well-planned, limited
human intervention. Datasets and code are available at:
https://github.com/XueyangFeng/ReHAC.
| 2,024 | Computation and Language |
Normalized Orthography for Tunisian Arabic | Tunisian Arabic (ISO 639-3: aeb) is a distinct linguistic variety native to
Tunisia, which originally stemmed from Arabic and has been enriched by a
multitude of historical influences. This research introduces the "Normalized Orthography
for Tunisian Arabic" (NOTA), an adaptation of CODA* guidelines tailored for
transcribing Tunisian Arabic using the Arabic script for language resource
development purposes, with an emphasis on user-friendliness and consistency.
The updated standard seeks to address challenges related to accurately
representing the unique characteristics of Tunisian phonology and morphology.
This will be achieved by rectifying problems arising from transcriptions based
on resemblances to Modern Standard Arabic.
| 2,024 | Computation and Language |
GumbelSoft: Diversified Language Model Watermarking via the
GumbelMax-trick | Large language models (LLMs) excel at generating human-like text, but they also
raise concerns about misuse in fake news and academic dishonesty.
Decoding-based watermarks, particularly the GumbelMax-trick-based watermark (GM
watermark), are a standout solution for safeguarding machine-generated text due
to their notable detectability. However, the GM watermark faces a major
challenge: it always yields identical outputs for the same prompt, negatively
impacting generation diversity and user experience. To
overcome this limitation, we propose a new type of GM watermark, the
Logits-Addition watermark, and its three variants, specifically designed to
enhance diversity. Among these, the GumbelSoft watermark (a softmax variant of
the Logits-Addition watermark) demonstrates superior performance in high
diversity settings, with its AUROC score outperforming those of the two
alternative variants by 0.1 to 0.3 and surpassing other decoding-based
watermarking methods by a minimum of 0.1.
| 2,024 | Computation and Language |
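For context, the GumbelMax trick referenced above selects the next token as argmax(logits + Gumbel noise), which is an exact sample from the softmax distribution; a GM watermark makes that noise a deterministic function of a secret key and the recent context so a detector holding the key can recompute it. The sketch below illustrates only this generic idea with hypothetical seeding details; it is not the paper's Logits-Addition or GumbelSoft variant, whose purpose is precisely to restore the diversity that this deterministic scheme lacks.

```python
import hashlib
import numpy as np

def keyed_gumbel_noise(context_tokens, key, vocab_size):
    """Derive deterministic Gumbel noise for the current step from a secret
    key and the recent context, so a detector with the key can recompute it."""
    seed_material = f"{key}:{','.join(map(str, context_tokens[-4:]))}".encode()
    seed = int.from_bytes(hashlib.sha256(seed_material).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    uniform = rng.random(vocab_size)
    return -np.log(-np.log(uniform))  # standard Gumbel(0, 1) samples

def gumbelmax_watermark_step(logits, context_tokens, key):
    """Pick the next token with the GumbelMax trick under keyed noise.

    argmax(logits + Gumbel) is a sample from softmax(logits); because the
    noise depends only on (key, context), the choice embeds a detectable
    signal but is identical every time the same context recurs."""
    noise = keyed_gumbel_noise(context_tokens, key, len(logits))
    return int(np.argmax(logits + noise))

logits = np.array([2.0, 1.5, 0.3, -1.0])  # toy next-token logits
print(gumbelmax_watermark_step(logits, [5, 17, 42], key="secret"))
```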
Gl\'orIA -- A Generative and Open Large Language Model for Portuguese | Significant strides have been made in natural language tasks, largely
attributed to the emergence of powerful large language models (LLMs). These
models, pre-trained on extensive and diverse corpora, have become increasingly
capable of comprehending the intricacies of language. Despite the abundance of
LLMs for many high-resource languages, the availability of such models remains
limited for European Portuguese. We introduce Gl\'orIA, a robust European
Portuguese decoder LLM. To pre-train Gl\'orIA, we assembled a comprehensive
PT-PT text corpus comprising 35 billion tokens from various sources. We present
our pre-training methodology, followed by an assessment of the model's
effectiveness on multiple downstream tasks. Additionally, to evaluate our
models' language modeling capabilities, we introduce CALAME-PT (Context-Aware
LAnguage Modeling Evaluation for Portuguese), the first Portuguese zero-shot
language-modeling benchmark. Evaluation shows that Gl\'orIA significantly
outperforms existing open PT decoder models in language modeling and that it
can generate sound, knowledge-rich, and coherent PT-PT text. The model also
exhibits strong potential for various downstream tasks.
| 2,024 | Computation and Language |
The Impact of Demonstrations on Multilingual In-Context Learning: A
Multidimensional Analysis | In-context learning is a popular inference strategy where large language
models solve a task using only a few labelled demonstrations without needing
any parameter updates. Compared to work on monolingual (English) in-context
learning, multilingual in-context learning is under-explored, and we lack an
in-depth understanding of the role of demonstrations in this context. To
address this gap, we conduct a multidimensional analysis of multilingual
in-context learning, experimenting with 5 models from different model families,
9 datasets covering classification and generation tasks, and 56 typologically
diverse languages. Our results reveal that the effectiveness of demonstrations
varies significantly across models, tasks, and languages. We also find that
Llama 2-Chat, GPT-3.5, and GPT-4 are largely insensitive to the quality of
demonstrations. Instead, a carefully crafted template often eliminates the
benefits of demonstrations for some tasks and languages altogether. These
findings show that the importance of demonstrations might be overestimated. Our
work highlights the need for granular evaluation across multiple axes towards a
better understanding of in-context learning.
| 2,024 | Computation and Language |
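As a concrete illustration of the comparison studied above, the snippet below builds the two kinds of prompts being contrasted: a template-only (zero-shot) prompt and a prompt with labelled demonstrations prepended. The template wording and field names are hypothetical placeholders, not the paper's actual templates.

```python
def build_prompt(test_text, demonstrations=(),
                 template="Classify the sentiment of the text as positive or negative."):
    """Build an in-context learning prompt.

    demonstrations: iterable of (text, label) pairs; an empty iterable yields
    a template-only prompt in which the model sees no labelled examples."""
    lines = [template, ""]
    for demo_text, demo_label in demonstrations:
        lines.append(f"Text: {demo_text}")
        lines.append(f"Label: {demo_label}")
        lines.append("")
    lines.append(f"Text: {test_text}")
    lines.append("Label:")
    return "\n".join(lines)

# Template-only vs. few-shot versions of the same multilingual query.
zero_shot = build_prompt("Das Essen war ausgezeichnet.")
few_shot = build_prompt(
    "Das Essen war ausgezeichnet.",
    demonstrations=[("I loved this film.", "positive"),
                    ("The service was terrible.", "negative")],
)
print(zero_shot)
print(few_shot)
```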
Can GNN be Good Adapter for LLMs? | Recently, large language models (LLMs) have demonstrated superior
capabilities in understanding and zero-shot learning on textual data, promising
significant advances for many text-related domains. In the graph domain,
various real-world scenarios also involve textual data, where tasks and node
features can be described by text. These text-attributed graphs (TAGs) have
broad applications in social media, recommendation systems, etc. Thus, this
paper explores how to utilize LLMs to model TAGs. Previous methods for TAG
modeling are based on million-scale LMs. When scaled up to billion-scale LLMs,
they face huge challenges in computational costs. Additionally, they also
ignore the zero-shot inference capabilities of LLMs. Therefore, we propose
GraphAdapter, which uses a graph neural network (GNN) as an efficient adapter
in collaboration with LLMs to tackle TAGs. In terms of efficiency, the GNN
adapter introduces only a few trainable parameters and can be trained with low
computation costs. The entire framework is trained using auto-regression on
node text (next token prediction). Once trained, GraphAdapter can be seamlessly
fine-tuned with task-specific prompts for various downstream tasks. Through
extensive experiments across multiple real-world TAGs, GraphAdapter based on
Llama 2 gains an average improvement of approximately 5\% in terms of node
classification. Furthermore, GraphAdapter can also adapt to other language
models, including RoBERTa and GPT-2. The promising results demonstrate that GNNs
can serve as effective adapters for LLMs in TAG modeling.
| 2,024 | Computation and Language |
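The abstract above gives only the high-level design (a small GNN trained alongside a frozen LLM via next-token prediction on node text), so the following PyTorch snippet is a heavily simplified, hypothetical sketch of that idea: one mean-aggregation message-passing step whose output is fused with the LLM's hidden state as a residual. The dimensions, fusion function, and training setup are illustrative assumptions, not the GraphAdapter architecture itself.

```python
import torch
import torch.nn as nn

class ToyGNNAdapter(nn.Module):
    """One message-passing step fused with an LLM hidden state (sketch only)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.neighbor_proj = nn.Linear(hidden_dim, hidden_dim)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, node_hidden, neighbor_hidden):
        # node_hidden:     (batch, dim)     hidden state from the frozen LLM
        # neighbor_hidden: (batch, k, dim)  representations of k sampled neighbors
        aggregated = self.neighbor_proj(neighbor_hidden).mean(dim=1)
        fused = torch.tanh(self.fuse(torch.cat([node_hidden, aggregated], dim=-1)))
        # Residual connection: a structure-aware correction to the LLM state,
        # which would then feed the frozen LM head for next-token prediction.
        return node_hidden + fused

adapter = ToyGNNAdapter(hidden_dim=768)
node_h = torch.randn(4, 768)      # toy stand-in for LLM hidden states of 4 nodes
neigh_h = torch.randn(4, 6, 768)  # 6 sampled neighbors per node
print(adapter(node_h, neigh_h).shape)  # torch.Size([4, 768])
```

In such a setup only the adapter's few parameters would be updated; the LLM itself stays frozen, which is what keeps the training cost low.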
Phonotactic Complexity across Dialects | Received wisdom in linguistic typology holds that if the structure of a
language becomes more complex in one dimension, it will simplify in another,
building on the assumption that all languages are equally complex (Joseph and
Newmeyer, 2012). We study this claim on a micro-level, using a
tightly-controlled sample of Dutch dialects (across 366 collection sites) and
Min dialects (across 60 sites), which enables a fairer comparison across
varieties. Even at the dialect level, we find empirical evidence for a tradeoff
between word length and a computational measure of phonotactic complexity from
an LSTM-based phone-level language model, a result previously documented only
at the language level. A generalized additive model (GAM) shows that dialects
with low phonotactic complexity concentrate around the capital regions, which
is consistent with prior hypotheses that language varieties spoken by larger or
more diverse populations show reduced phonotactic complexity. We
also experiment with incorporating the auxiliary task of predicting syllable
constituency, but do not find an increase in the negative correlation observed.
| 2,024 | Computation and Language |
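For readers unfamiliar with the measure: in this line of work, phonotactic complexity is typically operationalized as the average number of bits per phone that a phone-level language model assigns to held-out word forms. The sketch below computes that quantity from per-phone log-probabilities; the example values are made up, and the paper's exact formulation may differ.

```python
import math

def bits_per_phone(phone_log_probs):
    """Average surprisal in bits per phone, given natural-log probabilities
    assigned by a phone-level language model to each phone in a word list.
    Higher values indicate more complex (less predictable) phonotactics."""
    total_bits = sum(-lp / math.log(2) for lp in phone_log_probs)
    return total_bits / len(phone_log_probs)

# Made-up log-probabilities for the phones of a few words.
example_log_probs = [-1.2, -0.7, -2.3, -0.4, -1.8, -0.9]
print(round(bits_per_phone(example_log_probs), 3))
```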
Code Needs Comments: Enhancing Code LLMs with Comment Augmentation | Programming skill is a crucial ability for Large Language Models
(LLMs), necessitating a deep understanding of programming languages (PLs) and
their correlation with natural languages (NLs). We examine the impact of
pre-training data on code-focused LLMs' performance by assessing the comment
density as a measure of PL-NL alignment. Given the scarcity of code-comment
aligned data in pre-training corpora, we introduce a novel data augmentation
method that generates comments for existing code, coupled with a data filtering
strategy that filters out code data poorly correlated with natural language. We
conducted experiments on three code-focused LLMs and observed consistent
improvements in performance on two widely-used programming skill benchmarks.
Notably, the model trained on the augmented data outperformed both the model
used for generating comments and the model further trained on the data without
augmentation.
| 2,024 | Computation and Language |
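A minimal sketch of the comment-density idea mentioned above: estimate how much of a code sample is natural-language comment and keep only samples whose density falls in an acceptable range. The density formula, the thresholds, and the restriction to '#' line comments (i.e., Python-style code) are illustrative assumptions, not the paper's filtering strategy.

```python
def comment_density(code: str) -> float:
    """Fraction of non-empty lines that are '#' line comments (Python-style)."""
    lines = [line.strip() for line in code.splitlines() if line.strip()]
    if not lines:
        return 0.0
    comment_lines = sum(1 for line in lines if line.startswith("#"))
    return comment_lines / len(lines)

def filter_by_comment_density(samples, low=0.05, high=0.8):
    """Keep code samples whose comment density suggests reasonable NL-PL alignment."""
    return [code for code in samples if low <= comment_density(code) <= high]

samples = [
    "def add(a, b):\n    return a + b",                     # no comments
    "# add two numbers\ndef add(a, b):\n    return a + b",  # commented
]
print([comment_density(s) for s in samples])
print(len(filter_by_comment_density(samples)))  # only the commented sample passes
```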
Understanding the effects of language-specific class imbalance in
multilingual fine-tuning | We study the effect of one type of imbalance often present in real-life
multilingual classification datasets: an uneven distribution of labels across
languages. We show evidence that fine-tuning a transformer-based Large Language
Model (LLM) on a dataset with this imbalance leads to worse performance, a more
pronounced separation of languages in the latent space, and the promotion of
uninformative features. We modify the traditional class weighting approach to
imbalance by calculating class weights separately for each language and show
that this helps mitigate those detrimental effects. These results highlight the
negative effects of language-specific class imbalance in multilingual
fine-tuning and the way in which the model learns to rely on the separation of
languages to perform the task.
| 2,024 | Computation and Language |
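A minimal sketch of calculating class weights separately for each language, as described above. It applies the standard "balanced" inverse-frequency formula within each language; the exact weighting scheme and how the weights enter the loss are assumptions for illustration.

```python
from collections import Counter

def per_language_class_weights(labels, languages):
    """Inverse-frequency class weights computed within each language,
    rather than one global weight per class across the whole dataset."""
    weights = {}
    for lang in set(languages):
        lang_labels = [y for y, l in zip(labels, languages) if l == lang]
        counts = Counter(lang_labels)
        total, n_classes = len(lang_labels), len(counts)
        weights[lang] = {cls: total / (n_classes * cnt) for cls, cnt in counts.items()}
    return weights

labels    = ["pos", "pos", "neg", "pos", "neg", "neg", "neg"]
languages = ["en",  "en",  "en",  "de",  "de",  "de",  "de"]
w = per_language_class_weights(labels, languages)
# Per-example weights that could multiply the loss during fine-tuning.
example_weights = [w[l][y] for y, l in zip(labels, languages)]
print(w)
print(example_weights)
```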
SoMeLVLM: A Large Vision Language Model for Social Media Processing | The growth of social media, characterized by its multimodal nature, has led
to the emergence of diverse phenomena and challenges, which calls for an
effective approach that can uniformly handle automated tasks. Powerful Large
Vision Language Models make it possible to handle a variety of tasks
simultaneously, but even with carefully designed prompting methods, the general
domain models often fall short in aligning with the unique speaking style and
context of social media tasks. In this paper, we introduce a Large Vision
Language Model for Social Media Processing (SoMeLVLM), which is a cognitive
framework equipped with five key capabilities including knowledge &
comprehension, application, analysis, evaluation, and creation. SoMeLVLM is
designed to understand and generate realistic social media behavior. We have
developed a 654k multimodal social media instruction-tuning dataset to support
our cognitive framework and fine-tune our model. Our experiments demonstrate
that SoMeLVLM achieves state-of-the-art performance in multiple social media
tasks. Further analysis shows its significant advantages over baselines in
terms of cognitive abilities.
| 2,024 | Computation and Language |