Titles (string, lengths 6-220) | Abstracts (string, lengths 37-3.26k) | Years (int64, 1.99k-2.02k) | Categories (string, 1 class) |
---|---|---|---|
API Pack: A Massive Multilingual Dataset for API Call Generation
|
We introduce API Pack, a multilingual dataset featuring over one million
instruction-API call pairs aimed at advancing large language models' API call
generation capabilities. Through experiments, we demonstrate API Pack's
efficacy in enhancing models for this specialized task while maintaining their
overall proficiency at general coding. Fine-tuning CodeLlama-13B on just 20,000
Python instances yields over 10% and 5% higher accuracy than GPT-3.5 and GPT-4
respectively in generating unseen API calls. Scaling to 100k examples improves
generalization to new APIs not seen during training. In addition, cross-lingual
API call generation is achieved without needing extensive data per language.
The dataset, fine-tuned models, and overall code base are publicly available at
https://github.com/zguo0525/API-Pack.
| 2,024 |
Computation and Language
|
Answer is All You Need: Instruction-following Text Embedding via
Answering the Question
|
This work aims to build a text embedder that can capture characteristics of
texts specified by user instructions. Despite the tremendous potential of
user-oriented embeddings, none of the previous approaches provides a concrete
solution. This paper offers a new viewpoint, which treats the
instruction as a question about the input text and encodes the expected answers
to obtain the representation accordingly. Intuitively, texts with the same
(implicit) semantics would share similar answers following the instruction,
thus leading to more similar embeddings. Specifically, we propose InBedder that
instantiates this embed-via-answering idea by only fine-tuning language models
on abstractive question answering tasks. InBedder demonstrates significantly
improved instruction-following capabilities according to our proposed
instruction awareness tests and instruction robustness tests, when applied to
both large language models (LLMs) (e.g., llama-2-7b) and smaller encoder-based
LMs (e.g., roberta-large). Additionally, our qualitative analysis of clustering
outcomes, achieved by applying different instructions to the same corpus,
demonstrates a high degree of interpretability.
| 2,024 |
Computation and Language
|
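A minimal sketch of the embed-via-answering idea described in the InBedder entry above, assuming a generic Hugging Face causal LM and a simple prompt template; the model name, prompt wording, and mean-pooling choice are illustrative assumptions rather than the authors' released recipe.

```python
# Embed-via-answering sketch: pose the instruction as a question about the text and
# mean-pool the hidden states of the generated answer tokens as the embedding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # any causal LM works for the sketch; InBedder fine-tunes on abstractive QA
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def embed(text: str, instruction: str) -> torch.Tensor:
    prompt = f"Instruction: {instruction}\nText: {text}\nAnswer:"
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, min_new_tokens=4, max_new_tokens=16,
                             output_hidden_states=True, return_dict_in_generate=True)
    # out.hidden_states holds one tuple of per-layer states for each decoding step;
    # take the last layer's state of each newly generated token and mean-pool them.
    answer_states = [step[-1][:, -1, :] for step in out.hidden_states[1:]]
    return torch.cat(answer_states, dim=0).mean(dim=0)

e1 = embed("The movie was a delightful surprise.", "What is the sentiment of this review?")
e2 = embed("I loved every minute of the film.", "What is the sentiment of this review?")
print(torch.cosine_similarity(e1, e2, dim=0).item())
```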
EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph
Completion
|
Commonsense knowledge graph completion is a new challenge for commonsense
knowledge graph construction and application. In contrast to factual knowledge
graphs such as Freebase and YAGO, commonsense knowledge graphs (CSKGs; e.g.,
ConceptNet) utilize free-form text to represent named entities, short phrases,
and events as their nodes. Such a loose structure results in large and sparse
CSKGs, which makes the semantic understanding of these nodes more critical for
learning rich commonsense knowledge graph embedding. While current methods
leverage semantic similarities to increase the graph density, the semantic
plausibility of the nodes and their relations is under-explored. Previous
works adopt conceptual abstraction to improve the consistency of modeling
(event) plausibility, but they are not scalable enough and still suffer from
data sparsity. In this paper, we propose to adopt textual entailment to find
implicit entailment relations between CSKG nodes, to effectively densify the
subgraph connecting nodes within the same conceptual class, which indicates a
similar level of plausibility. Each node in the CSKG finds its top entailed
nodes using a transformer fine-tuned on natural language inference (NLI) tasks,
which sufficiently captures textual entailment signals. The entailment relations
between these nodes are further utilized to: 1) build new connections between
source triplets and entailed nodes to densify the sparse CSKGs; 2) enrich the
generalization ability of node representations by comparing the node embeddings
with a contrastive loss. Experiments on two standard CSKGs demonstrate that our
proposed framework EntailE can improve the performance of CSKG completion tasks
under both transductive and inductive settings.
| 2,024 |
Computation and Language
|
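A minimal sketch of the node-densification step described in the EntailE entry above: score entailment between node texts with an NLI model and keep the top entailed nodes as candidate edges. The checkpoint, label index, and top-k value are illustrative assumptions, not the paper's exact setup.

```python
# NLI-based entailment scoring between CSKG node texts.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NLI = "roberta-large-mnli"               # labels: contradiction / neutral / entailment
tok = AutoTokenizer.from_pretrained(NLI)
model = AutoModelForSequenceClassification.from_pretrained(NLI).eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 2].item()   # index 2 = entailment for this checkpoint

def top_entailed(node: str, candidates: list[str], k: int = 3) -> list[tuple[str, float]]:
    scored = [(c, entailment_prob(node, c)) for c in candidates if c != node]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

nodes = ["a person eats breakfast", "someone has a meal", "a dog barks loudly"]
print(top_entailed("a person eats breakfast", nodes))
```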
PAL: Proxy-Guided Black-Box Attack on Large Language Models
|
Large Language Models (LLMs) have surged in popularity in recent months, but
they have demonstrated concerning capabilities to generate harmful content when
manipulated. While techniques like safety fine-tuning aim to minimize harmful
use, recent works have shown that LLMs remain vulnerable to attacks that elicit
toxic responses. In this work, we introduce the Proxy-Guided Attack on LLMs
(PAL), the first optimization-based attack on LLMs in a black-box query-only
setting. In particular, it relies on a surrogate model to guide the
optimization and a sophisticated loss designed for real-world LLM APIs. Our
attack achieves 84% attack success rate (ASR) on GPT-3.5-Turbo and 48% on
Llama-2-7B, compared to 4% for the current state of the art. We also propose
GCG++, an improvement to the GCG attack that reaches 94% ASR on white-box
Llama-2-7B, and the Random-Search Attack on LLMs (RAL), a strong but simple
baseline for query-based attacks. We believe the techniques proposed in this
work will enable more comprehensive safety testing of LLMs and, in the long
term, the development of better security guardrails. The code can be found at
https://github.com/chawins/pal.
| 2,024 |
Computation and Language
|
An Analysis of Language Frequency and Error Correction for Esperanto
|
Current Grammar Error Correction (GEC) initiatives tend to focus on major
languages, with less attention given to low-resource languages like Esperanto.
In this article, we begin to bridge this gap by first conducting a
comprehensive frequency analysis using the Eo-GP dataset, created explicitly
for this purpose. We then introduce the Eo-GEC dataset, derived from authentic
user cases and annotated with fine-grained linguistic details for error
identification. Leveraging GPT-3.5 and GPT-4, our experiments show that GPT-4
outperforms GPT-3.5 in both automated and human evaluations, highlighting its
efficacy in addressing Esperanto's grammatical peculiarities and illustrating
the potential of advanced language models to enhance GEC strategies for less
commonly studied languages.
| 2,024 |
Computation and Language
|
Improving Non-autoregressive Machine Translation with Error Exposure and
Consistency Regularization
|
Being one of the IR-NAT (Iterative-refinement-based NAT) frameworks, the
Conditional Masked Language Model (CMLM) adopts the mask-predict paradigm to
re-predict the masked low-confidence tokens. However, CMLM suffers from the
data distribution discrepancy between training and inference, where the
observed tokens are generated differently in the two cases. In this paper, we
address this problem with the training approaches of error exposure and
consistency regularization (EECR). We construct the mixed sequences based on
model prediction during training, and propose to optimize over the masked
tokens under imperfect observation conditions. We also design a consistency
learning method to constrain the data distribution for the masked tokens under
different observing situations to narrow down the gap between training and
inference. Experiments on five translation benchmarks obtain average
improvements of 0.68 and 0.40 BLEU over the base models, respectively, and our
CMLMC-EECR achieves the best performance, with translation quality comparable
to that of the Transformer. The experimental results demonstrate the
effectiveness of our method.
| 2,024 |
Computation and Language
|
A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
|
Current Large Language Models (LLMs) are not only limited to some maximum
context length, but also are not able to robustly consume long inputs. To
address these limitations, we propose ReadAgent, an LLM agent system that
increases effective context length up to 20x in our experiments. Inspired by
how humans interactively read long documents, we implement ReadAgent as a
simple prompting system that uses the advanced language capabilities of LLMs to
(1) decide what content to store together in a memory episode, (2) compress
those memory episodes into short episodic memories called gist memories, and
(3) take actions to look up passages in the original text if ReadAgent needs to
remind itself of relevant details to complete a task. We evaluate ReadAgent
against baselines using retrieval methods, using the original long contexts,
and using the gist memories. These evaluations are performed on three
long-document reading comprehension tasks: QuALITY, NarrativeQA, and QMSum.
ReadAgent outperforms the baselines on all three tasks while extending the
effective context window by 3-20x.
| 2,024 |
Computation and Language
|
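A minimal sketch of the gist-memory loop described in the ReadAgent entry above, with `llm` as a placeholder for any chat or completion call; the episode size, prompts, and lookup protocol are illustrative assumptions.

```python
# ReadAgent-style sketch: (1) split a long document into episodes, (2) compress each
# episode into a short gist, (3) answer from the gists, re-reading pages on demand.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your favourite LLM API here")

def paginate(text: str, max_words: int = 600) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def gist(page: str) -> str:
    return llm(f"Summarize the following passage in 2-3 sentences:\n\n{page}")

def answer(question: str, document: str) -> str:
    pages = paginate(document)
    memory = "\n".join(f"[page {i}] {gist(p)}" for i, p in enumerate(pages))
    # Ask which pages to re-read in full, then answer with those pages expanded.
    need = llm(f"Gist memory:\n{memory}\n\nQuestion: {question}\n"
               "Which page numbers (comma-separated) should be re-read in full? Say 'none' if unnecessary.")
    lookup = [int(x) for x in need.replace(" ", "").split(",") if x.isdigit()]
    expanded = "\n\n".join(pages[i] for i in lookup if i < len(pages))
    return llm(f"Gist memory:\n{memory}\n\nRelevant full pages:\n{expanded}\n\n"
               f"Question: {question}\nAnswer:")
```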
Do LLMs Know about Hallucination? An Empirical Investigation of LLM's
Hidden States
|
Large Language Models (LLMs) can make up answers that are not real, and this
is known as hallucination. This research aims to see if, how, and to what
extent LLMs are aware of hallucination. More specifically, we check whether and
how an LLM reacts differently in its hidden states when it answers a question
right versus when it hallucinates. To do this, we introduce an experimental
framework which allows examining LLM's hidden states in different hallucination
situations. Building upon this framework, we conduct a series of experiments
with language models in the LLaMA family (Touvron et al., 2023). Our empirical
findings suggest that LLMs react differently when processing a genuine response
versus a fabricated one. We then apply various model interpretation techniques
to help understand and explain the findings better. Moreover, informed by the
empirical observations, we show the great potential of using guidance derived
from the LLM's hidden representation space to mitigate hallucination. We believe
this work provides insights into how LLMs produce hallucinated answers and how
to make them occur less often.
| 2,024 |
Computation and Language
|
Align before Attend: Aligning Visual and Textual Features for Multimodal
Hateful Content Detection
|
Multimodal hateful content detection is a challenging task that requires
complex reasoning across visual and textual modalities. Therefore, creating a
meaningful multimodal representation that effectively captures the interplay
between visual and textual features through intermediate fusion is critical.
Conventional fusion techniques are unable to attend to the modality-specific
features effectively. Moreover, most studies exclusively concentrated on
English and overlooked other low-resource languages. This paper proposes a
context-aware attention framework for multimodal hateful content detection and
assesses it for both English and non-English languages. The proposed approach
incorporates an attention layer to meaningfully align the visual and textual
features. This alignment enables selective focus on modality-specific features
before fusing them. We evaluate the proposed approach on two benchmark hateful
meme datasets, viz. MUTE (Bengali code-mixed) and MultiOFF (English).
Evaluation results demonstrate our proposed approach's effectiveness with
F1-scores of $69.7$% and $70.3$% for the MUTE and MultiOFF datasets. The scores
show approximately $2.5$% and $3.2$% performance improvement over the
state-of-the-art systems on these datasets. Our implementation is available at
https://github.com/eftekhar-hossain/Bengali-Hateful-Memes.
| 2,024 |
Computation and Language
|
QuRating: Selecting High-Quality Data for Training Language Models
|
Selecting high-quality pre-training data is important for creating capable
language models, but existing methods rely on simple heuristics. We introduce
QuRating, a method for selecting pre-training data that captures the abstract
qualities of texts which humans intuitively perceive. In this paper, we
investigate four qualities - writing style, required expertise, facts & trivia,
and educational value. We find that LLMs are able to discern these qualities
and observe that they are better at making pairwise judgments of texts than at
rating the quality of a text directly. We train a QuRater model to learn scalar
ratings from pairwise judgments, and use it to annotate a 260B training corpus
with quality ratings for each of the four criteria. In our experiments, we
select 30B tokens according to the different quality ratings and train
1.3B-parameter language models on the selected data. We find that it is
important to balance quality and diversity, as selecting only the highest-rated
documents leads to poor results. When we sample using quality ratings as logits
over documents, our models achieve lower perplexity and stronger in-context
learning performance than baselines. Beyond data selection, we use the quality
ratings to construct a training curriculum which improves performance without
changing the training dataset. We extensively analyze the quality ratings and
discuss their characteristics, biases, and wider implications.
| 2,024 |
Computation and Language
|
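A minimal sketch of the sampling step described in the QuRating entry above: rather than selecting only the highest-rated documents, quality ratings are treated as logits over the corpus and documents are sampled with a temperature, balancing quality against diversity. The ratings and temperature below are made-up values; the QuRater that produces such ratings is trained separately from pairwise judgments (e.g., with a Bradley-Terry style objective).

```python
# Sample documents with probability proportional to exp(rating / temperature).
import numpy as np

rng = np.random.default_rng(0)
ratings = np.array([2.1, 0.3, 1.7, -0.5, 0.9])   # QuRater scores for 5 documents (illustrative)
temperature = 1.0                                 # higher -> more diverse selection

probs = np.exp(ratings / temperature)
probs /= probs.sum()
selected = rng.choice(len(ratings), size=3, replace=False, p=probs)
print("selected document indices:", selected)
```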
AI Hospital: Interactive Evaluation and Collaboration of LLMs as Intern
Doctors for Clinical Diagnosis
|
The incorporation of Large Language Models (LLMs) in healthcare marks a
significant advancement. However, the application has predominantly been
limited to discriminative and question-answering tasks, which does not fully
leverage their interactive potential. To address this limitation, our paper
presents AI Hospital, a framework designed to build a real-time interactive
diagnosis environment. To simulate the procedure, we collect high-quality
medical records to create patient, examiner, and medical director agents. AI
Hospital is then utilized for the interactive evaluation and collaboration of
LLMs. Initially, we create a Multi-View Medical Evaluation (MVME) benchmark
where various LLMs serve as intern doctors for interactive diagnosis.
Subsequently, to improve diagnostic accuracy, we introduce a collaborative
mechanism that involves iterative discussions and a dispute resolution process
under the supervision of the medical director. In our experiments, we validate
the reliability of AI Hospital. The results not only demonstrate the feasibility
of applying LLMs in clinical consultation but also confirm the effectiveness of
the dispute-resolution-focused collaboration method.
| 2,024 |
Computation and Language
|
Model Compression and Efficient Inference for Large Language Models: A
Survey
|
Transformer based large language models have achieved tremendous success.
However, the significant memory and computational costs incurred during the
inference process make it challenging to deploy large models on
resource-constrained devices. In this paper, we investigate compression and
efficient inference methods for large language models from an algorithmic
perspective. Regarding taxonomy, similar to smaller models, compression and
acceleration algorithms for large language models can still be categorized into
quantization, pruning, distillation, compact architecture design, and dynamic
networks. However, large language models have two prominent characteristics
compared to smaller models: (1) Most compression algorithms require
finetuning or even retraining the model after compression. The most notable
aspect of large models is the very high cost associated with model finetuning
or training. Therefore, many algorithms for large models, such as quantization
and pruning, start to explore tuning-free algorithms. (2) Large models
emphasize versatility and generalization rather than performance on a single
task. Hence, many algorithms, such as knowledge distillation, focus on how to
preserve their versatility and generalization after compression. Since these
two characteristics were not very pronounced in early large models, we further
distinguish large language models into medium models and ``real'' large models.
Additionally, we also provide an introduction to some mature frameworks for
efficient inference of large models, which can support basic compression or
acceleration algorithms, greatly facilitating model deployment for users.
| 2,024 |
Computation and Language
|
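As an illustration of the tuning-free quantization the survey above discusses, a minimal round-to-nearest INT8 weight quantization sketch follows; the bit width and per-tensor symmetric scaling are generic choices, not any particular method covered by the survey.

```python
# Round-to-nearest symmetric INT8 quantization of a weight tensor.
import torch

def quantize_int8(w: torch.Tensor) -> tuple[torch.Tensor, float]:
    scale = w.abs().max().item() / 127.0                 # per-tensor symmetric scale
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: float) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4, 4)
q, s = quantize_int8(w)
print("max abs error:", (w - dequantize(q, s)).abs().max().item())
```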
Efficient Language Adaptive Pre-training: Extending State-of-the-Art
Large Language Models for Polish
|
This study explores the potential of fine-tuning foundational English Large
Language Models (LLMs) for generating Polish text. The first step involves
Language Adaptive Pre-training (LAPT) on a high-quality dataset of 3.11 GB,
consisting of 276 million Polish tokens. The LAPT is followed by additional
fine-tuning aimed at solving nine KLEJ challenges. Our trained model
Curie-7B-v1 not only generates Polish text with the lowest perplexity of 3.02
among decoder-based Polish models but also closely rivals the performance of
the best Polish encoder-decoder models with a less than 2% gap on 8 out of 9
tasks. Curie-7B-v1 used approximately 2-3% of a typical dataset size to learn
Polish. The LAPT was completed in less than five days using a consumer GPU,
highlighting the method's efficiency. The proficiency of the model in Polish
was significantly enhanced, demonstrating the viability of this approach for
adding new languages to existing LLMs by training just 1.2% of the model's parameters.
To contribute to the community's collaborative progress, the model has been
released as open-source.
| 2,024 |
Computation and Language
|
Grounding Language Model with Chunking-Free In-Context Retrieval
|
This paper presents a novel Chunking-Free In-Context (CFIC) retrieval
approach, specifically tailored for Retrieval-Augmented Generation (RAG)
systems. Traditional RAG systems often struggle with grounding responses using
precise evidence text due to the challenges of processing lengthy documents and
filtering out irrelevant content. Commonly employed solutions, such as document
chunking and adapting language models to handle longer contexts, have their
limitations. These methods either disrupt the semantic coherence of the text or
fail to effectively address the issues of noise and inaccuracy in evidence
retrieval.
CFIC addresses these challenges by circumventing the conventional chunking
process. It utilizes the encoded hidden states of documents for in-context
retrieval, employing auto-regressive decoding to accurately identify the
specific evidence text required for user queries, eliminating the need for
chunking. CFIC is further enhanced by incorporating two decoding strategies,
namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies
not only improve the efficiency of the retrieval process but also ensure that
the fidelity of the generated grounding text evidence is maintained. Our
evaluations of CFIC on a range of open QA datasets demonstrate its superiority
in retrieving relevant and accurate evidence, offering a significant
improvement over traditional methods. By doing away with the need for document
chunking, CFIC presents a more streamlined, effective, and efficient retrieval
solution, making it a valuable advancement in the field of RAG systems.
| 2,024 |
Computation and Language
|
NutePrune: Efficient Progressive Pruning with Numerous Teachers for
Large Language Models
|
The considerable size of Large Language Models (LLMs) presents notable
deployment challenges, particularly on resource-constrained hardware.
Structured pruning offers an effective means to compress LLMs, thereby
reducing storage costs and enhancing inference speed for more efficient
utilization. In this work, we study data-efficient and resource-efficient
structured pruning methods to obtain smaller yet still powerful models.
Knowledge Distillation is well-suited for pruning, as the intact model can
serve as an excellent teacher for pruned students. However, it becomes
challenging in the context of LLMs due to memory constraints. To address this,
we propose an efficient progressive Numerous-teacher pruning method
(NutePrune). NutePrune mitigates excessive memory costs by loading only one
intact model and integrating it with various masks and LoRA modules, enabling
it to seamlessly switch between teacher and student roles. This approach allows
us to leverage numerous teachers with varying capacities to progressively guide
the pruned model, enhancing overall performance. Extensive experiments across
various tasks demonstrate the effectiveness of NutePrune. In LLaMA-7B zero-shot
experiments, NutePrune retains 97.17% of the performance of the original model
at 20% sparsity and 95.07% at 25% sparsity.
| 2,024 |
Computation and Language
|
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating
Hallucinations in Multimodal Large Language Models
|
Multimodal large language models (MLLMs) have attracted increasing attention
in the past few years, but they may still generate descriptions that include
objects not present in the corresponding images, a phenomenon known as object
hallucination. To eliminate hallucinations, existing methods manually annotate
paired responses with and without hallucinations, and then employ various
alignment algorithms to improve the alignment capability between images and
text. However, they not only demand considerable computation resources during
the finetuning stage but also require expensive human annotation to construct
paired data needed by the alignment algorithms. To address these issues, we
borrow the idea of unlearning and propose an efficient fine-grained unlearning
framework (EFUF), which can eliminate hallucinations without the need for
paired data. Extensive experiments show that our method consistently reduces
hallucinations while preserving the generation quality with modest
computational overhead. Our code and datasets will be publicly available.
| 2,024 |
Computation and Language
|
Knowledge of Pretrained Language Models on Surface Information of Tokens
|
Do pretrained language models have knowledge regarding the surface
information of tokens? We examined the surface information stored in word or
subword embeddings acquired by pretrained language models from the perspectives
of token length, substrings, and token constitution. Additionally, we evaluated
the ability of models to generate knowledge regarding token surfaces. We
focused on 12 pretrained language models that were mainly trained on English
and Japanese corpora. Experimental results demonstrate that pretrained language
models have knowledge regarding token length and substrings but not token
constitution. Additionally, the results imply that there is a bottleneck on the
decoder side in terms of effectively utilizing acquired knowledge.
| 2,024 |
Computation and Language
|
LAPDoc: Layout-Aware Prompting for Documents
|
Recent advances in training large language models (LLMs) using massive
amounts of solely textual data lead to strong generalization across many
domains and tasks, including document-specific tasks. In contrast, there is
a trend to train multi-modal transformer architectures tailored for document
understanding that are designed specifically to fuse textual inputs with the
corresponding document layout. This involves a separate fine-tuning step for
which additional training data is required. At present, no document
transformers with comparable generalization to LLMs are available. This raises
the question of which type of model should be preferred for document
understanding tasks. In this paper we investigate the possibility of using purely text-based
LLMs for document-specific tasks by using layout enrichment. We explore drop-in
modifications and rule-based methods to enrich purely textual LLM prompts with
layout information. In our experiments we investigate the effects on the
commercial ChatGPT model and the open-source LLM Solar. We demonstrate that
using our approach both LLMs show improved performance on various standard
document benchmarks. In addition, we study the impact of noisy OCR and layout
errors, as well as the limitations of LLMs when it comes to utilizing document
layout. Our results indicate that layout enrichment can improve the performance
of purely text-based LLMs for document understanding by up to 15% compared to
just using plain document text. In conclusion, this approach should be
considered when choosing between a text-based LLM and a multi-modal
document transformer.
| 2,024 |
Computation and Language
|
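A minimal sketch of one possible rule-based layout enrichment in the spirit of the LAPDoc entry above: OCR words are placed on a character grid according to their bounding boxes so that a purely textual prompt keeps a rough 2D layout. The grid resolution and the word/box format are illustrative assumptions, not the specific rules evaluated in the paper.

```python
# Render OCR words onto a text grid so a text-only LLM sees the page layout.

def layout_to_text(words: list[dict], cols: int = 80, rows: int = 20) -> str:
    """words: [{'text': str, 'x0': float, 'y0': float}] with coordinates normalized to [0, 1]."""
    grid = [[" "] * cols for _ in range(rows)]
    for w in sorted(words, key=lambda w: (w["y0"], w["x0"])):
        r = min(int(w["y0"] * rows), rows - 1)
        c = min(int(w["x0"] * cols), cols - 1)
        for i, ch in enumerate(w["text"]):
            if c + i < cols:
                grid[r][c + i] = ch
    return "\n".join("".join(row).rstrip() for row in grid)

page = [{"text": "Invoice", "x0": 0.05, "y0": 0.05},
        {"text": "Total:", "x0": 0.60, "y0": 0.85},
        {"text": "42.00 EUR", "x0": 0.75, "y0": 0.85}]
print(layout_to_text(page))
```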
Camouflage is all you need: Evaluating and Enhancing Language Model
Robustness Against Camouflage Adversarial Attacks
|
Adversarial attacks represent a substantial challenge in Natural Language
Processing (NLP). This study undertakes a systematic exploration of this
challenge in two distinct phases: vulnerability evaluation and resilience
enhancement of Transformer-based models under adversarial attacks.
In the evaluation phase, we assess the susceptibility of three Transformer
configurations, encoder-decoder, encoder-only, and decoder-only setups, to
adversarial attacks of escalating complexity across datasets containing
offensive language and misinformation. Encoder-only models manifest a 14% and
21% performance drop in offensive language detection and misinformation
detection tasks, respectively. Decoder-only models register a 16% decrease in
both tasks, while encoder-decoder models exhibit a maximum performance drop of
14% and 26% in the respective tasks.
The resilience-enhancement phase employs adversarial training, integrating
pre-camouflaged and dynamically altered data. This approach effectively reduces
the performance drop in encoder-only models to an average of 5% in offensive
language detection and 2% in misinformation detection tasks. Decoder-only
models, occasionally exceeding original performance, limit the performance drop
to 7% and 2% in the respective tasks. Although not surpassing the original
performance, encoder-decoder models can reduce the drop to an average of 6% and
2% respectively.
Results suggest a trade-off between performance and robustness, with some
models maintaining similar performance while gaining robustness. Our study and
adversarial training techniques have been incorporated into an open-source tool
for generating camouflaged datasets. However, methodology effectiveness depends
on the specific camouflage technique and data encountered, emphasizing the need
for continued exploration.
| 2,024 |
Computation and Language
|
Generative Representational Instruction Tuning
|
All text-based language problems can be reduced to either generation or
embedding. Current models only perform well at one or the other. We introduce
generative representational instruction tuning (GRIT) whereby a large language
model is trained to handle both generative and embedding tasks by
distinguishing between them through instructions. Compared to other open
models, our resulting GritLM 7B sets a new state of the art on the Massive Text
Embedding Benchmark (MTEB) and outperforms all models up to its size on a range
of generative tasks. By scaling up further, GritLM 8x7B outperforms all open
generative language models that we tried while still being among the best
embedding models. Notably, we find that GRIT matches training on only
generative or embedding data, thus we can unify both at no performance loss.
Among other benefits, the unification via GRIT speeds up Retrieval-Augmented
Generation (RAG) by > 60% for long documents, by no longer requiring separate
retrieval and generation models. Models, code, etc. are freely available at
https://github.com/ContextualAI/gritlm.
| 2,024 |
Computation and Language
|
DE-COP: Detecting Copyrighted Content in Language Models Training Data
|
How can we detect if copyrighted content was used in the training process of
a language model, considering that the training data is typically undisclosed?
We are motivated by the premise that a language model is likely to identify
verbatim excerpts from its training text. We propose DE-COP, a method to
determine whether a piece of copyrighted content was included in training.
DE-COP's core approach is to probe an LLM with multiple-choice questions, whose
options include both verbatim text and their paraphrases. We construct
BookTection, a benchmark with excerpts from 165 books published prior and
subsequent to a model's training cutoff, along with their paraphrases. Our
experiments show that DE-COP surpasses the prior best method by 9.6% in
detection performance (AUC) on models with logits available. Moreover, DE-COP
also achieves an average accuracy of 72% for detecting suspect books on fully
black-box models where prior methods give $\approx$ 4% accuracy. Our code and
datasets are available at https://github.com/avduarte333/DE-COP_Method
| 2,024 |
Computation and Language
|
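A minimal sketch of the multiple-choice probing idea in the DE-COP entry above: the model is shown one verbatim excerpt alongside paraphrases and asked to identify the exact passage. The prompt wording and the `llm` placeholder are illustrative assumptions.

```python
# Build a DE-COP-style multiple-choice question and check whether the model picks
# the verbatim excerpt over its paraphrases.
import random

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a chat/completion API here")

def decop_question(book: str, verbatim: str, paraphrases: list[str], seed: int = 0) -> bool:
    options = paraphrases + [verbatim]
    random.Random(seed).shuffle(options)
    letters = "ABCD"[: len(options)]
    body = "\n".join(f"{l}. {o}" for l, o in zip(letters, options))
    prompt = (f"Which option is an exact excerpt from the book '{book}'?\n"
              f"{body}\nAnswer with a single letter.")
    answer = llm(prompt).strip()[:1].upper()
    return answer in letters and options[letters.index(answer)] == verbatim
```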
Enhancing Large Language Models with Pseudo- and Multisource- Knowledge
Graphs for Open-ended Question Answering
|
Mitigating the hallucinations of Large Language Models (LLMs) and enhancing
them is a crucial task. Although some existing methods employ model
self-enhancement techniques, they fall short of effectively addressing unknown
factual hallucinations. Existing Knowledge Graph (KG) enhancement approaches fail
to address generalization across different KG sources and the enhancement of
open-ended question answering simultaneously. To tackle these limitations, we
propose a framework that combines Pseudo-Graph Generation and Atomic Knowledge
Verification. Pseudo-Graph Generation enables KG-based enhancement of LLMs in an
open-ended question-answering setting, while Atomic Knowledge Verification uses
atomic-level knowledge querying and verification to achieve generalizability
across different KG sources. Compared to the baseline, this approach yields a
minimum improvement of 11.5 in ROUGE-L score for open-ended questions. For
precise questions, we observe a minimum accuracy improvement of 7.5. Moreover,
we also demonstrate that this framework generalizes across different KG
sources. In summary, our results pave the way for enhancing LLMs by
incorporating Pseudo- and Multisource-KGs, particularly in the context of
open-ended questions.
| 2,024 |
Computation and Language
|
BUSTER: a "BUSiness Transaction Entity Recognition" dataset
|
Although Natural Language Processing has seen major breakthroughs in the last
few years, transferring such advances into real-world business cases can be
challenging. One of the reasons is the gap between popular
benchmarks and actual data. Lack of supervision, unbalanced classes, noisy data
and long documents often affect real problems in vertical domains such as
finance, law and health. To support industry-oriented research, we present
BUSTER, a BUSiness Transaction Entity Recognition dataset. The dataset consists
of 3779 manually annotated documents on financial transactions. We establish
several baselines exploiting both general-purpose and domain-specific language
models. The best performing model is also used to automatically annotate 6196
documents, which we release as an additional silver corpus to BUSTER.
| 2,024 |
Computation and Language
|
A Dataset of Open-Domain Question Answering with Multiple-Span Answers
|
Multi-span answer extraction, also known as the task of multi-span question
answering (MSQA), is critical for real-world applications, as it requires
extracting multiple pieces of information from a text to answer complex
questions. Despite the active studies and rapid progress in English MSQA
research, there is a notable lack of publicly available MSQA benchmarks in
Chinese. Previous efforts for constructing MSQA datasets predominantly
emphasized entity-centric contextualization, resulting in a bias towards
collecting factoid questions and potentially overlooking questions requiring
more detailed descriptive responses. To overcome these limitations, we present
CLEAN, a comprehensive Chinese multi-span question answering dataset that
involves a wide range of open-domain subjects with a substantial number of
instances requiring descriptive answers. Additionally, we provide established
models from relevant literature as baselines for CLEAN. Experimental results
and analysis show the characteristics and challenges of the newly proposed CLEAN
dataset for the community. Our dataset, CLEAN, will be publicly released at
zhiyiluo.site/misc/clean_v1.0_sample.json.
| 2,024 |
Computation and Language
|
Paying Attention to Deflections: Mining Pragmatic Nuances for
Whataboutism Detection in Online Discourse
|
Whataboutism, a potent tool for disrupting narratives and sowing distrust,
remains under-explored in quantitative NLP research. Moreover, past work has
not distinguished its use as a strategy for misinformation and propaganda from
its use as a tool for pragmatic and semantic framing. We introduce new datasets
from Twitter and YouTube, revealing overlaps as well as distinctions between
whataboutism, propaganda, and the tu quoque fallacy. Furthermore, drawing on
recent work in linguistic semantics, we differentiate the `what about' lexical
construct from whataboutism. Our experiments bring to light unique challenges
in its accurate detection, prompting the introduction of a novel method using
attention weights for negative sample mining. We report significant
improvements of 4% and 10% over previous state-of-the-art methods in our
Twitter and YouTube collections, respectively.
| 2,024 |
Computation and Language
|
Multi-Word Tokenization for Sequence Compression
|
Large Language Models have proven highly successful at modelling a variety of
tasks. However, this comes at a steep computational cost that hinders wider
industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer
that goes beyond word boundaries by representing frequent multi-word
expressions as single tokens. MWTs produce a more compact and efficient
tokenization that yields two benefits: (1) Increase in performance due to a
greater coverage of input data given a fixed sequence length and budget; (2)
Faster and lighter inference due to the ability to reduce the sequence length
with negligible drops in performance. Our results show that MWT is more robust
across shorter sequence lengths, thus allowing for major speedups via early
sequence truncation.
| 2,023 |
Computation and Language
|
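A minimal sketch of the multi-word tokenization idea in the MWT entry above: frequent word bigrams are folded into single tokens before ordinary subword tokenization, shortening the input sequence. The joining scheme and frequency cutoff are illustrative assumptions, not the paper's tokenizer construction.

```python
# Find frequent word bigrams in a corpus and merge them into single tokens.
from collections import Counter

def top_bigrams(corpus: list[str], k: int = 100) -> set[tuple[str, str]]:
    counts = Counter()
    for doc in corpus:
        words = doc.split()
        counts.update(zip(words, words[1:]))
    return {bg for bg, _ in counts.most_common(k)}

def merge_multiwords(text: str, bigrams: set[tuple[str, str]]) -> list[str]:
    words, out, i = text.split(), [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in bigrams:
            out.append(words[i] + "_" + words[i + 1])   # one token for the multi-word expression
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

corpus = ["large language models are large language models"] * 3
bigrams = top_bigrams(corpus, k=2)
print(merge_multiwords("large language models rock", bigrams))
```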
Crafting a Good Prompt or Providing Exemplary Dialogues? A Study of
In-Context Learning for Persona-based Dialogue Generation
|
Previous in-context learning (ICL) research has focused on tasks such as
classification, machine translation, text2table, etc., while studies on whether
ICL can improve human-like dialogue generation are scarce. Our work fills this
gap by systematically investigating the ICL capabilities of large language
models (LLMs) in persona-based dialogue generation, conducting extensive
experiments on high-quality real human Chinese dialogue datasets. From
experimental results, we draw three conclusions: 1) adjusting prompt
instructions is the most direct, effective, and economical way to improve
generation quality; 2) randomly retrieving demonstrations (demos) achieves the
best results, possibly due to the greater diversity and the amount of effective
information; counter-intuitively, retrieving demos with a context identical to
the query performs the worst; 3) even when we destroy the multi-turn
associations and single-turn semantics in the demos, increasing the number of
demos still improves dialogue performance, proving that LLMs can learn from
corrupted dialogue demos. Previous explanations of the ICL mechanism, such as
$n$-gram induction head, cannot fully account for this phenomenon.
| 2,024 |
Computation and Language
|
Case Study: Testing Model Capabilities in Some Reasoning Tasks
|
Large Language Models (LLMs) excel in generating personalized content and
facilitating interactive dialogues, showcasing their remarkable aptitude for a
myriad of applications. However, their capabilities in reasoning and providing
explainable outputs, especially within the context of reasoning abilities,
remain areas for improvement. In this study, we delve into the reasoning
abilities of LLMs, highlighting the current challenges and limitations that
hinder their effectiveness in complex reasoning scenarios.
| 2,024 |
Computation and Language
|
Fast Vocabulary Transfer for Language Model Compression
|
Real-world business applications require a trade-off between language model
performance and size. We propose a new method for model compression that relies
on vocabulary transfer. We evaluate the method on various vertical domains and
downstream tasks. Our results indicate that vocabulary transfer can be
effectively used in combination with other compression techniques, yielding a
significant reduction in model size and inference time while marginally
compromising on performance.
| 2,022 |
Computation and Language
|
Bridging the Empirical-Theoretical Gap in Neural Network Formal Language
Learning Using Minimum Description Length
|
Neural networks offer good approximation to many tasks but consistently fail
to reach perfect generalization, even when theoretical work shows that such
perfect solutions can be expressed by certain architectures. Using the task of
formal language learning, we focus on one simple formal language and show that
the theoretically correct solution is in fact not an optimum of commonly used
objectives -- even with regularization techniques that according to common
wisdom should lead to simple weights and good generalization (L1, L2) or other
meta-heuristics (early-stopping, dropout). However, replacing standard targets
with the Minimum Description Length objective (MDL) results in the correct
solution being an optimum.
| 2,024 |
Computation and Language
|
Self-Augmented In-Context Learning for Unsupervised Word Translation
|
Recent work has shown that, while large language models (LLMs) demonstrate
strong word translation or bilingual lexicon induction (BLI) capabilities in
few-shot setups, they still cannot match the performance of 'traditional'
mapping-based approaches in the unsupervised scenario where no seed translation
pairs are available, especially for lower-resource languages. To address this
challenge with LLMs, we propose self-augmented in-context learning (SAIL) for
unsupervised BLI: starting from a zero-shot prompt, SAIL iteratively induces a
set of high-confidence word translation pairs for in-context learning (ICL)
from an LLM, which it then reapplies to the same LLM in the ICL fashion. Our
method shows substantial gains over zero-shot prompting of LLMs on two
established BLI benchmarks spanning a wide range of language pairs, also
outperforming mapping-based baselines across the board. In addition to
achieving state-of-the-art unsupervised BLI performance, we also conduct
comprehensive analyses on SAIL and discuss its limitations.
| 2,024 |
Computation and Language
|
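A minimal sketch of the SAIL loop from the entry above: start from zero-shot word-translation prompts, keep high-confidence induced pairs, and feed them back as in-context examples in the next round. The prompts, the stability-based confidence proxy, and the `llm` placeholder are illustrative assumptions.

```python
# Self-augmented in-context learning for unsupervised word translation (BLI).

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM call here")

def translate(word: str, examples: list[tuple[str, str]], src="German", tgt="English") -> str:
    demos = "\n".join(f"{s} -> {t}" for s, t in examples)
    return llm(f"Translate {src} words into {tgt}.\n{demos}\n{word} ->").strip()

def sail(source_words: list[str], rounds: int = 3, keep: int = 50) -> list[tuple[str, str]]:
    examples: list[tuple[str, str]] = []      # induced high-confidence lexicon
    for _ in range(rounds):
        candidates = [(w, translate(w, examples)) for w in source_words]
        # Confidence proxy: keep pairs whose translation is stable under re-prompting.
        stable = [(w, t) for w, t in candidates if translate(w, examples) == t]
        examples = stable[:keep]
    return examples
```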
RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization
Method for Alignment of Large Language Models
|
Reinforcement learning from human feedback (RLHF) has been extensively
employed to align large language models with user intent. However, proximal
policy optimization (PPO) based RLHF is occasionally unstable, requires
significant hyperparameter tuning, and is computationally expensive to
maximize the estimated reward during alignment. Recently, direct preference
optimization (DPO) has been proposed to address those challenges. However, DPO
relies on contrastive responses generated by a human annotator or an alternative
LLM, instead of the policy model, limiting the effectiveness of RLHF. In this
paper, we address both challenges by systematically combining rejection
sampling (RS) and DPO. Our proposed method, RS-DPO, starts with the
development of a supervised fine-tuned policy model (SFT). A varied set of k
responses per prompt is sampled directly from the SFT model. RS-DPO identifies
pairs of contrastive samples based on their reward distribution. Finally, we
apply DPO with the contrastive samples to align the model to human preference.
Our experiments indicate that our proposed method effectively fine-tunes LLMs
in limited-resource environments, leading to improved alignment with user
intent. Furthermore, it outperforms existing methods, including RS, PPO, and
DPO.
| 2,024 |
Computation and Language
|
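A minimal sketch of the RS-DPO pair-construction step from the entry above: sample k responses per prompt from the SFT policy, score them with a reward model, and keep (chosen, rejected) pairs whose reward gap is large enough before running standard DPO. The sampling and scoring stubs and the gap threshold are illustrative assumptions.

```python
# Build contrastive (chosen, rejected) pairs from rejection-sampled SFT responses.
from itertools import combinations

def sample_responses(prompt: str, k: int = 8) -> list[str]:
    raise NotImplementedError("sample k responses from the SFT policy")

def reward(prompt: str, response: str) -> float:
    raise NotImplementedError("score with a reward model")

def build_dpo_pairs(prompts: list[str], gap: float = 1.0) -> list[dict]:
    pairs = []
    for p in prompts:
        scored = [(r, reward(p, r)) for r in sample_responses(p)]
        for (r1, s1), (r2, s2) in combinations(scored, 2):
            if abs(s1 - s2) >= gap:                     # contrastive enough to be informative
                chosen, rejected = (r1, r2) if s1 > s2 else (r2, r1)
                pairs.append({"prompt": p, "chosen": chosen, "rejected": rejected})
    return pairs
```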
Unmemorization in Large Language Models via Self-Distillation and
Deliberate Imagination
|
While displaying impressive generation capabilities across many tasks, Large
Language Models (LLMs) still struggle with crucial issues of privacy violation
and unwanted exposure of sensitive data. This raises an essential question: how
should we prevent such undesired behavior of LLMs while maintaining their
strong generation and natural language understanding (NLU) capabilities? In
this work, we introduce a novel approach termed deliberate imagination in the
context of LLM unlearning. Instead of trying to forget memorized data, we
employ a self-distillation framework, guiding LLMs to deliberately imagine
alternative scenarios. As demonstrated in a wide range of experiments, the
proposed method not only effectively unlearns targeted text but also preserves
the LLMs' capabilities in open-ended generation tasks as well as in NLU tasks.
Our results demonstrate the usefulness of this approach across different models
and sizes, and also with parameter-efficient fine-tuning, offering a novel
pathway to addressing the challenges with private and sensitive data in LLM
applications.
| 2,024 |
Computation and Language
|
Towards Safer Large Language Models through Machine Unlearning
|
The rapid advancement of Large Language Models (LLMs) has demonstrated their
vast potential across various domains, attributed to their extensive
pretraining knowledge and exceptional generalizability. However, LLMs often
encounter challenges in generating harmful content when faced with problematic
prompts. To address this problem, existing work attempted to implement a
gradient ascent based approach to prevent LLMs from producing harmful output.
While these methods can be effective, they frequently impact the model utility
in responding to normal prompts. To address this gap, we introduce Selective
Knowledge negation Unlearning (SKU), a novel unlearning framework for LLMs,
designed to eliminate harmful knowledge while preserving utility on normal
prompts. Specifically, SKU consists of two stages: a harmful knowledge
acquisition stage and a knowledge negation stage. The first stage aims to
identify and acquire harmful knowledge within the model, whereas the second is
dedicated to removing this knowledge. SKU selectively isolates and removes
harmful knowledge in model parameters, ensuring the model's performance remains
robust on normal prompts. Our experiments conducted across various LLM
architectures demonstrate that SKU identifies a good balance point between
removing harmful information and preserving utility.
| 2,024 |
Computation and Language
|
Both Matter: Enhancing the Emotional Intelligence of Large Language
Models without Compromising the General Intelligence
|
Emotional Intelligence (EI), consisting of emotion perception, emotion
cognition, and emotion expression, plays a critical role in improving the user
interaction experience for current large language model (LLM) based
conversational general AI assistants. Previous works mainly focus on raising
their emotion perception ability via naive fine-tuning on EI-related
classification or regression tasks. However, this leads to the incomplete
enhancement of EI and catastrophic forgetting of the general intelligence (GI).
To this end, we first introduce \textsc{EiBench}, a large-scale collection of
EI-related tasks in the text-to-text formation with task instructions that
covers all three aspects of EI, which lays a solid foundation for the
comprehensive EI enhancement of LLMs. Then a novel \underline{\textbf{Mo}}dular
\underline{\textbf{E}}motional \underline{\textbf{I}}ntelligence enhancement
method (\textbf{MoEI}), consisting of Modular Parameter Expansion and
intra-inter modulation, is proposed to comprehensively enhance the EI of LLMs
without compromising their GI. Extensive experiments on two representative
LLM-based assistants, Flan-T5 and LLaMA-2-Chat, demonstrate the effectiveness
of MoEI in improving EI while maintaining GI.
| 2,024 |
Computation and Language
|
Quantized Embedding Vectors for Controllable Diffusion Language Models
|
Improving the controllability, portability, and inference speed of diffusion
language models (DLMs) is a key challenge in natural language generation. While
recent research has shown significant success in complex text generation with
language models, the memory and computational power are still very demanding
and fall short of expectations, which naturally results in low portability and
instability for the models. To mitigate these issues, numerous well-established
methods have been proposed for neural network quantization. To further enhance
their portability for independent deployment and improve their stability as
evaluated by language perplexity, we propose a novel approach called the
Quantized Embedding Controllable Diffusion Language Model (QE-CDLM). QE-CDLM
builds upon the recent successful controllable DLMs by remodeling the
task-specific embedding space via quantization. This leads to a gradient-based
controller for the generation tasks, and more stable intermediate latent
variables are obtained, which naturally brings in an accelerated convergence as
well as better controllability. Additionally, the adaptation fine-tuning method
is employed to reduce tunable weights. Experimental results on five challenging
fine-grained control tasks demonstrate that QE-CDLM compares favorably to
existing methods in terms of quality and feasibility, achieving better
perplexity and lightweight fine-tuning.
| 2,024 |
Computation and Language
|
Selective Reflection-Tuning: Student-Selected Data Recycling for LLM
Instruction-Tuning
|
Instruction tuning is critical to large language models (LLMs) for achieving
better instruction following and task adaptation capabilities but its success
heavily relies on the training data quality. Many recent methods focus on
improving the data quality but often overlook the compatibility of the data
with the student model being finetuned. This paper introduces Selective
Reflection-Tuning, a novel paradigm that synergizes a teacher LLM's reflection
and introspection for improving existing data quality with the data selection
capability of the student LLM, to automatically refine existing
instruction-tuning data. This teacher-student collaboration produces
high-quality and student-compatible instruction-response pairs, resulting in
sample-efficient instruction tuning and LLMs of superior performance. Selective
Reflection-Tuning is a data augmentation and synthesis method that generally improves
LLM finetuning and self-improvement without collecting brand-new data. We apply
our method to Alpaca and WizardLM data and achieve much stronger and top-tier
7B and 13B LLMs. Our codes, models, and data will be released at
https://github.com/tianyi-lab/Reflection_Tuning.
| 2,024 |
Computation and Language
|
TOAD: Task-Oriented Automatic Dialogs with Diverse Response Styles
|
In light of recent advances in large language models (LLMs), the expectations
for the next generation of virtual assistants include enhanced naturalness and
adaptability across diverse usage scenarios. However, the creation of
high-quality annotated data for Task-Oriented Dialog (TOD) is recognized to be
slow and costly. To address these challenges, we introduce Task-Oriented
Automatic Dialogs (TOAD), a novel and scalable TOD dataset along with its
automatic generation pipeline. The TOAD dataset simulates realistic app context
interaction and provide a variety of system response style options. Two aspects
of system response styles are considered, verbosity level and users' expression
mirroring. We benchmark TOAD on two response generation tasks and the results
show that modelling more verbose or responses without user expression mirroring
is more challenging.
| 2,024 |
Computation and Language
|
ControlLM: Crafting Diverse Personalities for Language Models
|
As language models continue to scale in size and capability, they display an
array of emerging behaviors, both beneficial and concerning. This heightens the
need to control model behaviors. We hope to be able to control the personality
traits of language models at inference time so as to obtain various character
features, on top of which the requirements of different types of tasks can be
met. Personality is a higher-level and more abstract behavioral representation
for language models. We introduce ControlLM, which leverages differential
activation patterns, derived from contrasting behavioral prompts in the model's
latent space, to influence the model's personality traits at inference. This
approach allows for the precise, real-time adjustment of model behavior. First,
we demonstrate ControlLM's capacity to elicit diverse persona behaviors without
any training, while precision control allows personality traits to closely
match average human values. Subsequently, we showcase improved reasoning and
question answering through selective amplification of beneficial attributes
like conscientiousness and friendliness. We hope that this work will inspire
research on controlling human-like behaviors of language models and provide
insights for future research. Our code is publicly available at:
https://github.com/wengsyx/ControlLM.
| 2,024 |
Computation and Language
|
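A minimal sketch of inference-time activation steering in the spirit of the ControlLM entry above: a steering vector is derived from contrasting behavioral prompts and added to a chosen layer's hidden states during generation. The model, layer index, and scale are illustrative assumptions; the repository linked above contains the authors' implementation.

```python
# Derive a differential activation vector from contrasting prompts and inject it
# into one transformer block's output via a forward hook during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL, LAYER, SCALE = "gpt2", 6, 4.0   # illustrative choices
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def mean_hidden(prompt: str) -> torch.Tensor:
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states[LAYER + 1]
    return hs.mean(dim=1)                              # (1, hidden): average over positions

steer = mean_hidden("I am an extremely conscientious, careful assistant.") - \
        mean_hidden("I am a careless, impulsive assistant.")

def hook(module, inputs, output):
    hidden = output[0] + SCALE * steer.to(output[0].dtype)   # shift the residual stream
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(hook)
ids = tok("My plan for the project is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30, do_sample=False)[0]))
handle.remove()
```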
Knowledge-Infused LLM-Powered Conversational Health Agent: A Case Study
for Diabetes Patients
|
Effective diabetes management is crucial for maintaining health in diabetic
patients. Large Language Models (LLMs) have opened new avenues for diabetes
management, with the potential to improve its efficacy. However, current LLM-based approaches
are limited by their dependence on general sources and lack of integration with
domain-specific knowledge, leading to inaccurate responses. In this paper, we
propose a knowledge-infused LLM-powered conversational health agent (CHA) for
diabetic patients. We customize and leverage the open-source openCHA framework,
enhancing our CHA with external knowledge and analytical capabilities. This
integration involves two key components: 1) incorporating the American Diabetes
Association dietary guidelines and the Nutritionix information and 2) deploying
analytical tools that enable nutritional intake calculation and comparison with
the guidelines. We compare the proposed CHA with GPT-4. Our evaluation includes
100 diabetes-related questions on daily meal choices and an assessment of the
potential risks associated with the suggested diet. Our findings show that the
proposed agent demonstrates superior performance in generating responses to
manage essential nutrients.
| 2,024 |
Computation and Language
|
Data Engineering for Scaling Language Models to 128K Context
|
We study the continual pretraining recipe for scaling language models'
context lengths to 128K, with a focus on data engineering. We hypothesize that
long context modeling, in particular \textit{the ability to utilize information
at arbitrary input locations}, is a capability that is mostly already acquired
through large-scale pretraining, and that this capability can be readily
extended to contexts substantially longer than seen during training~(e.g., 4K
to 128K) through lightweight continual pretraining on appropriate data mixture.
We investigate the \textit{quantity} and \textit{quality} of the data for
continual pretraining: (1) for quantity, we show that 500 million to 5 billion
tokens are enough to enable the model to retrieve information anywhere within
the 128K context; (2) for quality, our results equally emphasize \textit{domain
balance} and \textit{length upsampling}. Concretely, we find that naively
upsampling longer data on certain domains like books, a common practice of
existing work, gives suboptimal performance, and that a balanced domain mixture
is important. We demonstrate that continual pretraining of the full model on
1B-5B tokens of such data is an effective and affordable strategy for scaling
the context length of language models to 128K. Our recipe outperforms strong
open-source long-context models and closes the gap to frontier models like
GPT-4 128K.
| 2,024 |
Computation and Language
|
Unlocking Structure Measuring: Introducing PDD, an Automatic Metric for
Positional Discourse Coherence
|
Recent large language models (LLMs) have shown remarkable performance in
aligning generated text with user intentions across various tasks. When it
comes to long-form text generation, there has been a growing interest in
generation from a discourse coherence perspective. However, existing lexical or
semantic metrics such as BLEU, ROUGE, and BERTScore cannot effectively capture the
discourse coherence. The development of discourse-specific automatic evaluation
methods for assessing the output of LLMs warrants greater focus and
exploration. In this paper, we present a novel automatic metric designed to
quantify the discourse divergence between two long-form articles. Extensive
experiments on three datasets from representative domains demonstrate that our
metric aligns more closely with human preferences and GPT-4 coherence
evaluation, outperforming existing evaluation methods.
| 2,024 |
Computation and Language
|
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
|
Recent work has shown the immense potential of synthetically generated
datasets for training large language models (LLMs), especially for acquiring
targeted skills. Current large-scale math instruction tuning datasets such as
MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed
using outputs from closed-source LLMs with commercially restrictive licenses. A
key reason limiting the use of open-source LLMs in these data generation
pipelines has been the wide gap between the mathematical skills of the best
closed-source LLMs, such as GPT-4, and the best open-source LLMs. Building on
the recent progress in open-source LLMs, our proposed prompting novelty, and
some brute-force scaling, we construct OpenMathInstruct-1, a math instruction
tuning dataset with 1.8M problem-solution pairs. The dataset is constructed by
synthesizing code-interpreter solutions for GSM8K and MATH, two popular math
reasoning benchmarks, using the recently released and permissively licensed
Mixtral model. Our best model, OpenMath-CodeLlama-70B, trained on a subset of
OpenMathInstruct-1, achieves a score of 84.6% on GSM8K and 50.7% on MATH, which
is competitive with the best gpt-distilled models. We release our code, models,
and the OpenMathInstruct-1 dataset under a commercially permissive license.
| 2,024 |
Computation and Language
|
TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and
Agent Generation
|
The emergence of Large Language Models (LLMs) like ChatGPT has inspired the
development of LLM-based agents capable of addressing complex, real-world
tasks. However, these agents often struggle during task execution due to
methodological constraints, such as error propagation and limited adaptability.
To address this issue, we propose a multi-agent framework based on dynamic Task
Decomposition and Agent Generation (TDAG). This framework dynamically
decomposes complex tasks into smaller subtasks and assigns each to a
specifically generated subagent, thereby enhancing adaptability in diverse and
unpredictable real-world tasks. Simultaneously, existing benchmarks often lack
the granularity needed to evaluate incremental progress in complex, multi-step
tasks. In response, we introduce ItineraryBench in the context of travel
planning, featuring interconnected, progressively complex tasks with a
fine-grained evaluation system. ItineraryBench is designed to assess agents'
abilities in memory, planning, and tool usage across tasks of varying
complexity. Our experimental results reveal that TDAG significantly outperforms
established baselines, showcasing its superior adaptability and context
awareness in complex task scenarios.
| 2,024 |
Computation and Language
|
Uncertainty Decomposition and Quantification for In-Context Learning of
Large Language Models
|
In-context learning has emerged as a groundbreaking ability of Large Language
Models (LLMs) and revolutionized various fields by providing a few
task-relevant demonstrations in the prompt. However, trustworthiness issues with
LLMs' responses, such as hallucination, have also been actively discussed.
Existing works have been devoted to quantifying the uncertainty in LLMs'
responses, but they often overlook the complex nature of LLMs and the uniqueness
of in-context learning. In this work, we delve into the predictive uncertainty
of LLMs associated with in-context learning, highlighting that such
uncertainties may stem from both the provided demonstrations (aleatoric
uncertainty) and ambiguities tied to the model's configurations (epistemic
uncertainty). We propose a novel formulation and corresponding estimation
method to quantify both types of uncertainties. The proposed method offers an
unsupervised way to understand the prediction of in-context learning in a
plug-and-play fashion. Extensive experiments are conducted to demonstrate the
effectiveness of the decomposition. The code and data are available at:
\url{https://github.com/lingchen0331/UQ_ICL}.
| 2,024 |
Computation and Language
|
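One common way to make such a decomposition concrete is entropy-based: the total
predictive entropy over several in-context runs splits into an aleatoric term
(expected per-run entropy) and an epistemic term (the remainder). The Python
sketch below is this generic decomposition given for illustration, not the
paper's exact formulation, and the example inputs are invented.

    import numpy as np

    def decompose_uncertainty(prob_sets):
        """prob_sets: (num_runs, num_classes) predictive distributions, one per
        demonstration set or model configuration (illustrative setup)."""
        probs = np.asarray(prob_sets, dtype=float)
        eps = 1e-12
        mean_p = probs.mean(axis=0)
        total = -np.sum(mean_p * np.log(mean_p + eps))                     # H(E[p])
        aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # E[H(p)]
        epistemic = total - aleatoric                                      # mutual information
        return total, aleatoric, epistemic

    # Three runs with different demonstration sets on a 3-class task (toy numbers).
    print(decompose_uncertainty([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]))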
A Trembling House of Cards? Mapping Adversarial Attacks against Language
Agents
|
Language agents powered by large language models (LLMs) have seen exploding
development. Their capability of using language as a vehicle for thought and
communication lends an incredible level of flexibility and versatility. People
have quickly capitalized on this capability to connect LLMs to a wide range of
external components and environments: databases, tools, the Internet, robotic
embodiment, etc. Many believe an unprecedentedly powerful automation technology
is emerging. However, new automation technologies come with new safety risks,
especially for intricate systems like language agents. There is a surprisingly
large gap between the speed and scale of their development and deployment and
our understanding of their safety risks. Are we building a house of cards? In
this position paper, we present the first systematic effort in mapping
adversarial attacks against language agents. We first present a unified
conceptual framework for agents with three major components: Perception, Brain,
and Action. Under this framework, we present a comprehensive discussion and
propose 12 potential attack scenarios against different components of an agent,
covering different attack strategies (e.g., input manipulation, adversarial
demonstrations, jailbreaking, backdoors). We also draw connections to
successful attack strategies previously applied to LLMs. We emphasize the
urgency to gain a thorough understanding of language agent risks before their
widespread deployment.
| 2,024 |
Computation and Language
|
Chain-of-Thought Reasoning Without Prompting
|
In enhancing the reasoning capabilities of large language models (LLMs),
prior research primarily focuses on specific prompting techniques such as
few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while
effective, often involve manually intensive prompt engineering. Our study takes
a novel approach by asking: Can LLMs reason effectively without prompting? Our
findings reveal that, intriguingly, CoT reasoning paths can be elicited from
pre-trained LLMs by simply altering the \textit{decoding} process. Rather than
conventional greedy decoding, we investigate the top-$k$ alternative tokens,
uncovering that CoT paths are frequently inherent in these sequences. This
approach not only bypasses the confounders of prompting but also allows us to
assess the LLMs' \textit{intrinsic} reasoning abilities. Moreover, we observe
that the presence of a CoT in the decoding path correlates with a higher
confidence in the model's decoded answer. This confidence metric effectively
differentiates between CoT and non-CoT paths. Extensive empirical studies on
various reasoning benchmarks show that the proposed CoT-decoding substantially
outperforms the standard greedy decoding.
| 2,024 |
Computation and Language
|
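A minimal sketch of the branching idea above, using Hugging Face transformers:
branch on the top-k first tokens, greedily continue each branch, and score each
path by the average gap between the top-1 and top-2 token probabilities. Scoring
every generated token and the gpt2 model name are simplifications and
placeholders, not the paper's exact procedure.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def cot_decode(prompt, model_name="gpt2", k=5, max_new_tokens=40):
        """Branch on the top-k first tokens, greedily continue each branch, and
        score each path by the average top-1/top-2 probability margin."""
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)
        inputs = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            first_probs = torch.softmax(model(**inputs).logits[0, -1], dim=-1)
        paths = []
        for t in torch.topk(first_probs, k).indices:
            ids = torch.cat([inputs["input_ids"], t.view(1, 1)], dim=-1)
            margins = []
            for _ in range(max_new_tokens):
                with torch.no_grad():
                    probs = torch.softmax(model(ids).logits[0, -1], dim=-1)
                top2 = torch.topk(probs, 2).values
                margins.append((top2[0] - top2[1]).item())
                ids = torch.cat([ids, probs.argmax().view(1, 1)], dim=-1)
            text = tok.decode(ids[0, inputs["input_ids"].shape[1]:])
            paths.append((sum(margins) / len(margins), text))
        return max(paths)  # (confidence, continuation) of the most confident path

    print(cot_decode("Q: I have 3 apples and buy 2 more. How many apples? A:"))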
How to Discern Important Urgent News?
|
We found that a simple property of clusters in a clustered news dataset
correlates strongly with the importance and urgency of news (IUN) as assessed by
an LLM. We verified this finding across different news datasets, dataset sizes,
clustering algorithms, and embeddings. The observed correlation should allow
clustering to be used (as an alternative to an LLM) to identify the most
important and urgent news, or to filter out unimportant articles.
| 2,024 |
Computation and Language
|
The optimal placement of the head in the noun phrase. The case of
demonstrative, numeral, adjective and noun
|
The word order of a sentence is shaped by multiple principles. The principle
of syntactic dependency distance minimization is in conflict with the principle
of surprisal minimization (or predictability maximization) in single head
syntactic dependency structures: while the former predicts that the head should
be placed at the center of the linear arrangement, the latter predicts that the
head should be placed at one of the ends (either first or last). A critical
question is when surprisal minimization (or predictability maximization) should
surpass syntactic dependency distance minimization. In the context of single
head structures, it has been predicted that this is more likely to happen when
two conditions are met, i.e. (a) fewer words are involved and (b) words are
shorter. Here we test the prediction on the noun phrase when it is composed of
a demonstrative, a numeral, an adjective and a noun. We find that, across
preferred orders in languages, the noun tends to be placed at one of the ends,
confirming the theoretical prediction. We also show evidence of anti-locality
effects: syntactic dependency distances in preferred orders are longer than
expected by chance.
| 2,024 |
Computation and Language
|
Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of
Language Models
|
Prompt learning is susceptible to intrinsic bias present in pre-trained
language models (LMs), resulting in sub-optimal performance of prompt-based
zero/few-shot learning. In this work, we propose a null-input prompting method
to calibrate intrinsic bias encoded in pre-trained LMs. Different from prior
efforts that address intrinsic bias primarily for social fairness and often
involve excessive computational cost, our objective is to explore enhancing
LMs' performance in downstream zero/few-shot learning while emphasizing the
efficiency of intrinsic bias calibration. Specifically, we leverage a diverse
set of auto-selected null-meaning inputs generated from GPT-4 to prompt
pre-trained LMs for intrinsic bias probing. Utilizing the bias-reflected
probability distribution, we formulate a distribution disparity loss for bias
calibration, where we exclusively update bias parameters ($0.1\%$ of total
parameters) of LMs towards equal probability distribution. Experimental results
show that the calibration promotes an equitable starting point for LMs while
preserving language modeling abilities. Across a wide range of datasets,
including sentiment analysis and topic classification, our method significantly
improves zero/few-shot learning performance of LMs for both in-context learning
and prompt-based fine-tuning (on average $9\%$ and $2\%$, respectively).
| 2,024 |
Computation and Language
|
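A rough sketch of the null-input calibration idea for a masked LM follows. The
null strings, the prompt template, the verbalizers, and the squared-disparity
loss towards a uniform distribution are illustrative stand-ins rather than the
paper's exact recipe.

    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    model_name = "roberta-base"              # any masked LM works for the sketch
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    null_inputs = ["N/A", "[empty]", "..."]  # stand-ins for auto-selected null-meaning inputs
    label_words = ["great", "terrible"]      # verbalizers for a sentiment task
    label_ids = [tok.convert_tokens_to_ids(tok.tokenize(" " + w))[0] for w in label_words]

    # Train only bias terms (roughly the "0.1% of parameters" mentioned above).
    for name, p in model.named_parameters():
        p.requires_grad = name.endswith("bias")
    optim = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)

    for _ in range(10):
        losses = []
        for text in null_inputs:
            enc = tok(f"{text} It was {tok.mask_token}.", return_tensors="pt")
            mask_pos = int((enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0])
            probs = torch.softmax(model(**enc).logits[0, mask_pos, label_ids], dim=-1)
            uniform = torch.full_like(probs, 1.0 / len(label_ids))
            losses.append(torch.sum((probs - uniform) ** 2))  # disparity to uniform
        loss = torch.stack(losses).mean()
        optim.zero_grad()
        loss.backward()
        optim.step()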
BioMistral: A Collection of Open-Source Pretrained Large Language Models
for Medical Domains
|
Large Language Models (LLMs) have demonstrated remarkable versatility in
recent years, offering potential applications across specialized domains such
as healthcare and medicine. Despite the availability of various open-source
LLMs tailored for health contexts, adapting general-purpose LLMs to the medical
domain presents significant challenges. In this paper, we introduce BioMistral,
an open-source LLM tailored for the biomedical domain, utilizing Mistral as its
foundation model and further pre-trained on PubMed Central. We conduct a
comprehensive evaluation of BioMistral on a benchmark comprising 10 established
medical question-answering (QA) tasks in English. We also explore lightweight
models obtained through quantization and model merging approaches. Our results
demonstrate BioMistral's superior performance compared to existing open-source
medical models and its competitive edge against proprietary counterparts.
Finally, to address the limited availability of data beyond English and to
assess the multilingual generalization of medical LLMs, we automatically
translated this benchmark into 7 other languages and evaluated it. This marks the
first large-scale multilingual evaluation of LLMs in the medical domain.
Datasets, multilingual evaluation benchmarks, scripts, and all the models
obtained during our experiments are freely released.
| 2,024 |
Computation and Language
|
DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM
Workflows
|
Large language models (LLMs) have become a dominant and important tool for
NLP researchers in a wide range of tasks. Today, many researchers use LLMs in
synthetic data generation, task evaluation, fine-tuning, distillation, and
other model-in-the-loop research workflows. However, challenges arise when
using these models that stem from their scale, their closed source nature, and
the lack of standardized tooling for these new and emerging workflows. The
rapid rise to prominence of these models and these unique challenges has had
immediate adverse impacts on open science and on the reproducibility of work
that uses them. In this paper, we introduce DataDreamer, an open source Python
library that allows researchers to write simple code to implement powerful LLM
workflows. DataDreamer also helps researchers adhere to best practices that we
propose to encourage open science and reproducibility. The library and
documentation are available at https://github.com/datadreamer-dev/DataDreamer .
| 2,024 |
Computation and Language
|
Chain of Logic: Rule-Based Reasoning with Large Language Models
|
Rule-based reasoning, a fundamental type of legal reasoning, enables us to
draw conclusions by accurately applying a rule to a set of facts. We explore
causal language models as rule-based reasoners, specifically with respect to
compositional rules - rules consisting of multiple elements which form a
complex logical expression. Reasoning about compositional rules is challenging
because it requires multiple reasoning steps, and attending to the logical
relationships between elements. We introduce a new prompting method, Chain of
Logic, which elicits rule-based reasoning through decomposition (solving
elements as independent threads of logic), and recomposition (recombining these
sub-answers to resolve the underlying logical expression). This method was
inspired by the IRAC (Issue, Rule, Application, Conclusion) framework, a
sequential reasoning approach used by lawyers. We evaluate chain of logic
across eight rule-based reasoning tasks involving three distinct compositional
rules from the LegalBench benchmark and demonstrate it consistently outperforms
other prompting methods, including chain of thought and self-ask, using
open-source and commercial language models.
| 2,024 |
Computation and Language
|
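To see what decomposition and recomposition could look like in practice, here is
an illustrative construction of a chain-of-logic style prompt; the rule, facts,
element list, and wording are invented, and the paper's prompt format may differ.

    # Build a decompose-then-recompose prompt for a compositional rule.
    rule = ("A person is liable for battery if (1) they act, (2) intending to "
            "cause harmful contact, and (3) harmful contact results.")
    facts = "Dana threw a rock at Sam as a joke, and the rock bruised Sam's arm."
    elements = ["the person acted",
                "the person intended to cause harmful contact",
                "harmful contact resulted"]
    logic_expr = "liable = (1) AND (2) AND (3)"

    lines = [f"Rule: {rule}", f"Facts: {facts}", "",
             "Step 1 - resolve each element independently:"]
    for i, e in enumerate(elements, 1):
        lines.append(f"  ({i}) Do the facts show that {e}? Answer yes/no with a reason.")
    lines += ["", "Step 2 - recompose the element answers:",
              f"  Apply {logic_expr} to the answers above and state the conclusion."]
    prompt = "\n".join(lines)
    print(prompt)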
Understanding Survey Paper Taxonomy about Large Language Models via
Graph Representation Learning
|
As research on Large Language Models (LLMs) continues to grow, it is difficult
to keep up with new papers and models. To help researchers synthesize the new
research, many have written survey papers, but even those have become numerous.
In this paper, we develop a method to automatically assign survey papers to a
taxonomy. We collect the metadata of 144 LLM survey papers and explore three
paradigms to classify papers within the taxonomy. Our work indicates that
leveraging graph structure information on co-category graphs can significantly
outperform the other two paradigms, fine-tuning of pre-trained language models
and zero-shot/few-shot classification using LLMs. We find that our
model surpasses an average human recognition level and that fine-tuning LLMs
using weak labels generated by a smaller model, such as the GCN in this study,
can be more effective than using ground-truth labels, revealing the potential
of weak-to-strong generalization in the taxonomy classification task.
| 2,024 |
Computation and Language
|
Measuring and Reducing LLM Hallucination without Gold-Standard Answers
via Expertise-Weighting
|
LLM hallucination, i.e. generating factually incorrect yet seemingly
convincing answers, is currently a major threat to the trustworthiness and
reliability of LLMs. The first step towards solving this complicated problem is
to measure it. However, existing hallucination metrics require a benchmark
dataset with gold-standard answers, i.e. "best" or "correct" answers written by
humans. Such a requirement makes hallucination measurement costly and
prone to human errors. In this work, we propose Factualness Evaluations via
Weighting LLMs (FEWL), the first hallucination metric that is specifically
designed for the scenario when gold-standard answers are absent. FEWL leverages
the answers from off-the-shelf LLMs that serve as a proxy of gold-standard
answers. The key challenge is how to quantify the expertise of reference LLMs
resourcefully. We show FEWL has certain theoretical guarantees and demonstrate
empirically it gives more accurate hallucination measures than naively using
reference LLMs. We also show how to leverage FEWL to reduce hallucination
through both in-context learning and supervised finetuning. Last, we build a
large-scale benchmark dataset to facilitate LLM hallucination research.
| 2,024 |
Computation and Language
|
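A toy sketch of the expertise-weighting idea: estimate a weight per reference
LLM (here, naively, from agreement with peers on probe questions) and score a
candidate answer by weighted agreement with the references. The weighting rule
and scoring function are simplified stand-ins for FEWL's actual quantification.

    def expertise_weights(probe_answers):
        """probe_answers: dict model_name -> list of answers to probe questions."""
        models = list(probe_answers)
        weights = {}
        for m in models:
            peers = [n for n in models if n != m]
            agree = sum(
                probe_answers[m][i] == probe_answers[p][i]
                for p in peers for i in range(len(probe_answers[m]))
            )
            weights[m] = agree / (len(peers) * len(probe_answers[m]))
        total = sum(weights.values()) or 1.0
        return {m: w / total for m, w in weights.items()}

    def weighted_support(candidate, reference_answers, weights, agree_fn):
        """Higher score = less likely hallucinated, under this toy metric."""
        return sum(weights[m] * agree_fn(candidate, a)
                   for m, a in reference_answers.items())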
Pushing the Limits of Zero-shot End-to-End Speech Translation
|
Data scarcity and the modality gap between the speech and text modalities are
two major obstacles of end-to-end Speech Translation (ST) systems, thus
hindering their performance. Prior work has attempted to mitigate these
challenges by leveraging external MT data and optimizing distance metrics that
bring closer the speech-text representations. However, achieving competitive
results typically requires some ST data. For this reason, we introduce
ZeroSwot, a method for zero-shot ST that bridges the modality gap without any
paired ST data. Leveraging a novel CTC compression and Optimal Transport, we
train a speech encoder using only ASR data, to align with the representation
space of a massively multilingual MT model. The speech encoder seamlessly
integrates with the MT model at inference, enabling direct translation from
speech to text, across all languages supported by the MT model. Our experiments
show that we can effectively close the modality gap without ST data, while our
results on MuST-C and CoVoST demonstrate our method's superiority over not only
previous zero-shot models, but also supervised ones, achieving state-of-the-art
results.
| 2,024 |
Computation and Language
|
Understanding In-Context Learning with a Pelican Soup Framework
|
Many existing theoretical analyses of in-context learning for natural
language processing are based on latent variable models that leave gaps
between theory and practice. We aim to close these gaps by proposing a
theoretical framework, the Pelican Soup Framework. In this framework, we
introduce (1) the notion of a common sense knowledge base, (2) a general
formalism for natural language classification tasks, and the notion of (3)
meaning association. Under this framework, we can establish a
$\mathcal{O}(1/T)$ loss bound for in-context learning, where $T$ is the number
of example-label pairs in the demonstration. Compared with previous works, our
bound reflects the effect of the choice of verbalizers and the effect of
instruction tuning. An additional notion of \textit{atom concepts} makes it
possible for our framework to explain the generalization to tasks unseen in the
language model's training data. Finally, we propose a toy setup, Calcutec, and a
digit addition task that mimics types of distribution shifts a model needs to
overcome to perform in-context learning. We also experiment with GPT2-Large on
real-world NLP tasks. Our empirical results demonstrate the efficacy of our
framework to explain in-context learning.
| 2,024 |
Computation and Language
|
DELL: Generating Reactions and Explanations for LLM-Based Misinformation
Detection
|
Challenges in factuality and hallucination prevent large language models from
being directly employed off-the-shelf to judge the veracity of news articles,
where factual accuracy is paramount. In this work, we propose
DELL that identifies three key stages in misinformation detection where LLMs
could be incorporated as part of the pipeline: 1) LLMs could \emph{generate
news reactions} to represent diverse perspectives and simulate user-news
interaction networks; 2) LLMs could \emph{generate explanations} for proxy
tasks (e.g., sentiment, stance) to enrich the contexts of news articles and
produce experts specializing in various aspects of news understanding; 3) LLMs
could \emph{merge task-specific experts} and provide an overall prediction by
incorporating the predictions and confidence scores of varying experts.
Extensive experiments on seven datasets with three LLMs demonstrate that DELL
outperforms state-of-the-art baselines by up to 16.8\% in macro f1-score.
Further analysis reveals that the generated reactions and explanations are
greatly helpful in misinformation detection, while our proposed LLM-guided
expert merging helps produce better-calibrated predictions.
| 2,024 |
Computation and Language
|
Evaluating and Improving Continual Learning in Spoken Language
Understanding
|
Continual learning has emerged as an increasingly important challenge across
various tasks, including Spoken Language Understanding (SLU). In SLU, the
objective is to effectively handle the emergence of new concepts and evolving
environments. The evaluation of continual learning algorithms typically
involves assessing the model's stability, plasticity, and generalizability as
fundamental evaluation criteria. However, existing continual learning metrics
primarily focus on only one or two of the properties. They neglect the overall
performance across all tasks, and do not adequately disentangle the plasticity
versus stability/generalizability trade-offs within the model. In this work, we
propose an evaluation methodology that provides a unified evaluation on
stability, plasticity, and generalizability in continual learning. By employing
the proposed metric, we demonstrate how introducing various knowledge
distillations can improve different aspects of these three properties of the
SLU model. We further show that our proposed metric is more sensitive in
capturing the impact of task ordering in continual learning, making it better
suited for practical use-case scenarios.
| 2,024 |
Computation and Language
|
Smaller Language Models are capable of selecting Instruction-Tuning
Training Data for Larger Language Models
|
Instruction-tuning language models has become a crucial step in aligning them
for general use. Typically, this process involves extensive training on large
datasets, incurring high training costs. In this paper, we introduce a novel
training data selection method based on the learning percentage of the samples. We
assert that current language models possess the capability to autonomously
select high-quality training data, leading to comparable or improved
performance compared to training on the entire dataset. Our experiments span
different-sized models, revealing that this characteristic holds for models
ranging from 1B (small) to 13B (large) in size. Moreover, we demonstrate an
interesting finding that the data hardness transfers across model sizes, and a
smaller 350M model can effectively curate high-quality training data with hard
samples for a larger 13B model, resulting in an equally or superior
instruction-tuned model compared to training on the complete dataset. Using
open-sourced OPT and Llama-2 models up to 13B in size and two publicly available
instruction-tuning training datasets, evaluated by both automatic metrics and
human judges, our paper introduces a novel approach to training data selection,
showcasing a more efficient alternative.
| 2,024 |
Computation and Language
|
I Am Not Them: Fluid Identities and Persistent Out-group Bias in Large
Language Models
|
We explored cultural biases (individualism vs. collectivism) in ChatGPT across
three Western languages (i.e., English, German, and French) and three Eastern
languages (i.e., Chinese, Japanese, and Korean). When ChatGPT adopted an
individualistic persona in Western languages, its collectivism scores (i.e.,
out-group values) exhibited a more negative trend, surpassing their positive
orientation towards individualism (i.e., in-group values). Conversely, when a
collectivistic persona was assigned to ChatGPT in Eastern languages, a similar
pattern emerged with more negative responses toward individualism (i.e.,
out-group values) as compared to collectivism (i.e., in-group values). The
results indicate that when imbued with a particular social identity, ChatGPT
discerns in-group and out-group, embracing in-group values while eschewing
out-group values. Notably, the negativity towards the out-group, from which
prejudices and discrimination arise, exceeded the positivity towards the
in-group. The experiment was replicated in the political domain, and the
results remained consistent. Furthermore, this replication unveiled an
intrinsic Democratic bias in Large Language Models (LLMs), aligning with
earlier findings and providing integral insights into mitigating such bias
through prompt engineering. Extensive robustness checks were performed using
varying hyperparameter and persona setup methods, with or without social
identity labels, across other popular language models.
| 2,024 |
Computation and Language
|
Incremental Sequence Labeling: A Tale of Two Shifts
|
The incremental sequence labeling task involves continuously learning new
classes over time while retaining knowledge of the previous ones. Our
investigation identifies two significant semantic shifts: E2O (where the model
mislabels an old entity as a non-entity) and O2E (where the model labels a
non-entity or old entity as a new entity). Previous research has predominantly
focused on addressing the E2O problem, neglecting the O2E issue. This
negligence results in a model bias towards classifying new data samples as
belonging to the new class during the learning process. To address these
challenges, we propose a novel framework, Incremental Sequential Labeling
without Semantic Shifts (IS3). Motivated by the identified semantic shifts (E2O
and O2E), IS3 aims to mitigate catastrophic forgetting in models. As for the
E2O problem, we use knowledge distillation to maintain the model's
discriminative ability for old entities. Simultaneously, to tackle the O2E
problem, we alleviate the model's bias towards new entities through debiasing at
both the loss and optimization levels. Our experimental evaluation, conducted on three
datasets with various incremental settings, demonstrates the superior
performance of IS3 compared to the previous state-of-the-art method by a
significant margin.
| 2,024 |
Computation and Language
|
Steering Conversational Large Language Models for Long Emotional Support
Conversations
|
In this study, we address the challenge of consistently following emotional
support strategies in long conversations by large language models (LLMs). We
introduce the Strategy-Relevant Attention (SRA) metric, a model-agnostic
measure designed to evaluate the effectiveness of LLMs in adhering to strategic
prompts in emotional support contexts. By analyzing conversations within the
Emotional Support Conversations dataset (ESConv) using LLaMA models, we
demonstrate that SRA is significantly correlated with a model's ability to
sustain the outlined strategy throughout the interactions. Our findings reveal
that the application of SRA-informed prompts leads to enhanced strategic
adherence, resulting in conversations that more reliably exhibit the desired
emotional support strategies over longer conversations. Furthermore, we
contribute a comprehensive, multi-branch synthetic conversation dataset for
ESConv, featuring a variety of strategy continuations informed by our optimized
prompting method. The code and data are publicly available on our Github.
| 2,024 |
Computation and Language
|
Large Language Models as Zero-shot Dialogue State Tracker through
Function Calling
|
Large language models (LLMs) are increasingly prevalent in conversational
systems due to their advanced understanding and generative capabilities in
general contexts. However, their effectiveness in task-oriented dialogues
(TOD), which requires not only response generation but also effective dialogue
state tracking (DST) within specific tasks and domains, remains less
satisfactory. In this work, we propose a novel approach, FnCTOD, for solving DST
with LLMs through function calling. This method improves zero-shot DST,
allowing adaptation to diverse domains without extensive data collection or
model tuning. Our experimental results demonstrate that our approach achieves
exceptional performance with both modestly sized open-source LLMs and
proprietary LLMs: with in-context prompting it enables various 7B or 13B
parameter models to surpass the previous state-of-the-art (SOTA) achieved by
ChatGPT, and improves ChatGPT's performance beating the SOTA by 5.6% Avg. JGA.
Individual model results for GPT-3.5 and GPT-4 are boosted by 4.8% and 14%,
respectively. We also show that by fine-tuning on a small collection of diverse
task-oriented dialogues, we can equip modestly sized models, specifically a 13B
parameter LLaMA2-Chat model, with function-calling capabilities and DST
performance comparable to ChatGPT while maintaining their chat capabilities. We
plan to open-source experimental code and model.
| 2,024 |
Computation and Language
|
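The function-calling view of DST can be illustrated as follows: each domain is
exposed as a function whose arguments are the slots, and the model's predicted
"function call" doubles as the dialogue state. The schema and slot names below
are made up for illustration and are not taken from the paper.

    import json

    # A domain described as a function; its parameters are the trackable slots.
    find_restaurant = {
        "name": "find_restaurant",
        "description": "Track the user's restaurant booking constraints.",
        "parameters": {
            "type": "object",
            "properties": {
                "area":       {"type": "string", "enum": ["north", "south", "centre"]},
                "food":       {"type": "string"},
                "pricerange": {"type": "string", "enum": ["cheap", "moderate", "expensive"]},
            },
        },
    }

    # A model following this interface would emit something like:
    predicted_call = {"name": "find_restaurant",
                      "arguments": {"area": "centre", "food": "thai", "pricerange": "cheap"}}

    # The arguments map directly to slot-value pairs of the dialogue state.
    dialogue_state = {f"restaurant-{k}": v for k, v in predicted_call["arguments"].items()}
    print(json.dumps(dialogue_state, indent=2))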
Comparing Hallucination Detection Metrics for Multilingual Generation
|
While many automatic hallucination detection techniques have been proposed
for English texts, their effectiveness in multilingual contexts remains
unexplored. This paper aims to bridge the gap in understanding how these
hallucination detection metrics perform on non-English languages. We evaluate
the efficacy of various detection metrics, including lexical metrics like ROUGE
and Named Entity Overlap and Natural Language Inference (NLI)-based metrics, at
detecting hallucinations in biographical summaries in many languages; we also
evaluate how correlated these different metrics are to gauge whether they
measure the same phenomena. Our empirical analysis reveals that while lexical
metrics show limited effectiveness, NLI-based metrics perform well in
high-resource languages at the sentence level. In contrast, NLI-based metrics
often fail to detect atomic fact hallucinations. Our findings highlight
existing gaps in multilingual hallucination detection and motivate future
research to develop more robust detection methods for LLM hallucination in
other languages.
| 2,024 |
Computation and Language
|
Zero-shot sampling of adversarial entities in biomedical question
answering
|
The increasing depth of parametric domain knowledge in large language models
(LLMs) is fueling their rapid deployment in real-world applications. In
high-stakes and knowledge-intensive tasks, understanding model vulnerabilities
is essential for quantifying the trustworthiness of model predictions and
regulating their use. The recent discovery of named entities as adversarial
examples in natural language processing tasks raises questions about their
potential guises in other settings. Here, we propose a powerscaled
distance-weighted sampling scheme in embedding space to discover diverse
adversarial entities as distractors. We demonstrate its advantage over random
sampling in adversarial question answering on biomedical topics. Our approach
enables the exploration of different regions on the attack surface, which
reveals two regimes of adversarial entities that markedly differ in their
characteristics. Moreover, we show that the attacks successfully manipulate
token-wise Shapley value explanations, which become deceptive in the
adversarial setting. Our investigations illustrate the brittleness of domain
knowledge in LLMs and reveal a shortcoming of standard evaluations for
high-capacity models.
| 2,024 |
Computation and Language
|
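One plausible reading of power-scaled distance-weighted sampling is to draw
adversarial entity candidates with probability proportional to their embedding
distance from the query raised to a power; the sketch below follows that
reading, with the distance choice and normalisation being assumptions rather
than the paper's exact scheme.

    import numpy as np

    def powerscaled_sample(query_emb, entity_embs, power=4.0, size=10, rng=None):
        """Sample entity indices with probability proportional to
        distance(query, entity) ** power (illustrative scheme)."""
        rng = rng or np.random.default_rng(0)
        d = np.linalg.norm(entity_embs - query_emb, axis=1)
        w = d ** power
        p = w / w.sum()
        return rng.choice(len(entity_embs), size=size, replace=False, p=p)

    # Toy usage with random vectors standing in for biomedical entity embeddings.
    ents = np.random.default_rng(1).normal(size=(1000, 64))
    picked = powerscaled_sample(ents[0], ents, power=6.0, size=5)
    print(picked)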
Can We Verify Step by Step for Incorrect Answer Detection?
|
Chain-of-Thought (CoT) prompting has marked a significant advancement in
enhancing the reasoning capabilities of large language models (LLMs). Previous
studies have developed various extensions of CoT, which focus primarily on
enhancing end-task performance. In addition, there has been research on
assessing the quality of reasoning chains in CoT. This raises an intriguing
question: Is it possible to predict the accuracy of LLM outputs by scrutinizing
the reasoning chains they generate? To answer this research question, we
introduce a benchmark, R2PE, designed specifically to explore the relationship
between reasoning chains and performance in various reasoning tasks spanning
five different domains. This benchmark aims to measure the falsehood of the
final output of LLMs based on the reasoning steps. To make full use of
information in multiple reasoning chains, we propose the process discernibility
score (PDS) framework that beats the answer-checking baseline by a large
margin. Concretely, this resulted in an average of 5.1% increase in the F1
score across all 45 subsets within R2PE. We further demonstrate our PDS's
efficacy in advancing open-domain QA accuracy. Data and code are available at
https://github.com/XinXU-USTC/R2PE.
| 2,024 |
Computation and Language
|
Properties and Challenges of LLM-Generated Explanations
|
The self-rationalising capabilities of large language models (LLMs) have been
explored in restricted settings, using task-specific data sets. However,
current LLMs do not (only) rely on specifically annotated data; nonetheless,
they frequently explain their outputs. The properties of the generated
explanations are influenced by the pre-training corpus and by the target data
used for instruction fine-tuning. As the pre-training corpus includes a large
amount of human-written explanations "in the wild", we hypothesise that LLMs
adopt common properties of human explanations. By analysing the outputs for a
multi-domain instruction fine-tuning data set, we find that generated
explanations show selectivity and contain illustrative elements, but are less
frequently subjective or misleading. We discuss reasons and consequences of
the properties' presence or absence. In particular, we outline positive and
negative implications depending on the goals and user groups of the
self-rationalising system.
| 2,024 |
Computation and Language
|
Strong hallucinations from negation and how to fix them
|
Despite great performance on many tasks, language models (LMs) still struggle
with reasoning, sometimes providing responses that cannot possibly be true
because they stem from logical incoherence. We call such responses
\textit{strong hallucinations} and prove that they follow from an LM's
computation of its internal representations for logical operators and outputs
from those representations. Focusing on negation, we provide a novel solution
in which negation is treated not as another element of a latent representation,
but as \textit{an operation over an LM's latent representations that constrains
how they may evolve}. We show that our approach improves model performance in
cloze prompting and natural language inference tasks with negation without
requiring training on sparse negative data.
| 2,024 |
Computation and Language
|
Conversational SimulMT: Efficient Simultaneous Translation with Large
Language Models
|
Simultaneous machine translation (SimulMT) presents a challenging trade-off
between translation quality and latency. Recent studies have shown that LLMs
can achieve good performance in SimulMT tasks. However, this often comes at the
expense of high inference cost and latency. In this paper, we propose a
conversational SimulMT framework to enhance the inference efficiency of
LLM-based SimulMT through multi-turn-dialogue-based decoding. Our experiments
with Llama2-7b-chat on two SimulMT benchmarks demonstrate the superiority of
the LLM in translation quality while achieving computational latency comparable to
specialized SimulMT models.
| 2,024 |
Computation and Language
|
Disordered-DABS: A Benchmark for Dynamic Aspect-Based Summarization in
Disordered Texts
|
Aspect-based summarization has seen significant advancements, especially in
structured text. Yet, summarizing disordered, large-scale texts, like those
found in social media and customer feedback, remains a significant challenge.
Current research largely targets predefined aspects within structured texts,
neglecting the complexities of dynamic and disordered environments. Addressing
this gap, we introduce Disordered-DABS, a novel benchmark for dynamic
aspect-based summarization tailored to unstructured text. Developed by adapting
existing datasets for cost-efficiency and scalability, our comprehensive
experiments and detailed human evaluations reveal that Disordered-DABS poses
unique challenges to contemporary summarization models, including
state-of-the-art language models such as GPT-3.5.
| 2,024 |
Computation and Language
|
Neural paraphrasing by automatically crawled and aligned sentence pairs
|
Paraphrasing is the task of re-writing an input text using other words,
without altering the meaning of the original content. Conversational systems
can exploit automatic paraphrasing to make the conversation more natural, e.g.,
talking about a certain topic using different paraphrases in different time
instants. Recently, the task of automatically generating paraphrases has been
approached in the context of Natural Language Generation (NLG). While many
existing systems simply consist of rule-based models, the recent success of the
Deep Neural Networks in several NLG tasks naturally suggests the possibility of
exploiting such networks for generating paraphrases. However, the main obstacle
toward neural-network-based paraphrasing is the lack of large datasets with
aligned pairs of sentences and paraphrases, that are needed to efficiently
train the neural models. In this paper we present a method for the automatic
generation of large aligned corpora, that is based on the assumption that news
and blog websites talk about the same events using different narrative styles.
We propose a similarity search procedure with linguistic constraints that,
given a reference sentence, is able to locate the most similar candidate
paraphrases out from millions of indexed sentences. The data generation process
is evaluated in the case of the Italian language, performing experiments using
pointer-based deep neural architectures.
| 2,019 |
Computation and Language
|
InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs
ready for the Indian Legal Domain?
|
Recent advancements in language technology and Artificial Intelligence have
resulted in numerous Language Models being proposed to perform various tasks in
the legal domain ranging from predicting judgments to generating summaries.
Despite their immense potential, these models have been proven to learn and
exhibit societal biases and make unfair predictions. In this study, we explore
the ability of Large Language Models (LLMs) to perform legal tasks in the
Indian landscape when social factors are involved. We present a novel metric,
$\beta$-weighted \textit{Legal Safety Score} ($LSS_{\beta}$), which
encapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'
safety by considering their performance in the \textit{Binary Statutory
Reasoning} task and the fairness they exhibit with respect to various axes of
disparity in Indian society. Task performance and fairness scores of
LLaMA and LLaMA--2 models indicate that the proposed $LSS_{\beta}$ metric can
effectively determine the readiness of a model for safe usage in the legal
sector. We also propose finetuning pipelines, utilising specialised legal
datasets, as a potential method to mitigate bias and improve model safety. The
finetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\beta}$,
improving their usability in the Indian legal domain. Our code is publicly
released.
| 2,024 |
Computation and Language
|
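One natural form for a $\beta$-weighted combination of fairness and accuracy
(the paper's exact definition may differ) is
$LSS_{\beta} = \beta \cdot F + (1 - \beta) \cdot A$, where $A$ is accuracy on
the Binary Statutory Reasoning task and $F$ is a fairness score (for example,
one minus a disparity measure across identity groups), both scaled to $[0, 1]$.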
Direct Preference Optimization with an Offset
|
Direct preference optimization (DPO) is a successful fine-tuning strategy for
aligning large language models with human preferences without the need to train
a reward model or employ reinforcement learning. DPO, as originally formulated,
relies on binary preference data and fine-tunes a language model to increase
the likelihood of a preferred response over a dispreferred response. However,
not all preference pairs are equal: while in some cases the preferred response
is only slightly better than the dispreferred response, there can be a stronger
preference for one response when, for example, the other response includes
harmful or toxic content. In this paper, we propose a generalization of DPO,
termed DPO with an offset (ODPO), that does not treat every preference pair
equally during fine-tuning. Intuitively, ODPO requires the difference between
the likelihood of the preferred and dispreferred response to be greater than an
offset value. The offset is determined based on the extent to which one
response is preferred over another. Our experiments on various tasks suggest
that ODPO significantly outperforms DPO in aligning language models, especially
when the number of preference pairs is limited.
| 2,024 |
Computation and Language
|
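The offset idea can be written down compactly. Below is a hedged sketch of a
DPO-style loss in which the log-likelihood margin must exceed an offset before
the pair stops contributing; how the offset is derived from preference strength
is specific to the paper and is simply taken as an input here.

    import torch
    import torch.nn.functional as F

    def odpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
                  offset, beta=0.1):
        """Sketch of a DPO-style objective with an offset on the margin.
        All inputs are sequence log-probabilities (tensors of shape [batch])."""
        margin = beta * ((logp_chosen - ref_logp_chosen)
                         - (logp_rejected - ref_logp_rejected))
        # Standard DPO would use -logsigmoid(margin); the offset shifts the
        # boundary so the preferred response must win by at least `offset`.
        return -F.logsigmoid(margin - offset).mean()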
LinkNER: Linking Local Named Entity Recognition Models to Large Language
Models using Uncertainty
|
Named Entity Recognition (NER) serves as a fundamental task in natural
language understanding, bearing direct implications for web content analysis,
search engines, and information retrieval systems. Fine-tuned NER models
exhibit satisfactory performance on standard NER benchmarks. However, due to
limited fine-tuning data and a lack of knowledge, they perform poorly on unseen
entity recognition. As a result, the usability and reliability of NER models in
web-related applications are compromised. Instead, Large Language Models (LLMs)
like GPT-4 possess extensive external knowledge, but research indicates that
they lack specialization for NER tasks. Furthermore, non-public and large-scale
weights make tuning LLMs difficult. To address these challenges, we propose a
framework that combines small fine-tuned models with LLMs (LinkNER) and an
uncertainty-based linking strategy called RDC that enables fine-tuned models to
complement black-box LLMs, achieving better performance. We experiment with
both standard NER test sets and noisy social media datasets. LinkNER enhances
NER task performance, notably surpassing SOTA models in robustness tests. We
also quantitatively analyze the influence of key components like uncertainty
estimation methods, LLMs, and in-context learning on diverse NER tasks,
offering specific web-related recommendations.
| 2,024 |
Computation and Language
|
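A minimal sketch of uncertainty-based linking: spans whose local-model
confidence falls below a threshold are deferred to an LLM. The helper functions
local_ner and ask_llm are hypothetical placeholders, and the RDC strategy in the
paper is more involved than a single threshold.

    def link_ner(sentence, local_ner, ask_llm, threshold=0.7):
        """local_ner(sentence) -> iterable of (span, label, confidence);
        ask_llm(sentence, span, candidate) -> label (both are placeholders)."""
        results = []
        for span, label, confidence in local_ner(sentence):
            if confidence >= threshold:
                results.append((span, label))
            else:
                # Defer uncertain spans to the LLM, passing the local label as context.
                results.append((span, ask_llm(sentence, span, candidate=label)))
        return results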
Threads of Subtlety: Detecting Machine-Generated Texts Through Discourse
Motifs
|
With the advent of large language models (LLM), the line between
human-crafted and machine-generated texts has become increasingly blurred. This
paper delves into the inquiry of identifying discernible and unique linguistic
properties in texts that were written by humans, particularly uncovering the
underlying discourse structures of texts beyond their surface structures.
Introducing a novel methodology, we leverage hierarchical parse trees and
recursive hypergraphs to unveil distinctive discourse patterns in texts
produced by both LLMs and humans. Empirical findings demonstrate that, although
both LLMs and humans generate distinct discourse patterns influenced by
specific domains, human-written texts exhibit more structural variability,
reflecting the nuanced nature of human writing in different domains. Notably,
incorporating hierarchical discourse features enhances binary classifiers'
overall performance in distinguishing between human-written and
machine-generated texts, even on out-of-distribution and paraphrased samples.
This underscores the significance of incorporating hierarchical discourse
features in the analysis of text patterns. The code and dataset will be
available at [TBA].
| 2,024 |
Computation and Language
|
Do Llamas Work in English? On the Latent Language of Multilingual
Transformers
|
We ask whether multilingual language models trained on unbalanced,
English-dominated corpora use English as an internal pivot language -- a
question of key importance for understanding how language models function and
the origins of linguistic bias. Focusing on the Llama-2 family of transformer
models, our study uses carefully constructed non-English prompts with a unique
correct single-token continuation. From layer to layer, transformers gradually
map an input embedding of the final prompt token to an output embedding from
which next-token probabilities are computed. Tracking intermediate embeddings
through their high-dimensional space reveals three distinct phases, whereby
intermediate embeddings (1) start far away from output token embeddings; (2)
already allow for decoding a semantically correct next token in the middle
layers, but give higher probability to its version in English than in the input
language; (3) finally move into an input-language-specific region of the
embedding space. We cast these results into a conceptual model where the three
phases operate in "input space", "concept space", and "output space",
respectively. Crucially, our evidence suggests that the abstract "concept
space" lies closer to English than to other languages, which may have important
consequences regarding the biases held by multilingual language models.
| 2,024 |
Computation and Language
|
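The layer-by-layer analysis above can be approximated with a logit-lens style
probe: project each layer's final-position hidden state through the model's
output head and inspect the preferred token. The snippet below is such a probe
for a Llama-style checkpoint (the norm and head attribute names are
Llama-specific, and the prompt is an invented translation-style example), not
the paper's released code.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-2-7b-hf"   # gated model; any Llama-style checkpoint works
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)

    prompt = 'Francais: "fleur" - English: "'   # invented translation-style prompt
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)

    for layer, h in enumerate(out.hidden_states):
        last = h[0, -1]
        # .model.norm / .lm_head are Llama attribute names; other architectures
        # expose the final norm and output head under different names.
        logits = model.lm_head(model.model.norm(last))
        print(layer, repr(tok.decode(int(logits.argmax()))))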
Efficiency at Scale: Investigating the Performance of Diminutive
Language Models in Clinical Tasks
|
The entry of large language models (LLMs) into research and commercial spaces
has led to a trend of ever-larger models, with initial promises of
generalisability, followed by a widespread desire to downsize and create
specialised models without the need for complete fine-tuning, using Parameter
Efficient Fine-tuning (PEFT) methods. We present an investigation into the
suitability of different PEFT methods to clinical decision-making tasks, across
a range of model sizes, including extremely small models with as few as $25$
million parameters.
Our analysis shows that the performance of most PEFT approaches varies
significantly from one task to another, with the exception of LoRA, which
maintains relatively high performance across all model sizes and tasks,
typically approaching or matching full fine-tuned performance. The
effectiveness of PEFT methods in the clinical domain is evident, particularly
for specialised models which can operate on low-cost, in-house computing
infrastructure. The advantages of these models, in terms of speed and reduced
training costs, dramatically outweigh any performance gain from large
foundation LLMs. Furthermore, we highlight how domain-specific pre-training
interacts with PEFT methods and model size, and discuss how these factors
interplay to provide the best efficiency-performance trade-off. Full code
available at: tbd.
| 2,024 |
Computation and Language
|
Jailbreaking Proprietary Large Language Models using Word Substitution
Cipher
|
Large Language Models (LLMs) are aligned to moral and ethical guidelines but
remain susceptible to creative prompts called Jailbreak that can bypass the
alignment process. However, most jailbreaking prompts contain harmful questions
in natural language (mainly English), which can be detected by the LLMs
themselves. In this paper, we present jailbreaking prompts encoded using
cryptographic techniques. We first present a pilot study on the
state-of-the-art LLM, GPT-4, in decoding several safe sentences that have been
encrypted using various cryptographic techniques and find that a
straightforward word substitution cipher can be decoded most effectively.
Motivated by this result, we use this encoding technique for writing
jailbreaking prompts. We present a mapping of unsafe words with safe words and
ask the unsafe question using these mapped words. Experimental results show an
attack success rate (up to 59.42%) of our proposed jailbreaking approach on
state-of-the-art proprietary models including ChatGPT, GPT-4, and Gemini-Pro.
Additionally, we discuss the over-defensiveness of these models. We believe
that our work will encourage further research in making these LLMs more robust
while maintaining their decoding capabilities.
| 2,024 |
Computation and Language
|
Retrieve Only When It Needs: Adaptive Retrieval Augmentation for
Hallucination Mitigation in Large Language Models
|
Hallucinations pose a significant challenge for the practical implementation
of large language models (LLMs). The utilization of parametric knowledge in
generating factual content is constrained by the limited knowledge of LLMs,
potentially resulting in internal hallucinations. While incorporating external
information can help fill knowledge gaps, it also introduces the risk of
irrelevant information, thereby increasing the likelihood of external
hallucinations. A careful and balanced integration of the parametric knowledge
within LLMs with external information is crucial to alleviate hallucinations.
In this study, we present Rowen, a novel approach that enhances LLMs with a
selective retrieval augmentation process tailored to address hallucinated
outputs. This process is governed by a multilingual semantic-aware detection
module, which evaluates the consistency of the perturbed responses across
various languages for the same queries. Upon detecting inconsistencies
indicative of hallucinations, Rowen activates the retrieval of external
information to rectify the model outputs. Rowen adeptly harmonizes the
intrinsic parameters in LLMs with external knowledge sources, effectively
mitigating hallucinations by ensuring a balanced integration of internal
reasoning and external evidence. Through a comprehensive empirical analysis, we
demonstrate that Rowen surpasses the current state-of-the-art in both detecting
and mitigating hallucinated content within the outputs of LLMs.
| 2,024 |
Computation and Language
|
Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate
Controllable Controversial Statements
|
Making LLMs speak for different, especially minority groups of people, and
generate statements supporting their diverse or even controversial perspectives
is critical to creating an inclusive environment. However, existing LLMs lack
sufficient controllability to the stance of their generated content, which
often contains inconsistent, neutral, or biased statements. In this paper, we
improve the controllability of LLMs in generating statements supporting an
argument the user defined in the prompt. We find that multi-round debates
between two LLMs with opposite stances generate higher-quality and more salient
statements for each, which are important training data to improve the
controllability of LLMs. Motivated by this, we develop a novel debate & tuning
("DEBATunE") pipeline finetuning LLMs to generate the statements obtained via
debate. To examine DEBATunE, we curate the largest dataset of debate topics so
far, which covers 710 controversial topics and corresponding arguments for each
topic. Evaluations by the GPT-4 judge with a novel controversy controllability
metric show that LLMs' capability of expressing diverse perspectives is
significantly improved by DEBATunE. Moreover, such controllability can be
generalized to unseen topics, generating high-quality statements supporting
controversial arguments. Our codes, models, and data will be released at
https://github.com/tianyi-lab/DEBATunE.
| 2,024 |
Computation and Language
|
Enhancing Role-playing Systems through Aggressive Queries: Evaluation
and Improvement
|
The advent of Large Language Models (LLMs) has propelled dialogue generation
into new realms, particularly in the field of role-playing systems (RPSs).
While enhanced with ordinary role-relevant training dialogues, existing
LLM-based RPSs still struggle to align with roles when handling intricate and
trapped queries in boundary scenarios. In this paper, we design the Modular
ORchestrated Trap-setting Interaction SystEm (MORTISE) to benchmark and improve
the role-playing LLMs' performance. MORTISE can produce highly role-relevant
aggressive queries through the collaborative effort of multiple LLM-based
modules, and formulate corresponding responses to create an adversarial
training dataset via a consistent response generator. We select 190 Chinese and
English roles to construct aggressive queries to benchmark existing
role-playing LLMs. Through comprehensive evaluation, we find that existing
models exhibit a general deficiency in role alignment capabilities. We further
select 180 of the roles to collect an adversarial training dataset (named
RoleAD) and retain the other 10 roles for testing. Experiments on models
improved by RoleAD indicate that our adversarial dataset ameliorates this
deficiency, with the improvements demonstrating a degree of generalizability in
ordinary scenarios.
| 2,024 |
Computation and Language
|
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via
Self-Distillation
|
The upscaling of Large Language Models (LLMs) has yielded impressive advances
in natural language processing, yet it also poses significant deployment
challenges. Weight quantization has emerged as a widely embraced solution to
reduce memory and computational demands. This paper introduces BitDistiller, a
framework that synergizes Quantization-Aware Training (QAT) with Knowledge
Distillation (KD) to boost the performance of LLMs at ultra-low precisions
(sub-4-bit). Specifically, BitDistiller first incorporates a tailored
asymmetric quantization and clipping technique to maximally preserve the
fidelity of quantized weights, and then proposes a novel Confidence-Aware
Kullback-Leibler Divergence (CAKLD) objective, which is employed in a
self-distillation manner to enable faster convergence and superior model
performance. Empirical evaluations demonstrate that BitDistiller significantly
surpasses existing methods in both 3-bit and 2-bit configurations on general
language understanding and complex reasoning benchmarks. Notably, BitDistiller
is shown to be more cost-effective, demanding fewer data and training
resources. The code is available at https://github.com/DD-DuDa/BitDistiller.
| 2,024 |
Computation and Language
|
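At a high level, a confidence-aware KL objective can be read as a blend of
forward (mode-covering) and reverse (mode-seeking) KL between teacher and
student, with the blending coefficient tied to the teacher's confidence. The
sketch below follows that reading; the paper's exact CAKLD definition may differ.

    import torch
    import torch.nn.functional as F

    def cakld_loss(student_logits, teacher_logits, gamma):
        """Blend forward and reverse KL between a full-precision teacher and a
        quantized student; gamma in [0, 1] is a confidence-derived coefficient."""
        t = F.softmax(teacher_logits, dim=-1)
        s = F.softmax(student_logits, dim=-1)
        log_t = F.log_softmax(teacher_logits, dim=-1)
        log_s = F.log_softmax(student_logits, dim=-1)
        forward_kl = (t * (log_t - log_s)).sum(-1).mean()   # mode-covering term
        reverse_kl = (s * (log_s - log_t)).sum(-1).mean()   # mode-seeking term
        return gamma * forward_kl + (1.0 - gamma) * reverse_kl

    # gamma could, for instance, be the teacher's average top-1 probability
    # on held-out data (one plausible notion of "confidence").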
Generalizability of Mixture of Domain-Specific Adapters from the Lens of
Signed Weight Directions and its Application to Effective Model Pruning
|
Several parameter-efficient fine-tuning methods based on adapters have been
proposed as a streamlined approach to incorporate not only a single specialized
knowledge into existing Pre-Trained Language Models (PLMs) but also multiple of
them at once. Recent works such as AdapterSoup propose to mix not all but only
a selective sub-set of domain-specific adapters during inference via model
weight averaging to optimize performance on novel, unseen domains with
excellent computational efficiency. However, the essential generalizability of
this emerging weight-space adapter mixing mechanism on unseen, in-domain
examples remains unexplored. Thus, in this study, we conduct a comprehensive
analysis to elucidate the generalizability of domain-specific adapter mixtures
in in-domain evaluation. We also provide investigations into the inner workings
of the mixture of domain-specific adapters by analyzing their weight signs,
yielding critical analysis on the negative correlation between their fraction
of weight sign difference and their mixtures' generalizability. All source code
will be published.
| 2,024 |
Computation and Language
|
`Keep it Together': Enforcing Cohesion in Extractive Summaries by
Simulating Human Memory
|
Extractive summaries are usually presented as lists of sentences with no
expected cohesion between them. In this paper, we aim to enforce cohesion
whilst controlling for informativeness and redundancy in summaries, in cases
where the input exhibits high redundancy. The pipeline controls for redundancy
in long inputs as it is consumed, and balances informativeness and cohesion
during sentence selection. Our sentence selector simulates human memory to keep
track of topics (modeled as lexical chains), enforcing cohesive ties between
noun phrases. Across a variety of domains, our experiments revealed that it is
possible to extract highly cohesive summaries that nevertheless read as
informative to humans as summaries extracted by only accounting for
informativeness or redundancy. The extracted summaries exhibit smooth topic
transitions between sentences as signaled by lexical chains, with chains
spanning adjacent or near-adjacent sentences.
| 2,024 |
Computation and Language
|
Can Separators Improve Chain-of-Thought Prompting?
|
Chain-of-thought (CoT) prompting is a simple and effective method for
improving the reasoning capabilities of Large language models (LLMs). The basic
idea of CoT is to let LLMs break down their thought processes step-by-step by
putting exemplars in the input prompt. However, the densely structured prompt
exemplars of CoT may cause the cognitive overload of LLMs. Inspired by human
cognition, we introduce CoT-Sep, a novel method that strategically employs
separators at the end of each exemplar in CoT prompting. These separators are
designed to help the LLMs understand their thought processes better while
reasoning. It turns out that CoT-Sep significantly improves the LLMs'
performances on complex reasoning tasks (e.g., GSM-8K, AQuA, CSQA), compared
with the vanilla CoT, which does not use separators. We also study the effects
of the type and the location of separators tested on multiple LLMs, including
GPT-3.5-Turbo, GPT-4, and LLaMA-2 7B. Interestingly, the type/location of
separators should be chosen appropriately to boost the reasoning capability of
CoT.
| 2,024 |
Computation and Language
|
AbsInstruct: Eliciting Abstraction Ability from LLMs through Explanation
Tuning with Plausibility Estimation
|
Abstraction ability is crucial in human intelligence, which can also benefit
various tasks in NLP study. Existing work shows that LLMs are deficient in
abstraction ability, and how to improve it remains unexplored. In this work, we
design the framework AbsInstruct to enhance LLMs' abstraction ability through
instruction tuning. The framework builds instructions with in-depth
explanations to assist LLMs in capturing the underlying rationale of
abstraction. Meanwhile, we introduce a plausibility estimator to select
instructions that are more consistent with the abstraction knowledge of LLMs to
be aligned. Then, our framework combines abstraction instructions with
general-purpose ones to build a hybrid dataset. Extensive experiments and
analyses demonstrate that our framework can considerably enhance LLMs'
abstraction ability with strong generalization performance while maintaining
their general instruction-following abilities.
| 2,024 |
Computation and Language
|
Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning
Processes
|
Numerical reasoning is an essential ability for NLP systems to handle numeric
information. Recent research indicates that fine-tuning a small-scale model to
learn generating reasoning processes alongside answers can significantly
enhance performance. However, most current methods generate reasoning processes
with large language models (LLMs), which are "unreliable" since such processes
could contain information unrelated to the answer. To address this limitation,
we introduce Enhancing NumeriCal reasOning with Reliable procEsses (Encore),
which derives the reliable reasoning process by decomposing the answer formula,
ensuring that it fully supports the answer. Nevertheless, models could lack
enough data to learn the
reasoning process generation adequately, since our method generates only one
single reasoning process for one formula. To overcome this difficulty, we
present a series of pre-training tasks to help models learn the reasoning
process generation with synthesized data. The experiments show that Encore
yields improvement on all five experimental datasets with an average of 1.8%,
proving the effectiveness of our method.
| 2,024 |
Computation and Language
|
Fine Tuning Named Entity Extraction Models for the Fantasy Domain
|
Named Entity Recognition (NER) is a sequence classification Natural Language
Processing task where entities are identified in the text and classified into
predefined categories. It acts as a foundation for most information extraction
systems. Dungeons and Dragons (D&D) is an open-ended tabletop fantasy game with
its own diverse lore. D&D entities are domain-specific and thus go
unrecognized even by state-of-the-art off-the-shelf NER systems, since those
systems are trained on general data for pre-defined categories such as person
(PERS), location (LOC), organization (ORG), and miscellaneous (MISC).
For meaningful extraction of information from fantasy text, the entities need
to be classified into domain-specific entity categories as well as the models
be fine-tuned on a domain-relevant corpus. This work uses available lore of
monsters in the D&D domain to fine-tune Trankit, a widely used NER framework
built on a pre-trained model. After this training, the system can extract
monster names from relevant domain documents under a novel NER tag. This work
compares the accuracy of monster-name identification against the zero-shot
Trankit model and two FLAIR models. The fine-tuned Trankit model achieves an
F1 score of 87.86%, surpassing all the other models considered.
| 2,024 |
Computation and Language
|
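The fine-tuning setup for the entry above can be sketched generically. The paper fine-tunes Trankit; the stand-in below uses the Hugging Face transformers token-classification API instead, with a hypothetical MONSTER tag set and a toy annotated sentence, purely to illustrate the shape of the task.

```python
# Generic sketch of preparing a token-classification model with a novel
# MONSTER entity tag. The paper fine-tunes Trankit; this stand-in uses the
# Hugging Face transformers API instead, purely to illustrate the setup.
from transformers import AutoModelForTokenClassification, AutoTokenizer

LABELS = ["O", "B-MONSTER", "I-MONSTER"]  # novel domain-specific tag set
label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(LABELS),
    id2label=id2label,
    label2id=label2id,
)

# One toy training example in D&D-style lore (hypothetical annotation).
tokens = ["The", "beholder", "hovers", "above", "the", "ruined", "keep", "."]
tags = ["O", "B-MONSTER", "O", "O", "O", "O", "O", "O"]
encoding = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")
print(encoding["input_ids"].shape, [label2id[t] for t in tags])
```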
Improving Demonstration Diversity by Human-Free Fusing for Text-to-SQL
|
Currently, the in-context learning method based on large language models
(LLMs) has become the mainstream of text-to-SQL research. Previous works have
discussed how to select demonstrations related to the user question from a
human-labeled demonstration pool. However, human labeling suffers from the
limitations of insufficient diversity and high labeling overhead. Therefore, in
this paper, we discuss how to measure and improve the diversity of the
demonstrations for text-to-SQL. We present a metric for measuring
demonstration diversity and experimentally analyze the insufficient diversity
of existing labeled data. Based on this finding, we propose Fused, which
iteratively fuses demonstrations to build a high-diversity demonstration pool
through human-free multi-iteration synthesis, improving diversity and lowering
labeling cost. Our method achieves average improvements of 3.2% and 5.0% with
and without human labeling on several mainstream datasets, demonstrating the
effectiveness of Fused.
| 2,024 |
Computation and Language
|
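One way to make the diversity measurement in the entry above concrete is a skeleton-based metric over the demonstration pool, sketched below. The skeleton extraction and the metric definition are illustrative assumptions and may differ from the metric used in the paper.

```python
# Illustrative sketch of a demonstration-diversity metric for text-to-SQL:
# strip SQL queries down to keyword skeletons and measure how many distinct
# skeletons the pool covers. The metric in the paper may be defined differently.
import re

KEYWORDS = {"select", "from", "where", "group", "by", "having", "order",
            "limit", "join", "on", "union", "intersect", "except"}


def sql_skeleton(sql: str) -> str:
    """Keep only structural keywords, dropping table/column/value tokens."""
    tokens = re.findall(r"[A-Za-z_]+", sql.lower())
    return " ".join(t for t in tokens if t in KEYWORDS)


def skeleton_diversity(demonstrations: list[str]) -> float:
    """Fraction of distinct skeletons in the pool (1.0 = all distinct)."""
    skeletons = [sql_skeleton(d) for d in demonstrations]
    return len(set(skeletons)) / max(len(skeletons), 1)


pool = [
    "SELECT name FROM singer WHERE age > 30",
    "SELECT title FROM album WHERE year > 2000",
    "SELECT country, COUNT(*) FROM singer GROUP BY country",
]
print(skeleton_diversity(pool))  # ~0.67: two demos share the same skeleton
```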
Multi-Hop Table Retrieval for Open-Domain Text-to-SQL
|
Open-domain text-to-SQL is an important task that retrieves question-relevant
tables from massive databases and then generates SQL. However, existing
single-hop retrieval methods overlook the text-to-SQL challenge of schema
linking, i.e., aligning entities in the question with table entities, which
manifests in two issues: similar but irrelevant entities and domain-mismatched
entities. Therefore, we propose multi-hop table retrieval with rewrite and
beam search (Murre). To reduce the effect of similar but irrelevant entities,
our method focuses on unretrieved entities at each hop and considers low-ranked
tables via beam search. To alleviate the problem of domain-mismatched entities,
Murre rewrites the question based on the tables retrieved over multiple hops,
decreasing the domain gap with
relevant tables. We conduct experiments on SpiderUnion and BirdUnion+, reaching
new state-of-the-art results with an average improvement of 6.38%.
| 2,024 |
Computation and Language
|
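Below is a toy rendering of multi-hop retrieval with question rewriting and beam search, in the spirit of the entry above. The token-overlap scorer and the stub rewriter are placeholders for the dense retriever and LLM rewriter used in practice.

```python
# Illustrative sketch of multi-hop table retrieval with question rewriting and
# beam search. Retrieval scoring and rewriting are stubbed out.
from typing import Callable


def overlap_score(question: str, table_desc: str) -> float:
    q, t = set(question.lower().split()), set(table_desc.lower().split())
    return len(q & t) / max(len(q), 1)


def multi_hop_retrieve(
    question: str,
    tables: dict[str, str],               # table name -> textual description
    rewrite: Callable[[str, list[str]], str],
    hops: int = 2,
    beam_size: int = 2,
) -> list[tuple[list[str], float]]:
    """Return beams of retrieved table sequences with accumulated scores."""
    beams: list[tuple[list[str], str, float]] = [([], question, 0.0)]
    for _ in range(hops):
        candidates = []
        for retrieved, q, score in beams:
            for name, desc in tables.items():
                if name in retrieved:          # focus on unretrieved tables
                    continue
                new_retrieved = retrieved + [name]
                candidates.append(
                    (new_retrieved, rewrite(question, new_retrieved),
                     score + overlap_score(q, desc)))
        candidates.sort(key=lambda c: c[2], reverse=True)
        beams = candidates[:beam_size]
    return [(retrieved, score) for retrieved, _, score in beams]


tables = {
    "singer": "singer name age country",
    "concert": "concert name year stadium id",
    "stadium": "stadium name capacity location",
}
rewrite = lambda q, retrieved: q + " " + " ".join(retrieved)  # stub rewriter
print(multi_hop_retrieve("Which stadium hosted the most concerts?", tables, rewrite))
```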
Humans or LLMs as the Judge? A Study on Judgement Biases
|
Adopting humans and large language models (LLMs) as judges (\textit{a.k.a.}
human- and LLM-as-a-judge) for evaluating the performance of existing LLMs has
recently gained attention. Nonetheless, this approach concurrently introduces
potential biases from human and LLM judges, calling into question the
reliability of the
evaluation results. In this paper, we propose a novel framework for
investigating 5 types of biases for LLM and human judges. We curate a dataset
with 142 samples referring to the revised Bloom's Taxonomy and conduct
thousands of human and LLM evaluations. Results show that human and LLM judges
are vulnerable to perturbations to various degrees, and that even the most
cutting-edge judges possess considerable biases. We further exploit these
weaknesses to conduct attacks on LLM judges. We hope our work alerts the
community to the vulnerability of human- and LLM-as-a-judge to perturbations,
as well as to the urgency of developing robust evaluation systems.
| 2,024 |
Computation and Language
|
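A perturbation-based robustness probe of the kind described in the entry above can be sketched as follows. The perturbations and the toy judge are invented for illustration and are not the ones studied in the paper.

```python
# Minimal sketch of a perturbation test for judge robustness: score the same
# answer before and after an innocuous-looking perturbation and record the
# shift. The perturbations and the judge below are illustrative stand-ins.
from typing import Callable

PERTURBATIONS = {
    "verbosity": lambda a: a + " To elaborate, this is discussed in much greater detail below.",
    "fake_citation": lambda a: a + " (see Smith et al., 2019)",
}


def bias_probe(question: str, answer: str,
               judge: Callable[[str, str], float]) -> dict[str, float]:
    """Return the score shift each perturbation induces in the judge."""
    baseline = judge(question, answer)
    return {name: judge(question, perturb(answer)) - baseline
            for name, perturb in PERTURBATIONS.items()}


# Toy judge that (undesirably) rewards longer answers, to show the probe output.
toy_judge = lambda q, a: min(len(a) / 200, 1.0)
print(bias_probe("What causes tides?", "The gravitational pull of the moon.", toy_judge))
```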
OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via
Vision-Language Foundation Models
|
Object navigation (ObjectNav) requires an agent to navigate through unseen
environments to find queried objects. Many previous methods attempted to solve
this task by relying on supervised or reinforcement learning, where they are
trained on limited household datasets with closed-set objects. However, two key
challenges are unsolved: understanding free-form natural language instructions
that demand open-set objects, and generalizing to new environments in a
zero-shot manner. Aiming to solve the two challenges, in this paper, we propose
OpenFMNav, an Open-set Foundation Model based framework for zero-shot object
Navigation. We first unleash the reasoning abilities of large language models
(LLMs) to extract proposed objects from natural language instructions that meet
the user's demand. We then leverage the generalizability of large vision
language models (VLMs) to actively discover and detect candidate objects from
the scene, building a Versatile Semantic Score Map (VSSM). By conducting
commonsense reasoning on the VSSM, our method can perform effective
language-guided exploration and exploitation of the scene and finally reach the
goal. By leveraging the reasoning and generalizing abilities of foundation
models, our method can understand free-form human instructions and perform
effective open-set zero-shot navigation in diverse environments. Extensive
experiments on the HM3D ObjectNav benchmark show that our method surpasses all
the strong baselines on all metrics, proving our method's effectiveness.
Furthermore, we perform real robot demonstrations to validate our method's
open-set capability and generalizability to real-world environments.
| 2,024 |
Computation and Language
|
Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL
through Workflow Paradigm
|
In-context learning with large language models (LLMs) has achieved remarkable
success in the field of natural language processing, while extensive case
studies reveal that the single-step chain-of-thought prompting approach faces
challenges such as attention diffusion and inadequate performance in complex
tasks like text-to-SQL. To improve the in-context learning capabilities of
LLMs in text-to-SQL, we propose a workflow paradigm method that enhances the
attention and problem-solving scope of LLMs through decomposition.
Specifically, the information determination module for eliminating redundant
information and the brand-new prompt structure based on problem classification
greatly enhance the model's attention. Additionally, the inclusion of
self-correcting and active learning modules greatly expands the problem-solving
scope of LLMs, hence improving the upper limit of LLM-based approaches.
Extensive experiments conducted on three datasets demonstrate that our approach
outperforms other methods by a significant margin. We achieve improvements of
about 2-3 percentage points over the existing baseline on the Spider Dev and
Spider-Realistic datasets, as well as new SOTA results on the Spider Test
dataset. Our code is available on GitHub:
\url{https://github.com/FlyingFeather/DEA-SQL}.
| 2,024 |
Computation and Language
|
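The workflow paradigm in the entry above can be pictured as a pipeline of small modules. The sketch below stubs each stage with simple heuristics; in the actual system several stages would call an LLM.

```python
# Schematic sketch of a workflow-style text-to-SQL pipeline: filter schema
# information, classify the question, generate SQL, then self-correct.
# Each stage is a stub; the real system calls an LLM at several steps.
from dataclasses import dataclass


@dataclass
class WorkflowContext:
    question: str
    schema: dict[str, list[str]]                    # table -> columns
    relevant_schema: dict[str, list[str]] | None = None
    question_type: str | None = None
    sql: str | None = None


def determine_information(ctx: WorkflowContext) -> WorkflowContext:
    """Keep only tables whose columns overlap with question tokens."""
    q = set(ctx.question.lower().split())
    ctx.relevant_schema = {t: cols for t, cols in ctx.schema.items()
                           if q & {c.lower() for c in cols}}
    return ctx


def classify_question(ctx: WorkflowContext) -> WorkflowContext:
    ctx.question_type = "join" if len(ctx.relevant_schema or {}) > 1 else "single-table"
    return ctx


def generate_sql(ctx: WorkflowContext) -> WorkflowContext:
    ctx.sql = f"-- {ctx.question_type} query over {list(ctx.relevant_schema)}"
    return ctx


def self_correct(ctx: WorkflowContext) -> WorkflowContext:
    # A real module would execute the SQL and repair errors; here it is a no-op.
    return ctx


def run_workflow(question: str, schema: dict[str, list[str]]) -> str:
    ctx = WorkflowContext(question, schema)
    for stage in (determine_information, classify_question, generate_sql, self_correct):
        ctx = stage(ctx)
    return ctx.sql


print(run_workflow("List the name of every singer",
                   {"singer": ["name", "age"], "concert": ["year"]}))
```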
German Text Simplification: Finetuning Large Language Models with
Semi-Synthetic Data
|
This study pioneers the use of synthetically generated data for training
generative models in document-level text simplification of German texts. We
demonstrate the effectiveness of our approach with real-world online texts.
Addressing the challenge of data scarcity in language simplification, we
crawled professionally simplified German texts and synthesized a corpus using
GPT-4. We finetune Large Language Models with up to 13 billion parameters on
this data and evaluate their performance. This paper employs various
methodologies for evaluation and demonstrates the limitations of currently used
rule-based metrics. Both automatic and manual evaluations reveal that our
models can significantly simplify real-world online texts, indicating the
potential of synthetic data in improving text simplification.
| 2,024 |
Computation and Language
|
LongHeads: Multi-Head Attention is Secretly a Long Context Processor
|
Large language models (LLMs) have achieved impressive performance in numerous
domains but often struggle to process lengthy inputs effectively and
efficiently due to limited length generalization and attention's quadratic
computational demands. Many sought to mitigate this by restricting the
attention window within the pre-trained length. However, these methods
introduce new issues such as ignoring the middle context and requiring
additional training. To address these problems, we propose LongHeads, a
training-free framework that enhances LLMs' long-context ability by unlocking
multi-head attention's untapped potential. Instead of allowing each head to
attend to the full sentence, which struggles with generalizing to longer
sequences due to out-of-distribution (OOD) issues, we allow each head to
process in-distribution length by selecting and attending to important context
chunks. To this end, we propose a chunk selection strategy that relies on the
inherent correlation between the query and the key representations, efficiently
distributing context chunks to different heads. In this way, each head ensures
it can effectively process attended tokens within the trained length, while
different heads in different layers can collectively process longer contexts.
LongHeads works efficiently in linear time and fits seamlessly with many LLMs that
use relative positional encoding. Our extensive empirical analyses verify
LongHeads's efficacy in extending the usable context window for existing
models, showcasing its promise for enhancing long text understanding.
| 2,024 |
Computation and Language
|
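The chunk-selection step described in the entry above can be illustrated in isolation: represent each chunk by a summary of its keys, score chunks against the current query, and let each head keep only its top-k chunks. The shapes, the mean-pooled chunk representation, and the dot-product scoring are simplifications of what happens inside real attention layers.

```python
# Toy sketch of per-head chunk selection: represent each context chunk by the
# mean of its key vectors, score chunks against the current query, and let each
# head attend only to its top-k chunks.
import numpy as np


def select_chunks_per_head(queries, keys, chunk_size=4, k=2):
    """queries: (heads, dim); keys: (heads, seq_len, dim) -> chunk ids per head."""
    heads, seq_len, dim = keys.shape
    n_chunks = seq_len // chunk_size
    chunk_reprs = (keys[:, : n_chunks * chunk_size]
                   .reshape(heads, n_chunks, chunk_size, dim)
                   .mean(axis=2))
    scores = np.einsum("hd,hcd->hc", queries, chunk_reprs)  # (heads, n_chunks)
    return [list(np.argsort(-scores[h])[:k]) for h in range(heads)]


rng = np.random.default_rng(0)
q = rng.standard_normal((8, 64))            # 8 heads, head dim 64
kcache = rng.standard_normal((8, 32, 64))   # 32 cached tokens
print(select_chunks_per_head(q, kcache))    # each head picks its own 2 chunks of 4 tokens
```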
Opening the Black Box of Large Language Models: Two Views on Holistic
Interpretability
|
As large language models (LLMs) grow more powerful, concerns around potential
harms like toxicity, unfairness, and hallucination threaten user trust.
Ensuring beneficial alignment of LLMs with human values through model alignment
is thus critical yet challenging, requiring a deeper understanding of LLM
behaviors and mechanisms. We propose opening the black box of LLMs through a
framework of holistic interpretability encompassing complementary bottom-up and
top-down perspectives. The bottom-up view, enabled by mechanistic
interpretability, focuses on component functionalities and training dynamics.
The top-down view utilizes representation engineering to analyze behaviors
through hidden representations. In this paper, we review the landscape around
mechanistic interpretability and representation engineering, summarizing
approaches, discussing limitations and applications, and outlining future
challenges in using these techniques to achieve ethical, honest, and reliable
reasoning aligned with human values.
| 2,024 |
Computation and Language
|
Multi-Cultural Commonsense Knowledge Distillation
|
Despite recent progress, large language models (LLMs) still face the
challenge of appropriately reacting to the intricacies of social and cultural
conventions. This paper presents MANGO, a methodology for distilling
high-accuracy, high-recall assertions of cultural knowledge. We judiciously and
iteratively prompt LLMs for this purpose from two entry points, concepts and
cultures. Outputs are consolidated via clustering and generative summarization.
Running the MANGO method with GPT-3.5 as the underlying LLM yields 167K
high-accuracy assertions for 30K concepts and 11K cultures, surpassing prior
resources by a large margin. For extrinsic evaluation, we explore augmenting
dialogue systems with cultural knowledge assertions. We find that adding
knowledge from MANGO improves the overall quality, specificity, and cultural
sensitivity of dialogue responses, as judged by human annotators. Data and code
are available for download.
| 2,024 |
Computation and Language
|
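A schematic version of the two-entry-point collection loop from the entry above, with a trivial de-duplication step standing in for clustering and generative summarization. The prompt templates and the toy LLM are assumptions for illustration.

```python
# Illustrative sketch of the two-entry-point prompting loop used to collect
# cultural-knowledge assertions, with a trivial dedup step standing in for the
# clustering-and-summarization consolidation. Prompts and the LLM call are stubs.
from typing import Callable


def collect_assertions(
    concepts: list[str],
    cultures: list[str],
    llm: Callable[[str], list[str]],
) -> list[str]:
    """Prompt from both entry points and consolidate overlapping assertions."""
    raw: list[str] = []
    for concept in concepts:                       # concept entry point
        raw += llm(f"List cultural conventions around '{concept}' across cultures.")
    for culture in cultures:                       # culture entry point
        raw += llm(f"List notable conventions of {culture} culture.")
    seen, consolidated = set(), []
    for assertion in raw:                          # stand-in for clustering + summarization
        key = assertion.lower().strip(" .")
        if key not in seen:
            seen.add(key)
            consolidated.append(assertion)
    return consolidated


# Toy LLM returning canned assertions, to show the consolidation behaviour.
toy_llm = lambda prompt: ["Tipping is uncommon in Japan.", "Tipping is uncommon in Japan. "]
print(collect_assertions(["tipping"], ["Japanese"], toy_llm))
```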
MultiPoT: Multilingual Program of Thoughts Harnesses Multiple
Programming Languages
|
Program of Thoughts (PoT) is an approach characterized by its executable
intermediate steps, which ensure the accuracy of the numerical calculations in
the reasoning process. Currently, PoT primarily uses Python. However, relying
solely on a single language may result in suboptimal solutions and overlook the
potential benefits of other programming languages. In this paper, we conduct
comprehensive experiments on the programming languages used in PoT and find
that no single language consistently delivers optimal performance across all
tasks and models. The effectiveness of each language varies depending on the
specific scenarios. Inspired by this, we propose a task- and model-agnostic
approach called MultiPoT, which harnesses the strengths and diversity of
multiple programming languages. Experimental results reveal that it
significantly outperforms Python
Self-Consistency. Furthermore, it achieves comparable or superior performance
compared to the best monolingual PoT in almost all tasks across all models. In
particular, MultiPoT achieves more than 4.6\% improvement on average on both
Starcoder and ChatGPT (gpt-3.5-turbo).
| 2,024 |
Computation and Language
|
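The cross-language aggregation in the entry above can be shown with a small voting routine. Program generation and non-Python execution are stubbed; only the answer-level voting logic is illustrated.

```python
# Minimal sketch of answer-level voting across programs written in several
# languages, in the spirit of MultiPoT. Generation and execution are stubbed.
from collections import Counter
from typing import Callable


def multipot_vote(question: str,
                  generators: dict[str, Callable[[str], str]],
                  executors: dict[str, Callable[[str], str]]) -> str:
    """Generate one program per language, execute it, and majority-vote the answers."""
    answers = []
    for lang, generate in generators.items():
        program = generate(question)
        try:
            answers.append(executors[lang](program))
        except Exception:
            continue                      # failed programs simply don't vote
    return Counter(answers).most_common(1)[0][0]


# Toy generators/executors that already "know" the answer, to show the flow.
question = "What is 17 * 24?"
generators = {l: (lambda q, l=l: f"# {l} program for: {q}")
              for l in ("python", "r", "javascript")}
executors = {l: (lambda program: "408") for l in ("python", "r", "javascript")}
print(multipot_vote(question, generators, executors))  # "408"
```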
Exploring Precision and Recall to assess the quality and diversity of
LLMs
|
This paper introduces a novel evaluation framework for Large Language Models
(LLMs) such as Llama-2 and Mistral, focusing on the adaptation of Precision and
Recall metrics from image generation to text generation. This approach allows
for a nuanced assessment of the quality and diversity of generated text without
the need for aligned corpora. By conducting a comprehensive evaluation of
state-of-the-art language models, the study reveals significant insights into
their performance on open-ended generation tasks, which are not adequately
captured by traditional benchmarks. The findings highlight a trade-off between
the quality and diversity of generated samples, particularly when models are
fine-tuned with human feedback. This work extends the toolkit for
distribution-based NLP evaluation, offering insights into the practical
capabilities and challenges faced by current LLMs in generating diverse and
high-quality text.
| 2,024 |
Computation and Language
|
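The adapted Precision/Recall computation described in the entry above typically relies on k-nearest-neighbour manifolds over embeddings; a compact sketch is below. The random embeddings are placeholders for sentence-encoder outputs, and the specific k-NN formulation is an assumption modelled on the image-generation literature rather than the paper's exact estimator.

```python
# Sketch of k-NN-based Precision/Recall over text embeddings. Embeddings here
# are random placeholders; in practice they come from a sentence encoder.
import numpy as np


def knn_radii(x: np.ndarray, k: int) -> np.ndarray:
    """Distance from each point to its k-th nearest neighbour within x."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]                # column 0 is the point itself


def coverage(samples: np.ndarray, reference: np.ndarray, k: int = 3) -> float:
    """Fraction of `samples` inside the k-NN manifold of `reference`."""
    radii = knn_radii(reference, k)
    d = np.linalg.norm(samples[:, None, :] - reference[None, :, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))


rng = np.random.default_rng(0)
real = rng.standard_normal((200, 32))          # embeddings of human-written text
generated = rng.standard_normal((200, 32))     # embeddings of model outputs
precision = coverage(generated, real)          # quality: generations near real data
recall = coverage(real, generated)             # diversity: real data covered by generations
print(precision, recall)
```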