Titles | Abstracts | Years | Categories
---|---|---|---|
Extract, Denoise and Enforce: Evaluating and Improving Concept
Preservation for Text-to-Text Generation
|
Prior studies on text-to-text generation typically assume that the model
could figure out what to attend to in the input and what to include in the
output via seq2seq learning, with only the parallel training data and no
additional guidance. However, it remains unclear whether current models can
preserve important concepts in the source input, as seq2seq learning places no
explicit focus on the concepts and commonly used evaluation metrics treat
concepts as equally important as other tokens. In this paper, we present a
systematic analysis that studies whether current seq2seq models, especially
pre-trained language models, are good enough for preserving important input
concepts and to what extent explicitly guiding generation with the concepts as
lexical constraints is beneficial. We answer the above questions by conducting
extensive analytical experiments on four representative text-to-text generation
tasks. Based on the observations, we then propose a simple yet effective
framework to automatically extract, denoise, and enforce important input
concepts as lexical constraints. This new method performs comparably or better
than its unconstrained counterpart on automatic metrics, demonstrates higher
coverage for concept preservation, and receives better ratings in the human
evaluation. Our code is available at https://github.com/morningmoni/EDE.
| 2021 |
Computation and Language
|
AmericasNLI: Evaluating Zero-shot Natural Language Understanding of
Pretrained Multilingual Models in Truly Low-resource Languages
|
Pretrained multilingual models are able to perform cross-lingual transfer in
a zero-shot setting, even for languages unseen during pretraining. However,
prior work evaluating performance on unseen languages has largely been limited
to low-level, syntactic tasks, and it remains unclear if zero-shot learning of
high-level, semantic tasks is possible for unseen languages. To explore this
question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018)
to 10 indigenous languages of the Americas. We conduct experiments with XLM-R,
testing multiple zero-shot and translation-based approaches. Additionally, we
explore model adaptation via continued pretraining and provide an analysis of
the dataset by considering hypothesis-only models. We find that XLM-R's
zero-shot performance is poor for all 10 languages, with an average performance
of 38.62%. Continued pretraining offers improvements, with an average accuracy
of 44.05%. Surprisingly, training on poorly translated data by far outperforms
all other methods with an accuracy of 48.72%.
| 2022 |
Computation and Language
|
GooAQ: Open Question Answering with Diverse Answer Types
|
While day-to-day questions come with a variety of answer types, the current
question-answering (QA) literature has failed to adequately address the answer
diversity of questions. To this end, we present GooAQ, a large-scale dataset
with a variety of answer types. This dataset contains over 5 million questions
and 3 million answers collected from Google. GooAQ questions are collected
semi-automatically from the Google search engine using its autocomplete
feature. This results in naturalistic questions of practical interest that are
nonetheless short and expressed using simple language. GooAQ answers are mined
from Google's responses to our collected questions, specifically from the
answer boxes in the search results. This yields a rich space of answer types,
containing both textual answers (short and long) as well as more structured
ones such as collections. We benchmark T5 models on GooAQ and observe that: (a)
in line with recent work, LMs' strong performance on GooAQ's short-answer
questions heavily benefits from annotated data; however, (b) their quality in
generating coherent and accurate responses for questions requiring long
responses (such as 'how' and 'why' questions) is less reliant on observing
annotated data and mainly supported by their pre-training. We release GooAQ to
facilitate further research on improving QA with diverse response types.
| 2021 |
Computation and Language
|
Revealing Persona Biases in Dialogue Systems
|
Dialogue systems in the form of chatbots and personal assistants are being
increasingly integrated into people's lives. Modern dialogue systems may
consider adopting anthropomorphic personas, mimicking societal demographic
groups to appear more approachable and trustworthy to users. However, the
adoption of a persona can result in the adoption of biases. In this paper, we
present the first large-scale study on persona biases in dialogue systems and
conduct analyses on personas of different social classes, sexual orientations,
races, and genders. We define persona biases as harmful differences in
responses (e.g., varying levels of offensiveness, agreement with harmful
statements) generated from adopting different demographic personas.
Furthermore, we introduce an open-source framework, UnitPersonaBias, to explore
and aggregate persona biases in dialogue systems. By analyzing the Blender and
DialoGPT dialogue systems, we observe that adopting personas can actually
decrease harmful responses, compared to not using any personas. Additionally,
we find that persona choices can affect the degree of harms in generated
responses and thus should be systematically evaluated before deployment. We
also analyze how personas can result in different amounts of harm towards
specific demographics.
| 2021 |
Computation and Language
|
Unsupervised Deep Keyphrase Generation
|
Keyphrase generation aims to summarize long documents with a collection of
salient phrases. Deep neural models have demonstrated remarkable success in
this task, capable of predicting keyphrases that are even absent from a
document. However, such abstractiveness is acquired at the expense of a
substantial amount of annotated data. In this paper, we present a novel method
for keyphrase generation, AutoKeyGen, without the supervision of any human
annotation. Motivated by the observation that an absent keyphrase in one
document can appear in other places, in whole or in part, we first construct a
phrase bank by pooling all phrases in a corpus. With this phrase bank, we then
draw candidate absent keyphrases for each document through a partial matching
process. To rank both types of candidates, we combine their lexical- and
semantic-level similarities to the input document. Moreover, we utilize these
top-ranked candidates to train a deep generative model for more absent
keyphrases. Extensive experiments demonstrate that AutoKeyGen outperforms all
unsupervised baselines and can even beat strong supervised methods in certain
cases.
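
To make the candidate-drawing step concrete, here is a minimal Python sketch of a phrase bank with partial matching. The matching rule and frequency-based ranking are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch of AutoKeyGen's candidate-drawing step: pool phrases
# from a corpus into a phrase bank, then treat a bank phrase as an "absent"
# candidate for a document when all of its words occur in the document
# (a partial-match heuristic; the paper's exact rule may differ).
from collections import Counter
import re

def build_phrase_bank(corpus_phrases):
    """Pool every phrase seen anywhere in the corpus, with frequencies."""
    return Counter(p.lower() for doc in corpus_phrases for p in doc)

def absent_candidates(document, phrase_bank):
    """Bank phrases that do not appear verbatim in the document but whose
    words all occur in it -- candidate *absent* keyphrases."""
    doc = document.lower()
    doc_words = set(re.findall(r"\w+", doc))
    hits = [(phrase, freq) for phrase, freq in phrase_bank.items()
            if phrase not in doc and all(w in doc_words for w in phrase.split())]
    return sorted(hits, key=lambda x: -x[1])

bank = build_phrase_bank([["neural networks", "keyphrase generation"],
                          ["generation of text", "neural models"]])
print(absent_candidates("Keyphrase generation with deep neural models of text.", bank))
```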
| 2021 |
Computation and Language
|
Can NLI Models Verify QA Systems' Predictions?
|
To build robust question answering systems, we need the ability to verify
whether answers to questions are truly correct, not just "good enough" in the
context of imperfect QA datasets. We explore the use of natural language
inference (NLI) as a way to achieve this goal, as NLI inherently requires the
premise (document context) to contain all necessary information to support the
hypothesis (proposed answer to the question). We leverage large pre-trained
models and recent prior datasets to construct powerful question converter and
decontextualization modules, which can reformulate QA instances as
premise-hypothesis pairs with very high reliability. Then, by combining
standard NLI datasets with NLI examples automatically derived from QA training
data, we can train NLI models to judge the correctness of QA models' proposed
answers. We show that our NLI approach can generally improve the confidence
estimation of a QA model across different domains, evaluated in a selective QA
setting. Careful manual analysis over the predictions of our NLI model shows
that it can further identify cases where the QA model produces the right answer
for the wrong reason, or where the answer cannot be verified as addressing all
aspects of the question.
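
A schematic of this verification loop might look like the following. The question converter and NLI scorer below are toy stand-ins for the learned modules the paper describes:

```python
# Hedged sketch of the QA-verification loop. convert_question and the NLI
# scorer are hypothetical stand-ins for the paper's trained question
# converter, decontextualization, and NLI components.
def convert_question(question: str, answer: str) -> str:
    """Toy question converter: turn (question, answer) into a declarative
    hypothesis. A learned seq2seq model does this in the real system."""
    return question.rstrip("?").replace("What is", answer + " is", 1) + "."

def verify(context: str, question: str, answer: str,
           nli_entailment_prob, threshold: float = 0.5) -> bool:
    """Accept the QA model's answer only if the context entails the
    answer-bearing hypothesis according to an NLI model."""
    hypothesis = convert_question(question, answer)
    return nli_entailment_prob(premise=context, hypothesis=hypothesis) >= threshold

# Usage with a stubbed NLI scorer:
fake_nli = lambda premise, hypothesis: 0.9 if hypothesis.split()[0] in premise else 0.1
print(verify("Paris is the capital of France.", "What is the capital of France?",
             "Paris", fake_nli))
```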
| 2021 |
Computation and Language
|
Learning with Instance Bundles for Reading Comprehension
|
When training most modern reading comprehension models, all the questions
associated with a context are treated as being independent from each other.
However, closely related questions and their corresponding answers are not
independent, and leveraging these relationships could provide a strong
supervision signal to a model. Drawing on ideas from contrastive estimation, we
introduce several new supervision techniques that compare question-answer
scores across multiple related instances. Specifically, we normalize these
scores across various neighborhoods of closely contrasting questions and/or
answers, adding another cross entropy loss term that is used in addition to
traditional maximum likelihood estimation. Our techniques require bundles of
related question-answer pairs, which we can either mine from within existing
data or create using various automated heuristics. We empirically demonstrate
the effectiveness of training with instance bundles on two datasets -- HotpotQA
and ROPES -- showing up to 11% absolute gains in accuracy.
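
The contrastive term can be pictured as a softmax over each bundle's scores followed by cross-entropy, as in this minimal PyTorch sketch (the bundle construction and any loss weighting are assumptions, not the authors' code):

```python
# Minimal sketch of a bundle-level contrastive term: question-answer scores
# within a bundle of closely related instances are normalized together, and
# a cross-entropy loss pushes probability mass onto the correct pairing,
# alongside the usual MLE objective.
import torch
import torch.nn.functional as F

def bundle_contrastive_loss(scores: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """scores: (num_bundles, bundle_size) model scores for each QA candidate
    in a bundle; gold: (num_bundles,) index of the correct instance."""
    return F.cross_entropy(scores, gold)  # log-softmax normalizes within bundles

scores = torch.tensor([[2.0, 0.5, -1.0],   # one bundle of 3 contrasting QA pairs
                       [0.1, 1.7,  0.3]])
gold = torch.tensor([0, 1])
print(float(bundle_contrastive_loss(scores, gold)))
```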
| 2021 |
Computation and Language
|
Low-Rank Subspaces for Unsupervised Entity Linking
|
Entity linking is an important problem with many applications. Most previous
solutions were designed for settings where annotated training data is
available, which is, however, not the case in numerous domains. We propose a
light-weight and scalable entity linking method, Eigenthemes, that relies
solely on the availability of entity names and a referent knowledge base.
Eigenthemes exploits the fact that the entities that are truly mentioned in a
document (the "gold entities") tend to form a semantically dense subset of the
set of all candidate entities in the document. Geometrically speaking, when
representing entities as vectors via some given embedding, the gold entities
tend to lie in a low-rank subspace of the full embedding space. Eigenthemes
identifies this subspace using the singular value decomposition and scores
candidate entities according to their proximity to the subspace. On the
empirical front, we introduce multiple strong baselines that compare favorably
to (and sometimes even outperform) the existing state of the art. Extensive
experiments on benchmark datasets from a variety of real-world domains showcase
the effectiveness of our approach.
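
A minimal numpy rendition of the subspace idea, with made-up embeddings and an illustrative rank (not the paper's configuration), could look like this:

```python
# Sketch of the Eigenthemes scoring step: stack candidate-entity embeddings,
# take the top right singular vectors as the low-rank "theme" subspace, and
# score each candidate by its proximity to that subspace.
import numpy as np

def eigentheme_scores(E: np.ndarray, rank: int = 2) -> np.ndarray:
    """E: (num_candidates, dim) embeddings of all candidates in a document.
    Returns a score in [0, 1] per candidate (1 = lies in the subspace)."""
    X = E - E.mean(axis=0)                 # center the candidate cloud
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:rank]                      # orthonormal rows span the subspace
    residual = np.linalg.norm(X - X @ basis.T @ basis, axis=1)
    return 1.0 - residual / (np.linalg.norm(X, axis=1) + 1e-12)

rng = np.random.default_rng(0)
plane = rng.normal(size=(2, 50))                 # a shared 2-D "theme"
gold = rng.normal(size=(8, 2)) @ plane           # gold entities live on it
noise = rng.normal(size=(4, 50)) * 3.0           # spurious candidates do not
print(eigentheme_scores(np.vstack([gold, noise])).round(2))
```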
| 2022 |
Computation and Language
|
CEAR: Cross-Entity Aware Reranker for Knowledge Base Completion
|
Pre-trained language models (LMs) like BERT have been shown to store factual
knowledge about the world. This knowledge can be used to augment the
information present in Knowledge Bases, which tend to be incomplete. However,
prior attempts at using BERT for the task of Knowledge Base Completion (KBC)
resulted in performance worse than embedding-based techniques that rely only on
the graph structure. In this work we develop a novel model, Cross-Entity Aware
Reranker (CEAR), that uses BERT to re-rank the output of existing KBC models
with cross-entity attention. Unlike prior work that scores each entity
independently, CEAR uses BERT to score the entities together, which is
effective for exploiting its factual knowledge. CEAR achieves a new state of
art for the OLPBench dataset.
| 2022 |
Computation and Language
|
Go Forth and Prosper: Language Modeling with Ancient Textual History
|
We introduce a technique for improving document-level language models (LM) by
leveraging "ancient history": text that is outside the LM's current context
window. We learn an auxiliary function to select spans from the ancient history
which can help the LM to predict future text. The selected text spans are then
copied directly into the LM's context window, replacing less predictive spans.
This method can improve the perplexity of pretrained LMs with no updates to the
LM's own parameters. We further observe that an auxiliary function trained in a
specific textual domain like Wikipedia will also work in a substantially
different domain such as scientific publications. With this technique we see a
7 percent perplexity reduction on Wikipedia articles, and a 12 percent
perplexity reduction on scientific texts.
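
A toy sketch of the splicing mechanism follows; the bag-of-words overlap scorer is a stand-in for the learned auxiliary function, and all names are illustrative:

```python
# Toy sketch of the "ancient history" mechanism: score old spans with an
# auxiliary function (here a word-overlap stand-in for the learned scorer)
# and splice the best one into the context window in place of the least
# predictive recent span.
import re

def overlap_score(span: str, query: str) -> float:
    a = set(re.findall(r"\w+", span.lower()))
    b = set(re.findall(r"\w+", query.lower()))
    return len(a & b) / (len(a) + 1e-9)

def build_context(ancient_spans, recent_spans, window: int = 3):
    """Keep the last `window` spans, but swap the least query-relevant one
    for the most relevant ancient span when the latter scores higher."""
    query = recent_spans[-1]                 # the text we are about to extend
    context = recent_spans[-window:]
    best_old = max(ancient_spans, key=lambda s: overlap_score(s, query))
    worst_i = min(range(len(context) - 1),   # never evict the query itself
                  key=lambda i: overlap_score(context[i], query))
    if overlap_score(best_old, query) > overlap_score(context[worst_i], query):
        context[worst_i] = best_old
    return context

print(build_context(
    ancient_spans=["The treaty was signed in 1848.", "Breakfast was served."],
    recent_spans=["It rained overnight.", "Negotiations resumed at dawn.",
                  "Both sides debated the treaty wording."]))
```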
| 2021 |
Computation and Language
|
Generative Context Pair Selection for Multi-hop Question Answering
|
Compositional reasoning tasks like multi-hop question answering require making
latent decisions to arrive at the final answer, given a question. However,
crowdsourced datasets often capture only a slice of the underlying task
distribution, which can induce unanticipated biases in models performing
compositional reasoning. Furthermore, discriminatively trained models exploit
such biases to get a better held-out performance, without learning the right
way to reason, as they do not necessitate paying attention to the question
representation (conditioning variable) in its entirety, to estimate the answer
likelihood. In this work, we propose a generative context selection model for
multi-hop question answering that reasons about how the given question could
have been generated given a context pair. While matching state-of-the-art
answering performance, our proposed generative passage selection model performs
better (4.9% higher than the baseline) on an adversarial held-out set that
tests the robustness of the model's multi-hop reasoning capabilities.
| 2021 |
Computation and Language
|
DCH-2: A Parallel Customer-Helpdesk Dialogue Corpus with Distributions
of Annotators' Labels
|
We introduce a data set called DCH-2, which contains 4,390 real
customer-helpdesk dialogues in Chinese and their English translations. DCH-2
also contains dialogue-level annotations and turn-level annotations obtained
independently from either 19 or 20 annotators. The data set was built through
our effort as organisers of the NTCIR-14 Short Text Conversation and NTCIR-15
Dialogue Evaluation tasks, to help researchers understand what constitutes an
effective customer-helpdesk dialogue, and thereby build efficient and helpful
helpdesk systems that are available to customers at all times. In addition,
DCH-2 may be utilised for other purposes, for example, as a repository for
retrieval-based dialogue systems, or as a parallel corpus for machine
translation in the helpdesk domain.
| 2021 |
Computation and Language
|
Zero-shot Cross-lingual Transfer of Neural Machine Translation with
Multilingual Pretrained Encoders
|
Previous work mainly focuses on improving cross-lingual transfer for NLU
tasks with a multilingual pretrained encoder (MPE), or improving the
performance on supervised machine translation with BERT. However, whether the
MPE can facilitate the cross-lingual transferability of an NMT model remains
under-explored. In this paper, we focus on a zero-shot cross-lingual transfer
task in NMT. In this task, the NMT model is trained with a parallel dataset of
only one language pair and an off-the-shelf MPE, then it is
directly tested on zero-shot language pairs. We propose SixT, a simple yet
effective model for this task. SixT leverages the MPE with a two-stage training
schedule and gets further improvement with a position disentangled encoder and
a capacity-enhanced decoder. Using this method, SixT significantly outperforms
mBART, a pretrained multilingual encoder-decoder model explicitly designed for
NMT, with an average improvement of 7.1 BLEU on zero-shot any-to-English test
sets across 14 source languages. Furthermore, with much less training
computation cost and training data, our model achieves better performance on 15
any-to-English test sets than CRISS and m2m-100, two strong multilingual NMT
baselines.
| 2021 |
Computation and Language
|
Documenting Large Webtext Corpora: A Case Study on the Colossal Clean
Crawled Corpus
|
Large language models have led to remarkable progress on many NLP tasks, and
researchers are turning to ever-larger text corpora to train them. Some of the
largest corpora available are made by scraping significant portions of the
internet, and are frequently introduced with only minimal documentation. In
this work we provide some of the first documentation for the Colossal Clean
Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set
of filters to a single snapshot of Common Crawl. We begin by investigating
where the data came from, and find a significant amount of text from unexpected
sources like patents and US military websites. Then we explore the content of
the text itself, and find machine-generated text (e.g., from machine
translation systems) and evaluation examples from other benchmark NLP datasets.
To understand the impact of the filters applied to create this dataset, we
evaluate the text that was removed, and show that blocklist filtering
disproportionately removes text from and about minority individuals. Finally,
we conclude with some recommendations for how to create and document web-scale
datasets from a scrape of the internet.
| 2021 |
Computation and Language
|
Case-based Reasoning for Natural Language Queries over Knowledge Bases
|
It is often challenging to solve a complex problem from scratch, but much
easier if we can access other similar problems with their solutions -- a
paradigm known as case-based reasoning (CBR). We propose a neuro-symbolic CBR
approach (CBR-KBQA) for question answering over large knowledge bases. CBR-KBQA
consists of a nonparametric memory that stores cases (question and logical
forms) and a parametric model that can generate a logical form for a new
question by retrieving cases that are relevant to it. On several KBQA datasets
that contain complex questions, CBR-KBQA achieves competitive performance. For
example, on the ComplexWebQuestions dataset, CBR-KBQA outperforms the current
state of the art by 11% on accuracy. Furthermore, we show that CBR-KBQA is
capable of using new cases *without* any further training: by
incorporating a few human-labeled examples in the case memory, CBR-KBQA is able
to successfully generate logical forms containing unseen KB entities as well as
relations.
| 2021 |
Computation and Language
|
Making Attention Mechanisms More Robust and Interpretable with Virtual
Adversarial Training
|
Although attention mechanisms have become fundamental components of deep
learning models, they are vulnerable to perturbations, which may degrade the
prediction performance and model interpretability. Adversarial training (AT)
for attention mechanisms has successfully reduced such drawbacks by considering
adversarial perturbations. However, this technique requires label information,
and thus, its use is limited to supervised settings. In this study, we explore
the concept of incorporating virtual AT (VAT) into the attention mechanisms, by
which adversarial perturbations can be computed even from unlabeled data. To
realize this approach, we propose two general training techniques, namely VAT
for attention mechanisms (Attention VAT) and "interpretable" VAT for attention
mechanisms (Attention iVAT), which extend AT for attention mechanisms to a
semi-supervised setting. In particular, Attention iVAT focuses on the
differences in attention; thus, it can efficiently learn clearer attention and
improve model interpretability, even with unlabeled data. Empirical experiments
based on six public datasets revealed that our techniques provide better
prediction performance than conventional AT-based as well as VAT-based
techniques, and stronger agreement with evidence that is provided by humans in
detecting important words in sentences. Moreover, our proposal offers these
advantages without requiring careful selection of unlabeled data. That
is, even if the model using our VAT-based technique is trained on unlabeled
data from a source other than the target task, both the prediction performance
and model interpretability can be improved.
| 2022 |
Computation and Language
|
Improving Neural Model Performance through Natural Language Feedback on
Their Explanations
|
A class of explainable NLP models for reasoning tasks support their decisions
by generating free-form or structured explanations, but what happens when these
supporting structures contain errors? Our goal is to allow users to
interactively correct explanation structures through natural language feedback.
We introduce MERCURIE - an interactive system that refines its explanations for
a given reasoning task by getting human feedback in natural language. Our
approach generates graphs that have 40% fewer inconsistencies as compared with
the off-the-shelf system. Further, simply appending the corrected explanation
structures to the output leads to a gain of 1.2 points on accuracy on
defeasible reasoning across all three domains. We release a dataset of over
450k graphs for defeasible reasoning generated by our system at
https://tinyurl.com/mercurie .
| 2021 |
Computation and Language
|
Constrained Language Models Yield Few-Shot Semantic Parsers
|
We explore the use of large pretrained language models as few-shot semantic
parsers. The goal in semantic parsing is to generate a structured meaning
representation given a natural language input. However, language models are
trained to generate natural language. To bridge the gap, we use language models
to paraphrase inputs into a controlled sublanguage resembling English that can
be automatically mapped to a target meaning representation. Our results
demonstrate that with only a small amount of data and very little code to
convert into English-like representations, our blueprint for rapidly
bootstrapping semantic parsers leads to surprisingly effective performance on
multiple community tasks, greatly exceeding baseline methods also trained on
the same limited data.
| 2021 |
Computation and Language
|
Cross-Attention is All You Need: Adapting Pretrained Transformers for
Machine Translation
|
We study the power of cross-attention in the Transformer architecture within
the context of transfer learning for machine translation, and extend the
findings of studies into cross-attention when training from scratch. We conduct
a series of experiments through fine-tuning a translation model on data where
either the source or target language has changed. These experiments reveal that
fine-tuning only the cross-attention parameters is nearly as effective as
fine-tuning all parameters (i.e., the entire translation model). We provide
insights into why this is the case and observe that limiting fine-tuning in
this manner yields cross-lingually aligned embeddings. The implications of this
finding for researchers and practitioners include a mitigation of catastrophic
forgetting, the potential for zero-shot translation, and the ability to extend
machine translation models to several new language pairs with reduced parameter
storage overhead.
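
In a generic PyTorch encoder-decoder, restricting fine-tuning to cross-attention reduces to freezing every other parameter. The submodule name used for matching is model-specific; the example below uses PyTorch's built-in decoder, where the cross-attention block is named `multihead_attn`:

```python
# Sketch of the "fine-tune only cross-attention" recipe on a generic PyTorch
# encoder-decoder. Parameter-name matching is an assumption; real models
# label these submodules differently (e.g., "encoder_attn" in fairseq).
import torch.nn as nn

def freeze_all_but_cross_attention(model: nn.Module, marker: str = "cross_attn"):
    trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = marker in name
        trainable += param.numel() if param.requires_grad else 0
    return trainable

decoder_layer = nn.TransformerDecoderLayer(d_model=64, nhead=4)
model = nn.TransformerDecoder(decoder_layer, num_layers=2)
# nn.TransformerDecoderLayer names its cross-attention block "multihead_attn".
n = freeze_all_but_cross_attention(model, marker="multihead_attn")
print(f"trainable parameters: {n}")
```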
| 2021 |
Computation and Language
|
Cross-Task Generalization via Natural Language Crowdsourcing
Instructions
|
Humans (e.g., crowdworkers) have a remarkable ability to solve different
tasks by simply reading textual instructions that define them and looking at a
few examples. Despite the success of conventional supervised learning on
individual datasets, such models often struggle with generalization across
tasks (e.g., a question-answering system cannot solve classification tasks). A
long-standing challenge in AI is to build a model that learns a new task by
understanding the human-readable instructions that define it. To study this, we
introduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their
human-authored instructions, and 193k task instances (input-output pairs). The
instructions are obtained from crowdsourcing instructions used to create
existing NLP datasets and mapped to a unified schema. Using this meta-dataset,
we measure cross-task generalization by training models on seen tasks and
measuring generalization to the remaining unseen ones. We adopt generative
pre-trained language models to encode task-specific instructions along with
input and generate task output. Our results indicate that models benefit from
instructions when evaluated in terms of generalization to unseen tasks (19%
better for models utilizing instructions). These models, however, are far
behind an estimated performance upper bound, indicating significant room for more
progress in this direction.
| 2022 |
Computation and Language
|
Variational Weakly Supervised Sentiment Analysis with Posterior
Regularization
|
Sentiment analysis is an important task in natural language processing (NLP).
Most existing state-of-the-art methods follow the supervised learning
paradigm. However, human annotations can be scarce, so more weak supervision
should be leveraged for sentiment analysis. In this paper, we propose a
posterior regularization framework for the variational approach to weakly
supervised sentiment analysis to better control the posterior distribution of
the label assignment. The intuition behind the posterior regularization is that
if extracted opinion words from two documents are semantically similar, the
posterior distributions of two documents should be similar. Our experimental
results show that the posterior regularization can improve the original
variational approach to weakly supervised sentiment analysis, and that the
performance is more stable, with smaller prediction variance.
| 2021 |
Computation and Language
|
On the Sensitivity and Stability of Model Interpretations in NLP
|
Recent years have witnessed the emergence of a variety of post-hoc
interpretations that aim to uncover how natural language processing (NLP)
models make predictions. Despite the surge of new interpretation methods, it
remains an open problem how to define and quantitatively measure the
faithfulness of interpretations, i.e., to what extent interpretations reflect
the reasoning process by a model. We propose two new criteria, sensitivity and
stability, that provide complementary notions of faithfulness to the existing
removal-based criteria. Our results show that conclusions about how faithful
interpretations are can vary substantially across different notions.
Motivated by the desiderata of sensitivity and stability, we introduce a new
class of interpretation methods that adopt techniques from adversarial
robustness. Empirical results show that our proposed methods are effective
under the new criteria and overcome limitations of gradient-based methods on
removal-based criteria. Besides text classification, we also apply
interpretation methods and metrics to dependency parsing. Our results shed
light on understanding the diverse set of interpretations.
| 2022 |
Computation and Language
|
Fantastically Ordered Prompts and Where to Find Them: Overcoming
Few-Shot Prompt Order Sensitivity
|
When primed with only a handful of training samples, very large, pretrained
language models such as GPT-3 have shown competitive results when compared to
fully-supervised, fine-tuned, large, pretrained language models. We demonstrate
that the order in which the samples are provided can make the difference
between near state-of-the-art and random guess performance: essentially some
permutations are "fantastic" and some not. We analyse this phenomenon in
detail, establishing that: it is present across model sizes (even for the
largest current models), it is not related to a specific subset of samples, and
that a given good permutation for one model is not transferable to another.
While one could use a development set to determine which permutations are
performant, this would deviate from the true few-shot setting as it requires
additional annotated data. Instead, we use the generative nature of language
models to construct an artificial development set and based on entropy
statistics of the candidate permutations on this set, we identify performant
prompts. Our method yields a 13% relative improvement for GPT-family models
across eleven different established text classification tasks.
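
The selection heuristic can be approximated as follows; the probing predictions are fabricated, and the single entropy statistic is a simplified stand-in for the paper's entropy measures:

```python
# Illustrative permutation selection: given the model's predicted labels on
# an artificial probing set, prefer permutations whose predicted-label
# histogram has high entropy (i.e., is not collapsed onto one class).
import math
from collections import Counter

def label_entropy(predicted_labels):
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

# Predicted labels on the artificial dev set under each candidate ordering.
candidates = {
    ("ex2", "ex1", "ex3"): ["pos", "pos", "pos", "pos"],   # collapsed -> bad
    ("ex1", "ex3", "ex2"): ["pos", "neg", "pos", "neg"],   # balanced -> good
}
best = max(candidates, key=lambda perm: label_entropy(candidates[perm]))
print("selected permutation:", best)
```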
| 2022 |
Computation and Language
|
Chinese Sentences Similarity via Cross-Attention Based Siamese Network
|
Measuring sentence similarity is a key research area nowadays as it allows
machines to better understand human languages. In this paper, we propose a
Cross-Attention Siamese Network (CATsNet) to learn the semantic meanings of
Chinese sentences and compare the similarity between two sentences. This novel
model is capable of capturing non-local features. Additionally, we apply a
long short-term memory (LSTM) network in the model to improve its performance.
The experiments were conducted on the
LCQMC dataset and the results showed that our model could achieve a higher
accuracy than previous work.
| 2021 |
Computation and Language
|
Misinfo Reaction Frames: Reasoning about Readers' Reactions to News
Headlines
|
Even to a simple and short news headline, readers react in a multitude of
ways: cognitively (e.g. inferring the writer's intent), emotionally (e.g.
feeling distrust), and behaviorally (e.g. sharing the news with their friends).
Such reactions are instantaneous and yet complex, as they rely on factors that
go beyond interpreting factual content of news. We propose Misinfo Reaction
Frames (MRF), a pragmatic formalism for modeling how readers might react to a
news headline. In contrast to categorical schema, our free-text dimensions
provide a more nuanced way of understanding intent beyond being benign or
malicious. We also introduce a Misinfo Reaction Frames corpus, a crowdsourced
dataset of reactions to over 25k news headlines focusing on global crises: the
Covid-19 pandemic, climate change, and cancer. Empirical results confirm that
it is indeed possible for neural models to predict the prominent patterns of
readers' reactions to previously unseen news headlines. Additionally, our user
study shows that displaying machine-generated MRF implications alongside news
headlines to readers can increase their trust in real news while decreasing
their trust in misinformation. Our work demonstrates the feasibility and
importance of pragmatic inferences on news headlines to help enhance AI-guided
misinformation detection and mitigation.
| 2022 |
Computation and Language
|
Human-Imitating Metrics for Training and Evaluating Privacy Preserving
Emotion Recognition Models Using Sociolinguistic Knowledge
|
Privacy preservation is a crucial component of any real-world application.
But in applications relying on machine learning backends, privacy is
challenging because models often capture more than what they were initially
trained for, resulting in the potential leakage of sensitive information.
this paper, we propose an automatic and quantifiable metric that allows us to
evaluate humans' perception of a model's ability to preserve privacy with
respect to sensitive variables. In this paper, we focus on saliency-based
explanations, explanations that highlight regions of the input text, to infer
internal workings of a black box model. We use the degree to which
differences in interpretation between general and privacy-preserving models
correlate with sociolinguistic biases to inform metric design. We show how
certain commonly used methods that seek to preserve privacy do not align with
human perception of privacy preservation, leading to distrust about a model's
claims. We
demonstrate the versatility of our proposed metric by validating its utility
for measuring cross corpus generalization for both privacy and emotion.
Finally, we conduct crowdsourcing experiments to evaluate the inclination of
the evaluators to choose a particular model for a given purpose when model
explanations are provided, and show a positive relationship with the proposed
metric. To the best of our knowledge, we take the first step in proposing
automatic and quantifiable metrics that best align with human perception of
model's ability for privacy preservation, allowing for cost-effective model
development.
| 2021 |
Computation and Language
|
SalKG: Learning From Knowledge Graph Explanations for Commonsense
Reasoning
|
Augmenting pre-trained language models with knowledge graphs (KGs) has
achieved success on various commonsense reasoning tasks. However, for a given
task instance, the KG, or certain parts of the KG, may not be useful. Although
KG-augmented models often use attention to focus on specific KG components, the
KG is still always used, and the attention mechanism is never explicitly taught
which KG components should be used. Meanwhile, saliency methods can measure how
much a KG feature (e.g., graph, node, path) influences the model to make the
correct prediction, thus explaining which KG features are useful. This paper
explores how saliency explanations can be used to improve KG-augmented models'
performance. First, we propose to create coarse (Is the KG useful?) and fine
(Which nodes/paths in the KG are useful?) saliency explanations. Second, to
motivate saliency-based supervision, we analyze oracle KG-augmented models
which directly use saliency explanations as extra inputs for guiding their
attention. Third, we propose SalKG, a framework for KG-augmented models to
learn from coarse and/or fine saliency explanations. Given saliency
explanations created from a task's training set, SalKG jointly trains the model
to predict the explanations, then solve the task by attending to KG features
highlighted by the predicted explanations. On three commonsense QA benchmarks
(CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can
yield considerable performance gains -- up to 2.76% absolute improvement on
CSQA.
| 2022 |
Computation and Language
|
Keyphrase Generation with Fine-Grained Evaluation-Guided Reinforcement
Learning
|
Aiming to generate a set of keyphrases, Keyphrase Generation (KG) is a
classical task for capturing the central idea from a given document. Based on
Seq2Seq models, the previous reinforcement learning framework on KG tasks
utilizes the evaluation metrics to further improve the well-trained neural
models. However, these KG evaluation metrics such as $F_1@5$ and $F_1@M$ are
only aware of the exact correctness of predictions at the phrase level and ignore
the semantic similarities between similar predictions and targets, which
inhibits the model from learning deep linguistic patterns. In response to this
problem, we propose a new fine-grained evaluation metric to improve the RL
framework, which considers different granularities: token-level $F_1$ score,
edit distance, duplication, and prediction quantities. On the whole, the new
framework includes two reward functions: the fine-grained evaluation score and
the vanilla $F_1$ score. This framework helps the model identify some
partial-match phrases which can be further optimized into exact matches.
Experiments on KG benchmarks show that our proposed training framework
outperforms the previous RL training frameworks among all evaluation scores. In
addition, our method can effectively ease the synonym problem and generate
higher-quality predictions. The source code is available at
https://github.com/xuyige/FGRL4KG.
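
One plausible shape for such a fine-grained reward, combining token-level F1, a normalized edit distance, and a duplication penalty, is sketched below; the weights and combination rule are assumptions, not the paper's formula:

```python
# Sketch of a fine-grained keyphrase reward in the spirit described above.
def token_f1(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    overlap = len(set(p) & set(g))
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance with a rolling DP row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def fine_grained_reward(preds, golds):
    score, seen = 0.0, set()
    for p in preds:
        if p in seen:                      # duplication penalty
            score -= 0.5
            continue
        seen.add(p)
        best = max(golds, key=lambda g: token_f1(p, g))
        sim = 1 - edit_distance(p, best) / max(len(p), len(best))
        score += 0.5 * token_f1(p, best) + 0.5 * sim   # assumed weighting
    return score / max(len(preds), 1)

print(fine_grained_reward(["neural keyphrase generation", "neural keyphrase generation"],
                          ["keyphrase generation"]))
```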
| 2021 |
Computation and Language
|
Back-Training excels Self-Training at Unsupervised Domain Adaptation of
Question Generation and Passage Retrieval
|
In this work, we introduce back-training, an alternative to self-training for
unsupervised domain adaptation (UDA) from source to target domain. While
self-training generates synthetic training data where natural inputs are
aligned with noisy outputs, back-training results in natural outputs aligned
with noisy inputs. This significantly reduces the gap between the target domain
and synthetic data distribution, and reduces model overfitting to the source
domain. We run UDA experiments on question generation and passage retrieval
from the *Natural Questions* domain to machine learning and biomedical
domains. We find that back-training vastly outperforms self-training by a mean
improvement of 7.8 BLEU-4 points on generation, and 17.6% top-20 retrieval
accuracy across both domains. We further propose consistency filters to remove
low-quality synthetic data before training. We also release a new
domain-adaptation dataset, *MLQuestions*, containing 35K unaligned
questions, 50K unaligned passages, and 3K aligned question-passage pairs.
| 2021 |
Computation and Language
|
Consistent Accelerated Inference via Confident Adaptive Transformers
|
We develop a novel approach for confidently accelerating inference in the
large and expensive multilayer Transformers that are now ubiquitous in natural
language processing (NLP). Amortized or approximate computational methods
increase efficiency, but can come with unpredictable performance costs. In this
work, we present CATs -- Confident Adaptive Transformers -- in which we
simultaneously increase computational efficiency, while guaranteeing a
specifiable degree of consistency with the original model with high confidence.
Our method trains additional prediction heads on top of intermediate layers,
and dynamically decides when to stop allocating computational effort to each
input using a meta consistency classifier. To calibrate our early prediction
stopping rule, we formulate a unique extension of conformal prediction. We
demonstrate the effectiveness of this approach on four classification and
regression tasks.
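
Schematically, the early-exit rule looks like the loop below, where the per-layer heads and the meta consistency classifier are stubbed out (the real components are trained, and the threshold is calibrated via conformal prediction as the abstract notes):

```python
# Schematic early-exit loop in the spirit of CATs: intermediate heads emit
# predictions, and a meta "consistency" classifier decides when an early
# prediction can be trusted to match the full model.
def adaptive_predict(hidden_states, heads, consistency_prob, threshold=0.9):
    """hidden_states: per-layer features; heads: per-layer prediction fns;
    stop at the first layer whose consistency estimate clears `threshold`."""
    for layer, (h, head) in enumerate(zip(hidden_states, heads)):
        pred = head(h)
        if consistency_prob(h, pred) >= threshold:
            return pred, layer           # early exit: skip remaining layers
    return pred, len(hidden_states) - 1  # fell through to the final layer

hidden = [[0.2], [0.7], [0.95]]
heads = [lambda h: int(h[0] > 0.5)] * 3
meta = lambda h, pred: h[0]              # toy confidence grows with depth
print(adaptive_predict(hidden, heads, meta))
```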
| 2021 |
Computation and Language
|
Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation
for Few-shot Learning
|
The ability to continuously expand knowledge over time and utilize it to
rapidly generalize to new tasks is a key feature of human linguistic
intelligence. Existing models that pursue rapid generalization to new tasks
(e.g., few-shot learning methods), however, are mostly trained in a single shot
on fixed datasets, unable to dynamically expand their knowledge; while
continual learning algorithms are not specifically designed for rapid
generalization. We present a new learning setup, Continual Learning of Few-Shot
Learners (CLIF), to address the challenges of both learning settings in a
unified setup. CLIF assumes a model learns from a sequence of diverse NLP tasks
arriving sequentially, accumulating knowledge for improved generalization to
new tasks, while also retaining performance on the tasks learned earlier. We
examine how the generalization ability is affected in the continual learning
setup, evaluate a number of continual learning algorithms, and propose a novel
regularized adapter generation approach. We find that catastrophic forgetting
affects generalization ability to a lesser degree than performance on seen tasks;
while continual learning algorithms can still bring considerable benefit to the
generalization ability.
| 2022 |
Computation and Language
|
SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts
|
Determining coreference of concept mentions across multiple documents is a
fundamental task in natural language understanding. Previous work on
cross-document coreference resolution (CDCR) typically considers mentions of
events in the news, which seldom involve abstract technical concepts that are
prevalent in science and technology. These complex concepts take diverse or
ambiguous forms and have many hierarchical levels of granularity (e.g., tasks
and subtasks), posing challenges for CDCR. We present a new task of
Hierarchical CDCR (H-CDCR) with the goal of jointly inferring coreference
clusters and hierarchy between them. We create SciCo, an expert-annotated
dataset for H-CDCR in scientific papers, 3X larger than the prominent ECB+
resource. We study strong baseline models that we customize for H-CDCR, and
highlight challenges for future work.
| 2021 |
Computation and Language
|
Human Schema Curation via Causal Association Rule Mining
|
Event schemas are structured knowledge sources defining typical real-world
scenarios (e.g., going to an airport). We present a framework for efficient
human-in-the-loop construction of a schema library, based on a novel script
induction system and a well-crafted interface that allows non-experts to
"program" complex event structures. Associated with this work we release a
schema library: a machine-readable resource of 232 detailed event schemas, each
of which describes a distinct typical scenario in terms of its relevant
sub-event structure (what happens in the scenario), participants (who plays a
role in the scenario), fine-grained typing of each participant, and the implied
relational constraints between them. We make our schema library and the
SchemaBlocks interface available online.
| 2022 |
Computation and Language
|
Contrastive Out-of-Distribution Detection for Pretrained Transformers
|
Pretrained Transformers achieve remarkable performance when training and test
data are from the same distribution. However, in real-world scenarios, the
model often faces out-of-distribution (OOD) instances that can cause severe
semantic shift problems at inference time. Therefore, in practice, a reliable
model should identify such instances, and then either reject them during
inference or pass them over to models that handle another distribution. In this
paper, we develop an unsupervised OOD detection method, in which only the
in-distribution (ID) data are used in training. We propose to fine-tune the
Transformers with a contrastive loss, which improves the compactness of
representations, such that OOD instances can be better differentiated from ID
ones. These OOD instances can then be accurately detected using the Mahalanobis
distance in the model's penultimate layer. We experiment with comprehensive
settings and achieve near-perfect OOD detection performance, outperforming
baselines drastically. We further investigate the rationale behind the
improvement, finding that the more compact representations induced by
margin-based contrastive learning are responsible. We release our code to the
community for future research.
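
The detection step reduces to a class-conditional Gaussian fit plus a minimum Mahalanobis distance, as in this numpy sketch on synthetic features (the feature dimensions and data are made up):

```python
# Sketch of Mahalanobis OOD scoring: fit per-class means and a shared
# covariance on in-distribution penultimate-layer features, then score a
# test feature by its minimum Mahalanobis distance to any class mean.
import numpy as np

def fit_gaussian(features: np.ndarray, labels: np.ndarray):
    means = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}
    centered = np.vstack([features[labels == c] - means[c] for c in means])
    cov = centered.T @ centered / len(features)
    return means, np.linalg.pinv(cov)

def ood_score(x: np.ndarray, means, cov_inv) -> float:
    """Higher = more out-of-distribution."""
    return min(float((x - m) @ cov_inv @ (x - m)) for m in means.values())

rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
means, cov_inv = fit_gaussian(feats, labels)
print("ID :", ood_score(rng.normal(0, 1, 8), means, cov_inv))
print("OOD:", ood_score(rng.normal(10, 1, 8), means, cov_inv))
```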
| 2022 |
Computation and Language
|
FedNLP: Benchmarking Federated Learning Methods for Natural Language
Processing Tasks
|
Increasing concerns and regulations about data privacy and sparsity
necessitate the study of privacy-preserving, decentralized learning methods for
natural language processing (NLP) tasks. Federated learning (FL) provides
promising approaches for a large number of clients (e.g., personal devices or
organizations) to collaboratively learn a shared global model to benefit all
clients while allowing users to keep their data locally. Despite interest in
studying FL methods for NLP tasks, a systematic comparison and analysis is
lacking in the literature. Herein, we present FedNLP, a benchmarking
framework for evaluating federated learning methods on four different task
formulations: text classification, sequence tagging, question answering, and
seq2seq. We propose a universal interface between Transformer-based language
models (e.g., BERT, BART) and FL methods (e.g., FedAvg, FedOPT, etc.) under
various non-IID partitioning strategies. Our extensive experiments with FedNLP
provide empirical comparisons between FL methods and help us better understand
the inherent challenges of this direction. The comprehensive analysis points to
intriguing and exciting future research aimed at developing FL methods for NLP
tasks.
| 2022 |
Computation and Language
|
Stream-level Latency Evaluation for Simultaneous Machine Translation
|
Simultaneous machine translation has recently gained traction thanks to
significant quality improvements and the advent of streaming applications.
Simultaneous translation systems need to find a trade-off between translation
quality and response time, and with this purpose multiple latency measures have
been proposed. However, latency evaluations for simultaneous translation are
estimated at the sentence level, not taking into account the sequential nature
of a streaming scenario. Indeed, these sentence-level latency measures are not
well suited for continuous stream translation, resulting in figures that are not
coherent with the simultaneous translation policy of the system being assessed.
This work proposes a stream-level adaptation of the current latency measures
based on a re-segmentation approach applied to the output translation, which is
successfully evaluated under streaming conditions on a reference IWSLT task.
| 2021 |
Computation and Language
|
SimCSE: Simple Contrastive Learning of Sentence Embeddings
|
This paper presents SimCSE, a simple contrastive learning framework that
greatly advances state-of-the-art sentence embeddings. We first describe an
unsupervised approach, which takes an input sentence and predicts itself in a
contrastive objective, with only standard dropout used as noise. This simple
method works surprisingly well, performing on par with previous supervised
counterparts. We find that dropout acts as minimal data augmentation, and
removing it leads to a representation collapse. Then, we propose a supervised
approach, which incorporates annotated pairs from natural language inference
datasets into our contrastive learning framework by using "entailment" pairs as
positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on
standard semantic textual similarity (STS) tasks, and our unsupervised and
supervised models using BERT base achieve an average of 76.3% and 81.6%
Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to
the previous best results. We also show -- both theoretically and empirically
-- that the contrastive learning objective regularizes pre-trained embeddings'
anisotropic space to be more uniform, and it better aligns positive pairs when
supervised signals are available.
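
The unsupervised objective fits in a few lines of PyTorch: two dropout-noised forward passes of the same batch, then InfoNCE with in-batch negatives. The toy encoder below is not the BERT setup used in the paper:

```python
# Minimal rendition of unsupervised SimCSE: encode the same batch twice so
# dropout produces two "views", then apply a contrastive (InfoNCE) loss with
# in-batch negatives and positives on the diagonal.
import torch
import torch.nn.functional as F

def simcse_loss(encoder, batch: torch.Tensor, temperature: float = 0.05):
    z1 = encoder(batch)            # two forward passes: dropout noise gives
    z2 = encoder(batch)            # two different views of each input
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sims = z1 @ z2.T / temperature            # (batch, batch) cosine sims
    targets = torch.arange(batch.size(0))     # positives on the diagonal
    return F.cross_entropy(sims, targets)

encoder = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Dropout(0.1))
encoder.train()                               # keep dropout active
print(float(simcse_loss(encoder, torch.randn(8, 16))))
```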
| 2022 |
Computation and Language
|
Flexible Generation of Natural Language Deductions
|
An interpretable system for open-domain reasoning needs to express its
reasoning process in a transparent form. Natural language is an attractive
representation for this purpose -- it is both highly expressive and easy for
humans to understand. However, manipulating natural language statements in
logically consistent ways is hard: models must cope with variation in how
meaning is expressed while remaining precise. In this paper, we describe
ParaPattern, a method for building models to generate deductive inferences from
diverse natural language inputs without direct human supervision. We train
BART-based models (Lewis et al., 2020) to generate the result of applying a
particular logical operation to one or more premise statements. Crucially, we
develop a largely automated pipeline for constructing suitable training
examples from Wikipedia. We evaluate our models using out-of-domain sentence
compositions from the QASC (Khot et al., 2020) and EntailmentBank (Dalvi et
al., 2021) datasets as well as targeted perturbation sets. Our results show
that our models are substantially more accurate and flexible than baseline
systems. ParaPattern achieves 85% validity on examples of the 'substitution'
operation from EntailmentBank without the use of any in-domain training data,
matching the performance of a model fine-tuned for EntailmentBank. The full
source code for our method is publicly available.
| 2021 |
Computation and Language
|
GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
|
Large-scale language models such as GPT-3 are excellent few-shot learners,
allowing them to be controlled via natural text prompts. Recent studies report
that prompt-based direct classification eliminates the need for fine-tuning but
lacks data and inference scalability. This paper proposes a novel data
augmentation technique that leverages large-scale language models to generate
realistic text samples from a mixture of real samples. We also propose
utilizing soft-labels predicted by the language models, effectively distilling
knowledge from the large-scale language models and creating textual
perturbations simultaneously. We perform data augmentation experiments on
diverse classification tasks and show that our method hugely outperforms
existing text augmentation methods. Ablation studies and a qualitative analysis
provide more insights into our approach.
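
A hypothetical augmentation prompt in this style is sketched below; the template and the stubbed completion function are illustrative, and soft labels would come from the LM's label-token probabilities rather than a parsed string:

```python
# Hypothetical shape of a GPT3Mix-style augmentation prompt: a few real
# labeled examples are mixed into one prompt, and the language model is
# asked to continue with a new example. `complete` stubs an actual LLM call.
import random

def build_mix_prompt(examples, k=2):
    picked = random.sample(examples, k)
    lines = ["Each item is a movie review and its sentiment (positive/negative)."]
    lines += [f"Review: {text} (Sentiment: {label})" for text, label in picked]
    lines.append("Review:")        # the LM completes a new (review, label) pair
    return "\n".join(lines)

def augment(examples, complete, n=4):
    """Generate n synthetic examples from mixed-example prompts."""
    return [complete(build_mix_prompt(examples)) for _ in range(n)]

seed = [("A joyless, plodding mess.", "negative"),
        ("Sharp writing and a huge heart.", "positive")]
stub = lambda prompt: " A warm, funny crowd-pleaser. (Sentiment: positive)"
print(augment(seed, stub, n=1))
```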
| 2021 |
Computation and Language
|
Modeling Ideological Salience and Framing in Polarized Online Groups
with Graph Neural Networks and Structured Sparsity
|
The increasing polarization of online political discourse calls for
computational tools that automatically detect and monitor ideological divides
in social media. We introduce a minimally supervised method that leverages the
network structure of online discussion forums, specifically Reddit, to detect
polarized concepts. We model polarization along the dimensions of salience and
framing, drawing upon insights from moral psychology. Our architecture combines
graph neural networks with structured sparsity learning and results in
representations for concepts and subreddits that capture temporal ideological
dynamics such as right-wing and left-wing radicalization.
| 2022 |
Computation and Language
|
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in
NLP
|
Humans can learn a new language task efficiently with only few examples, by
leveraging their knowledge obtained when learning prior tasks. In this paper,
we explore whether and how such cross-task generalization ability can be
acquired, and further applied to build better few-shot learners across diverse
NLP tasks. We introduce CrossFit, a problem setup for studying cross-task
generalization ability, which standardizes seen/unseen task partitions, data
access during different learning stages, and the evaluation protocols. To
instantiate different seen/unseen task partitions in CrossFit and facilitate
in-depth analysis, we present the NLP Few-shot Gym, a repository of 160 diverse
few-shot NLP tasks created from open-access NLP datasets and converted to a
unified text-to-text format. Our analysis reveals that the few-shot learning
ability on unseen tasks can be improved via an upstream learning stage using a
set of seen tasks. We also observe that the selection of upstream learning
tasks can significantly influence few-shot performance on unseen tasks, calling
for further analysis of task similarity and transferability.
| 2021 |
Computation and Language
|
LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich
Document Understanding
|
Multimodal pre-training with text, layout, and image has achieved SOTA
performance for visually-rich document understanding tasks recently, which
demonstrates the great potential for joint learning across different
modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model
for multilingual document understanding, which aims to bridge the language
barriers for visually-rich document understanding. To accurately evaluate
LayoutXLM, we also introduce a multilingual form understanding benchmark
dataset named XFUND, which includes form understanding samples in 7 languages
(Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and
key-value pairs are manually labeled for each language. Experiment results show
that the LayoutXLM model has significantly outperformed the existing SOTA
cross-lingual pre-trained models on the XFUND dataset. The pre-trained
LayoutXLM model and the XFUND dataset are publicly available at
https://aka.ms/layoutxlm.
| 2021 |
Computation and Language
|
On the Influence of Masking Policies in Intermediate Pre-training
|
Current NLP models are predominantly trained through a two-stage "pre-train
then fine-tune" pipeline. Prior work has shown that inserting an intermediate
pre-training stage, using heuristic masking policies for masked language
modeling (MLM), can significantly improve final performance. However, it is
still unclear (1) in what cases such intermediate pre-training is helpful, (2)
whether hand-crafted heuristic objectives are optimal for a given task, and (3)
whether a masking policy designed for one task is generalizable beyond that
task. In this paper, we perform a large-scale empirical study to investigate
the effect of various masking policies in intermediate pre-training with nine
selected tasks across three categories. Crucially, we introduce methods to
automate the discovery of optimal masking policies via direct supervision or
meta-learning. We conclude that the success of intermediate pre-training is
dependent on an appropriate pre-training corpus, the selection of output format
(i.e., masked spans or full sentences), and a clear understanding of the role
that MLM plays for the downstream task. In addition, we find our learned masking
policies outperform the heuristic of masking named entities on TriviaQA, and
policies learned from one task can positively transfer to other tasks in
certain cases, inviting future research in this direction.
| 2021 |
Computation and Language
|
Emotion-Regularized Conditional Variational Autoencoder for Emotional
Response Generation
|
This paper presents an emotion-regularized conditional variational
autoencoder (Emo-CVAE) model for generating emotional conversation responses.
In conventional CVAE-based emotional response generation, emotion labels are
simply used as additional conditions in prior, posterior and decoder networks.
Considering that emotion styles are naturally entangled with semantic contents
in the language space, the Emo-CVAE model utilizes emotion labels to regularize
the CVAE latent space by introducing an extra emotion prediction network. In
the training stage, the estimated latent variables are required to predict the
emotion labels and token sequences of the input responses simultaneously.
Experimental results show that our Emo-CVAE model can learn a more informative
and structured latent space than a conventional CVAE model and output responses
with better content and emotion performance than baseline CVAE and
sequence-to-sequence (Seq2Seq) models.
| 2021 |
Computation and Language
|
Language in a (Search) Box: Grounding Language Learning in Real-World
Human-Machine Interaction
|
We investigate grounded language learning through real-world data, by
modelling a teacher-learner dynamics through the natural interactions occurring
between users and search engines; in particular, we explore the emergence of
semantic generalization from unsupervised dense representations outside of
synthetic environments. A grounding domain, a denotation function and a
composition function are learned from user data only. We show how the resulting
semantics for noun phrases exhibits compositional properties while being fully
learnable without any explicit labelling. We benchmark our grounded semantics
on compositionality and zero-shot inference tasks, and we show that it provides
better results and better generalizations than SOTA non-grounded models, such
as word2vec and BERT.
| 2021 |
Computation and Language
|
The Preposition Project
|
Prepositions are an important vehicle for indicating semantic roles. Their
meanings are difficult to analyze and they are often discarded in processing
text. The Preposition Project is designed to provide a comprehensive database
of preposition senses suitable for use in natural language processing
applications. In the project, prepositions in the FrameNet corpus are
disambiguated using a sense inventory from a current dictionary, guided by a
comprehensive treatment of preposition meaning. The methodology provides a
framework for identifying and characterizing semantic roles, a gold standard
corpus of instances for further analysis, and an account of semantic role
alternation patterns. By adhering to this methodology, it is hoped that a
comprehensive and improved characterization of preposition behavior (semantic
role identification, and syntactic and semantic properties of the preposition
complement and attachment point) will be developed. The databases generated in
the project are publicly available for further use by researchers and
application developers.
| 2021 |
Computation and Language
|
Attention-based Clinical Note Summarization
|
In recent years, the trend of deploying digital systems in numerous
industries has risen sharply. The health sector has seen extensive adoption of
digital systems and services that generate significant medical records.
Electronic health records contain valuable information for prospective and
retrospective analysis that is often not entirely exploited because of the
complicated dense information storage. The core purpose of condensing health
records is to select the information that holds most characteristics of the
original documents based on a reported disease. These summaries may boost
diagnosis and save a doctor's time during a saturated workload situation like
the COVID-19 pandemic. In this paper, we are applying a multi-head
attention-based mechanism to perform extractive summarization of meaningful
phrases on clinical notes. Our method finds major sentences for a summary by
correlating tokens, segments, and positional embeddings of sentences in a
clinical note. The model outputs attention scores that are statistically
transformed to extract critical phrases for visualization on the heat-mapping
tool and for human use.
| 2,022 |
Computation and Language
|
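One plausible reading of the extraction step above, sketched with a Hugging Face BERT encoder: score each sentence by the attention the [CLS] token pays to its tokens, then keep the top-scoring sentences. The model choice, scoring rule, and pooling over layers and heads are assumptions, not the paper's exact pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def score_sentences(sentences):
    """Score each sentence by the mean attention [CLS] pays to its tokens,
    averaged over all layers and heads."""
    enc = tok(" ".join(sentences), return_tensors="pt", truncation=True)
    with torch.no_grad():
        attn = model(**enc).attentions             # per layer: (1, heads, T, T)
    cls_attn = torch.stack(attn).mean(dim=(0, 1, 2))[0]  # (T,) CLS -> tokens
    # Recover each sentence's word-piece span (skipping the [CLS] special).
    spans, pos = [], 1
    for s in sentences:
        n = len(tok.tokenize(s))
        spans.append((pos, pos + n))
        pos += n
    return [cls_attn[a:b].mean().item() for a, b in spans]

note = ["Patient admitted with fever.", "History of diabetes.", "Plan: IV fluids."]
ranked = sorted(zip(score_sentences(note), note), reverse=True)
summary = [s for _, s in ranked[:2]]  # crude two-sentence extract
```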
Reference-based Weak Supervision for Answer Sentence Selection using Web
Data
|
Answer sentence selection (AS2) modeling requires annotated data, i.e.,
hand-labeled question-answer pairs. We present a strategy to collect weakly
supervised answers for a question based on its reference to improve AS2
modeling. Specifically, we introduce Reference-based Weak Supervision (RWS), a
fully automatic large-scale data pipeline that harvests high-quality
weakly-supervised answers from abundant Web data requiring only a
question-reference pair as input. We study the efficacy and robustness of RWS
in the setting of TANDA, a recent state-of-the-art fine-tuning approach
specialized for AS2. Our experiments indicate that the produced data
consistently bolsters TANDA. We achieve the state of the art in terms of P@1,
90.1%, and MAP, 92.9%, on WikiQA.
| 2,021 |
Computation and Language
|
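The pipeline above can be approximated in a few lines: given a question and its reference answer, rank candidate web sentences by similarity to the reference and keep sufficiently similar ones as weak positives. The character-overlap matcher and threshold below are stand-ins for whatever the real pipeline uses.

```python
from difflib import SequenceMatcher

def harvest_weak_answers(question, reference, web_sentences, threshold=0.6):
    """RWS-style weak labeling sketch: web sentences similar enough to the
    reference answer become positive (question, answer) pairs; the rest can
    serve as negatives for AS2 fine-tuning."""
    positives, negatives = [], []
    for sent in web_sentences:
        sim = SequenceMatcher(None, reference.lower(), sent.lower()).ratio()
        (positives if sim >= threshold else negatives).append((question, sent))
    return positives, negatives
```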
On the Use of Context for Predicting Citation Worthiness of Sentences in
Scholarly Articles
|
In this paper, we study the importance of context in predicting the citation
worthiness of sentences in scholarly articles. We formulate this problem as a
sequence labeling task solved using a hierarchical BiLSTM model. We contribute
a new benchmark dataset containing over two million sentences and their
corresponding labels. We preserve the sentence order in this dataset and
perform document-level train/test splits, which importantly allows
incorporating contextual information in the modeling process. We evaluate the
proposed approach on three benchmark datasets. Our results quantify the
benefits of using context and contextual embeddings for citation worthiness.
Lastly, through error analysis, we provide insights into cases where context
plays an essential role in predicting citation worthiness.
| 2,021 |
Computation and Language
|
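A minimal sketch of the hierarchical BiLSTM described above: a word-level BiLSTM encodes each sentence into a vector, and a sentence-level BiLSTM labels the resulting sequence, so each sentence's prediction sees its document context. Dimensions and batching are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid=128, num_labels=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hid, hid, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid, num_labels)

    def forward(self, docs):
        # docs: (batch, n_sents, n_words) token ids for one document each
        b, s, w = docs.shape
        words = self.emb(docs.view(b * s, w))
        _, (h, _) = self.word_lstm(words)              # h: (2, b*s, hid)
        sent_vecs = torch.cat([h[0], h[1]], -1).view(b, s, -1)
        ctx, _ = self.sent_lstm(sent_vecs)             # sentences in context
        return self.out(ctx)   # (batch, n_sents, num_labels): cite / no-cite
```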
A recipe for annotating grounded clarifications
|
In order to interpret the communicative intents of an utterance, it needs to
be grounded in something that is outside of language; that is, grounded in
world modalities. In this paper, we argue that dialogue clarification
mechanisms make explicit the process of interpreting the communicative intents
of the speaker's utterances by grounding them in the various modalities in
which the dialogue is situated. This paper frames dialogue clarification
mechanisms as an understudied research problem and a key missing piece in the
giant jigsaw puzzle of natural language understanding. We discuss both the
theoretical background and practical challenges posed by this problem and
propose a recipe for obtaining grounding annotations. We conclude by
highlighting ethical issues that need to be addressed in future work.
| 2,021 |
Computation and Language
|
Sentiment Classification in Swahili Language Using Multilingual BERT
|
The evolution of the Internet has increased the amount of information that is
expressed by people on different platforms. This information can be product
reviews, discussions on forums, or posts on social media platforms. Access to
these opinions and people's feelings opens the door to opinion mining and
sentiment analysis. As language and speech technologies become more advanced,
strong models have been obtained for many languages. However, due to linguistic
diversity and a lack of datasets, African languages have been left behind. In
this study, using the current state-of-the-art model, multilingual BERT, we
perform sentiment classification on Swahili datasets. The data were created by
extracting and annotating 8.2k reviews and comments from different social media
platforms and the ISEAR emotion dataset, and were labeled as either positive or
negative. The model was fine-tuned and achieved a best accuracy of 87.59%.
| 2,021 |
Computation and Language
|
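A minimal fine-tuning sketch for the setup above, using the Hugging Face Trainer with multilingual BERT; the toy Swahili examples and hyper-parameters are illustrative, not the study's configuration.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # 0 = negative, 1 = positive

# Two toy labeled comments stand in for the 8.2k annotated examples.
data = Dataset.from_dict({
    "text": ["Huduma ilikuwa nzuri sana", "Bidhaa hii ni mbaya"],
    "label": [1, 0],
}).map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                     max_length=64), batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="swahili-sentiment",
                           num_train_epochs=3, per_device_train_batch_size=8),
    train_dataset=data,
).train()
```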
Few-shot Learning for Topic Modeling
|
Topic models have been successfully used for analyzing text documents.
However, with existing topic models, many documents are required for training.
In this paper, we propose a neural network-based few-shot learning method that
can learn a topic model from just a few documents. The neural networks in our
model take a small number of documents as inputs, and output topic model
priors. The proposed method trains the neural networks such that the expected
test likelihood is improved when topic model parameters are estimated by
maximizing the posterior probability using the priors based on the EM
algorithm. Since each step in the EM algorithm is differentiable, the proposed
method can backpropagate the loss through the EM algorithm to train the neural
networks. The expected test likelihood is maximized by a stochastic gradient
descent method using a set of multiple text corpora with an episodic training
framework. In our experiments, we demonstrate that the proposed method achieves
better perplexity than existing methods using three real-world text document
sets.
| 2,021 |
Computation and Language
|
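The backpropagation-through-EM idea above can be sketched for a simple mixture-of-unigrams topic model: a network produces document-topic priors, a few soft EM steps refine them, and because every step is differentiable, a downstream test likelihood can be backpropagated into the prior network. Shapes and names are illustrative.

```python
import torch

def differentiable_em(doc_word_counts, log_topic_word, prior_logits, n_steps=5):
    """doc_word_counts: (D, V) bag-of-words counts; log_topic_word: (K, V)
    log word distributions per topic; prior_logits: (D, K) produced by a
    neural network from a small set of input documents."""
    theta = torch.softmax(prior_logits, dim=-1)  # initial doc-topic mixtures
    for _ in range(n_steps):
        # E-step: soft responsibilities of each topic for each word.
        log_resp = theta.log().unsqueeze(-1) + log_topic_word.unsqueeze(0)
        resp = torch.softmax(log_resp, dim=1)               # (D, K, V)
        # M-step: re-estimate mixtures from expected topic counts.
        counts = (resp * doc_word_counts.unsqueeze(1)).sum(-1)  # (D, K)
        theta = counts / counts.sum(-1, keepdim=True)
    return theta  # differentiable w.r.t. prior_logits
```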
Production vs Perception: The Role of Individuality in Usage-Based
Grammar Induction
|
This paper asks whether a distinction between production-based and
perception-based grammar induction influences either (i) the growth curve of
grammars and lexicons or (ii) the similarity between representations learned
from independent sub-sets of a corpus. A production-based model is trained on
the usage of a single individual, thus simulating the grammatical knowledge of
a single speaker. A perception-based model is trained on an aggregation of many
individuals, thus simulating grammatical generalizations learned from exposure
to many different speakers. To ensure robustness, the experiments are
replicated across two registers of written English, with four additional
registers reserved as a control. A set of three computational experiments shows
that production-based grammars are significantly different from
perception-based grammars across all conditions, with a steeper growth curve
that can be explained by substantial inter-individual grammatical differences.
| 2,021 |
Computation and Language
|
BigGreen at SemEval-2021 Task 1: Lexical Complexity Prediction with
Assembly Models
|
This paper describes a system submitted by team BigGreen to LCP 2021 for
predicting the lexical complexity of English words in a given context. We
assemble a feature engineering-based model with a deep neural network model
founded on BERT. While BERT itself performs competitively, our feature
engineering-based model helps in extreme cases, e.g., separating instances of
easy and neutral difficulty. Our handcrafted features comprise a breadth of
lexical, semantic, syntactic, and novel phonological measures. Visualizations
of BERT attention maps offer insight into potential features that Transformers
models may learn when fine-tuned for lexical complexity prediction. Our
ensembled predictions score reasonably well for the single word subtask, and we
demonstrate how they can be harnessed to perform well on the multi word
expression subtask too.
| 2,021 |
Computation and Language
|
Neural Unsupervised Semantic Role Labeling
|
The task of semantic role labeling (SRL) is dedicated to finding the
predicate-argument structure. Previous work on SRL is mostly supervised and
relies on labeling each example, which can be very expensive and
time-consuming. In this paper, we present the first neural
unsupervised model for SRL. To decompose the task into two argument-related
subtasks, identification and clustering, we propose a pipeline that
correspondingly consists of two neural modules. First, we train a neural model
on two syntax-aware statistically developed rules. The neural model gets the
relevance signal for each token in a sentence, to feed into a BiLSTM, and then
an adversarial layer for noise-adding and classifying simultaneously, thus
enabling the model to learn the semantic structure of a sentence. Then we
propose another neural model for argument role clustering, which is done
through clustering the learned argument embeddings biased towards their
dependency relations. Experiments on the CoNLL-2009 English dataset demonstrate
that our model outperforms the previous state-of-the-art non-neural baselines
for argument identification and classification.
| 2,021 |
Computation and Language
|
Improving Faithfulness in Abstractive Summarization with Contrast
Candidate Generation and Selection
|
Despite significant progress in neural abstractive summarization, recent
studies have shown that the current models are prone to generating summaries
that are unfaithful to the original context. To address the issue, we study
contrast candidate generation and selection as a model-agnostic post-processing
technique to correct the extrinsic hallucinations (i.e. information not present
in the source text) in unfaithful summaries. We learn a discriminative
correction model by generating alternative candidate summaries where named
entities and quantities in the generated summary are replaced with ones with
compatible semantic types from the source document. This model is then used to
select the best candidate as the final output summary. Our experiments and
analysis across a number of neural summarization systems show that our proposed
method is effective in identifying and correcting extrinsic hallucinations. We
analyze the typical hallucination phenomenon by different types of neural
summarization systems, in hope to provide insights for future work on the
direction.
| 2,021 |
Computation and Language
|
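The candidate-generation step above can be sketched with spaCy NER (assuming en_core_web_sm is installed): each named entity in the generated summary is swapped with source entities of the same type, producing alternatives for the discriminative selector to rank. The selection model itself is omitted.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def contrast_candidates(summary, source):
    """Generate alternative summaries by replacing summary entities with
    same-type entities found in the source document."""
    src_ents = {}
    for ent in nlp(source).ents:
        src_ents.setdefault(ent.label_, set()).add(ent.text)
    candidates = []
    for ent in nlp(summary).ents:
        for repl in src_ents.get(ent.label_, set()) - {ent.text}:
            candidates.append(summary.replace(ent.text, repl))
    return candidates
```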
Scattered Factor Universality -- The Power of the Remainder
|
Scattered factor (circular) universality was first introduced by Barker et
al. in 2020. A word $w$ is called $k$-universal for some natural number $k$, if
every word of length $k$ over $w$'s alphabet occurs as a scattered factor in $w$;
it is called circular $k$-universal if a conjugate of $w$ is $k$-universal.
Here, a word $u=u_1\cdots u_n$ is called a scattered factor of $w$ if $u$ is
obtained from $w$ by deleting parts of $w$, i.e., there exist (possibly empty)
words $v_1,\dots,v_{n+1}$ with $w=v_1u_1v_2\cdots v_nu_nv_{n+1}$. In this work,
we solve two problems left open in the aforementioned paper, namely a
generalisation of one of their main theorems to arbitrary alphabets and a
slight modification of another theorem such that we characterise circular
universality in terms of universality. Along the way, we present deep insights
into the behaviour of the remainder of the so-called arch factorisation by
Hebrard when repetitions of words are considered.
| 2,021 |
Computation and Language
|
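The definitions above can be made concrete with Hebrard's arch factorisation: scanning $w$ greedily and closing an arch as soon as every alphabet letter has occurred yields a number of arches equal to the largest $k$ for which $w$ is $k$-universal, with the leftover suffix being exactly the remainder the abstract analyses.

```python
def arch_factorisation(w, alphabet):
    """Return (arches, remainder); len(arches) is the universality index,
    i.e. the largest k such that w is k-universal."""
    arches, seen, start = [], set(), 0
    for i, c in enumerate(w):
        seen.add(c)
        if seen == set(alphabet):          # arch complete: all letters seen
            arches.append(w[start:i + 1])
            seen, start = set(), i + 1
    return arches, w[start:]

arches, rest = arch_factorisation("abcbacbb", "abc")
# arches == ['abc', 'bac'], rest == 'bb'  ->  "abcbacbb" is 2-universal
```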
IIITT@LT-EDI-EACL2021-Hope Speech Detection: There is always Hope in
Transformers
|
In a world filled with serious challenges like climate change, religious and
political conflicts, global pandemics, terrorism, and racial discrimination, an
internet full of hate speech and abusive and offensive content is the last
thing we desire. In this paper, we work to identify and promote positive and
supportive content on these platforms. We work with several transformer-based
models to classify social media comments as hope speech or not-hope speech in
English, Malayalam, and Tamil. This paper presents our work for the
Shared Task on Hope Speech Detection for Equality, Diversity, and Inclusion at
LT-EDI 2021 (EACL 2021).
| 2,021 |
Computation and Language
|
UVCE-IIITT@DravidianLangTech-EACL2021: Tamil Troll Meme Classification:
You need to Pay more Attention
|
Tamil is a Dravidian language that is commonly used and spoken in the
southern part of Asia. In the era of social media, memes have been a fun moment
in the day-to-day life of people. Here, we try to analyze the true meaning of
Tamil memes by categorizing them as troll or non-troll. We propose a model
comprising a transformer-transformer architecture that aims to attain
state-of-the-art performance by using attention as its main component. The
dataset consists of troll and non-troll images with their captions as text. The
task is a binary classification task. The objective of the model is to pay more
attention to the extracted features and to ignore the noise in both images and
text.
| 2,021 |
Computation and Language
|
Alexa Conversations: An Extensible Data-driven Approach for Building
Task-oriented Dialogue Systems
|
Traditional goal-oriented dialogue systems rely on various components such as
natural language understanding, dialogue state tracking, policy learning and
response generation. Training each component requires annotations which are
hard to obtain for every new domain, limiting scalability of such systems.
Similarly, rule-based dialogue systems require extensive writing and
maintenance of rules and do not scale either. End-to-End dialogue systems, on
the other hand, do not require module-specific annotations but need a large
amount of data for training. To overcome these problems, in this demo, we
present Alexa Conversations, a new approach for building goal-oriented dialogue
systems that is scalable, extensible as well as data efficient. The components
of this system are trained in a data-driven manner, but instead of collecting
annotated conversations for training, we generate them using a novel dialogue
simulator based on a few seed dialogues and specifications of APIs and entities
provided by the developer. Our approach provides out-of-the-box support for
natural conversational phenomena like entity sharing across turns or users
changing their mind during conversation without requiring developers to provide
any such dialogue flows. We exemplify our approach using a simple pizza
ordering task and showcase its value in reducing the developer burden for
creating a robust experience. Finally, we evaluate our system using a typical
movie ticket booking task and show that the dialogue simulator is an essential
component of the system that leads to over $50\%$ improvement in turn-level
action signature prediction accuracy.
| 2,021 |
Computation and Language
|
Acoustic Data-Driven Subword Modeling for End-to-End Speech Recognition
|
Subword units are commonly used for end-to-end automatic speech recognition
(ASR), while a fully acoustic-oriented subword modeling approach is somewhat
missing. We propose an acoustic data-driven subword modeling (ADSM) approach
that adapts the advantages of several text-based and acoustic-based subword
methods into one pipeline. With a fully acoustic-oriented label design and
learning process, ADSM produces acoustic-structured subword units and
acoustic-matched target sequence for further ASR training. The obtained ADSM
labels are evaluated with different end-to-end ASR approaches including CTC,
RNN-Transducer and attention models. Experiments on the LibriSpeech corpus show
that ADSM clearly outperforms both byte pair encoding (BPE) and
pronunciation-assisted subword modeling (PASM) in all cases. Detailed analysis
shows that ADSM achieves acoustically more logical word segmentation and more
balanced sequence length, and thus, is suitable for both time-synchronous and
label-synchronous models. We also briefly describe how to apply acoustic-based
subword regularization and unseen text segmentation using ADSM.
| 2,023 |
Computation and Language
|
No comments: Addressing commentary sections in websites' analyses
|
Removing or extracting the commentary sections from a series of websites is a
tedious task, as no standard way to code them is widely adopted. This operation
is thus very rarely performed. In this paper, we show that these commentary
sections can induce significant biases in the analyses, especially in the case
of controversial topics.
Highlights:
$\bullet$ Commentary sections can induce biases in the analysis of websites'
contents.
$\bullet$ Analyzing these sections can be interesting per se.
$\bullet$ We illustrate these points using a corpus of anti-vaccine websites.
$\bullet$ We provide guidelines to remove or extract these sections.
| 2,021 |
Computation and Language
|
BERTi\'c -- The Transformer Language Model for Bosnian, Croatian,
Montenegrin and Serbian
|
In this paper we describe a transformer model pre-trained on 8 billion tokens
of crawled text from the Croatian, Bosnian, Serbian and Montenegrin web
domains. We evaluate the transformer model on the tasks of part-of-speech
tagging, named entity recognition, geo-location prediction, and commonsense
causal reasoning, showing improvements on all tasks over state-of-the-art
models. For commonsense reasoning evaluation, we introduce COPA-HR -- a
translation of the Choice of Plausible Alternatives (COPA) dataset into
Croatian. The BERTi\'c model is made available for free usage and further
task-specific fine-tuning through HuggingFace.
| 2,021 |
Computation and Language
|
Code Structure Guided Transformer for Source Code Summarization
|
Code summaries help developers comprehend programs and reduce their time to
infer the program functionalities during software maintenance. Recent efforts
resort to deep learning techniques such as sequence-to-sequence models for
generating accurate code summaries, among which Transformer-based approaches
have achieved promising performance. However, effectively integrating the code
structure information into the Transformer is under-explored in this task
domain. In this paper, we propose a novel approach named SG-Trans to
incorporate code structural properties into Transformer. Specifically, we
inject the local symbolic information (e.g., code tokens and statements) and
global syntactic structure (e.g., data flow graph) into the self-attention
module of the Transformer as an inductive bias. To further capture the
hierarchical characteristics of code, the local information and global
structure are designed to be distributed over the attention heads of the lower
and higher layers of the Transformer. Extensive evaluation shows the superior
performance of SG-Trans over the state-of-the-art approaches. Compared with the
best-performing baseline, SG-Trans improves METEOR score, a metric widely used
for measuring generation quality, by 1.4% and 2.0% on the two benchmark
datasets, respectively.
| 2,023 |
Computation and Language
|
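A generic sketch of injecting structural information into self-attention as an additive inductive bias, in the spirit of the approach above (this is not SG-Trans's exact formulation): token pairs connected in a structure graph, such as tokens of the same statement or endpoints of a data-flow edge, get their attention logits boosted.

```python
import torch
import torch.nn.functional as F

def structure_biased_attention(q, k, v, struct_adj, bias_weight=1.0):
    """q, k, v: (batch, heads, T, d); struct_adj: (T, T) with 1.0 where two
    tokens are related in the code structure graph, 0.0 otherwise."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5   # scaled dot-product scores
    logits = logits + bias_weight * struct_adj    # structural inductive bias
    return F.softmax(logits, dim=-1) @ v
```

Restricting the bias to some heads in lower layers (local, statement-level edges) and to others in higher layers (global, data-flow edges) would mirror the hierarchical design described above.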
Probing for Bridging Inference in Transformer Language Models
|
We probe pre-trained transformer language models for bridging inference. We
first investigate individual attention heads in BERT and observe that attention
heads at higher layers focus prominently on bridging relations in comparison
with the lower and middle layers, and that a few specific attention heads
concentrate consistently on bridging. More importantly, we consider language
models as a whole in our second approach, where bridging anaphora resolution is
formulated as a masked token prediction task (Of-Cloze test). Our formulation
produces promising results without any fine-tuning, which indicates that
pre-trained language models substantially capture bridging inference. Our
further investigation shows that the anaphor-antecedent distance and
the context provided to the language model play an important role in the
inference.
| 2,021 |
Computation and Language
|
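Our reading of the Of-Cloze formulation can be sketched as follows: append "<anaphor> of [MASK]" to the context and rank candidate antecedents by the masked-LM score of their first word-piece, with no fine-tuning. The template and scoring details here are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def rank_antecedents(context, anaphor, candidates):
    text = f"{context} {anaphor} of {tok.mask_token}."
    inputs = tok(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    # Score each candidate antecedent by its first word-piece logit.
    scores = {c: logits[tok.convert_tokens_to_ids(tok.tokenize(c)[0])].item()
              for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)

rank_antecedents("I walked up to a house.", "The door", ["house", "car"])
```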
Everything Has a Cause: Leveraging Causal Inference in Legal Text
Analysis
|
Causal inference is the process of capturing cause-effect relationship among
variables. Most existing works focus on dealing with structured data, while
mining causal relationship among factors from unstructured data, like text, has
been less examined, but is of great importance, especially in the legal domain.
In this paper, we propose a novel Graph-based Causal Inference (GCI)
framework, which builds causal graphs from fact descriptions without much human
involvement and enables causal inference to facilitate legal practitioners to
make proper decisions. We evaluate the framework on a challenging similar
charge disambiguation task. Experimental results show that GCI can capture the
nuance from fact descriptions among multiple confusing charges and provide
explainable discrimination, especially in few-shot settings. We also observe
that the causal knowledge contained in GCI can be effectively injected into
powerful neural networks for better performance and interpretability.
| 2,021 |
Computation and Language
|
Advanced Long-context End-to-end Speech Recognition Using
Context-expanded Transformers
|
This paper addresses end-to-end automatic speech recognition (ASR) for long
audio recordings such as lecture and conversational speeches. Most end-to-end
ASR models are designed to recognize independent utterances, but contextual
information (e.g., speaker or topic) over multiple utterances is known to be
useful for ASR. In our prior work, we proposed a context-expanded Transformer
that accepts multiple consecutive utterances at the same time and predicts an
output sequence for the last utterance, achieving 5-15% relative error
reduction from utterance-based baselines in lecture and conversational ASR
benchmarks. Although the results have shown remarkable performance gain, there
is still potential to further improve the model architecture and the decoding
process. In this paper, we extend our prior work by (1) introducing the
Conformer architecture to further improve the accuracy, (2) accelerating the
decoding process with a novel activation recycling technique, and (3) enabling
streaming decoding with triggered attention. We demonstrate that the extended
Transformer provides state-of-the-art end-to-end ASR performance, obtaining a
17.3% character error rate for the HKUST dataset and 12.0%/6.3% word error
rates for the Switchboard-300 Eval2000 CallHome/Switchboard test sets. The new
decoding method reduces decoding time by more than 50% and further enables
streaming ASR with limited accuracy degradation.
| 2,021 |
Computation and Language
|
Transductive Learning for Abstractive News Summarization
|
Pre-trained and fine-tuned news summarizers are expected to generalize to
news articles unseen in the fine-tuning (training) phase. However, these
articles often contain specifics, such as new events and people, a summarizer
could not learn about in training. This applies to scenarios such as a news
publisher training a summarizer on dated news and summarizing incoming recent
news. In this work, we explore the first application of transductive learning
to summarization where we further fine-tune models on test set inputs.
Specifically, we construct pseudo summaries from salient article sentences and
input randomly masked articles. Moreover, this approach is also beneficial in
the fine-tuning phase, where we jointly predict extractive pseudo references
and abstractive gold summaries in the training set. We show that our approach
yields state-of-the-art results on CNN/DM and NYT datasets, improving ROUGE-L
by 1.05 and 0.74, respectively. Importantly, our approach does not require any
changes to the original architecture. Moreover, we show the benefits of
transduction from dated to more recent CNN news. Finally, through human and
automatic evaluation, we demonstrate improvements in summary abstractiveness
and coherence.
| 2,022 |
Computation and Language
|
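The pseudo-reference construction above can be approximated with a greedy coverage heuristic: repeatedly add the article sentence contributing the most new content, a cheap stand-in for ROUGE-based salience selection; the masked-article input side is omitted here.

```python
def pseudo_summary(article_sents, budget=3):
    """Greedy extractive pseudo reference: pick sentences that add the most
    previously uncovered tokens (a rough proxy for salience)."""
    chosen, covered = [], set()
    for _ in range(min(budget, len(article_sents))):
        best = max((s for s in article_sents if s not in chosen),
                   key=lambda s: len(set(s.lower().split()) - covered))
        chosen.append(best)
        covered |= set(best.lower().split())
    return chosen
```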
Can Latent Alignments Improve Autoregressive Machine Translation?
|
Latent alignment objectives such as CTC and AXE significantly improve
non-autoregressive machine translation models. Can they improve autoregressive
models as well? We explore the possibility of training autoregressive machine
translation models with latent alignment objectives, and observe that, in
practice, this approach results in degenerate models. We provide a theoretical
explanation for these empirical results, and prove that latent alignment
objectives are incompatible with teacher forcing.
| 2,021 |
Computation and Language
|
Extracting Temporal Event Relation with Syntax-guided Graph Transformer
|
Extracting temporal relations (e.g., before, after, and simultaneous) among
events is crucial to natural language understanding. One of the key challenges
of this problem is that when the events of interest are far away in text, the
context in-between often becomes complicated, making it challenging to resolve
the temporal relationship between them. This paper thus proposes a new
Syntax-guided Graph Transformer network (SGT) to mitigate this issue, by (1)
explicitly exploiting the connection between two events based on their
dependency parsing trees, and (2) automatically locating temporal cues between
two events via a novel syntax-guided attention mechanism. Experiments on two
benchmark datasets, MATRES and TB-Dense, show that our approach significantly
outperforms previous state-of-the-art methods on both end-to-end temporal
relation extraction and temporal relation classification; This improvement also
proves to be robust on the contrast set of MATRES. The code is publicly
available at https://github.com/VT-NLP/Syntax-Guided-Graph-Transformer.
| 2,022 |
Computation and Language
|
Probing Commonsense Explanation in Dialogue Response Generation
|
Humans use commonsense reasoning (CSR) implicitly to produce natural and
coherent responses in conversations. Aiming to close the gap between current
response generation (RG) models and human communication abilities, we want to
understand why RG models respond as they do by probing RG model's understanding
of commonsense reasoning that elicits proper responses. We formalize the
problem by framing commonsense as a latent variable in the RG task and using
explanations for responses as a textual form of commonsense. We collect 6k
annotated explanations justifying responses from four dialogue datasets, ask
humans to verify them, and propose two probing settings to evaluate RG models'
CSR capabilities. Probing results show that models fail to capture the logical
relations between commonsense explanations and responses, and that fine-tuning
on in-domain data and increasing model sizes do not lead to understanding of
CSR for RG. We hope our study motivates more research in making RG models emulate
the human reasoning process in pursuit of smooth human-AI communication.
| 2,021 |
Computation and Language
|
Improving Cross-Modal Alignment in Vision Language Navigation via
Syntactic Information
|
Vision language navigation is the task that requires an agent to navigate
through a 3D environment based on natural language instructions. One key
challenge in this task is to ground instructions with the current visual
information that the agent perceives. Most of the existing work employs soft
attention over individual words to locate the instruction required for the next
action. However, different words have different functions in a sentence (e.g.,
modifiers convey attributes, verbs convey actions). Syntax information like
dependencies and phrase structures can aid the agent to locate important parts
of the instruction. Hence, in this paper, we propose a navigation agent that
utilizes syntax information derived from a dependency tree to enhance alignment
between the instruction and the current visual scenes. Empirically, our agent
outperforms the baseline model that does not use syntax information on the
Room-to-Room dataset, especially in the unseen environment. Besides, our agent
achieves the new state-of-the-art on Room-Across-Room dataset, which contains
instructions in 3 languages (English, Hindi, and Telugu). We also show that our
agent is better at aligning instructions with the current visual information
via qualitative visualizations. Code and models:
https://github.com/jialuli-luka/SyntaxVLN
| 2,021 |
Computation and Language
|
ELECTRAMed: a new pre-trained language representation model for
biomedical NLP
|
The overwhelming amount of biomedical scientific texts calls for the
development of effective language models able to tackle a wide range of
biomedical natural language processing (NLP) tasks. The most recent dominant
approaches are domain-specific models, initialized with general-domain textual
data and then trained on a variety of scientific corpora. However, it has been
observed that for specialized domains in which large corpora exist, training a
model from scratch with just in-domain knowledge may yield better results.
Moreover, the increasing focus on the compute costs for pre-training recently
led to the design of more efficient architectures, such as ELECTRA. In this
paper, we propose a pre-trained domain-specific language model, called
ELECTRAMed, suited for the biomedical field. The novel approach inherits the
learning framework of the general-domain ELECTRA architecture, as well as its
computational advantages. Experiments performed on benchmark datasets for
several biomedical NLP tasks support the usefulness of ELECTRAMed, which sets
the novel state-of-the-art result on the BC5CDR corpus for named entity
recognition, and provides the best outcome in 2 of the 5 runs of the 7th
BioASQ-factoid Challenge for the question answering task.
| 2,021 |
Computation and Language
|
Operationalizing a National Digital Library: The Case for a Norwegian
Transformer Model
|
In this work, we show the process of building a large-scale training set from
digital and digitized collections at a national library. The resulting
Bidirectional Encoder Representations from Transformers (BERT)-based language
model for Norwegian outperforms multilingual BERT (mBERT) models in several
token and sequence classification tasks for both Norwegian Bokm{\aa}l and
Norwegian Nynorsk. Our model also improves the mBERT performance for other
languages present in the corpus such as English, Swedish, and Danish. For
languages not included in the corpus, the weights degrade moderately while
keeping strong multilingual properties. Therefore, we show that building
high-quality models within a memory institution using somewhat noisy optical
character recognition (OCR) content is feasible, and we hope to pave the way
for other memory institutions to follow.
| 2,021 |
Computation and Language
|
Refining Targeted Syntactic Evaluation of Language Models
|
Targeted syntactic evaluation of subject-verb number agreement in English
(TSE) evaluates language models' syntactic knowledge using hand-crafted minimal
pairs of sentences that differ only in the main verb's conjugation. The method
evaluates whether language models rate each grammatical sentence as more likely
than its ungrammatical counterpart. We identify two distinct goals for TSE.
First, evaluating the systematicity of a language model's syntactic knowledge:
given a sentence, can it conjugate arbitrary verbs correctly? Second,
evaluating a model's likely behavior: given a sentence, does the model
concentrate its probability mass on correctly conjugated verbs, even if only on
a subset of the possible verbs? We argue that current implementations of TSE do
not directly capture either of these goals, and propose new metrics to capture
each goal separately. Under our metrics, we find that TSE overestimates
systematicity of language models, but that models score up to 40% better on
verbs that they predict are likely in context.
| 2,021 |
Computation and Language
|
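The basic TSE comparison underlying the discussion above can be sketched with GPT-2: a minimal pair passes when the grammatical variant receives a higher total log-probability. This illustrates the evaluation setup in general, not the paper's refined metrics.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def sentence_logprob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss       # mean negative log-likelihood
    return -loss.item() * (ids.size(1) - 1)   # total log-probability

good = sentence_logprob("The keys to the cabinet are on the table.")
bad = sentence_logprob("The keys to the cabinet is on the table.")
print(good > bad)  # True when the model prefers the grammatical variant
```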
Neural Language Models with Distant Supervision to Identify Major
Depressive Disorder from Clinical Notes
|
Major depressive disorder (MDD) is a prevalent psychiatric disorder that is
associated with significant healthcare burden worldwide. Phenotyping of MDD can
help early diagnosis and consequently may have significant advantages in
patient management. In prior research MDD phenotypes have been extracted from
structured Electronic Health Records (EHR) or using Electroencephalographic
(EEG) data with traditional machine learning models to predict MDD phenotypes.
However, MDD phenotypic information is also documented in free-text EHR data,
such as clinical notes. While clinical notes may provide more accurate
phenotyping information, natural language processing (NLP) algorithms must be
developed to abstract such information. Recent advancements in NLP resulted in
state-of-the-art neural language models, such as Bidirectional Encoder
Representations for Transformers (BERT) model, which is a transformer-based
model that can be pre-trained from a corpus of unsupervised text data and then
fine-tuned on specific tasks. However, such neural language models have been
underutilized in clinical NLP tasks due to the lack of large training datasets.
In the literature, researchers have utilized the distant supervision paradigm
to train machine learning models on clinical text classification tasks to
mitigate the issue of lacking annotated training data. It is still unknown
whether the paradigm is effective for neural language models. In this paper, we
propose to leverage the neural language models in a distant supervision
paradigm to identify MDD phenotypes from clinical notes. The experimental
results indicate that our proposed approach is effective in identifying MDD
phenotypes and that the Bio-Clinical BERT, a specific BERT model for clinical
data, achieved the best performance in comparison with conventional machine
learning models.
| 2,021 |
Computation and Language
|
NewsEdits: A Dataset of Revision Histories for News Articles (Technical
Report: Data Processing)
|
News article revision histories have the potential to give us novel insights
across varied fields of linguistics and social sciences. In this work, we
present, to our knowledge, the first publicly available dataset of news article
revision histories, or NewsEdits.
Our dataset is multilingual; it contains 1,278,804 articles with 4,609,430
versions from over 22 English- and French-language newspaper sources based in
three countries. Across version pairs, we count 10.9 million added sentences,
8.9 million changed sentences, and 6.8 million removed sentences. Within the
changed sentences, we derive 72 million atomic edits. NewsEdits is, to our
knowledge, the largest corpus of revision histories of any domain.
| 2,022 |
Computation and Language
|
Modeling "Newsworthiness" for Lead-Generation Across Corpora
|
Journalists obtain "leads", or story ideas, by reading large corpora of
government records: court cases, proposed bills, etc. However, only a small
percentage of such records are interesting documents. We propose a model of
"newsworthiness" aimed at surfacing interesting documents. We train models on
automatically labeled corpora -- published newspaper articles -- to predict
whether each article was a front-page article (i.e., \textbf{newsworthy}) or
not (i.e., \textbf{less newsworthy}). We transfer these models to unlabeled
corpora -- court cases, bills, city-council meeting minutes -- to rank
documents in these corpora on "newsworthiness". A fine-tuned RoBERTa model
achieves .93 AUC performance on heldout labeled documents, and .88 AUC on
expert-validated unlabeled corpora. We provide interpretation and visualization
for our models.
| 2,021 |
Computation and Language
|
"Don't quote me on that": Finding Mixtures of Sources in News Articles
|
Journalists publish statements provided by people, or \textit{sources} to
contextualize current events, help voters make informed decisions, and hold
powerful individuals accountable. In this work, we construct an ontological
labeling system for sources based on each source's \textit{affiliation} and
\textit{role}. We build a probabilistic model to infer these attributes for
named sources and to describe news articles as mixtures of these sources. Our
model outperforms existing mixture modeling and co-clustering approaches and
correctly infers source-type in 80\% of expert-evaluated trials. Such work can
facilitate research in downstream tasks like opinion and argumentation mining,
representing a first step towards machine-in-the-loop \textit{computational
journalism} systems.
| 2,021 |
Computation and Language
|
skweak: Weak Supervision Made Easy for NLP
|
We present skweak, a versatile, Python-based software toolkit enabling NLP
developers to apply weak supervision to a wide range of NLP tasks. Weak
supervision is an emerging machine learning paradigm based on a simple idea:
instead of labelling data points by hand, we use labelling functions derived
from domain knowledge to automatically obtain annotations for a given dataset.
The resulting labels are then aggregated with a generative model that estimates
the accuracy (and possible confusions) of each labelling function. The skweak
toolkit makes it easy to implement a large spectrum of labelling functions
(such as heuristics, gazetteers, neural models or linguistic constraints) on
text data, apply them on a corpus, and aggregate their results in a fully
unsupervised fashion. skweak is especially designed to facilitate the use of
weak supervision for NLP tasks such as text classification and sequence
labelling. We illustrate the use of skweak for NER and sentiment analysis.
skweak is released under an open-source license and is available at:
https://github.com/NorskRegnesentral/skweak
| 2,021 |
Computation and Language
|
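A usage sketch adapted from skweak's documentation (check the repository for the current API; spaCy's en_core_web_sm is assumed installed): two toy labelling functions vote on entity spans, and an HMM aggregates their outputs with no gold labels.

```python
import spacy
from skweak import heuristics, aggregation

def money_detector(doc):
    for tok in doc[1:]:
        if tok.text[0].isdigit() and tok.nbor(-1).is_currency:
            yield tok.i - 1, tok.i + 1, "MONEY"

def year_detector(doc):
    for tok in doc:
        if tok.text.isdigit() and len(tok.text) == 4:
            yield tok.i, tok.i + 1, "DATE"

lf1 = heuristics.FunctionAnnotator("money", money_detector)
lf2 = heuristics.FunctionAnnotator("years", year_detector)

nlp = spacy.load("en_core_web_sm")
doc = lf2(lf1(nlp("Donald Trump paid $750 in federal income taxes in 2016")))

# Aggregate the (possibly conflicting) annotations with a generative model.
hmm = aggregation.HMM("hmm", ["MONEY", "DATE"])
hmm.fit_and_aggregate([doc])
print(doc.spans["hmm"])  # aggregated, weakly supervised entity spans
```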
When FastText Pays Attention: Efficient Estimation of Word
Representations using Constrained Positional Weighting
|
In 2018, Mikolov et al. introduced the positional language model, which has
characteristics of attention-based neural machine translation models and which
achieved state-of-the-art performance on the intrinsic word analogy task.
However, the positional model is not fast in practice and has never been
evaluated on qualitative criteria or extrinsic tasks. We propose a constrained
positional model, which adapts the sparse attention mechanism from neural
machine translation to improve the speed of the positional model. We evaluate
the positional and constrained positional models on three novel qualitative
criteria and on language modeling. We show that the positional and constrained
positional models contain interpretable information about the grammatical
properties of words and outperform other shallow models on language modeling.
We also show that our constrained model outperforms the positional model on
language modeling and trains twice as fast.
| 2,022 |
Computation and Language
|
Efficient pre-training objectives for Transformers
|
The Transformer architecture deeply changed natural language processing,
outperforming all previous state-of-the-art models. However, well-known
Transformer models like BERT, RoBERTa, and GPT-2 require a huge compute budget
to create a high-quality contextualised representation. In this paper, we study
several efficient pre-training objectives for Transformer-based models. By
testing these objectives on different tasks, we determine which of the ELECTRA
model's new features is the most relevant. We confirm that Transformer
pre-training is improved when the input does not contain masked tokens, and
that computing the loss over the whole output both reduces training time and is
essential for performance. Moreover, inspired by ELECTRA, we study a model
composed of two blocks: a discriminator and a simple generator based on a
statistical model with no impact on the computational performance. We show that
it is possible to efficiently train BERT-like models using a discriminative
approach as in ELECTRA but without a complex generator, which is expensive.
Finally, we show that ELECTRA benefits heavily from a state-of-the-art
hyper-parameter search.
| 2,021 |
Computation and Language
|
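The discriminator-with-simple-generator idea above can be sketched as replaced token detection where the "generator" is just a unigram distribution: corrupt some positions with sampled tokens, then train the encoder to tag every position as original or replaced, so no [MASK] token appears and the loss uses the whole output. All names are illustrative.

```python
import torch
import torch.nn.functional as F

def corrupt_batch(token_ids, unigram_probs, replace_prob=0.15):
    """token_ids: (B, T) long tensor; unigram_probs: (V,) corpus word
    frequencies acting as a statistical generator with no extra compute."""
    mask = torch.rand_like(token_ids, dtype=torch.float) < replace_prob
    samples = torch.multinomial(unigram_probs, token_ids.numel(),
                                replacement=True).view_as(token_ids)
    corrupted = torch.where(mask, samples, token_ids)
    labels = (corrupted != token_ids).float()   # 1 = replaced, 0 = original
    return corrupted, labels

# With per-token scores from a BERT-like encoder, shape (B, T):
# loss = F.binary_cross_entropy_with_logits(discriminator_logits, labels)
```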
X-METRA-ADA: Cross-lingual Meta-Transfer Learning Adaptation to Natural
Language Understanding and Question Answering
|
Multilingual models, such as M-BERT and XLM-R, have gained increasing
popularity, due to their zero-shot cross-lingual transfer learning
capabilities. However, their generalization ability is still inconsistent for
typologically diverse languages and across different benchmarks. Recently,
meta-learning has garnered attention as a promising technique for enhancing
transfer learning under low-resource scenarios: particularly for cross-lingual
transfer in Natural Language Understanding (NLU). In this work, we propose
X-METRA-ADA, a cross-lingual MEta-TRAnsfer learning ADAptation approach for
NLU. Our approach adapts MAML, an optimization-based meta-learning approach, to
learn to adapt to new languages. We extensively evaluate our framework on two
challenging cross-lingual NLU tasks: multilingual task-oriented dialog and
typologically diverse question answering. We show that our approach outperforms
naive fine-tuning, reaching competitive performance on both tasks for most
languages. Our analysis reveals that X-METRA-ADA can leverage limited data for
faster adaptation.
| 2,021 |
Computation and Language
|
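A schematic MAML inner/outer step of the kind X-METRA-ADA adapts for cross-lingual NLU (a sketch under assumed shapes, not the paper's code): adapt to a support set with one gradient step, then backpropagate the query-set loss through that adaptation.

```python
import torch
from torch.func import functional_call

def maml_step(model, loss_fn, support, query, inner_lr=1e-3):
    """support/query: (inputs, labels) pairs sampled from one task/language."""
    (x_s, y_s), (x_q, y_q) = support, query
    params = dict(model.named_parameters())
    # Inner loop: one adaptation step on the support set.
    inner_loss = loss_fn(functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(inner_loss, params.values(), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    # Outer loop: evaluate adapted parameters on the query set; gradients
    # flow through the inner update into the original parameters.
    return loss_fn(functional_call(model, adapted, (x_q,)), y_q)
```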
Problems and Countermeasures in Natural Language Processing Evaluation
|
Evaluation in natural language processing guides and promotes research on
models and methods. In recent years, new evaluation data sets and evaluation
tasks have been continuously proposed. At the same time, a series of problems
exposed by existing evaluation have also restricted the progress of natural
language processing technology. Starting from the concept, composition,
development and meaning of natural language evaluation, this article classifies
and summarizes the tasks and characteristics of mainstream natural language
evaluation, and then summarizes the problems and causes of natural language
processing evaluation. Finally, this article refers to the human language
ability evaluation standard, puts forward the concept of human-like machine
language ability evaluation, and proposes a series of basic principles and
implementation ideas for human-like machine language ability evaluation from
the three aspects of reliability, difficulty and validity.
| 2,021 |
Computation and Language
|
Mitigating Temporal-Drift: A Simple Approach to Keep NER Models Crisp
|
Performance of neural models for named entity recognition degrades over time,
becoming stale. This degradation is due to temporal drift, the change in our
target variables' statistical properties over time. This issue is especially
problematic for social media data, where topics change rapidly. In order to
mitigate the problem, data annotation and retraining of models is common.
Despite its usefulness, this process is expensive and time-consuming, which
motivates new research on efficient model updating. In this paper, we propose
an intuitive approach to measure the potential trendiness of tweets and use
this metric to select the most informative instances to use for training. We
conduct experiments on three state-of-the-art models on the Temporal Twitter
Dataset. Our approach shows larger increases in prediction accuracy with less
training data than the alternatives, making it an attractive, practical
solution.
| 2,021 |
Computation and Language
|
Seed Word Selection for Weakly-Supervised Text Classification with
Unsupervised Error Estimation
|
Weakly-supervised text classification aims to induce text classifiers from
only a few user-provided seed words. The vast majority of previous work assumes
high-quality seed words are given. However, the expert-annotated seed words are
sometimes non-trivial to come up with. Furthermore, in the weakly-supervised
learning setting, we do not have any labeled document to measure the seed
words' efficacy, making the seed word selection process "a walk in the dark".
In this work, we remove the need for expert-curated seed words by first mining
(noisy) candidate seed words associated with the category names. We then train
interim models with individual candidate seed words. Lastly, we estimate the
interim models' error rate in an unsupervised manner. The seed words that yield
the lowest estimated error rates are added to the final seed word set. A
comprehensive evaluation of six binary classification tasks on four popular
datasets demonstrates that the proposed method outperforms a baseline using
only category-name seed words and obtains performance comparable to a
counterpart using expert-annotated seed words.
| 2,021 |
Computation and Language
|
Subsentence Extraction from Text Using Coverage-Based Deep Learning
Language Models
|
Sentiment prediction remains a challenging and unresolved task in various
research fields, including psychology, neuroscience, and computer science. This
stems from its high degree of subjectivity and limited input sources that can
effectively capture the actual sentiment. This can be even more challenging
with only text-based input. Meanwhile, the rise of deep learning and an
unprecedented large volume of data have paved the way for artificial
intelligence to perform impressively accurate predictions or even human-level
reasoning. Drawing inspiration from this, we propose a coverage-based sentiment
and subsentence extraction system that estimates a span of input text and
recursively feeds this information back to the networks. The predicted
subsentence consists of auxiliary information expressing a sentiment. This is
an important building block for enabling vivid and epic sentiment delivery
(within the scope of this paper) and for other natural language processing
tasks such as text summarisation and Q&A. Our approach outperforms the
state-of-the-art approaches by a large margin in subsentence prediction (i.e.,
Average Jaccard scores from 0.72 to 0.89). For the evaluation, we designed
rigorous experiments consisting of 24 ablation studies. Finally, our learned
lessons are returned to the community by sharing software packages and a public
dataset that can reproduce the results presented in this paper.
| 2,021 |
Computation and Language
|
Identifying Helpful Sentences in Product Reviews
|
In recent years online shopping has gained momentum and became an important
venue for customers wishing to save time and simplify their shopping process. A
key advantage of shopping online is the ability to read what other customers
are saying about products of interest. In this work, we aim to maintain this
advantage in situations where extreme brevity is needed, for example, when
shopping by voice. We suggest a novel task of extracting a single
representative helpful sentence from a set of reviews for a given product. The
selected sentence should meet two conditions: first, it should be helpful for a
purchase decision and second, the opinion it expresses should be supported by
multiple reviewers. This task is closely related to the task of Multi Document
Summarization in the product reviews domain but differs in its objective and
its level of conciseness. We collect a dataset in English of sentence
helpfulness scores via crowd-sourcing and demonstrate its reliability despite
the inherent subjectivity involved. Next, we describe a complete model that
extracts representative helpful sentences with positive and negative sentiment
towards the product and demonstrate that it outperforms several baselines.
| 2,021 |
Computation and Language
|
Addressing the Vulnerability of NMT in Input Perturbations
|
Neural Machine Translation (NMT) has achieved significant breakthrough in
performance but is known to suffer vulnerability to input perturbations. As
real input noise is difficult to predict during training, robustness is a big
issue for system deployment. In this paper, we improve the robustness of NMT
models by reducing the effect of noisy words through a Context-Enhanced
Reconstruction (CER) approach. CER trains the model to resist noise in two
steps: (1) perturbation step that breaks the naturalness of input sequence with
made-up words; (2) reconstruction step that defends the noise propagation by
generating better and more robust contextual representation. Experimental
results on Chinese-English (ZH-EN) and French-English (FR-EN) translation tasks
demonstrate robustness improvement on both news and social media text. Further
fine-tuning experiments on social media text show our approach can converge at
a higher position and provide a better adaptation.
| 2,021 |
Computation and Language
|
WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning
for Emotion Classification and Empathy Prediction
|
This paper describes our contribution to the WASSA 2021 shared task on
Empathy Prediction and Emotion Classification. The broad goal of this task was
to model an empathy score, a distress score and the overall level of emotion of
an essay written in response to a newspaper article associated with harm to
someone. We have used the ELECTRA model extensively, along with advanced deep
learning approaches like multi-task learning. Additionally, we also leveraged
standard machine learning techniques like ensembling. Our system achieves a
Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of
0.5528 on sub-task II. We ranked 1st in the Emotion Classification sub-task and
3rd in the Empathy Prediction sub-task.
| 2,021 |
Computation and Language
|
Frustratingly Easy Edit-based Linguistic Steganography with a Masked
Language Model
|
With advances in neural language models, the focus of linguistic
steganography has shifted from edit-based approaches to generation-based ones.
While the latter's payload capacity is impressive, generating genuine-looking
texts remains challenging. In this paper, we revisit edit-based linguistic
steganography, with the idea that a masked language model offers an
off-the-shelf solution. The proposed method eliminates painstaking rule
construction and has a high payload capacity for an edit-based model. It is
also shown to be more secure against automatic detection than a
generation-based method while offering better control of the security/payload
capacity trade-off.
| 2,021 |
Computation and Language
|
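A toy version of the edit-based scheme above (the paper's method synchronises sender and receiver more carefully): mask one word and substitute the masked LM's first- or second-ranked candidate according to the secret bit.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def embed_bit(sentence, position, bit):
    """Hide one bit by choosing between the top-2 MLM candidates at a
    masked position (bit must be 0 or 1)."""
    words = sentence.split()
    words[position] = tok.mask_token
    inputs = tok(" ".join(words), return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    top2 = logits.topk(2).indices
    words[position] = tok.decode([top2[bit].item()]).strip()
    return " ".join(words)

stego = embed_bit("the weather is nice today", 3, bit=1)
```

The receiver repeats the masking at the agreed position and recovers the bit from the rank of the observed word.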
RoFormer: Enhanced Transformer with Rotary Position Embedding
|
Position encoding has recently been shown to be effective in the transformer
architecture. It enables valuable supervision for dependency modeling between
elements at different positions of the sequence. In this paper, we first
investigate various methods to integrate positional information into the
learning process of transformer-based language models. Then, we propose a novel
method named Rotary Position Embedding(RoPE) to effectively leverage the
positional information. Specifically, the proposed RoPE encodes the absolute
position with a rotation matrix and meanwhile incorporates the explicit
relative position dependency in self-attention formulation. Notably, RoPE
enables valuable properties, including the flexibility of sequence length,
decaying inter-token dependency with increasing relative distances, and the
capability of equipping the linear self-attention with relative position
encoding. Finally, we evaluate the enhanced transformer with rotary position
embedding, also called RoFormer, on various long text classification benchmark
datasets. Our experiments show that it consistently outperforms its alternatives.
Furthermore, we provide a theoretical analysis to explain some experimental
results. RoFormer is already integrated into Huggingface:
\url{https://huggingface.co/docs/transformers/model_doc/roformer}.
| 2,023 |
Computation and Language
|
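The rotation described above is simple to implement; the following is a minimal interleaved-pair version applied to a (batch, seq_len, dim) tensor of queries or keys before the attention product, so that query-key dot products depend only on relative offsets.

```python
import torch

def apply_rope(x):
    """Rotary position embedding: rotate each (even, odd) feature pair by a
    position-dependent angle. x: (batch, seq_len, dim) with dim even."""
    b, t, d = x.shape
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d, 2, dtype=torch.float) / d))
    angles = torch.arange(t, dtype=torch.float)[:, None] * inv_freq[None, :]
    sin, cos = angles.sin(), angles.cos()          # each: (t, d/2)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```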
HYPER^2: Hyperbolic Poincare Embedding for Hyper-Relational Link
Prediction
|
Link Prediction, addressing the issue of completing KGs with missing facts,
has been broadly studied. However, less light is shed on the ubiquitous
hyper-relational KGs. Most existing hyper-relational KG embedding models still
tear an n-ary fact into smaller tuples, neglecting the indecomposability of
some n-ary facts, while other frameworks work only for facts of certain arities
or ignore the significance of the primary triple. In this paper, we represent an n-ary
fact as a whole, simultaneously keeping the integrity of n-ary fact and
maintaining the vital role that the primary triple plays. In addition, we
generalize hyperbolic Poincar\'e embedding from binary to arbitrary arity data,
which has not been studied yet. To tackle the weak expressiveness and high
complexity issue, we propose HYPER^2 which is qualified for capturing the
interaction between entities within and beyond triple through information
aggregation on the tangent space. Extensive experiments demonstrate HYPER^2
achieves superior performance to its translational and deep analogues,
improving SOTA by up to 34.5\% with relatively few dimensions. Moreover, we
study the side effect of literals, and we theoretically and experimentally
compare the computational complexity of HYPER^2 against several best-performing
baselines; HYPER^2 is 49-61 times quicker than its counterparts.
| 2,021 |
Computation and Language
|
Grammatical Error Generation Based on Translated Fragments
|
We perform neural machine translation of sentence fragments in order to
create large amounts of training data for English grammatical error correction.
Our method aims at simulating mistakes made by second language learners, and
produces a wider range of non-native style language in comparison to
state-of-the-art synthetic data creation methods. In addition to purely
grammatical errors, our approach generates other types of errors, such as
lexical errors. We perform grammatical error correction experiments using
neural sequence-to-sequence models, and carry out quantitative and qualitative
evaluation. A model trained on data created using our proposed method is shown
to outperform a baseline model on test data with a high proportion of errors.
| 2,021 |
Computation and Language
|
Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using
Multilingual BERT
|
We classify seven months' worth of Belgian COVID-related Tweets using
multilingual BERT and relate them to their governments' COVID measures. We
classify Tweets by their stated opinion on Belgian government curfew measures
(too strict, ok, too loose). We examine the change in topics discussed and
views expressed over time and in reference to dates of related events such as
implementation of new measures or COVID-19 related announcements in the media.
| 2,021 |
Computation and Language
|
Robustness Tests of NLP Machine Learning Models: Search and Semantically
Replace
|
This paper proposes a strategy to assess the robustness of different machine
learning models that involve natural language processing (NLP). The overall
approach relies upon a Search and Semantically Replace strategy that consists
of two steps: (1) Search, which identifies important parts in the text; (2)
Semantically Replace, which finds replacements for the important parts, and
constrains the replaced tokens with semantically similar words. We introduce
different types of Search and Semantically Replace methods designed
specifically for particular types of machine learning models. We also
investigate the effectiveness of this strategy and provide a general framework
to assess a variety of machine learning models. Finally, an empirical
comparison is provided of robustness performance among three different model
types, each with a different text representation.
| 2,021 |
Computation and Language
|
Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing
|
The use of crowdworkers in NLP research is growing rapidly, in tandem with
the exponential increase in research production in machine learning and AI.
Ethical discussion regarding the use of crowdworkers within the NLP research
community is typically confined in scope to issues related to labor conditions
such as fair pay. We draw attention to the lack of ethical considerations
related to the various tasks performed by workers, including labeling,
evaluation, and production. We find that the Final Rule, the common ethical
framework used by researchers, did not anticipate the use of online
crowdsourcing platforms for data collection, resulting in gaps between the
spirit and practice of human-subjects ethics in NLP research. We enumerate
common scenarios where crowdworkers performing NLP tasks are at risk of harm.
We thus recommend that researchers evaluate these risks by considering the
three ethical principles set up by the Belmont Report. We also clarify some
common misconceptions regarding the Institutional Review Board (IRB)
application. We hope this paper will serve to reopen the discussion within our
community regarding the ethical use of crowdworkers.
| 2,021 |
Computation and Language
|
UIT-ISE-NLP at SemEval-2021 Task 5: Toxic Spans Detection with
BiLSTM-CRF and ToxicBERT Comment Classification
|
We present our work on SemEval-2021 Task 5: Toxic Spans Detection. This
task aims to build a model for identifying toxic words in whole posts. We use
a BiLSTM-CRF model combined with ToxicBERT classification to train the
detection model for identifying toxic words in posts. Our model achieves an
F1-score of 62.23% on the Toxic Spans Detection task.
| 2,021 |
Computation and Language
|
Enhancing Cognitive Models of Emotions with Representation Learning
|
We present a novel deep learning-based framework to generate embedding
representations of fine-grained emotions that can be used to computationally
describe psychological models of emotions. Our framework integrates a
contextualized embedding encoder with a multi-head probing model that enables
to interpret dynamically learned representations optimized for an emotion
classification task. Our model is evaluated on the Empathetic Dialogue dataset
and shows the state-of-the-art result for classifying 32 emotions. Our layer
analysis can derive an emotion graph to depict hierarchical relations among the
emotions. Our emotion representations can be used to generate an emotion wheel
directly comparable to the one from Plutchik's model, and also augment the
values of missing emotions in the PAD emotional state model.
| 2,021 |
Computation and Language
|
Efficient Retrieval Optimized Multi-task Learning
|
Recently, there have been significant advances in neural methods for tackling
knowledge-intensive tasks such as open domain question answering (QA). These
advances are fueled by combining large pre-trained language models with
learnable retrieval of documents. The majority of these models use separate
encoders for learning query representation, passage representation for the
retriever and an additional encoder for the downstream task. Using separate
encoders for each stage/task occupies a lot of memory and makes it difficult to
scale to a large number of tasks. In this paper, we propose a novel Retrieval
Optimized Multi-task (ROM) framework for jointly training self-supervised
tasks, knowledge retrieval, and extractive question answering. Our ROM approach
presents a unified and generalizable framework that enables scaling efficiently
to multiple tasks, varying levels of supervision, and optimization choices such
as different learning schedules without changing the model architecture. It
also provides the flexibility of changing the encoders without changing the
architecture of the system. Using our framework, we achieve comparable or
better performance than recent methods on QA, while drastically reducing the
number of parameters.
| 2,021 |
Computation and Language
|