Titles | Abstracts | Years | Categories
---|---|---|---|
Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks | Although coherence modeling has come a long way in developing novel models,
their evaluation on downstream applications for which they are purportedly
developed has largely been neglected. With the advancements made by neural
approaches in applications such as machine translation (MT), summarization and
dialog systems, the need for coherence evaluation of these tasks is now more
crucial than ever. However, coherence models are typically evaluated only on
synthetic tasks, which may not be representative of their performance in
downstream applications. To investigate how representative the synthetic tasks
are of downstream use cases, we conduct experiments on benchmarking well-known
traditional and neural coherence models on synthetic sentence ordering tasks,
and contrast this with their performance on three downstream applications:
coherence evaluation for MT and summarization, and next utterance prediction in
retrieval-based dialog. Our results demonstrate a weak correlation between the
model performances in the synthetic tasks and the downstream applications,
motivating alternate training and evaluation methods for coherence models.
| 2021 | Computation and Language |
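To make the reported analysis concrete, here is a minimal sketch of correlating model rankings on a synthetic task with a downstream score, assuming the SciPy library; the model names and all numbers are invented placeholders, not results from the paper.

```python
# Hypothetical illustration: correlate synthetic-task scores with downstream scores
# for a set of coherence models. All numbers are invented placeholders.
from scipy.stats import spearmanr

models = ["entity_grid", "unified_coherence", "lexical_graph", "transformer_lm"]
sentence_ordering_acc = [0.74, 0.88, 0.81, 0.93]   # synthetic sentence-ordering task
mt_coherence_corr     = [0.21, 0.19, 0.25, 0.23]   # downstream proxy score

rho, pval = spearmanr(sentence_ordering_acc, mt_coherence_corr)
print(f"Spearman rho between synthetic and downstream rankings: {rho:.2f} (p={pval:.2f})")
```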
Robust Question Answering Through Sub-part Alignment | Current textual question answering models achieve strong performance on
in-domain test sets, but often do so by fitting surface-level patterns in the
data, so they fail to generalize to out-of-distribution settings. To make a
more robust and understandable QA system, we model question answering as an
alignment problem. We decompose both the question and context into smaller
units based on off-the-shelf semantic representations (here, semantic roles),
and align the question to a subgraph of the context in order to find the
answer. We formulate our model as a structured SVM, with alignment scores
computed via BERT, and we can train end-to-end despite using beam search for
approximate inference. Our explicit use of alignments allows us to explore a
set of constraints with which we can prohibit certain types of bad model
behavior arising in cross-domain settings. Furthermore, by investigating
differences in scores across different potential answers, we can seek to
understand what particular aspects of the input lead the model to choose the
answer without relying on post-hoc explanation techniques. We train our model
on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.
The results show that our model is more robust cross-domain than the standard
BERT QA model, and constraints derived from alignment scores allow us to
effectively trade off coverage and accuracy.
| 2021 | Computation and Language |
Capsule-Transformer for Neural Machine Translation | Transformer hugely benefits from its key design of the multi-head
self-attention network (SAN), which extracts information from various
perspectives through transforming the given input into different subspaces.
However, its simple linear transformation aggregation strategy may still
potentially fail to fully capture deeper contextualized information. In this
paper, we thus propose the capsule-Transformer, which extends the linear
transformation into a more general capsule routing algorithm by taking SAN as a
special case of the capsule network, so that the resulting capsule-Transformer
can obtain a better attention distribution representation of the
input sequence via information aggregation among different heads and words.
Specifically, we see groups of attention weights in SAN as low layer capsules.
By applying the iterative capsule routing algorithm they can be further
aggregated into high layer capsules which contain deeper contextualized
information. Experimental results on the widely-used machine translation
datasets show that our proposed capsule-Transformer significantly outperforms a
strong Transformer baseline.
| 2020 | Computation and Language |
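For readers unfamiliar with capsule routing, the following is a generic dynamic-routing-by-agreement sketch in the style of Sabour et al. (2017), not the paper's exact SAN-specific variant; the capsule counts and dimensions are arbitrary assumptions.

```python
# Generic dynamic routing by agreement (Sabour et al. style); illustrative only,
# not the paper's exact capsule-Transformer routing over attention weights.
import numpy as np

def squash(s, eps=1e-9):
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: [n_low, n_high, dim] prediction vectors from low- to high-level capsules."""
    n_low, n_high, _ = u_hat.shape
    b = np.zeros((n_low, n_high))                              # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum -> [n_high, dim]
        v = squash(s)                                          # high-level capsule outputs
        b = b + np.einsum("lhd,hd->lh", u_hat, v)              # agreement update
    return v

low_caps = np.random.randn(8, 4, 16)        # e.g., 8 groups of attention-derived capsules
print(dynamic_routing(low_caps).shape)      # (4, 16)
```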
NUBIA: NeUral Based Interchangeability Assessor for Text Generation | We present NUBIA, a methodology to build automatic evaluation metrics for
text generation using only machine learning models as core components. A
typical NUBIA model is composed of three modules: a neural feature extractor,
an aggregator and a calibrator. We demonstrate an implementation of NUBIA which
outperforms metrics currently used to evaluate machine translation and summaries,
and slightly exceeds or matches state-of-the-art metrics on correlation with human
judgement on the WMT segment-level Direct Assessment task, sentence-level
ranking and image captioning evaluation. The model implemented is modular,
explainable and set to continuously improve over time.
| 2020 | Computation and Language |
End-to-End Neural Word Alignment Outperforms GIZA++ | Word alignment was once a core unsupervised learning task in natural language
processing because of its essential role in training statistical machine
translation (MT) models. Although unnecessary for training neural MT models,
word alignment still plays an important role in interactive applications of
neural machine translation, such as annotation transfer and lexicon injection.
While statistical MT methods have been replaced by neural approaches with
superior performance, the twenty-year-old GIZA++ toolkit remains a key
component of state-of-the-art word alignment systems. Prior work on neural word
alignment has only been able to outperform GIZA++ by using its output during
training. We present the first end-to-end neural word alignment method that
consistently outperforms GIZA++ on three data sets. Our approach repurposes a
Transformer model trained for supervised translation to also serve as an
unsupervised word alignment model in a manner that is tightly integrated and
does not affect translation quality.
| 2020 | Computation and Language |
AMPERSAND: Argument Mining for PERSuAsive oNline Discussions | Argumentation is a type of discourse where speakers try to persuade their
audience about the reasonableness of a claim by presenting supportive
arguments. Most work in argument mining has focused on modeling arguments in
monologues. We propose a computational model for argument mining in online
persuasive discussion forums that brings together the micro-level (argument as
product) and macro-level (argument as process) models of argumentation.
Fundamentally, this approach relies on identifying relations between components
of arguments in a discussion thread. Our approach for relation prediction uses
contextual information in terms of fine-tuning a pre-trained language model and
leveraging discourse relations based on Rhetorical Structure Theory. We
additionally propose a candidate selection method to automatically predict what
parts of one's argument will be targeted by other participants in the
discussion. Our models obtain significant improvements compared to recent
state-of-the-art approaches using pointer networks and a pre-trained language
model.
| 2020 | Computation and Language |
Semi-Supervised Text Simplification with Back-Translation and Asymmetric
Denoising Autoencoders | Text simplification (TS) rephrases long sentences into simplified variants
while preserving inherent semantics. Traditional sequence-to-sequence models
heavily rely on the quantity and quality of parallel sentences, which limits
their applicability in different languages and domains. This work investigates
how to leverage large amounts of unpaired corpora for the TS task. We adopt the
back-translation architecture used in unsupervised neural machine translation (NMT),
including denoising autoencoders for language modeling and automatic generation
of parallel data by iterative back-translation. However, it is non-trivial to
generate appropriate complex-simple pairs if we directly treat the set of simple
and complex corpora as two different languages, since the two types of
sentences are quite similar and it is hard for the model to capture the
characteristics in different types of sentences. To tackle this problem, we
propose asymmetric denoising methods for sentences of different complexity.
When modeling simple and complex sentences with autoencoders, we introduce
different types of noise into the training process. Such a method can
significantly improve the simplification performance. Our model can be trained
in both an unsupervised and a semi-supervised manner. Automatic and human
evaluations show that our unsupervised model outperforms the previous systems,
and with limited supervision, our model can perform competitively with multiple
state-of-the-art simplification systems.
| 2020 | Computation and Language |
A Span-based Linearization for Constituent Trees | We propose a novel linearization of a constituent tree, together with a new
locally normalized model. For each split point in a sentence, our model
computes the normalizer on all spans ending with that split point, and then
predicts a tree span from them. Compared with global models, our model is fast
and parallelizable. Different from previous local models, our linearization
method is tied to the spans directly and considers more local features when
performing span prediction, which is more interpretable and effective.
Experiments on PTB (95.8 F1) and CTB (92.4 F1) show that our model
significantly outperforms existing local models and efficiently achieves
competitive results with global models.
| 2020 | Computation and Language |
Towards Unsupervised Language Understanding and Generation by Joint Dual
Learning | In modular dialogue systems, natural language understanding (NLU) and natural
language generation (NLG) are two critical components, where NLU extracts the
semantics from the given texts and NLG constructs corresponding natural
language sentences based on the input semantic representations. However, the
dual property between understanding and generation has been rarely explored.
Prior work made a first attempt to utilize the duality between NLU and
NLG to improve performance via a dual supervised learning framework.
However, that work still learned both components in a supervised manner;
in contrast, this paper introduces a general learning framework that effectively
exploits such duality, providing the flexibility to incorporate both supervised
and unsupervised learning algorithms to train language understanding and
generation models in a joint fashion. The benchmark experiments demonstrate
that the proposed approach is capable of boosting the performance of both NLU
and NLG.
| 2020 | Computation and Language |
Named Entity Recognition without Labelled Data: A Weak Supervision
Approach | Named Entity Recognition (NER) performance often degrades rapidly when
applied to target domains that differ from the texts observed during training.
When in-domain labelled data is available, transfer learning techniques can be
used to adapt existing NER models to the target domain. But what should one do
when there is no hand-labelled data for the target domain? This paper presents
a simple but powerful approach to learn NER models in the absence of labelled
data through weak supervision. The approach relies on a broad spectrum of
labelling functions to automatically annotate texts from the target domain.
These annotations are then merged together using a hidden Markov model which
captures the varying accuracies and confusions of the labelling functions. A
sequence labelling model can finally be trained on the basis of this unified
annotation. We evaluate the approach on two English datasets (CoNLL 2003 and
news articles from Reuters and Bloomberg) and demonstrate an improvement of
about 7 percentage points in entity-level $F_1$ scores compared to an
out-of-domain neural NER model.
| 2020 | Computation and Language |
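As a rough illustration of the labelling-function idea described above, the sketch below lets a few heuristic functions vote on each token; a simple majority vote stands in for the paper's hidden Markov model aggregation, and the functions and gazetteer are hypothetical toys.

```python
# Illustrative weak-supervision setup: several labelling functions vote on a token;
# a simple majority vote stands in for the paper's HMM-based aggregation.
from collections import Counter

GAZETTEER = {"reuters": "ORG", "bloomberg": "ORG", "london": "LOC"}  # toy resource

def lf_gazetteer(token):
    return GAZETTEER.get(token.lower(), "O")

def lf_capitalised(token):
    return "ENT" if token[:1].isupper() else "O"      # coarse "some entity" signal

def lf_all_caps(token):
    return "ORG" if token.isupper() and len(token) > 1 else "O"

LABELLING_FUNCTIONS = [lf_gazetteer, lf_capitalised, lf_all_caps]

def aggregate(token):
    votes = [lf(token) for lf in LABELLING_FUNCTIONS if lf(token) != "O"]
    return Counter(votes).most_common(1)[0][0] if votes else "O"

print([(t, aggregate(t)) for t in "Reuters reported from London today".split()])
```

In the paper the merged annotation would then supervise a sequence labelling model; the vote here only illustrates the aggregation step.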
Self-Supervised and Controlled Multi-Document Opinion Summarization | We address the problem of unsupervised abstractive summarization of
collections of user-generated reviews with self-supervision and control. We
propose a self-supervised setup that considers an individual document as a
target summary for a set of similar documents. This setting makes training
simpler than previous approaches by relying only on standard log-likelihood
loss. We address the problem of hallucinations through the use of control
codes, to steer the generation towards more coherent and relevant
summaries. Finally, we extend the Transformer architecture to allow for multiple
reviews as input. Our benchmarks on two datasets against graph-based and recent
neural abstractive unsupervised models show that our proposed method generates
summaries of superior quality and relevance. This is confirmed in our human
evaluation, which focuses explicitly on the faithfulness of generated summaries.
We also provide an ablation study, which shows the importance of the control
setup in controlling hallucinations and achieving high sentiment and topic
alignment of the summaries with the input reviews.
| 2020 | Computation and Language |
Conditional Augmentation for Aspect Term Extraction via Masked
Sequence-to-Sequence Generation | Aspect term extraction aims to extract aspect terms from review texts as
opinion targets for sentiment analysis. One of the big challenges with this
task is the lack of sufficient annotated data. While data augmentation is
potentially an effective technique to address the above issue, it is
uncontrollable as it may change aspect words and aspect labels unexpectedly. In
this paper, we formulate the data augmentation as a conditional generation
task: generating a new sentence while preserving the original opinion targets
and labels. We propose a masked sequence-to-sequence method for conditional
augmentation of aspect term extraction. Unlike existing augmentation
approaches, ours is controllable and allows us to generate more diversified
sentences. Experimental results confirm that our method alleviates the data
scarcity problem significantly. It also effectively boosts the performances of
several current models for aspect term extraction.
| 2020 | Computation and Language |
Structure-Augmented Text Representation Learning for Efficient Knowledge
Graph Completion | Human-curated knowledge graphs provide critical supportive information to
various natural language processing tasks, but these graphs are usually
incomplete, calling for their automatic completion. Prevalent graph embedding
approaches, e.g., TransE, learn structured knowledge via representing graph
elements into dense embeddings and capturing their triple-level relationship
with spatial distance. However, they are hardly generalizable to the elements
never visited in training and are intrinsically vulnerable to graph
incompleteness. In contrast, textual encoding approaches, e.g., KG-BERT, resort
to the text of graph triples and triple-level contextualized representations. They
are generalizable enough and robust to the incompleteness, especially when
coupled with pre-trained encoders. But two major drawbacks limit the
performance: (1) high overheads due to the costly scoring of all possible
triples in inference, and (2) a lack of structured knowledge in the textual
encoder. In this paper, we follow the textual encoding paradigm and aim to
alleviate its drawbacks by augmenting it with graph embedding techniques -- a
complementary hybrid of both paradigms. Specifically, we partition each triple
into two asymmetric parts, as in translation-based graph embedding approaches, and
encode both parts into contextualized representations by a Siamese-style
textual encoder. Built upon the representations, our model employs both
a deterministic classifier and a spatial measurement for representation and
structure learning, respectively. Moreover, we develop a self-adaptive ensemble
scheme to further improve the performance by incorporating triple scores from
an existing graph embedding model. In experiments, we achieve state-of-the-art
performance on three benchmarks and a zero-shot dataset for link prediction,
with highlights of inference costs reduced by 1-2 orders of magnitude compared
to a textual encoding method.
| 2021 | Computation and Language |
Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting
BERT | By introducing a small set of additional parameters, a probe learns to solve
specific linguistic tasks (e.g., dependency parsing) in a supervised manner
using feature representations (e.g., contextualized embeddings). The
effectiveness of such probing tasks is taken as evidence that the pre-trained
model encodes linguistic knowledge. However, this approach of evaluating a
language model is undermined by the uncertainty of the amount of knowledge that
is learned by the probe itself. Complementary to those works, we propose a
parameter-free probing technique for analyzing pre-trained language models
(e.g., BERT). Our method does not require direct supervision from the probing
tasks, nor do we introduce additional parameters to the probing process. Our
experiments on BERT show that syntactic trees recovered from BERT using our
method are significantly better than linguistically-uninformed baselines. We
further feed the empirically induced dependency structures into a downstream
sentiment classification task and find its improvement comparable to or even
superior to a human-designed dependency schema.
| 2021 | Computation and Language |
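A simplified sketch of the perturbation-based principle described above, assuming PyTorch and the Hugging Face Transformers library: mask token i and measure how much every other token's representation changes. The paper uses a two-stage masking scheme to build its impact matrix; this is only a stripped-down, parameter-free illustration.

```python
# Simplified variant of the perturbed-masking idea: mask token i and measure how much
# each other token's BERT representation changes. Not the paper's exact two-stage scheme.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

sentence = "The keys to the cabinet are on the table"
enc = tok(sentence, return_tensors="pt")
with torch.no_grad():
    base = model(**enc).last_hidden_state[0]          # [seq_len, hidden]

n = enc["input_ids"].shape[1]
impact = torch.zeros(n, n)
for i in range(1, n - 1):                             # skip [CLS] and [SEP]
    perturbed = {k: v.clone() for k, v in enc.items()}
    perturbed["input_ids"][0, i] = tok.mask_token_id
    with torch.no_grad():
        hid = model(**perturbed).last_hidden_state[0]
    impact[i] = (hid - base).norm(dim=-1)             # effect of masking i on every token j

print(impact.shape)                                   # a token-token impact matrix
```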
Character-Level Translation with Self-attention | We explore the suitability of self-attention models for character-level
neural machine translation. We test the standard transformer model, as well as
a novel variant in which the encoder block combines information from nearby
characters using convolutions. We perform extensive experiments on WMT and UN
datasets, testing both bilingual and multilingual translation to English using
up to three input languages (French, Spanish, and Chinese). Our transformer
variant consistently outperforms the standard transformer at the
character-level and converges faster while learning more robust character-level
alignments.
| 2020 | Computation and Language |
STARC: Structured Annotations for Reading Comprehension | We present STARC (Structured Annotations for Reading Comprehension), a new
annotation framework for assessing reading comprehension with multiple choice
questions. Our framework introduces a principled structure for the answer
choices and ties them to textual span annotations. The framework is implemented
in OneStopQA, a new high-quality dataset for evaluation and analysis of reading
comprehension in English. We use this dataset to demonstrate that STARC can be
leveraged for a key new application for the development of SAT-like reading
comprehension materials: automatic annotation quality probing via span ablation
experiments. We further show that it enables in-depth analyses and comparisons
between machine and human reading comprehension behavior, including error
distributions and guessing ability. Our experiments also reveal that the
standard multiple choice dataset in NLP, RACE, is limited in its ability to
measure reading comprehension. 47% of its questions can be guessed by machines
without accessing the passage, and 18% are unanimously judged by humans as not
having a unique correct answer. OneStopQA provides an alternative test set for
reading comprehension which alleviates these shortcomings and has a
substantially higher human ceiling performance.
| 2020 | Computation and Language |
ENT-DESC: Entity Description Generation by Exploring Knowledge Graph | Previous works on knowledge-to-text generation take as input a few RDF
triples or key-value pairs conveying the knowledge of some entities to generate
a natural language description. Existing datasets, such as WIKIBIO, WebNLG, and
E2E, basically have a good alignment between an input triple/pair set and its
output text. However, in practice, the input knowledge could be more than
enough, since the output description may only cover the most significant
knowledge. In this paper, we introduce a large-scale and challenging dataset to
facilitate the study of such a practical scenario in KG-to-text. Our dataset
involves retrieving abundant knowledge of various types of main entities from a
large knowledge graph (KG), which makes the current graph-to-sequence models
severely suffer from the problems of information loss and parameter explosion
while generating the descriptions. We address these challenges by proposing a
multi-graph structure that is able to represent the original graph information
more comprehensively. Furthermore, we also incorporate aggregation methods that
learn to extract the rich graph information. Extensive experiments demonstrate
the effectiveness of our model architecture.
| 2020 | Computation and Language |
Vocabulary Adaptation for Distant Domain Adaptation in Neural Machine
Translation | Neural network methods exhibit strong performance only in a few resource-rich
domains. Practitioners, therefore, employ domain adaptation from resource-rich
domains that are, in most cases, distant from the target domain. Domain
adaptation between distant domains (e.g., movie subtitles and research papers),
however, cannot be performed effectively due to mismatches in vocabulary; it
will encounter many domain-specific words (e.g., "angstrom") and words whose
meanings shift across domains(e.g., "conductor"). In this study, aiming to
solve these vocabulary mismatches in domain adaptation for neural machine
translation (NMT), we propose vocabulary adaptation, a simple method for
effective fine-tuning that adapts embedding layers in a given pre-trained NMT
model to the target domain. Prior to fine-tuning, our method replaces the
embedding layers of the NMT model by projecting general word embeddings induced
from monolingual data in a target domain onto a source-domain embedding space.
Experimental results indicate that our method improves the performance of
conventional fine-tuning by 3.86 and 3.28 BLEU points in En-Ja and De-En
translation, respectively.
| 2020 | Computation and Language |
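One way to realise the embedding-projection step described above is sketched below: fit a linear map from target-domain embeddings to the source-domain embedding space on the shared vocabulary, then project domain-specific words with the same map. The toy vocabulary, dimensions, and use of plain least squares are assumptions for illustration, not necessarily the paper's exact procedure.

```python
# Illustrative embedding projection for vocabulary adaptation: learn a linear map from
# target-domain word embeddings to the NMT model's embedding space on the shared
# vocabulary, then project unseen domain-specific words with the same map.
import numpy as np

rng = np.random.default_rng(0)
dim = 32
shared_vocab = ["the", "of", "and", "measure", "result"]          # words in both domains
target_only  = ["angstrom", "spectroscopy"]                       # domain-specific words

src_emb = {w: rng.normal(size=dim) for w in shared_vocab}                  # NMT model embeddings
tgt_emb = {w: rng.normal(size=dim) for w in shared_vocab + target_only}    # in-domain embeddings

X = np.stack([tgt_emb[w] for w in shared_vocab])                  # [n_shared, dim]
Y = np.stack([src_emb[w] for w in shared_vocab])                  # [n_shared, dim]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)                         # linear map: X @ W ~= Y

projected = {w: tgt_emb[w] @ W for w in target_only}              # new rows for the embedding layer
print(projected["angstrom"].shape)                                # (32,)
```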
Accurate Word Alignment Induction from Neural Machine Translation | Despite its original goal to jointly learn to align and translate, prior
research suggests that the Transformer captures poor word alignments through its
attention mechanism. In this paper, we show that attention weights DO capture
accurate word alignments and propose two novel word alignment induction methods,
Shift-Att and Shift-AET. The main idea is to induce alignments at the step when
the to-be-aligned target token is the decoder input rather than the decoder
output as in previous work. Shift-Att is an interpretation method that induces
alignments from the attention weights of Transformer and does not require
parameter update or architecture change. Shift-AET extracts alignments from an
additional alignment module which is tightly integrated into Transformer and
trained in isolation with supervision from symmetrized Shift-Att alignments.
Experiments on three publicly available datasets demonstrate that both methods
perform better than their corresponding neural baselines and Shift-AET
significantly outperforms GIZA++ by 1.4-4.8 AER points.
| 2020 | Computation and Language |
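The shift idea can be made concrete with a toy example: align target token y_t using the decoder step at which y_t is the input (step t+1), rather than the step at which it is predicted. The attention matrix below is invented; the real method reads a specific layer's cross-attention from a trained Transformer.

```python
# Toy illustration of the shift idea behind Shift-Att; the attention matrix is invented.
import numpy as np

def shift_align(attn):
    """attn: [tgt_steps, src_len]; row t = cross-attention while predicting y_t."""
    alignments = {}
    for t in range(attn.shape[0] - 1):
        alignments[t] = int(np.argmax(attn[t + 1]))   # step t+1 has y_t as decoder input
    return alignments

toy_attn = np.array([[0.7, 0.2, 0.1],     # predicting y_0 (decoder input: BOS)
                     [0.1, 0.8, 0.1],     # predicting y_1 (decoder input: y_0)
                     [0.1, 0.2, 0.7]])    # predicting y_2 (decoder input: y_1)
print(shift_align(toy_attn))              # {0: 1, 1: 2}
```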
Do Neural Models Learn Systematicity of Monotonicity Inference in
Natural Language? | Despite the success of language models using neural networks, it remains
unclear to what extent neural models have the generalization ability to perform
inferences. In this paper, we introduce a method for evaluating whether neural
models can learn systematicity of monotonicity inference in natural language,
namely, the regularity for performing arbitrary inferences with generalization
on composition. We consider four aspects of monotonicity inferences and test
whether the models can systematically interpret lexical and logical phenomena
on different training/test splits. A series of experiments show that three
neural models systematically draw inferences on unseen combinations of lexical
and logical phenomena when the syntactic structures of the sentences are
similar between the training and test sets. However, the performance of the
models significantly decreases when the structures are slightly changed in the
test set while retaining all vocabularies and constituents already appearing in
the training set. This indicates that the generalization ability of neural
models is limited to cases where the syntactic structures are nearly the same
as those in the training set.
| 2020 | Computation and Language |
The role of context in neural pitch accent detection in English | Prosody is a rich information source in natural language, serving as a marker
for phenomena such as contrast. In order to make this information available to
downstream tasks, we need a way to detect prosodic events in speech. We propose
a new model for pitch accent detection, inspired by the work of Stehwien et al.
(2018), who presented a CNN-based model for this task. Our model makes greater
use of context by using full utterances as input and adding an LSTM layer. We
find that these innovations lead to an improvement from 87.5% to 88.7% accuracy
on pitch accent detection on American English speech in the Boston University
Radio News Corpus, a state-of-the-art result. We also find that a simple
baseline that just predicts a pitch accent on every content word yields 82.2%
accuracy, and we suggest that this is the appropriate baseline for this task.
Finally, we conduct ablation tests that show pitch is the most important
acoustic feature for this task and this corpus.
| 2020 | Computation and Language |
Enriched Pre-trained Transformers for Joint Slot Filling and Intent
Detection | Detecting the user's intent and finding the corresponding slots among the
utterance's words are important tasks in natural language understanding. Their
interconnected nature makes their joint modeling a standard part of training
such models. Moreover, data scarceness and specialized vocabularies pose
additional challenges. Recently, the advances in pre-trained language models,
namely contextualized models such as ELMo and BERT have revolutionized the
field by tapping the potential of training very large models with just a few
steps of fine-tuning on a task-specific dataset. Here, we leverage such models,
namely BERT and RoBERTa, and we design a novel architecture on top of them.
Moreover, we propose an intent pooling attention mechanism, and we reinforce
the slot filling task by fusing intent distributions, word features, and token
representations. The experimental results on standard datasets show that our
model outperforms both the current non-BERT state of the art as well as some
stronger BERT-based baselines.
| 2021 | Computation and Language |
TACRED Revisited: A Thorough Evaluation of the TACRED Relation
Extraction Task | TACRED (Zhang et al., 2017) is one of the largest, most widely used
crowdsourced datasets in Relation Extraction (RE). But, even with recent
advances in unsupervised pre-training and knowledge enhanced neural RE, models
still show a high error rate. In this paper, we investigate the questions: Have
we reached a performance ceiling or is there still room for improvement? And
how do crowd annotations, the dataset, and models contribute to this error rate? To
answer these questions, we first validate the most challenging 5K examples in
the development and test sets using trained annotators. We find that label
errors account for 8% absolute F1 test error, and that more than 50% of the
examples need to be relabeled. On the relabeled test set the average F1 score
of a large baseline model set improves from 62.1 to 70.1. After validation, we
analyze misclassifications on the challenging instances, categorize them into
linguistically motivated error groups, and verify the resulting error
hypotheses on three state-of-the-art RE models. We show that two groups of
ambiguous relations are responsible for most of the remaining errors and that
models may adopt shallow heuristics on the dataset when entities are not
masked.
| 2020 | Computation and Language |
Mind Your Inflections! Improving NLP for Non-Standard Englishes with
Base-Inflection Encoding | Inflectional variation is a common feature of World Englishes such as
Colloquial Singapore English and African American Vernacular English. Although
comprehension by human readers is usually unimpaired by non-standard
inflections, current NLP systems are not yet robust. We propose Base-Inflection
Encoding (BITE), a method to tokenize English text by reducing inflected words
to their base forms before reinjecting the grammatical information as special
symbols. Fine-tuning pretrained NLP models for downstream tasks using our
encoding defends against inflectional adversaries while maintaining performance
on clean data. Models using BITE generalize better to dialects with
non-standard inflections without explicit training and translation models
converge faster when trained with BITE. Finally, we show that our encoding
improves the vocabulary efficiency of popular data-driven subword tokenizers.
Since there has been no prior work on quantitatively evaluating vocabulary
efficiency, we propose metrics to do so.
| 2020 | Computation and Language |
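A toy sketch of base-inflection encoding: split an inflected word into a base form plus a grammatical symbol. The tiny suffix rules below are hypothetical stand-ins; BITE itself relies on proper morphological analysis rather than ad-hoc rules like these.

```python
# Toy base-inflection encoding: split an inflected word into (base form, inflection symbol).
# The suffix rules are ad-hoc stand-ins for a real morphological lemmatiser.
def bite_encode(token):
    rules = [("ies", "y", "<PL>"), ("s", "", "<PL>"), ("ed", "", "<PAST>"), ("ing", "", "<PROG>")]
    for suffix, repl, symbol in rules:
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return [token[: -len(suffix)] + repl, symbol]
    return [token]

sentence = "she walked two miles yesterday"
encoded = [piece for tok in sentence.split() for piece in bite_encode(tok)]
print(encoded)   # ['she', 'walk', '<PAST>', 'two', 'mile', '<PL>', 'yesterday']
```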
Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware
Parameterization | Spoken language understanding has been addressed as a supervised learning
problem, where a set of training data is available for each domain. However,
annotating data for each domain is both financially costly and non-scalable, so
we should fully utilize information across all domains. One existing approach
solves the problem by conducting multi-domain learning, using shared parameters
for joint training across domains. We propose to improve the parameterization
of this method by using domain-specific and task-specific model parameters to
improve knowledge learning and transfer. Experiments on 5 domains show that our
model is more effective for multi-domain SLU and obtains the best results. In
addition, we show its transferability by outperforming the prior best model by
12.4% when adapting to a new domain with little data.
| 2021 | Computation and Language |
Analyzing the Surprising Variability in Word Embedding Stability Across
Languages | Word embeddings are powerful representations that form the foundation of many
natural language processing architectures, both in English and in other
languages. To gain further insight into word embeddings, we explore their
stability (e.g., overlap between the nearest neighbors of a word in different
embedding spaces) in diverse languages. We discuss linguistic properties that
are related to stability, drawing out insights about correlations with
affixing, language gender systems, and other features. This has implications
for embedding use, particularly in research that uses them to study language
trends.
| 2021 | Computation and Language |
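The stability notion mentioned above (overlap between nearest neighbors across embedding spaces) can be sketched as below; the vectors are random placeholders standing in for embeddings trained with different seeds or corpora.

```python
# Stability as nearest-neighbour overlap between two embedding spaces.
import numpy as np

rng = np.random.default_rng(0)
vocab = [f"word{i}" for i in range(1000)]
space_a = {w: rng.normal(size=50) for w in vocab}   # placeholder embedding space A
space_b = {w: rng.normal(size=50) for w in vocab}   # placeholder embedding space B

def neighbours(space, query, k=10):
    q = space[query]
    sims = {w: np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
            for w, v in space.items() if w != query}
    return set(sorted(sims, key=sims.get, reverse=True)[:k])

def stability(query, k=10):
    return len(neighbours(space_a, query, k) & neighbours(space_b, query, k)) / k

print(stability("word0"))   # ~0.0 for random vectors; higher for stable embeddings
```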
MLSUM: The Multilingual Summarization Corpus | We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in
five different languages -- namely, French, German, Spanish, Russian, and Turkish.
Together with English newspapers from the popular CNN/Daily Mail dataset, the
collected data form a large-scale multilingual dataset which can enable new
research directions for the text summarization community. We report
cross-lingual comparative analyses based on state-of-the-art systems. These
highlight existing biases which motivate the use of a multi-lingual dataset.
| 2020 | Computation and Language |
Modelling Suspense in Short Stories as Uncertainty Reduction over Neural
Representation | Suspense is a crucial ingredient of narrative fiction, engaging readers and
making stories compelling. While there is a vast theoretical literature on
suspense, it is computationally not well understood. We compare two ways for
modelling suspense: surprise, a backward-looking measure of how unexpected the
current state is given the story so far; and uncertainty reduction, a
forward-looking measure of how unexpected the continuation of the story is.
Both can be computed either directly over story representations or over their
probability distributions. We propose a hierarchical language model that
encodes stories and computes surprise and uncertainty reduction. Evaluating
against short stories annotated with human suspense judgements, we find that
uncertainty reduction over representations is the best predictor, resulting in
near-human accuracy. We also show that uncertainty reduction can be used to
predict suspenseful events in movie synopses.
| 2020 | Computation and Language |
You are right. I am ALARMED -- But by Climate Change Counter Movement | The world is facing the challenge of the climate crisis. Despite the consensus in
the scientific community about anthropogenic global warming, the web is flooded
with articles spreading climate misinformation. These articles are carefully
constructed by climate change counter movement (cccm) organizations to
influence the narrative around climate change. We revisit the literature on
climate misinformation in the social sciences and repackage it for the NLP
community. Despite considerable work on fake news detection, there is no
misinformation dataset available that is specific to the domain of climate
change. We try to bridge this gap by scraping and releasing articles with known
climate change misinformation.
| 2020 | Computation and Language |
Recipes for Adapting Pre-trained Monolingual and Multilingual Models to
Machine Translation | There has been recent success in pre-training on monolingual data and
fine-tuning on Machine Translation (MT), but it remains unclear how to best
leverage a pre-trained model for a given MT task. This paper investigates the
benefits and drawbacks of freezing parameters, and adding new ones, when
fine-tuning a pre-trained model on MT. We focus on 1) Fine-tuning a model
trained only on English monolingual data, BART. 2) Fine-tuning a model trained
on monolingual data from 25 languages, mBART. For BART we get the best
performance by freezing most of the model parameters, and adding extra
positional embeddings. For mBART we match or outperform the performance of
naive fine-tuning for most language pairs with the encoder, and most of the
decoder, frozen. The encoder-decoder attention parameters are most important to
fine-tune. When constraining ourselves to an out-of-domain training set for
Vietnamese to English, we see the largest improvements over the fine-tuning
baseline.
| 2022 | Computation and Language |
Tired of Topic Models? Clusters of Pretrained Word Embeddings Make for
Fast and Good Topics too! | Topic models are a useful analysis tool to uncover the underlying themes
within document collections. The dominant approach is to use probabilistic
topic models that posit a generative story, but in this paper we propose an
alternative way to obtain topics: clustering pre-trained word embeddings while
incorporating document information for weighted clustering and reranking top
words. We provide benchmarks for the combination of different word embeddings
and clustering algorithms, and analyse their performance under dimensionality
reduction with PCA. The best performing combination for our approach performs
as well as classical topic models, but with lower runtime and computational
complexity.
| 2020 | Computation and Language |
Bridging Linguistic Typology and Multilingual Machine Translation with
Multi-View Language Representations | Sparse language vectors from linguistic typology databases and learned
embeddings from tasks like multilingual machine translation have been
investigated in isolation, without analysing how they could benefit from each
other's language characterisation. We propose to fuse both views using singular
vector canonical correlation analysis and study what kind of information is
induced from each source. By inferring typological features and language
phylogenies, we observe that our representations embed typology and strengthen
correlations with language relationships. We then take advantage of our
multi-view language vector space for multilingual machine translation, where we
achieve competitive overall translation accuracy in tasks that require
information about language similarities, such as language clustering and
ranking candidates for multilingual transfer. With our method, which is also
released as a tool, we can easily project and assess new languages without
expensive retraining of massive multilingual or ranking models, which are major
disadvantages of related approaches.
| 2020 | Computation and Language |
Addressing Zero-Resource Domains Using Document-Level Context in Neural
Machine Translation | Achieving satisfying performance in machine translation on domains for which
there is no training data is challenging. Traditional supervised domain
adaptation is not suitable for addressing such zero-resource domains because it
relies on in-domain parallel data. We show that when in-domain parallel data is
not available, access to document-level context enables better capturing of
domain generalities compared to only having access to a single sentence. Having
access to more information provides a more reliable domain estimation. We
present two document-level Transformer models which are capable of using large
context sizes and we compare these models against strong Transformer baselines.
We obtain improvements for the two zero-resource domains we study. We
additionally provide an analysis where we vary the amount of context and look
at the case where in-domain data is available.
| 2021 | Computation and Language |
Language Model Prior for Low-Resource Neural Machine Translation | The scarcity of large parallel corpora is an important obstacle for neural
machine translation. A common solution is to exploit the knowledge of language
models (LM) trained on abundant monolingual data. In this work, we propose a
novel approach to incorporate an LM as a prior in a neural translation model (TM).
Specifically, we add a regularization term, which pushes the output
distributions of the TM to be probable under the LM prior, while avoiding wrong
predictions when the TM "disagrees" with the LM. This objective relates to
knowledge distillation, where the LM can be viewed as teaching the TM about the
target language. The proposed approach does not compromise decoding speed,
because the LM is used only at training time, unlike previous work that
requires it during inference. We present an analysis of the effects that
different methods have on the distributions of the TM. Results on two
low-resource machine translation datasets show clear improvements even with
limited monolingual data.
| 2020 | Computation and Language |
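A generic rendering of the kind of objective described above: cross-entropy on the reference token plus a term pulling the translation model's output distribution toward the language model's distribution. The KL form, the weight λ, and the toy distributions are assumptions for illustration; the paper's exact regularizer may differ in form.

```python
# Generic "LM as prior" objective over a toy 4-word vocabulary; illustrative only.
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p_tm = np.array([0.60, 0.25, 0.10, 0.05])   # translation model output distribution
p_lm = np.array([0.40, 0.40, 0.15, 0.05])   # language model prior over the same vocab
gold = 0                                    # index of the reference token
lam = 0.5                                   # regularization strength (assumed value)

loss = -np.log(p_tm[gold]) + lam * kl(p_tm, p_lm)   # cross-entropy + prior term
print(round(loss, 3))
```

Because the LM enters only this training loss, decoding is unaffected, which matches the abstract's point that the prior does not compromise decoding speed.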
A Call for More Rigor in Unsupervised Cross-lingual Learning | We review motivations, definition, approaches, and methodology for
unsupervised cross-lingual learning and call for a more rigorous position in
each of them. An existing rationale for such research is based on the lack of
parallel data for many of the world's languages. However, we argue that a
scenario without any parallel data and abundant monolingual data is unrealistic
in practice. We also discuss different training signals that have been used in
previous work, which depart from the pure unsupervised setting. We then
describe common methodological issues in tuning and evaluation of unsupervised
cross-lingual models and present best practices. Finally, we provide a unified
outlook for different types of research in this area (i.e., cross-lingual word
embeddings, deep multilingual pretraining, and unsupervised machine
translation) and argue for comparable evaluation of these models.
| 2021 | Computation and Language |
Natural Language Premise Selection: Finding Supporting Statements for
Mathematical Text | Mathematical text is written using a combination of words and mathematical
expressions. This combination, along with a specific way of structuring
sentences, makes it challenging for state-of-the-art NLP tools to understand and
reason on top of mathematical discourse. In this work, we propose a new NLP
task, the natural premise selection, which is used to retrieve supporting
definitions and supporting propositions that are useful for generating an
informal mathematical proof for a particular statement. We also make available
a dataset, NL-PS, which can be used to evaluate different approaches for the
natural premise selection task. Using different baselines, we demonstrate the
underlying interpretation challenges associated with the task.
| 2020 | Computation and Language |
Multitask Learning for Cross-Lingual Transfer of Semantic Dependencies | We describe a method for developing broad-coverage semantic dependency
parsers for languages for which no semantically annotated resource is
available. We leverage a multitask learning framework coupled with an
annotation projection method. We transfer supervised semantic dependency parse
annotations from a rich-resource language to a low-resource language through
parallel data, and train a semantic parser on projected data. We make use of
supervised syntactic parsing as an auxiliary task in a multitask learning
framework, and show that with different multitask learning settings, we
consistently improve over the single-task baseline. In the setting in which
English is the source, and Czech is the target language, our best multitask
model improves the labeled F1 score over the single-task baseline by 1.8 in the
in-domain SemEval data (Oepen et al., 2015), as well as 2.5 in the
out-of-domain test set. Moreover, we observe that syntactic and semantic
dependency direction match is an important factor in improving the results.
| 2020 | Computation and Language |
Data and Representation for Turkish Natural Language Inference | Large annotated datasets in NLP are overwhelmingly in English. This is an
obstacle to progress in other languages. Unfortunately, obtaining new annotated
resources for each task in each language would be prohibitively expensive. At
the same time, commercial machine translation systems are now robust. Can we
leverage these systems to translate English-language datasets automatically? In
this paper, we offer a positive response for natural language inference (NLI)
in Turkish. We translated two large English NLI datasets into Turkish and had a
team of experts validate their translation quality and fidelity to the original
labels. Using these datasets, we address core issues of representation for
Turkish NLI. We find that in-language embeddings are essential and that
morphological parsing can be avoided where the training set is large. Finally,
we show that models trained on our machine-translated datasets are successful
on human-translated evaluation sets. We share all code, models, and data
publicly.
| 2020 | Computation and Language |
PlotMachines: Outline-Conditioned Generation with Dynamic Plot State
Tracking | We propose the task of outline-conditioned story generation: given an outline
as a set of phrases that describe key characters and events to appear in a
story, the task is to generate a coherent narrative that is consistent with the
provided outline. This task is challenging as the input only provides a rough
sketch of the plot, and thus, models need to generate a story by interweaving
the key points provided in the outline. This requires the model to keep track
of the dynamic states of the latent plot, conditioning on the input outline
while generating the full story. We present PlotMachines, a neural narrative
model that learns to transform an outline into a coherent story by tracking the
dynamic plot states. In addition, we enrich PlotMachines with high-level
discourse structure so that the model can learn different writing styles
corresponding to different parts of the narrative. Comprehensive experiments
over three fiction and non-fiction datasets demonstrate that large-scale
language models, such as GPT-2 and Grover, despite their impressive generation
performance, are not sufficient in generating coherent narratives for the given
outline, and dynamic plot state tracking is important for composing narratives
with tighter, more consistent plots.
| 2020 | Computation and Language |
Fact or Fiction: Verifying Scientific Claims | We introduce scientific claim verification, a new task to select abstracts
from the research literature containing evidence that SUPPORTS or REFUTES a
given scientific claim, and to identify rationales justifying each decision. To
study this task, we construct SciFact, a dataset of 1.4K expert-written
scientific claims paired with evidence-containing abstracts annotated with
labels and rationales. We develop baseline models for SciFact, and demonstrate
that simple domain adaptation techniques substantially improve performance
compared to models trained on Wikipedia or political news. We show that our
system is able to verify claims related to COVID-19 by identifying evidence
from the CORD-19 corpus. Our experiments indicate that SciFact will provide a
challenging testbed for the development of new systems designed to retrieve and
reason over corpora containing specialized domain knowledge. Data and code for
this new task are publicly available at https://github.com/allenai/scifact. A
leaderboard and COVID-19 fact-checking demo are available at
https://scifact.apps.allenai.org.
| 2020 | Computation and Language |
Investigating Transferability in Pretrained Language Models | How does language model pretraining help transfer learning? We consider a
simple ablation technique for determining the impact of each pretrained layer
on transfer task performance. This method, partial reinitialization, involves
replacing different layers of a pretrained model with random weights, then
finetuning the entire model on the transfer task and observing the change in
performance. This technique reveals that in BERT, layers with high probing
performance on downstream GLUE tasks are neither necessary nor sufficient for
high accuracy on those tasks. Furthermore, the benefit of using pretrained
parameters for a layer varies dramatically with finetuning dataset size:
parameters that provide tremendous performance improvement when data is
plentiful may provide negligible benefits in data-scarce settings. These
results reveal the complexity of the transfer learning process, highlighting
the limitations of methods that operate on frozen models or single data
samples.
| 2020 | Computation and Language |
Paraphrasing vs Coreferring: Two Sides of the Same Coin | We study the potential synergy between two different NLP tasks, both
confronting predicate lexical variability: identifying predicate paraphrases,
and event coreference resolution. First, we used annotations from an event
coreference dataset as distant supervision to re-score heuristically-extracted
predicate paraphrases. The new scoring gained more than 18 points in average
precision compared to the ranking produced by the original scoring method. Then, we used the
same re-ranking features as additional inputs to a state-of-the-art event
coreference resolution model, which yielded modest but consistent improvements
to the model's performance. The results suggest a promising direction to
leverage data and models for each of the tasks to the benefit of the other.
| 2020 | Computation and Language |
Control, Generate, Augment: A Scalable Framework for Multi-Attribute
Text Generation | We introduce CGA, a conditional VAE architecture, to control, generate, and
augment text. CGA is able to generate natural English sentences controlling
multiple semantic and syntactic attributes by combining adversarial learning
with a context-aware loss and a cyclical word dropout routine. We demonstrate
the value of the individual model components in an ablation study. The
scalability of our approach is ensured through a single discriminator,
independently of the number of attributes. We show high quality, diversity and
attribute control in the generated sentences through a series of automatic and
human assessments. As the main application of our work, we test the potential
of this new NLG model in a data augmentation scenario. In a downstream NLP
task, the sentences generated by our CGA model show significant improvements
over a strong baseline, and a classification performance often comparable to
adding the same amount of additional real data.
| 2020 | Computation and Language |
A Study in Improving BLEU Reference Coverage with Diverse Automatic
Paraphrasing | We investigate a long-perceived shortcoming in the typical use of BLEU: its
reliance on a single reference. Using modern neural paraphrasing techniques, we
study whether automatically generating additional diverse references can
provide better coverage of the space of valid translations and thereby improve
its correlation with human judgments. Our experiments on the into-English
language directions of the WMT19 metrics task (at both the system and sentence
level) show that using paraphrased references does generally improve BLEU, and
when it does, the more diverse the better. However, we also show that better
results could be achieved if those paraphrases were to specifically target the
parts of the space most relevant to the MT outputs being evaluated. Moreover,
the gains remain slight even when human paraphrases are used, suggesting
inherent limitations to BLEU's capacity to correctly exploit multiple
references. Surprisingly, we also find that adequacy appears to be less
important, as shown by the high results of a strong sampling approach, which
even beats human paraphrases when used with sentence-level BLEU.
| 2020 | Computation and Language |
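The multi-reference setup under study can be illustrated with sentence-level BLEU from NLTK, which accepts any number of references; the hypothesis and paraphrase below are invented examples, not WMT data.

```python
# Sentence-level BLEU with one vs. several (paraphrased) references, using NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hyp = "the cat sat on the mat".split()
ref = "a cat was sitting on the mat".split()
paraphrase = "the cat sat upon the mat".split()

smooth = SmoothingFunction().method1
print(sentence_bleu([ref], hyp, smoothing_function=smooth))              # single reference
print(sentence_bleu([ref, paraphrase], hyp, smoothing_function=smooth))  # added reference can raise the score
```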
How do Decisions Emerge across Layers in Neural Models? Interpretation
with Differentiable Masking | Attribution methods assess the contribution of inputs to the model
prediction. One way to do so is erasure: a subset of inputs is considered
irrelevant if it can be removed without affecting the prediction. Though
conceptually simple, erasure's objective is intractable and approximate search
remains expensive with modern deep NLP models. Erasure is also susceptible to
the hindsight bias: the fact that an input can be dropped does not mean that
the model `knows' it can be dropped. The resulting pruning is over-aggressive
and does not reflect how the model arrives at the prediction. To deal with
these challenges, we introduce Differentiable Masking (DiffMask), which learns to
mask out subsets of the input while maintaining differentiability. The decision
to include or disregard an input token is made with a simple model based on
intermediate hidden layers of the analyzed model. First, this makes the
approach efficient because we predict rather than search. Second, as with
probing classifiers, this reveals what the network `knows' at the corresponding
layers. This lets us not only plot attribution heatmaps but also analyze how
decisions are formed across network layers. We use DiffMask to study BERT
models on sentiment classification and question answering.
| 2021 | Computation and Language |
Segatron: Segment-Aware Transformer for Language Modeling and
Understanding | Transformers are powerful for sequence modeling. Nearly all state-of-the-art
language models and pre-trained language models are based on the Transformer
architecture. However, it distinguishes sequential tokens only with the token
position index. We hypothesize that better contextual representations can be
generated from the Transformer with richer positional information. To verify
this, we propose a segment-aware Transformer (Segatron), by replacing the
original token position encoding with a combined position encoding of
paragraph, sentence, and token. We first introduce the segment-aware mechanism
to Transformer-XL, which is a popular Transformer-based language model with
memory extension and relative position encoding. We find that our method can
further improve the Transformer-XL base model and large model, achieving 17.1
perplexity on the WikiText-103 dataset. We further investigate the pre-training
masked language modeling task with Segatron. Experimental results show that
BERT pre-trained with Segatron (SegaBERT) can outperform BERT with vanilla
Transformer on various NLP tasks, and outperforms RoBERTa on zero-shot sentence
representation learning.
| 2020 | Computation and Language |
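A minimal sketch of the combined position encoding described above: sum embeddings for token position, sentence index, and paragraph index, assuming PyTorch. The sizes and the simple additive combination are illustrative assumptions rather than the published configuration.

```python
# Minimal segment-aware position encoding: token, sentence, and paragraph embeddings summed.
import torch
import torch.nn as nn

class SegmentAwarePositionEncoding(nn.Module):
    def __init__(self, d_model=64, max_tok=512, max_sent=32, max_para=8):
        super().__init__()
        self.tok = nn.Embedding(max_tok, d_model)
        self.sent = nn.Embedding(max_sent, d_model)
        self.para = nn.Embedding(max_para, d_model)

    def forward(self, tok_pos, sent_idx, para_idx):
        return self.tok(tok_pos) + self.sent(sent_idx) + self.para(para_idx)

enc = SegmentAwarePositionEncoding()
tok_pos = torch.arange(6)                        # token positions within the input
sent_idx = torch.tensor([0, 0, 0, 1, 1, 1])      # which sentence each token is in
para_idx = torch.tensor([0, 0, 0, 0, 0, 0])      # which paragraph each token is in
print(enc(tok_pos, sent_idx, para_idx).shape)    # torch.Size([6, 64])
```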
A Matter of Framing: The Impact of Linguistic Formalism on Probing
Results | Deep pre-trained contextualized encoders like BERT (Devlin et al., 2019)
demonstrate remarkable performance on a range of downstream tasks. A recent
line of research in probing investigates the linguistic knowledge implicitly
learned by these models during pre-training. While most work in probing
operates on the task level, linguistic tasks are rarely uniform and can be
represented in a variety of formalisms. Any linguistics-based probing study
thereby inevitably commits to the formalism used to annotate the underlying
data. Can the choice of formalism affect probing results? To investigate, we
conduct an in-depth cross-formalism layer probing study in role semantics. We
find linguistically meaningful differences in the encoding of semantic role-
and proto-role information by BERT depending on the formalism and demonstrate
that layer probing can detect subtle differences between the implementations of
the same linguistic formalism. Our results suggest that linguistic formalism is
an important dimension in probing studies, along with the commonly used
cross-task and cross-lingual experimental settings.
| 2020 | Computation and Language |
Don't Use English Dev: On the Zero-Shot Cross-Lingual Evaluation of
Contextual Embeddings | Multilingual contextual embeddings have demonstrated state-of-the-art
performance in zero-shot cross-lingual transfer learning, where multilingual
BERT is fine-tuned on one source language and evaluated on a different target
language. However, published results for mBERT zero-shot accuracy vary as much
as 17 points on the MLDoc classification task across four papers. We show that
the standard practice of using English dev accuracy for model selection in the
zero-shot setting makes it difficult to obtain reproducible results on the
MLDoc and XNLI tasks. English dev accuracy is often uncorrelated (or even
anti-correlated) with target language accuracy, and zero-shot performance
varies greatly at different points in the same fine-tuning run and between
different fine-tuning runs. These reproducibility issues are also present for
other tasks with different pre-trained embeddings (e.g., MLQA with XLM-R). We
recommend providing oracle scores alongside zero-shot results: still fine-tune
using English data, but choose a checkpoint with the target dev set. Reporting
this upper bound makes results more consistent by avoiding arbitrarily bad
checkpoints.
| 2020 | Computation and Language |
Word Rotator's Distance | A key principle in assessing textual similarity is measuring the degree of
semantic overlap between two texts by considering the word alignment. Such
alignment-based approaches are intuitive and interpretable; however, they are
empirically inferior to the simple cosine similarity between general-purpose
sentence vectors. To address this issue, we focus on and demonstrate the fact
that the norm of word vectors is a good proxy for word importance, and their
angle is a good proxy for word similarity. Alignment-based approaches do not
distinguish them, whereas sentence-vector approaches automatically use the norm
as the word importance. Accordingly, we propose a method that first decouples
word vectors into their norm and direction, and then computes alignment-based
similarity using earth mover's distance (i.e., optimal transport cost), which
we refer to as word rotator's distance. In addition, we find how to grow the norm
and direction of word vectors (a vector converter), which is a new systematic
approach derived from sentence-vector estimation methods. On several textual
similarity datasets, the combination of these simple proposed methods
outperformed not only alignment-based approaches but also strong baselines. The
source code is available at https://github.com/eumesy/wrd
| 2020 | Computation and Language |
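A compact numpy sketch of the idea: word-vector norms supply the probability mass, cosine distance between directions supplies the transport cost, and a few Sinkhorn iterations approximate the earth mover's distance (a stand-in for the exact solver). The word vectors are random placeholders.

```python
# Word Rotator's Distance, sketched: norms become mass, cosine distance becomes cost,
# and the transport cost is approximated with entropy-regularized Sinkhorn iterations.
import numpy as np

def sinkhorn_cost(a, b, cost, reg=0.1, n_iters=200):
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = np.outer(u, v) * K
    return float(np.sum(plan * cost))

def word_rotators_distance(X, Y):
    a = np.linalg.norm(X, axis=1); a = a / a.sum()      # mass from norms
    b = np.linalg.norm(Y, axis=1); b = b / b.sum()
    Xd = X / np.linalg.norm(X, axis=1, keepdims=True)   # directions
    Yd = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    cost = 1.0 - Xd @ Yd.T                              # cosine distance between directions
    return sinkhorn_cost(a, b, cost)

rng = np.random.default_rng(0)
sent1, sent2 = rng.normal(size=(5, 100)), rng.normal(size=(7, 100))
print(round(word_rotators_distance(sent1, sent2), 3))
```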
Template Guided Text Generation for Task-Oriented Dialogue | Virtual assistants such as Google Assistant, Amazon Alexa, and Apple Siri
enable users to interact with a large number of services and APIs on the web
using natural language. In this work, we investigate two methods for Natural
Language Generation (NLG) using a single domain-independent model across a
large number of APIs. First, we propose a schema-guided approach which
conditions the generation on a schema describing the API in natural language.
Our second method investigates the use of a small number of templates, growing
linearly in number of slots, to convey the semantics of the API. To generate
utterances for an arbitrary slot combination, a few simple templates are first
concatenated to give a semantically correct, but possibly incoherent and
ungrammatical utterance. A pre-trained language model is subsequently employed
to rewrite it into coherent, natural sounding text. Through automatic metrics
and human evaluation, we show that our method improves over strong baselines,
is robust to out-of-domain inputs and shows improved sample efficiency.
| 2,020 | Computation and Language |
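The template-concatenation step described above can be pictured with a minimal sketch: one short template per slot is filled and joined into a semantically correct but stilted utterance, which a pre-trained language model would then rewrite into natural text. The template strings and the `rewrite_with_lm` hook below are illustrative placeholders, not the paper's actual resources.

```python
TEMPLATES = {
    "restaurant_name": "The restaurant is {value}.",
    "date": "The reservation is for {value}.",
    "party_size": "It is for {value} people.",
}


def naive_utterance(slots: dict) -> str:
    # Concatenate one filled template per slot, in a fixed order.
    parts = [TEMPLATES[name].format(value=value)
             for name, value in slots.items() if name in TEMPLATES]
    return " ".join(parts)


def rewrite_with_lm(text: str) -> str:
    # Placeholder for the LM rewriting step (e.g., a seq2seq model fine-tuned
    # to turn stilted template output into coherent, natural-sounding text).
    return text


slots = {"restaurant_name": "Opa!", "date": "March 3rd", "party_size": "4"}
print(rewrite_with_lm(naive_utterance(slots)))
```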
Lexical Semantic Recognition | In lexical semantics, full-sentence segmentation and segment labeling of
various phenomena are generally treated separately, despite their
interdependence. We hypothesize that a unified lexical semantic recognition
task is an effective way to encapsulate previously disparate styles of
annotation, including multiword expression identification / classification and
supersense tagging. Using the STREUSLE corpus, we train a neural CRF sequence
tagger and evaluate its performance along various axes of annotation. As the
label set generalizes that of previous tasks (PARSEME, DiMSUM), we additionally
evaluate how well the model generalizes to those test sets, finding that it
approaches or surpasses existing models despite training only on STREUSLE. Our
work also establishes baseline models and evaluation metrics for integrated and
accurate modeling of lexical semantics, facilitating future work in this area.
| 2,021 | Computation and Language |
TLDR: Extreme Summarization of Scientific Documents | We introduce TLDR generation, a new form of extreme summarization, for
scientific papers. TLDR generation involves high source compression and
requires expert background knowledge and understanding of complex
domain-specific language. To facilitate study on this task, we introduce
SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR
contains both author-written and expert-derived TLDRs, where the latter are
collected using a novel annotation protocol that produces high-quality
summaries while minimizing annotation burden. We propose CATTS, a simple yet
effective learning strategy for generating TLDRs that exploits titles as an
auxiliary training signal. CATTS improves upon strong baselines under both
automated metrics and human evaluations. Data and code are publicly available
at https://github.com/allenai/scitldr.
| 2,020 | Computation and Language |
Does Data Augmentation Improve Generalization in NLP? | Neural models often exploit superficial features to achieve good performance,
rather than deriving more general features. Overcoming this tendency is a
central challenge in areas such as representation learning and ML fairness.
Recent work has proposed using data augmentation, i.e., generating training
examples where the superficial features fail, as a means of encouraging models
to prefer the stronger features. We design a series of toy learning problems to
test the hypothesis that data augmentation leads models to unlearn weaker
heuristics, but not to learn stronger features in their place. We find partial
support for this hypothesis: Data augmentation often hurts before it helps, and
it is less effective when the preferred strong feature is much more difficult
to extract than the competing weak feature.
| 2,020 | Computation and Language |
Imitation Attacks and Defenses for Black-box Machine Translation Systems | Adversaries may look to steal or attack black-box NLP systems, either for
financial gain or to exploit model errors. One setting of particular interest
is machine translation (MT), where models have high commercial value and errors
can be costly. We investigate possible exploits of black-box MT systems and
explore a preliminary defense against such threats. We first show that MT
systems can be stolen by querying them with monolingual sentences and training
models to imitate their outputs. Using simulated experiments, we demonstrate
that MT model stealing is possible even when imitation models have different
input data or architectures than their target models. Applying these ideas, we
train imitation models that reach within 0.6 BLEU of three production MT
systems on both high-resource and low-resource language pairs. We then leverage
the similarity of our imitation models to transfer adversarial examples to the
production systems. We use gradient-based attacks that expose inputs which lead
to semantically-incorrect translations, dropped content, and vulgar model
outputs. To mitigate these vulnerabilities, we propose a defense that modifies
translation outputs in order to misdirect the optimization of imitation models.
This defense degrades the adversary's BLEU score and attack success rate at
some cost in the defender's BLEU and inference speed.
| 2,021 | Computation and Language |
WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words
in Context | We present WiC-TSV, a new multi-domain evaluation benchmark for Word Sense
Disambiguation. More specifically, we introduce a framework for Target Sense
Verification of Words in Context, whose uniqueness lies in its formulation as
a binary classification task, making it independent of external sense
inventories, and in its coverage of various domains. This makes the dataset
highly flexible for the evaluation of a diverse set of models and systems in
and across domains. WiC-TSV provides three different evaluation settings,
depending on the input signals provided to the model. We set baseline
performance on the dataset using state-of-the-art language models. Experimental
results show that even though these models can perform decently on the task,
there remains a gap between machine and human performance, especially in
out-of-domain settings. WiC-TSV data is available at
https://competitions.codalab.org/competitions/23683
| 2,021 | Computation and Language |
Crisscrossed Captions: Extended Intramodal and Intermodal Semantic
Similarity Judgments for MS-COCO | By supporting multi-modal retrieval training and evaluation, image captioning
datasets have spurred remarkable progress on representation learning.
Unfortunately, datasets have limited cross-modal associations: images are not
paired with other images, captions are only paired with other captions of the
same image, there are no negative associations and there are missing positive
cross-modal associations. This undermines research into how inter-modality
learning impacts intra-modality tasks. We address this gap with Crisscrossed
Captions (CxC), an extension of the MS-COCO dataset with human semantic
similarity judgments for 267,095 intra- and inter-modality pairs. We report
baseline results on CxC for strong existing unimodal and multimodal models. We
also evaluate a multitask dual encoder trained on both image-caption and
caption-caption pairs that crucially demonstrates CxC's value for measuring the
influence of intra- and inter-modality learning.
| 2,021 | Computation and Language |
Representations of Syntax [MASK] Useful: Effects of Constituency and
Dependency Structure in Recursive LSTMs | Sequence-based neural networks show significant sensitivity to syntactic
structure, but they still perform less well on syntactic tasks than tree-based
networks. Such tree-based networks can be provided with a constituency parse, a
dependency parse, or both. We evaluate which of these two representational
schemes more effectively introduces biases for syntactic structure that
increase performance on the subject-verb agreement prediction task. We find
that a constituency-based network generalizes more robustly than a
dependency-based one, and that combining the two types of structure does not
yield further improvement. Finally, we show that the syntactic robustness of
sequential models can be substantially improved by fine-tuning on a small
amount of constructed data, suggesting that data augmentation is a viable
alternative to explicit constituency structure for imparting the syntactic
biases that sequential models are lacking.
| 2,020 | Computation and Language |
Fighting the COVID-19 Infodemic: Modeling the Perspective of
Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the
Society | With the emergence of the COVID-19 pandemic, the political and the medical
aspects of disinformation merged as the problem got elevated to a whole new
level to become the first global infodemic. Fighting this infodemic has been
declared one of the most important focus areas of the World Health
Organization, with dangers ranging from promoting fake cures, rumors, and
conspiracy theories to spreading xenophobia and panic. Addressing the issue
requires solving a number of challenging problems such as identifying messages
containing claims, determining their check-worthiness and factuality, and their
potential to do harm as well as the nature of that harm, to mention just a few.
To address this gap, we release a large dataset of 16K manually annotated
tweets for fine-grained disinformation analysis that (i) focuses on COVID-19,
(ii) combines the perspectives and the interests of journalists, fact-checkers,
social media platforms, policy makers, and society, and (iii) covers Arabic,
Bulgarian, Dutch, and English. Finally, we show strong evaluation results using
pretrained Transformers, thus confirming the practical utility of the dataset
in monolingual vs. multilingual, and single task vs. multitask settings.
| 2,021 | Computation and Language |
Improving Factual Consistency Between a Response and Persona Facts | Neural models for response generation produce responses that are semantically
plausible but not necessarily factually consistent with facts describing the
speaker's persona. These models are trained with fully supervised learning
where the objective function barely captures factual consistency. We propose to
fine-tune these models by reinforcement learning and an efficient reward
function that explicitly captures the consistency between a response and
persona facts as well as semantic plausibility. Our automatic and human
evaluations on the PersonaChat corpus confirm that our approach increases the
rate of responses that are factually consistent with persona facts over its
supervised counterpart while retaining the language quality of responses.
| 2,021 | Computation and Language |
Progressively Pretrained Dense Corpus Index for Open-Domain Question
Answering | To extract answers from a large corpus, open-domain question answering (QA)
systems usually rely on information retrieval (IR) techniques to narrow the
search space. Standard inverted index methods such as TF-IDF are commonly used
thanks to their efficiency. However, their retrieval performance is limited
as they simply use shallow and sparse lexical features. To break the IR
bottleneck, recent studies show that stronger retrieval performance can be
achieved by pretraining an effective paragraph encoder that indexes paragraphs
into dense vectors. Once trained, the corpus can be pre-encoded into
low-dimensional vectors and stored within an index structure where the
retrieval can be efficiently implemented as maximum inner product search.
Despite the promising results, pretraining such a dense index is expensive
and often requires a very large batch size. In this work, we propose a simple
and resource-efficient method to pretrain the paragraph encoder. First, instead
of using heuristically created pseudo question-paragraph pairs for pretraining,
we utilize an existing pretrained sequence-to-sequence model to build a strong
question generator that creates high-quality pretraining data. Second, we
propose a progressive pretraining algorithm to ensure the existence of
effective negative samples in each batch. Across three datasets, our method
outperforms an existing dense retrieval method that uses 7 times more
computational resources for pretraining.
| 2,021 | Computation and Language |
Context based Text-generation using LSTM networks | Long short-term memory (LSTM) units in sequence-based models are used in
translation, question-answering systems, and classification tasks due to their
capability of learning long-term dependencies. In natural language generation,
LSTM networks provide impressive results for text generation by learning
language models with grammatically stable syntax. However, the downside
is that the network does not learn about the context. The network only learns
the input-output function and generates text given a set of input words
irrespective of pragmatics. As the model is trained without any such context,
there is no semantic consistency among the generated sentences. The proposed
model is trained to generate text for a given set of input words along with a
context vector. A context vector is similar to a paragraph vector that grasps
the semantic meaning(context) of the sentence. Several methods of extracting
the context vectors are proposed in this work. While training a language model,
in addition to the input-output sequences, context vectors are also trained
along with the inputs. Due to this structure, the model learns the relation
among the input words, context vector and the target word. Given a set of
context terms, a well-trained model will generate text around the provided
context. Based on the nature of computing context vectors, the model has been
tried out with two variations (word importance and word clustering). In the
word clustering method, the suitable embeddings among various domains are also
explored. The results are evaluated based on the semantic closeness of the
generated text to the given context.
| 2,020 | Computation and Language |
UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical
Semantic Change Detection | We apply contextualised word embeddings to lexical semantic change detection
in the SemEval-2020 Shared Task 1. This paper focuses on Subtask 2, ranking
words by the degree of their semantic drift over time. We analyse the
performance of two contextualising architectures (BERT and ELMo) and three
change detection algorithms. We find that the most effective algorithms rely on
the cosine similarity between averaged token embeddings and the pairwise
distances between token embeddings. They outperform strong baselines by a large
margin (in the post-evaluation phase, we have the best Subtask 2 submission for
SemEval-2020 Task 1), but interestingly, the choice of a particular algorithm
depends on the distribution of gold scores in the test set.
| 2,020 | Computation and Language |
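The most effective algorithms mentioned above include scoring change as the cosine distance between averaged token embeddings from two time periods; a minimal sketch follows, with random arrays standing in for real BERT or ELMo token embeddings of a target word.

```python
import numpy as np


def change_score(embs_t1: np.ndarray, embs_t2: np.ndarray) -> float:
    """embs_t1/embs_t2: (num_occurrences, dim) token embeddings per time period."""
    mu1, mu2 = embs_t1.mean(axis=0), embs_t2.mean(axis=0)
    cos = mu1 @ mu2 / (np.linalg.norm(mu1) * np.linalg.norm(mu2))
    return 1.0 - float(cos)  # higher = more semantic drift between periods


rng = np.random.default_rng(1)
print(change_score(rng.normal(size=(30, 768)), rng.normal(size=(40, 768))))
```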
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer | The main goal behind state-of-the-art pre-trained multilingual models such as
multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in
low-resource languages through zero-shot or few-shot cross-lingual transfer.
However, due to limited model capacity, their transfer performance is the
weakest exactly on such low-resource languages and languages unseen during
pre-training. We propose MAD-X, an adapter-based framework that enables high
portability and parameter-efficient transfer to arbitrary tasks and languages
by learning modular language and task representations. In addition, we
introduce a novel invertible adapter architecture and a strong baseline method
for adapting a pre-trained multilingual model to a new language. MAD-X
outperforms the state of the art in cross-lingual transfer across a
representative set of typologically diverse languages on named entity
recognition and causal commonsense reasoning, and achieves competitive results
on question answering. Our code and adapters are available at AdapterHub.ml
| 2,020 | Computation and Language |
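For readers unfamiliar with adapters, the sketch below shows a generic bottleneck adapter module of the kind MAD-X inserts into a frozen transformer layer: a down-projection, a non-linearity, an up-projection, and a residual connection. This is a simplified illustration; the paper's invertible adapters and exact placement are not reproduced here.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Only the small adapter is trained; the backbone stays frozen.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


x = torch.randn(2, 10, 768)      # (batch, seq_len, hidden)
print(Adapter(768)(x).shape)     # torch.Size([2, 10, 768])
```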
Attribution Analysis of Grammatical Dependencies in LSTMs | LSTM language models have been shown to capture syntax-sensitive grammatical
dependencies such as subject-verb agreement with a high degree of accuracy
(Linzen et al., 2016, inter alia). However, questions remain regarding whether
they do so using spurious correlations, or whether they are truly able to match
verbs with their subjects. This paper argues for the latter hypothesis. Using
layer-wise relevance propagation (Bach et al., 2015), a technique that
quantifies the contributions of input features to model behavior, we show that
LSTM performance on number agreement is directly correlated with the model's
ability to distinguish subjects from other nouns. Our results suggest that LSTM
language models are able to infer robust representations of syntactic
dependencies.
| 2,020 | Computation and Language |
Aspect-Controlled Neural Argument Generation | We rely on arguments in our daily lives to deliver our opinions and base them
on evidence, making them more convincing in turn. However, finding and
formulating arguments can be challenging. In this work, we train a language
model for argument generation that can be controlled on a fine-grained level to
generate sentence-level arguments for a given topic, stance, and aspect. We
define argument aspect detection as a necessary method to allow this
fine-granular control and crowdsource a dataset with 5,032 arguments annotated
with aspects. Our evaluation shows that our generation model is able to
generate high-quality, aspect-specific arguments. Moreover, these arguments can
be used to improve the performance of stance detection models via data
augmentation and to generate counter-arguments. We publish all datasets and
code to fine-tune the language model.
| 2,020 | Computation and Language |
AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for
Indic Languages | We present the IndicNLP corpus, a large-scale, general-domain corpus
containing 2.7 billion words for 10 Indian languages from two language
families. We share pre-trained word embeddings trained on these corpora. We
create news article category classification datasets for 9 languages to
evaluate the embeddings. We show that the IndicNLP embeddings significantly
outperform publicly available pre-trained embeddings on multiple evaluation
tasks. We hope that the availability of the corpus will accelerate Indic NLP
research. The resources are available at
https://github.com/ai4bharat-indicnlp/indicnlp_corpus.
| 2,020 | Computation and Language |
Revisiting Unsupervised Relation Extraction | Unsupervised relation extraction (URE) extracts relations between named
entities from raw text without manually-labelled data and existing knowledge
bases (KBs). URE methods can be categorised into generative and discriminative
approaches, which rely either on hand-crafted features or surface form.
However, we demonstrate that by using only named entities to induce relation
types, we can outperform existing methods on two popular datasets. We conduct a
comparison and evaluation of our findings with other URE techniques, to
ascertain the important features in URE. We conclude that entity types provide
a strong inductive bias for URE.
| 2,020 | Computation and Language |
Linguistic Typology Features from Text: Inferring the Sparse Features of
World Atlas of Language Structures | The use of linguistic typological resources in natural language processing
has been steadily gaining more popularity. It has been observed that the use of
typological information, often combined with distributed language
representations, leads to significantly more powerful models. While linguistic
typology representations from various resources have mostly been used for
conditioning the models, there has been relatively little attention on
predicting features from these resources from the input data. In this paper we
investigate whether the various linguistic features from World Atlas of
Language Structures (WALS) can be reliably inferred from multi-lingual text.
Such a predictor can be used to infer structural features for a language never
observed in training data. We frame this task as a multi-label classification
involving predicting the set of non-mutually exclusive and extremely sparse
multi-valued labels (WALS features). We construct a recurrent neural network
predictor based on byte embeddings and convolutional layers and test its
performance on 556 languages, providing analysis for various linguistic types,
macro-areas, language families and individual features. We show that some
features from various linguistic types can be predicted reliably.
| 2,020 | Computation and Language |
On the Spontaneous Emergence of Discrete and Compositional Signals | We propose a general framework to study language emergence through signaling
games with neural agents. Using a continuous latent space, we are able to (i)
train using backpropagation, and (ii) show that discrete messages nonetheless
naturally emerge. We explore whether categorical perception effects follow and
show that the messages are not compositional.
| 2,020 | Computation and Language |
Learning to Faithfully Rationalize by Construction | In many settings it is important for one to be able to understand why a model
made a particular prediction. In NLP this often entails extracting snippets of
an input text `responsible for' corresponding model output; when such a snippet
comprises tokens that indeed informed the model's prediction, it is a faithful
explanation. In some settings, faithfulness may be critical to ensure
transparency. Lei et al. (2016) proposed a model to produce faithful rationales
for neural text classification by defining independent snippet extraction and
prediction modules. However, the discrete selection over input tokens performed
by this method complicates training, leading to high variance and requiring
careful hyperparameter tuning. We propose a simpler variant of this approach
that provides faithful explanations by construction. In our scheme, named
FRESH, arbitrary feature importance scores (e.g., gradients from a trained
model) are used to induce binary labels over token inputs, which an extractor
can be trained to predict. An independent classifier module is then trained
exclusively on snippets provided by the extractor; these snippets thus
constitute faithful explanations, even if the classifier is arbitrarily
complex. In both automatic and manual evaluations we find that variants of this
simple framework yield predictive performance superior to `end-to-end'
approaches, while being more general and easier to train. Code is available at
https://github.com/successar/FRESH
| 2,020 | Computation and Language |
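The label-induction step of FRESH can be sketched very simply: per-token importance scores (from any source) are binarised by keeping the top fraction of tokens, and the resulting snippet is all a downstream classifier ever sees. The tokens and scores below are synthetic placeholders.

```python
import numpy as np


def induce_rationale(tokens, scores, keep_fraction=0.2):
    scores = np.asarray(scores)
    k = max(1, int(round(keep_fraction * len(tokens))))
    keep = np.zeros(len(tokens), dtype=bool)
    keep[np.argsort(-scores)[:k]] = True               # top-k tokens by importance
    snippet = [t for t, m in zip(tokens, keep) if m]   # faithful-by-construction input
    return keep, snippet


tokens = "the movie was painfully slow but beautifully shot".split()
scores = [0.1, 0.2, 0.1, 0.9, 0.7, 0.3, 0.8, 0.4]
labels, snippet = induce_rationale(tokens, scores)
print(labels, snippet)
```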
Incremental Neural Coreference Resolution in Constant Memory | We investigate modeling coreference resolution under a fixed memory
constraint by extending an incremental clustering algorithm to utilize
contextualized encoders and neural components. Given a new sentence, our
end-to-end algorithm proposes and scores each mention span against explicit
entity representations created from the earlier document context (if any).
These spans are then used to update the entity's representations before being
forgotten; we only retain a fixed set of salient entities throughout the
document. In this work, we successfully convert a high-performing model (Joshi
et al., 2020), asymptotically reducing its memory usage to constant space with
only a 0.3% relative loss in F1 on OntoNotes 5.0.
| 2,020 | Computation and Language |
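A minimal sketch of the constant-memory bookkeeping described above: each incoming mention is scored against a bounded set of entity representations, merged into the best match or opened as a new entity, and the least salient entity is evicted when the budget is exceeded. The dot-product scorer and running-average update are simple stand-ins for the learned neural components.

```python
import numpy as np


class EntityStore:
    def __init__(self, max_entities: int = 20, threshold: float = 0.5):
        self.max_entities, self.threshold = max_entities, threshold
        self.reps, self.salience = [], []

    def observe(self, mention_vec: np.ndarray) -> int:
        if self.reps:
            scores = [r @ mention_vec for r in self.reps]
            best = int(np.argmax(scores))
            if scores[best] > self.threshold:
                # Merge: a running average keeps per-entity memory constant.
                self.reps[best] = 0.5 * (self.reps[best] + mention_vec)
                self.salience[best] += 1
                return best
        # Open a new entity, evicting the least salient one if at capacity.
        if len(self.reps) >= self.max_entities:
            drop = int(np.argmin(self.salience))
            del self.reps[drop], self.salience[drop]
        self.reps.append(mention_vec)
        self.salience.append(1)
        return len(self.reps) - 1


store = EntityStore(max_entities=3)
rng = np.random.default_rng(2)
for _ in range(10):
    store.observe(rng.normal(size=8))
print(len(store.reps))  # never exceeds the fixed budget
```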
Structure-Tags Improve Text Classification for Scholarly Document
Quality Prediction | Training recurrent neural networks on long texts, in particular scholarly
documents, causes problems for learning. While hierarchical attention networks
(HANs) are effective in solving these problems, they still lose important
information about the structure of the text. To tackle these problems, we
propose the use of HANs combined with structure-tags which mark the role of
sentences in the document. Adding tags to sentences, marking them as
corresponding to title, abstract or main body text, yields improvements over
the state-of-the-art for scholarly document quality prediction. The proposed
system is applied to the task of accept/reject prediction on the PeerRead
dataset and compared against a recent BiLSTM-based model and joint
textual+visual model as well as against plain HANs. Compared to plain HANs,
accuracy increases on all three domains. On the computation and language domain
our new model works best overall, increasing accuracy by 4.7% over the best
literature result. We also obtain improvements when introducing the tags for
prediction of the number of citations for 88k scientific publications that we
compiled from the Allen AI S2ORC dataset. For our HAN-system with
structure-tags we reach 28.5% explained variance, an improvement of 1.8% over
our reimplementation of the BiLSTM-based model, as well as a 1.0% improvement over
plain HANs.
| 2,020 | Computation and Language |
Contextual Text Style Transfer | We introduce a new task, Contextual Text Style Transfer - translating a
sentence into a desired style with its surrounding context taken into account.
This brings two key challenges to existing style transfer approaches: ($i$) how
to preserve the semantic meaning of target sentence and its consistency with
surrounding context during transfer; ($ii$) how to train a robust model with
limited labeled data accompanied with context. To realize high-quality style
transfer with natural context preservation, we propose a Context-Aware Style
Transfer (CAST) model, which uses two separate encoders for each input sentence
and its surrounding context. A classifier is further trained to ensure
contextual consistency of the generated sentence. To compensate for the lack of
parallel data, additional self-reconstruction and back-translation losses are
introduced to leverage non-parallel data in a semi-supervised fashion. Two new
benchmarks, Enron-Context and Reddit-Context, are introduced for formality and
offensiveness style transfer. Experimental results on these datasets
demonstrate the effectiveness of the proposed CAST model over state-of-the-art
methods across style accuracy, content preservation and contextual consistency
metrics.
| 2,020 | Computation and Language |
Interpretable Entity Representations through Large-Scale Typing | In standard methodology for natural language processing, entities in text are
typically embedded in dense vector spaces with pre-trained models. The
embeddings produced this way are effective when fed into downstream models, but
they require end-task fine-tuning and are fundamentally difficult to interpret.
In this paper, we present an approach to creating entity representations that
are human readable and achieve high performance on entity-related tasks out of
the box. Our representations are vectors whose values correspond to posterior
probabilities over fine-grained entity types, indicating the confidence of a
typing model's decision that the entity belongs to the corresponding type. We
obtain these representations using a fine-grained entity typing model, trained
either on supervised ultra-fine entity typing data (Choi et al. 2018) or
distantly-supervised examples from Wikipedia. On entity probing tasks involving
recognizing entity identity, our embeddings used in parameter-free downstream
models achieve competitive performance with ELMo- and BERT-based embeddings in
trained models. We also show that it is possible to reduce the size of our type
set in a learning-based way for particular domains. Finally, we show that these
embeddings can be post-hoc modified through a small number of rules to
incorporate domain knowledge and improve performance.
| 2,020 | Computation and Language |
Neural Entity Summarization with Joint Encoding and Weak Supervision | In a large-scale knowledge graph (KG), an entity is often described by a
large number of triple-structured facts. Many applications require abridged
versions of entity descriptions, called entity summaries. Existing solutions to
entity summarization are mainly unsupervised. In this paper, we present a
supervised approach NEST that is based on our novel neural model to jointly
encode graph structure and text in KGs and generate high-quality diversified
summaries. Since it is costly to obtain manually labeled summaries for
training, our supervision is weak as we train with programmatically labeled
data which may contain noise but is free of manual work. Evaluation results
show that our approach significantly outperforms the state of the art on two
public benchmarks.
| 2,020 | Computation and Language |
Why and when should you pool? Analyzing Pooling in Recurrent
Architectures | Pooling-based recurrent neural architectures consistently outperform their
counterparts without pooling. However, the reasons for their enhanced
performance are largely unexamined. In this work, we examine three commonly
used pooling techniques (mean-pooling, max-pooling, and attention), and propose
max-attention, a novel variant that effectively captures interactions among
predictive tokens in a sentence. We find that pooling-based architectures
substantially differ from their non-pooling equivalents in their learning
ability and positional biases--which elucidate their performance benefits. By
analyzing the gradient propagation, we discover that pooling facilitates better
gradient flow compared to BiLSTMs. Further, we expose how BiLSTMs are
positionally biased towards tokens in the beginning and the end of a sequence.
Pooling alleviates such biases. Consequently, we identify settings where
pooling offers large benefits: (i) in low resource scenarios, and (ii) when
important words lie towards the middle of the sentence. Among the pooling
techniques studied, max-attention is the most effective, resulting in
significant performance gains on several text classification tasks.
| 2,020 | Computation and Language |
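For concreteness, the sketch below implements the three standard pooling operations compared in the paper (mean-, max-, and attention-pooling) over a batch of hidden states; the proposed max-attention variant is not reproduced here.

```python
import torch
import torch.nn as nn


def mean_pool(h):                    # average over time steps
    return h.mean(dim=1)


def max_pool(h):                     # element-wise max over time steps
    return h.max(dim=1).values


class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, h):
        weights = torch.softmax(self.scorer(h), dim=1)   # (batch, seq, 1)
        return (weights * h).sum(dim=1)                  # weighted average


h = torch.randn(4, 12, 256)  # (batch, seq, dim), e.g. BiLSTM hidden states
print(mean_pool(h).shape, max_pool(h).shape, AttentionPool(256)(h).shape)
```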
Recurrent Interaction Network for Jointly Extracting Entities and
Classifying Relations | The idea of using multi-task learning approaches to address the joint
extraction of entity and relation is motivated by the relatedness between the
entity recognition task and the relation classification task. Existing methods
using multi-task learning techniques to address the problem learn interactions
among the two tasks through a shared network, where the shared information is
passed into the task-specific networks for prediction. However, such an
approach hinders the model from learning explicit interactions between the two
tasks to improve the performance on the individual tasks. As a solution, we
design a multi-task learning model which we refer to as recurrent interaction
network which allows the learning of interactions dynamically, to effectively
model task-specific features for classification. Empirical studies on two
real-world datasets confirm the superiority of the proposed model.
| 2,020 | Computation and Language |
Attend to Medical Ontologies: Content Selection for Clinical Abstractive
Summarization | Sequence-to-sequence (seq2seq) network is a well-established model for text
summarization task. It can learn to produce readable content; however, it falls
short in effectively identifying key regions of the source. In this paper, we
approach the content selection problem for clinical abstractive summarization
by augmenting salient ontological terms into the summarizer. Our experiments on
two publicly available clinical data sets (107,372 reports of MIMIC-CXR, and
3,366 reports of OpenI) show that our model statistically significantly boosts
state-of-the-art results in terms of Rouge metrics (with improvements: 2.9%
RG-1, 2.5% RG-2, 1.9% RG-L), in the healthcare domain where any range of
improvement impacts patients' welfare.
| 2,020 | Computation and Language |
Recurrent Neural Network Language Models Always Learn English-Like
Relative Clause Attachment | A standard approach to evaluating language models analyzes how models assign
probabilities to valid versus invalid syntactic constructions (i.e. is a
grammatical sentence more probable than an ungrammatical sentence). Our work
uses ambiguous relative clause attachment to extend such evaluations to cases
of multiple simultaneous valid interpretations, where stark grammaticality
differences are absent. We compare model performance in English and Spanish to
show that non-linguistic biases in RNN LMs advantageously overlap with
syntactic structure in English but not Spanish. Thus, English models may appear
to acquire human-like syntactic preferences, while models trained on Spanish
fail to acquire comparable human-like preferences. We conclude by relating
these results to broader concerns about the relationship between comprehension
(i.e. typical language model use cases) and production (which generates the
training data for language models), suggesting that necessary linguistic biases
are not present in the training signal at all.
| 2,020 | Computation and Language |
Cross-lingual Entity Alignment with Incidental Supervision | Much research effort has been put to multilingual knowledge graph (KG)
embedding methods to address the entity alignment task, which seeks to match
entities in different language-specific KGs that refer to the same real-world
object. Such methods are often hindered by the insufficiency of seed alignment
provided between KGs. Therefore, we propose an incidentally supervised model,
JEANS, which jointly represents multilingual KGs and text corpora in a shared
embedding scheme, and seeks to improve entity alignment with incidental
supervision signals from text. JEANS first deploys an entity grounding process
to combine each KG with the monolingual text corpus. Then, two learning
processes are conducted: (i) an embedding learning process to encode the KG and
text of each language in one embedding space, and (ii) a self-learning-based
alignment learning process to iteratively induce the matching of entities and
that of lexemes between embeddings. Experiments on benchmark datasets show that
JEANS leads to promising improvement on entity alignment with incidental
supervision, and significantly outperforms state-of-the-art methods that solely
rely on internal information of KGs.
| 2,021 | Computation and Language |
Information Seeking in the Spirit of Learning: a Dataset for
Conversational Curiosity | Open-ended human learning and information-seeking are increasingly mediated
by digital assistants. However, such systems often ignore the user's
pre-existing knowledge. Assuming a correlation between engagement and user
responses such as "liking" messages or asking followup questions, we design a
Wizard-of-Oz dialog task that tests the hypothesis that engagement increases
when users are presented with facts related to what they know. Through
crowd-sourcing of this experiment, we collect and release 14K dialogs (181K
utterances) where users and assistants converse about geographic topics like
geopolitical entities and locations. This dataset is annotated with
pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia,
and user reactions to messages. Responses using a user's prior knowledge
increase engagement. We incorporate this knowledge into a multi-task model that
reproduces human assistant policies and improves over a BERT content model by
13 mean reciprocal rank points.
| 2,021 | Computation and Language |
Universal Adversarial Attacks with Natural Triggers for Text
Classification | Recent work has demonstrated the vulnerability of modern text classifiers to
universal adversarial attacks, which are input-agnostic sequences of words
added to text processed by classifiers. Despite being successful, the word
sequences produced in such attacks are often ungrammatical and can be easily
distinguished from natural text. We develop adversarial attacks that appear
closer to natural English phrases and yet confuse classification systems when
added to benign inputs. We leverage an adversarially regularized autoencoder
(ARAE) to generate triggers and propose a gradient-based search that aims to
maximize the downstream classifier's prediction loss. Our attacks effectively
reduce model accuracy on classification tasks while being less identifiable
than prior models as per automatic detection metrics and human-subject studies.
Our aim is to demonstrate that adversarial attacks can be made harder to detect
than previously thought and to enable the development of appropriate defenses.
| 2,021 | Computation and Language |
Selecting Informative Contexts Improves Language Model Finetuning | Language model fine-tuning is essential for modern natural language
processing, but is computationally expensive and time-consuming. Further, the
effectiveness of fine-tuning is limited by the inclusion of training examples
that negatively affect performance. Here we present a general fine-tuning
method that we call information gain filtration for improving the overall
training efficiency and final performance of language model fine-tuning. We
define the information gain of an example as the improvement on a test metric
after training on that example. A secondary learner is then trained to
approximate this quantity. During fine-tuning, this learner selects informative
examples and skips uninformative ones. We show that our method has consistent
improvement across datasets, fine-tuning tasks, and language model
architectures. For example, we achieve a median perplexity of 54.0 on a books
dataset compared to 57.3 for standard fine-tuning. We present statistical
evidence that offers insight into the improvements of our method over standard
fine-tuning. The generality of our method leads us to propose a new paradigm
for language model fine-tuning -- we encourage researchers to release
pretrained secondary learners on common corpora to promote efficient and
effective fine-tuning, thereby improving the performance and reducing the
overall energy footprint of language model fine-tuning.
| 2,022 | Computation and Language |
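The per-example information gain defined above can be sketched as the improvement in a held-out metric after a single gradient step on that example; in the paper a secondary learner is then trained to predict this quantity so that uninformative examples can be skipped. The linear model, data, and learning rate below are generic placeholders.

```python
import copy
import torch
import torch.nn as nn


def information_gain(model, example, heldout_batch, loss_fn, lr=1e-3):
    x, y = example
    xs, ys = heldout_batch
    with torch.no_grad():
        before = loss_fn(model(xs), ys).item()
    probe = copy.deepcopy(model)               # don't disturb the real model
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    opt.zero_grad()
    loss_fn(probe(x), y).backward()
    opt.step()                                 # one step on the single example
    with torch.no_grad():
        after = loss_fn(probe(xs), ys).item()
    return before - after                      # positive = informative example


model = nn.Linear(16, 1)
loss_fn = nn.MSELoss()
example = (torch.randn(1, 16), torch.randn(1, 1))
heldout = (torch.randn(32, 16), torch.randn(32, 1))
print(information_gain(model, example, heldout, loss_fn))
```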
Sparse, Dense, and Attentional Representations for Text Retrieval | Dual encoders perform retrieval by encoding documents and queries into dense
low-dimensional vectors, scoring each document by its inner product with the
query. We investigate the capacity of this architecture relative to sparse
bag-of-words models and attentional neural networks. Using both theoretical and
empirical analysis, we establish connections between the encoding dimension,
the margin between gold and lower-ranked documents, and the document length,
suggesting limitations in the capacity of fixed-length encodings to support
precise retrieval of long documents. Building on these insights, we propose a
simple neural model that combines the efficiency of dual encoders with some of
the expressiveness of more costly attentional architectures, and explore
sparse-dense hybrids to capitalize on the precision of sparse retrieval. These
models outperform strong alternatives in large-scale retrieval.
| 2,021 | Computation and Language |
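The basic dual-encoder scoring analysed above reduces to an inner product between fixed-length vectors; here is a minimal sketch with random vectors standing in for the output of learned document and query encoders.

```python
import numpy as np

rng = np.random.default_rng(3)
doc_vecs = rng.normal(size=(1000, 128))   # pre-encoded corpus (num_docs, dim)
query_vec = rng.normal(size=128)          # encoded query

scores = doc_vecs @ query_vec             # inner-product scoring
top_k = np.argsort(-scores)[:5]           # maximum inner product search
print(top_k, scores[top_k])
```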
Cross-Linguistic Syntactic Evaluation of Word Prediction Models | A range of studies have concluded that neural word prediction models can
distinguish grammatical from ungrammatical sentences with high accuracy.
However, these studies are based primarily on monolingual evidence from
English. To investigate how these models' ability to learn syntax varies by
language, we introduce CLAMS (Cross-Linguistic Assessment of Models on Syntax),
a syntactic evaluation suite for monolingual and multilingual models. CLAMS
includes subject-verb agreement challenge sets for English, French, German,
Hebrew and Russian, generated from grammars we develop. We use CLAMS to
evaluate LSTM language models as well as monolingual and multilingual BERT.
Across languages, monolingual LSTMs achieved high accuracy on dependencies
without attractors, and generally poor accuracy on agreement across object
relative clauses. On other constructions, agreement accuracy was generally
higher in languages with richer morphology. Multilingual models generally
underperformed monolingual models. Multilingual BERT showed high syntactic
accuracy on English, but noticeable deficiencies in other languages.
| 2,020 | Computation and Language |
Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs
and Adversarial Attacks | We evaluate machine comprehension models' robustness to noise and adversarial
attacks by performing novel perturbations at the character, word, and sentence
level. We experiment with different amounts of perturbations to examine model
confidence and misclassification rate, and contrast model performance in
adversarial training with different embedding types on two benchmark datasets.
We demonstrate that ensembling improves model performance. Finally, we analyze
factors that affect model behavior under adversarial training and develop a
model to predict model errors during adversarial attacks.
| 2,020 | Computation and Language |
KPQA: A Metric for Generative Question Answering Using Keyphrase Weights | In the automatic evaluation of generative question answering (GenQA) systems,
it is difficult to assess the correctness of generated answers due to the
free-form of the answer. Especially, widely used n-gram similarity metrics
often fail to discriminate the incorrect answers since they equally consider
all of the tokens. To alleviate this problem, we propose KPQA-metric, a new
metric for evaluating the correctness of GenQA. Specifically, our new metric
assigns different weights to each token via keyphrase prediction, thereby
judging whether a generated answer sentence captures the key meaning of the
reference answer. To evaluate our metric, we create high-quality human
judgments of correctness on two GenQA datasets. Using our human-evaluation
datasets, we show that our proposed metric has a significantly higher
correlation with human judgments than existing metrics. The code is available
at https://github.com/hwanheelee1993/KPQA.
| 2,021 | Computation and Language |
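The core idea of KPQA can be sketched as a token-overlap score in which each token's contribution is weighted by its keyphrase importance, so matching the key content counts more than matching function words. The hand-set weights and the simple precision-style overlap below are illustrative simplifications; the linked repository has the actual metric.

```python
from collections import Counter


def weighted_overlap(candidate: str, reference: str, weights: dict) -> float:
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    matched, total = 0.0, 0.0
    for tok in cand:
        w = weights.get(tok, 0.1)       # low default weight for non-key tokens
        total += w
        if ref_counts[tok] > 0:         # clipped matching against the reference
            matched += w
            ref_counts[tok] -= 1
    return matched / total if total else 0.0


weights = {"1969": 1.0, "moon": 0.9, "landing": 0.8}
print(weighted_overlap("the moon landing was in 1969",
                       "apollo 11 landed on the moon in 1969", weights))
```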
Multi-head Monotonic Chunkwise Attention For Online Speech Recognition | The attention mechanism of the Listen, Attend and Spell (LAS) model requires
the whole input sequence to calculate the attention context and thus is not
suitable for online speech recognition. To deal with this problem, we propose
multi-head monotonic chunk-wise attention (MTH-MoChA), an improved version of
MoChA. MTH-MoChA splits the input sequence into small chunks and computes
multi-head attentions over the chunks. We also explore useful training
strategies such as LSTM pooling, minimum word error rate training, and
SpecAugment to further improve the performance of MTH-MoChA. Experiments on
AISHELL-1 data show that the proposed model, along with the training
strategies, improves the character error rate (CER) of MoChA from 8.96% to 7.68%
on the test set. On another 18,000-hour in-car speech dataset, MTH-MoChA obtains
7.28% CER, which is significantly better than a state-of-the-art hybrid system.
| 2,020 | Computation and Language |
Biomedical Entity Representations with Synonym Marginalization | Biomedical named entities often play important roles in many biomedical text
mining tools. However, due to the incompleteness of provided synonyms and
numerous variations in their surface forms, normalization of biomedical
entities is very challenging. In this paper, we focus on learning
representations of biomedical entities solely based on the synonyms of
entities. To learn from the incomplete synonyms, we use a model-based candidate
selection and maximize the marginal likelihood of the synonyms present in top
candidates. Our model-based candidates are iteratively updated to contain more
difficult negative samples as our model evolves. In this way, we avoid the
explicit pre-selection of negative samples from more than 400K candidates. On
four biomedical entity normalization datasets having three different entity
types (disease, chemical, adverse reaction), our model BioSyn consistently
outperforms previous state-of-the-art models almost reaching the upper bound on
each dataset.
| 2,020 | Computation and Language |
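A minimal sketch of the marginal-likelihood objective described above: given similarity scores for a mention's top candidate names, the loss is the negative log of the total probability mass assigned to candidates that are true synonyms. The scores and labels here are synthetic placeholders, not output of the actual BioSyn encoder.

```python
import torch


def marginal_nll(scores: torch.Tensor, is_synonym: torch.Tensor) -> torch.Tensor:
    """scores: (num_candidates,) model scores; is_synonym: (num_candidates,) in {0,1}."""
    probs = torch.softmax(scores, dim=0)
    # Sum probability mass over all positive candidates (the marginal likelihood).
    marginal = (probs * is_synonym).sum().clamp_min(1e-9)
    return -torch.log(marginal)


scores = torch.tensor([2.1, 0.3, 1.7, -0.5])
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])   # two candidates are true synonyms
print(marginal_nll(scores, labels))
```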
TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions | A critical part of reading is being able to understand the temporal
relationships between events described in a passage of text, even when those
relationships are not explicitly stated. However, current machine reading
comprehension benchmarks have practically no questions that test temporal
phenomena, so systems trained on these benchmarks have no capacity to answer
questions such as "what happened before/after [some event]?" We introduce
TORQUE, a new English reading comprehension benchmark built on 3.2k news
snippets with 21k human-generated questions querying temporal relationships.
Results show that RoBERTa-large achieves an exact-match score of 51% on the
test set of TORQUE, about 30% behind human performance.
| 2,020 | Computation and Language |
Cross-modal Language Generation using Pivot Stabilization for Web-scale
Language Coverage | Cross-modal language generation tasks such as image captioning are directly
hurt in their ability to support non-English languages by the trend of
data-hungry models combined with the lack of non-English annotations. We
investigate potential solutions for combining existing language-generation
annotations in English with translation capabilities in order to create
solutions at web-scale in both domain and language coverage. We describe an
approach called Pivot-Language Generation Stabilization (PLuGS), which
leverages directly at training time both existing English annotations (gold
data) as well as their machine-translated versions (silver data); at run-time,
it generates first an English caption and then a corresponding target-language
caption. We show that PLuGS models outperform other candidate solutions in
evaluations performed over 5 different target languages, under a large-domain
test set using images from the Open Images dataset. Furthermore, we find an
interesting effect where the English captions generated by the PLuGS models are
better than the captions generated by the original, monolingual English model.
| 2,020 | Computation and Language |
AdapterFusion: Non-Destructive Task Composition for Transfer Learning | Sequential fine-tuning and multi-task learning are methods aiming to
incorporate knowledge from multiple tasks; however, they suffer from
catastrophic forgetting and difficulties in dataset balancing. To address these
shortcomings, we propose AdapterFusion, a new two stage learning algorithm that
leverages knowledge from multiple tasks. First, in the knowledge extraction
stage we learn task specific parameters called adapters, that encapsulate the
task-specific information. We then combine the adapters in a separate knowledge
composition step. We show that by separating the two stages, i.e., knowledge
extraction and knowledge composition, the classifier can effectively exploit
the representations learned from multiple tasks in a non-destructive manner. We
empirically evaluate AdapterFusion on 16 diverse NLU tasks, and find that it
effectively combines various types of knowledge at different layers of the
model. We show that our approach outperforms traditional strategies such as
full fine-tuning as well as multi-task learning. Our code and adapters are
available at AdapterHub.ml.
| 2,021 | Computation and Language |
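The knowledge-composition stage of AdapterFusion can be pictured as a learned attention over the outputs of several frozen task adapters, with the layer representation acting as the query. The sketch below simplifies the parameterisation (no value projection) relative to the paper.

```python
import torch
import torch.nn as nn


class Fusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, layer_out, adapter_outs):
        # layer_out: (batch, seq, dim); adapter_outs: (num_adapters, batch, seq, dim)
        q = self.query(layer_out).unsqueeze(0)                 # (1, b, s, d)
        k = self.key(adapter_outs)                             # (a, b, s, d)
        scores = (q * k).sum(-1)                               # (a, b, s)
        weights = torch.softmax(scores, dim=0).unsqueeze(-1)   # attend over adapters
        return (weights * adapter_outs).sum(dim=0)             # (b, s, d)


layer_out = torch.randn(2, 8, 64)
adapter_outs = torch.randn(3, 2, 8, 64)   # outputs of three frozen task adapters
print(Fusion(64)(layer_out, adapter_outs).shape)
```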
Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic
Conditional Random Fields | We compare different models for low resource multi-task sequence tagging that
leverage dependencies between label sequences for different tasks. Our analysis
is aimed at datasets where each example has labels for multiple tasks. Current
approaches use either a separate model for each task or standard multi-task
learning to learn shared feature representations. However, these approaches
ignore correlations between label sequences, which can provide important
information in settings with small training datasets. To analyze which
scenarios can profit from modeling dependencies between labels in different
tasks, we revisit dynamic conditional random fields (CRFs) and combine them
with deep neural networks. We compare single-task, multi-task and dynamic CRF
setups for three diverse datasets at both sentence and document levels in
English and German low resource scenarios. We show that including silver labels
from pretrained part-of-speech taggers as auxiliary tasks can improve
performance on downstream tasks. We find that especially in low-resource
scenarios, the explicit modeling of inter-dependencies between task predictions
outperforms single-task as well as standard multi-task models.
| 2,020 | Computation and Language |
Towards Controllable Biases in Language Generation | We present a general approach towards controllable societal biases in natural
language generation (NLG). Building upon the idea of adversarial triggers, we
develop a method to induce societal biases in generated text when input prompts
contain mentions of specific demographic groups. We then analyze two scenarios:
1) inducing negative biases for one demographic and positive biases for another
demographic, and 2) equalizing biases between demographics. The former scenario
enables us to detect the types of biases present in the model. Specifically, we
show the effectiveness of our approach at facilitating bias analysis by finding
topics that correspond to demographic inequalities in generated text and
comparing the relative effectiveness of inducing biases for different
demographics. The second scenario is useful for mitigating biases in downstream
applications such as dialogue generation. In our experiments, the mitigation
technique proves to be effective at equalizing the amount of biases across
demographics while simultaneously generating less negatively biased text
overall.
| 2,020 | Computation and Language |
Unsupervised Transfer of Semantic Role Models from Verbal to Nominal
Domain | Semantic role labeling (SRL) is an NLP task involving the assignment of
predicate arguments to types, called semantic roles. Though research on SRL has
primarily focused on verbal predicates and many resources available for SRL
provide annotations only for verbs, semantic relations are often triggered by
other linguistic constructions, e.g., nominalizations. In this work, we
investigate a transfer scenario where we assume role-annotated data for the
source verbal domain but only unlabeled data for the target nominal domain. Our
key assumption, enabling the transfer between the two domains, is that
selectional preferences of a role (i.e., preferences or constraints on the
admissible arguments) do not strongly depend on whether the relation is
triggered by a verb or a noun. For example, the same set of arguments can fill
the Acquirer role for the verbal predicate `acquire' and its nominal form
`acquisition'. We approach the transfer task from the variational autoencoding
perspective. The labeler serves as an encoder (predicting role labels given a
sentence), whereas selectional preferences are captured in the decoder
component (generating arguments for the predicted roles). Nominal roles are
not labeled in the training data, and the learning objective instead pushes the
labeler to assign roles predictive of the arguments. Sharing the decoder
parameters across the domains encourages consistency between labels predicted
for both domains and facilitates the transfer. The method substantially
outperforms baselines, such as unsupervised and `direct transfer' methods, on
the English CoNLL-2009 dataset.
| 2,020 | Computation and Language |
Facilitating Access to Multilingual COVID-19 Information via Neural
Machine Translation | Every day, more people are becoming infected and dying from exposure to
COVID-19. Some countries in Europe like Spain, France, the UK and Italy have
suffered particularly badly from the virus. Others such as Germany appear to
have coped extremely well. Both health professionals and the general public are
keen to receive up-to-date information on the effects of the virus, as well as
treatments that have proven to be effective. In cases where language is a
barrier to access of pertinent information, machine translation (MT) may help
people assimilate information published in different languages. Our MT systems
trained on COVID-19 data are freely available for anyone to use to help
translate information published in German, French, Italian, Spanish into
English, as well as the reverse direction.
| 2,020 | Computation and Language |
Hitachi at SemEval-2020 Task 12: Offensive Language Identification with
Noisy Labels using Statistical Sampling and Post-Processing | In this paper, we present our participation in SemEval-2020 Task-12 Subtask-A
(English Language) which focuses on offensive language identification from
noisy labels. To this end, we developed a hybrid system with the BERT
classifier trained with tweets selected using Statistical Sampling Algorithm
(SA) and Post-Processed (PP) using an offensive wordlist. Our developed system
achieved 34th position with a Macro-averaged F1-score (Macro-F1) of 0.90913 over
both offensive and non-offensive classes. We further show comprehensive results
and error analysis to assist future research in offensive language
identification with noisy labels.
| 2,020 | Computation and Language |
Selecting Backtranslated Data from Multiple Sources for Improved Neural
Machine Translation | Machine translation (MT) has benefited from using synthetic training data
originating from translating monolingual corpora, a technique known as
backtranslation. Combining backtranslated data from different sources has led
to better results than when using such data in isolation. In this work we
analyse the impact that data translated with rule-based, phrase-based
statistical and neural MT systems has on new MT systems. We use a real-world
low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a
high-resource language pair (German-to-English) to test different scenarios
with backtranslation and employ data selection to optimise the synthetic
corpora. We exploit different data selection strategies in order to reduce the
amount of data used, while at the same time maintaining high-quality MT
systems. We further tune the data selection method by taking into account the
quality of the MT systems used for backtranslation and lexical diversity of the
resulting corpora. Our experiments show that incorporating backtranslated data
from different sources can be beneficial, and that availing of data selection
can yield improved performance.
| 2,020 | Computation and Language |
Language (Re)modelling: Towards Embodied Language Understanding | While natural language understanding (NLU) is advancing rapidly, today's
technology differs from human-like language understanding in fundamental ways,
notably in its inferior efficiency, interpretability, and generalization. This
work proposes an approach to representation and learning based on the tenets of
embodied cognitive linguistics (ECL). According to ECL, natural language is
inherently executable (like programming languages), driven by mental simulation
and metaphoric mappings over hierarchical compositions of structures and
schemata learned through embodied interaction. This position paper argues that
the use of grounding by metaphoric inference and simulation will greatly
benefit NLU systems, and proposes a system architecture along with a roadmap
towards realizing this vision.
| 2,020 | Computation and Language |
Mind the Trade-off: Debiasing NLU Models without Degrading the
In-distribution Performance | Models for natural language understanding (NLU) tasks often rely on the
idiosyncratic biases of the dataset, which make them brittle against test cases
outside the training distribution. Recently, several proposed debiasing methods
are shown to be very effective in improving out-of-distribution performance.
However, their improvements come at the expense of performance drop when models
are evaluated on the in-distribution data, which contain examples with higher
diversity. This seemingly inevitable trade-off may not tell us much about the
changes in the reasoning and understanding capabilities of the resulting models
on broader types of examples beyond the small subset represented in the
out-of-distribution data. In this paper, we address this trade-off by
introducing a novel debiasing method, called confidence regularization, which
discourages models from exploiting biases while enabling them to receive enough
incentive to learn from all the training examples. We evaluate our method on
three NLU tasks and show that, in contrast to its predecessors, it improves the
performance on out-of-distribution datasets (e.g., 7pp gain on HANS dataset)
while maintaining the original in-distribution accuracy.
| 2,020 | Computation and Language |