Titles | Abstracts | Years | Categories |
---|---|---|---|
Self-supervised Knowledge Triplet Learning for Zero-shot Question
Answering | The aim of all Question Answering (QA) systems is to be able to generalize to
unseen questions. Current supervised methods are reliant on expensive data
annotation. Moreover, such annotations can introduce unintended annotator bias,
which makes systems focus more on the bias than the actual task. In this work,
we propose Knowledge Triplet Learning (KTL), a self-supervised task over
knowledge graphs. We propose heuristics to create synthetic graphs for
commonsense and scientific knowledge. We propose methods for using KTL to
perform zero-shot QA and our experiments show considerable improvements over
large pre-trained transformer models.
| 2020 | Computation and Language |
Can Multilingual Language Models Transfer to an Unseen Dialect? A Case
Study on North African Arabizi | Building natural language processing systems for non-standardized and low-resource languages is a difficult challenge. The recent success of large-scale
multilingual pretrained language models provides new modeling tools to tackle
this. In this work, we study the ability of multilingual language models to
process an unseen dialect. We take user-generated North-African Arabic as our
case study, a resource-poor dialectal variety of Arabic with frequent
code-mixing with French and written in Arabizi, a non-standardized
transliteration of Arabic to Latin script. Focusing on two tasks,
part-of-speech tagging and dependency parsing, we show in zero-shot and
unsupervised adaptation scenarios that multilingual language models are able to
transfer to such an unseen dialect, specifically in two extreme cases: (i)
across scripts, using Modern Standard Arabic as a source language, and (ii)
from a distantly related language, unseen during pretraining, namely Maltese.
Our results constitute the first successful transfer experiments on this
dialect, thus paving the way for the development of an NLP ecosystem for
resource-scarce, non-standardized and highly variable vernacular languages.
| 2020 | Computation and Language |
CDL: Curriculum Dual Learning for Emotion-Controllable Response
Generation | Emotion-controllable response generation is an attractive and valuable task
that aims to make open-domain conversations more empathetic and engaging.
Existing methods mainly enhance the emotion expression by adding regularization
terms to standard cross-entropy loss and thus influence the training process.
However, due to the lack of further consideration of content consistency, the
safe-response problem common to response generation tasks is intensified.
Besides, query emotions that can help model the relationship between query and
response are simply ignored in previous models, which would further hurt the
coherence. To alleviate these problems, we propose a novel framework named
Curriculum Dual Learning (CDL) which extends the emotion-controllable response
generation to a dual task to generate emotional responses and emotional queries
alternately. CDL utilizes two rewards focusing on emotion and content to
improve the duality. Additionally, it applies curriculum learning to gradually
generate high-quality responses based on the difficulties of expressing various
emotions. Experimental results show that CDL significantly outperforms the
baselines in terms of coherence, diversity, and relation to emotion factors.
| 2020 | Computation and Language |
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning | In order to simulate human language capacity, natural language processing
systems must be able to reason about the dynamics of everyday situations,
including their possible causes and effects. Moreover, they should be able to
generalise the acquired world knowledge to new languages, modulo cultural
differences. Advances in machine reasoning and cross-lingual transfer depend on
the availability of challenging evaluation benchmarks. Motivated by both
demands, we introduce Cross-lingual Choice of Plausible Alternatives (XCOPA), a
typologically diverse multilingual dataset for causal commonsense reasoning in
11 languages, which includes resource-poor languages like Eastern Apurímac
Quechua and Haitian Creole. We evaluate a range of state-of-the-art models on
this novel dataset, revealing that the performance of current methods based on
multilingual pretraining and zero-shot fine-tuning falls short compared to
translation-based transfer. Finally, we propose strategies to adapt
multilingual models to out-of-sample resource-lean languages where only a small
corpus or a bilingual dictionary is available, and report substantial
improvements over the random baseline. The XCOPA dataset is freely available at
github.com/cambridgeltl/xcopa.
| 2020 | Computation and Language |
MUSS: Multilingual Unsupervised Sentence Simplification by Mining
Paraphrases | Progress in sentence simplification has been hindered by a lack of labeled
parallel simplification data, particularly in languages other than English. We
introduce MUSS, a Multilingual Unsupervised Sentence Simplification system that
does not require labeled simplification data. MUSS uses a novel approach to
sentence simplification that trains strong models using sentence-level
paraphrase data instead of proper simplification data. These models leverage
unsupervised pretraining and controllable generation mechanisms to flexibly
adjust attributes such as length and lexical complexity at inference time. We
further present a method to mine such paraphrase data in any language from
Common Crawl using semantic sentence embeddings, thus removing the need for
labeled data. We evaluate our approach on English, French, and Spanish
simplification benchmarks and closely match or outperform the previous best
supervised results, despite not using any labeled simplification data. We push
the state of the art further by incorporating labeled simplification data.
| 2021 | Computation and Language |
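The paraphrase-mining step described above (embedding sentences and pairing those that land close in semantic space) can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: the embedding matrix is random, the similarity threshold is an arbitrary assumption, and a web-scale system would use approximate nearest-neighbor search rather than a brute-force double loop.

```python
import numpy as np

def mine_paraphrases(sentences, embeddings, sim_threshold=0.75):
    """Return candidate paraphrase pairs whose embeddings are close in cosine space."""
    # Normalize rows so that a dot product equals cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-8, None)
    sims = unit @ unit.T
    pairs = []
    n = len(sentences)
    for i in range(n):
        for j in range(i + 1, n):
            # Keep pairs that are near-duplicates in meaning but not identical strings.
            if sims[i, j] >= sim_threshold and sentences[i] != sentences[j]:
                pairs.append((sentences[i], sentences[j], float(sims[i, j])))
    return pairs

# Toy usage with random vectors standing in for semantic sentence embeddings.
rng = np.random.default_rng(0)
sents = ["The cat sat on the mat.", "A cat was sitting on the mat.", "Stocks fell sharply."]
embs = rng.normal(size=(3, 16))
print(mine_paraphrases(sents, embs, sim_threshold=0.2))
```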
Beneath the Tip of the Iceberg: Current Challenges and New Directions in
Sentiment Analysis Research | Sentiment analysis as a field has come a long way since it was first
introduced as a task nearly 20 years ago. It has widespread commercial
applications in various domains like marketing, risk management, market
research, and politics, to name a few. Given its saturation in specific
subtasks -- such as sentiment polarity classification -- and datasets, there is
an underlying perception that this field has reached its maturity. In this
article, we discuss this perception by pointing out the shortcomings and
under-explored, yet key aspects of this field that are necessary to attain true
sentiment understanding. We analyze the significant leaps responsible for its
current relevance. Further, we attempt to chart a possible course for this
field that covers many overlooked and unanswered questions.
| 2020 | Computation and Language |
Will-They-Won't-They: A Very Large Dataset for Stance Detection on
Twitter | We present a new challenging stance detection dataset, called
Will-They-Won't-They (WT-WT), which contains 51,284 tweets in English, making
it by far the largest available dataset of its type. All the annotations are
carried out by experts; therefore, the dataset constitutes a high-quality and
reliable benchmark for future research in stance detection. Our experiments
with a wide range of recent state-of-the-art stance detection systems show that
the dataset poses a strong challenge to existing models in this domain.
| 2020 | Computation and Language |
Identifying Necessary Elements for BERT's Multilinguality | It has been shown that multilingual BERT (mBERT) yields high quality
multilingual representations and enables effective zero-shot transfer. This is
surprising given that mBERT does not use any crosslingual signal during
training. While recent literature has studied this phenomenon, the reasons for
the multilinguality are still somewhat obscure. We aim to identify
architectural properties of BERT and linguistic properties of languages that
are necessary for BERT to become multilingual. To allow for fast
experimentation we propose an efficient setup with small BERT models trained on
a mix of synthetic and natural data. Overall, we identify four architectural
and two linguistic elements that influence multilinguality. Based on our
insights, we experiment with a multilingual pretraining setup that modifies the
masking strategy using VecMap, i.e., unsupervised embedding alignment.
Experiments on XNLI with three languages indicate that our findings transfer
from our small setup to larger scale settings.
| 2021 | Computation and Language |
Topological Sort for Sentence Ordering | Sentence ordering is the task of arranging the sentences of a given text in
the correct order. Recent work using deep neural networks for this task has
framed it as a sequence prediction problem. In this paper, we propose a new
framing of this task as a constraint solving problem and introduce a new
technique to solve it. Additionally, we propose a human evaluation for this
task. The results on both automatic and human metrics across four different
datasets show that this new technique is better at capturing coherence in
documents.
| 2020 | Computation and Language |
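The constraint-solving framing lends itself to a compact sketch: a pairwise model votes on the relative order of every sentence pair, and a topological sort turns those votes into a global order. The sketch below uses Kahn's algorithm with a stub comparator standing in for the paper's neural pairwise classifier; treat it as an illustration of the framing, not the published technique.

```python
from collections import defaultdict, deque

def order_sentences(sentences, precedes):
    """precedes(a, b) -> True if sentence a should come before sentence b."""
    n = len(sentences)
    graph = defaultdict(list)
    indegree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if precedes(sentences[i], sentences[j]):
                graph[i].append(j); indegree[j] += 1
            else:
                graph[j].append(i); indegree[i] += 1
    # Kahn's algorithm: repeatedly emit a sentence with no unmet ordering constraints.
    queue = deque(k for k in range(n) if indegree[k] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return [sentences[k] for k in order]

# Stub pairwise constraint: order by an explicit index token, for illustration only.
sents = ["2: He sat down.", "1: John entered.", "3: He began to read."]
print(order_sentences(sents, lambda a, b: a.split(":")[0] < b.split(":")[0]))
```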
Defense of Word-level Adversarial Attacks via Random Substitution
Encoding | The adversarial attacks against deep neural networks on computer vision tasks
have spawned many new technologies that help protect models from making false
predictions. Recently, word-level adversarial attacks on deep models of Natural
Language Processing (NLP) tasks have also demonstrated strong power, e.g.,
fooling a sentiment classification neural network to make wrong decisions.
Unfortunately, little previous work has discussed defenses against such
word-level synonym-substitution attacks, since they are hard to perceive and
detect. In this paper, we shed light on this problem and
propose a novel defense framework called Random Substitution Encoding (RSE),
which introduces a random substitution encoder into the training process of
original neural networks. Extensive experiments on text classification tasks
demonstrate the effectiveness of our framework on defense of word-level
adversarial attacks, under various base and attack models.
| 2020 | Computation and Language |
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog
Generation | The lack of meaningful automatic evaluation metrics for dialog has impeded
open-domain dialog research. Standard language generation metrics have been
shown to be ineffective for evaluating dialog models. To this end, this paper
presents USR, an UnSupervised and Reference-free evaluation metric for dialog.
USR is a reference-free metric that trains unsupervised models to measure
several desirable qualities of dialog. USR is shown to strongly correlate with
human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and
PersonaChat (turn-level: 0.48, system-level: 1.0). USR additionally produces
interpretable measures for several desirable properties of dialog.
| 2020 | Computation and Language |
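One building block behind reference-free metrics of this kind is scoring a response with a masked language model. The sketch below computes a pseudo-log-likelihood by masking one token at a time with an off-the-shelf BERT; the model choice and scoring recipe are assumptions for illustration and do not reproduce the released USR metric.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(text: str) -> float:
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Mask one token at a time and accumulate its log-probability.
    for pos in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[pos] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[ids[pos]].item()
    return total

print(pseudo_log_likelihood("that sounds like a great idea !"))
```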
Style Variation as a Vantage Point for Code-Switching | Code-Switching (CS) is a common phenomenon observed in several bilingual and
multilingual communities, thereby attaining prevalence in digital and social
media platforms. This increasing prominence creates a need to model CS
languages for critical downstream tasks. A major problem in this domain is the
dearth of annotated data and of substantial corpora to train large-scale neural
models. Generating vast amounts of quality text assists several downstream
tasks that heavily rely on language modeling, such as speech recognition and
text-to-speech synthesis. We present a novel vantage point: CS as style
variation between the two participating languages. Our approach does
not need any external annotations such as lexical language ids. It mainly
relies on easily obtainable monolingual corpora without any parallel alignment
and a limited set of naturally CS sentences. We propose a two-stage generative
adversarial training approach where the first stage generates competitive
negative examples for CS and the second stage generates more realistic CS
sentences. We present our experiments on the following pairs of languages:
Spanish-English, Mandarin-English, Hindi-English and Arabic-French. We show
that the trends in metrics for generated CS move closer to real CS data in each
of the above language pairs through the dual stage training process. We believe
this viewpoint of CS as style variations opens new perspectives for modeling
various tasks in CS text.
| 2020 | Computation and Language |
Improving Broad-Coverage Medical Entity Linking with Semantic Type
Prediction and Large-Scale Datasets | Medical entity linking is the task of identifying and standardizing medical
concepts referred to in an unstructured text. Most of the existing methods
adopt a three-step approach of (1) detecting mentions, (2) generating a list of
candidate concepts, and finally (3) picking the best concept among them. In
this paper, we focus on alleviating the problem of overgeneration of
candidate concepts in the candidate generation module, the most under-studied
component of medical entity linking. For this, we present MedType, a fully
modular system that prunes out irrelevant candidate concepts based on the
predicted semantic type of an entity mention. We incorporate MedType into five
off-the-shelf toolkits for medical entity linking and demonstrate that it
consistently improves entity linking performance across several benchmark
datasets. To address the dearth of annotated training data for medical entity
linking, we present WikiMed and PubMedDS, two large-scale medical entity
linking datasets, and demonstrate that pre-training MedType on these datasets
further improves entity linking performance. We make our source code and
datasets publicly available for medical entity linking research.
| 2021 | Computation and Language |
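The pruning idea reduces to a simple filter once a semantic type has been predicted for the mention. A toy sketch follows; the candidate list and identifiers are invented for display (real candidates would carry UMLS concept IDs and semantic types from a candidate generator).

```python
def prune_candidates(candidates, predicted_type):
    """candidates: list of (concept_id, semantic_type) pairs from a candidate generator."""
    return [cid for cid, stype in candidates if stype == predicted_type]

# Invented candidates for the mention "aspirin"; real IDs would be UMLS CUIs.
candidates = [
    ("CUI_DRUG_1", "Pharmacologic Substance"),
    ("CUI_DRUG_2", "Pharmacologic Substance"),
    ("CUI_PLANT_1", "Plant"),  # spurious candidate to be pruned
]
# Suppose the type predictor labels the mention as a drug mention.
print(prune_candidates(candidates, "Pharmacologic Substance"))
```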
Regex Queries over Incomplete Knowledge Bases | We propose the novel task of answering regular expression queries (containing
disjunction ($\vee$) and Kleene plus ($+$) operators) over incomplete KBs. The
answer set of these queries potentially has a large number of entities, hence
previous works on single-hop queries in knowledge base completion (KBC) that
model a query as a point in
high-dimensional space are not as effective. In response, we develop RotatE-Box
-- a novel combination of RotatE and box embeddings. It can model more
relational inference patterns compared to existing embedding based models.
Furthermore, we define baseline approaches for embedding based KBC models to
handle regex operators. We demonstrate the performance of RotatE-Box on two new
regex-query datasets introduced in this paper, including one where the queries
are harvested based on actual user query logs. We find that our final
RotatE-Box model significantly outperforms models based on just RotatE and just
box embeddings.
| 2021 | Computation and Language |
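The RotatE half of the proposed combination is easy to sketch: entities are complex vectors, a relation is an element-wise rotation by learned phase angles, and plausibility is the negative distance between the rotated head and the tail. The box-embedding half, which lets a query denote a region containing many answer entities, is omitted; this is a hedched illustration of the scoring function only, with made-up vectors in place of learned embeddings.

```python
import numpy as np

def rotate_score(h, r_phase, t):
    """h, t: complex entity embeddings; r_phase: per-dimension relation rotation angles."""
    r = np.exp(1j * r_phase)           # unit-modulus complex rotation
    return -np.linalg.norm(h * r - t)  # higher (less negative) = more plausible

rng = np.random.default_rng(0)
dim = 8
h = rng.normal(size=dim) + 1j * rng.normal(size=dim)
t_true = h * np.exp(1j * 0.3)          # tail obtained by rotating the head: a "true" triple
t_rand = rng.normal(size=dim) + 1j * rng.normal(size=dim)
print(rotate_score(h, np.full(dim, 0.3), t_true))  # ~0: maximally plausible
print(rotate_score(h, np.full(dim, 0.3), t_rand))  # much lower
```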
ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification
Models with Multiple Rewriting Transformations | In order to simplify a sentence, human editors perform multiple rewriting
transformations: they split it into several shorter sentences, paraphrase words
(i.e. replacing complex words or phrases by simpler synonyms), reorder
components, and/or delete information deemed unnecessary. Despite this varied
range of possible text alterations, current models for automatic sentence
simplification are evaluated using datasets that are focused on a single
transformation, such as lexical paraphrasing or splitting. This makes it
impossible to understand the ability of simplification models in more realistic
settings. To alleviate this limitation, this paper introduces ASSET, a new
dataset for assessing sentence simplification in English. ASSET is a
crowdsourced multi-reference corpus where each simplification was produced by
executing several rewriting transformations. Through quantitative and
qualitative experiments, we show that simplifications in ASSET are better at
capturing characteristics of simplicity when compared to other standard
evaluation datasets for the task. Furthermore, we motivate the need for
developing better methods for automatic evaluation using ASSET, since we show
that current popular metrics may not be suitable when multiple simplification
transformations are performed.
| 2020 | Computation and Language |
Structured Tuning for Semantic Role Labeling | Recent neural network-driven semantic role labeling (SRL) systems have shown
impressive improvements in F1 scores. These improvements are due to expressive
input representations, which, at least at the surface, are orthogonal to
knowledge-rich constrained decoding mechanisms that helped linear SRL models.
Introducing the benefits of structure to inform neural models presents a
methodological challenge. In this paper, we present a structured tuning
framework to improve models using softened constraints only at training time.
Our framework leverages the expressiveness of neural networks and provides
supervision with structured loss components. We start with a strong baseline
(RoBERTa) to validate the impact of our approach, and show that our framework
outperforms the baseline by learning to comply with declarative constraints.
Additionally, our experiments with smaller training sizes show that we can
achieve consistent improvements under low-resource scenarios.
| 2020 | Computation and Language |
SciREX: A Challenge Dataset for Document-Level Information Extraction | Extracting information from full documents is an important problem in many
domains, but most previous work focuses on identifying relationships within a
sentence or a paragraph. It is challenging to create a large-scale information
extraction (IE) dataset at the document level since it requires an
understanding of the whole document to annotate entities and their
document-level relationships that usually span beyond sentences or even
sections. In this paper, we introduce SciREX, a document level IE dataset that
encompasses multiple IE tasks, including salient entity identification and
document level $N$-ary relation identification from scientific articles. We
annotate our dataset by integrating automatic and human annotations, leveraging
existing scientific knowledge resources. We develop a neural model as a strong
baseline that extends previous state-of-the-art IE models to document-level IE.
Analyzing the model performance shows a significant gap between human
performance and current baselines, inviting the community to use our dataset as
a challenge to develop document-level IE models. Our data and code are publicly
available at https://github.com/allenai/SciREX
| 2020 | Computation and Language |
Discourse-Aware Unsupervised Summarization of Long Scientific Documents | We propose an unsupervised graph-based ranking model for extractive
summarization of long scientific documents. Our method assumes a two-level
hierarchical graph representation of the source document, and exploits
asymmetrical positional cues to determine sentence importance. Results on the
PubMed and arXiv datasets show that our approach outperforms strong
unsupervised baselines by wide margins in automatic metrics and human
evaluation. In addition, it achieves performance comparable to many
state-of-the-art supervised approaches which are trained on hundreds of
thousands of examples. These results suggest that patterns in the discourse
structure are a strong signal for determining importance in scientific
articles.
| 2021 | Computation and Language |
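The core recipe (build a sentence graph, inject asymmetrical positional bias, rank by centrality) can be approximated in a few lines. In the sketch below, similarity is plain token overlap and the positional bias is a simple decay favoring earlier sentences; both are stand-in assumptions rather than the paper's hierarchical, section-aware formulation.

```python
import networkx as nx

def rank_sentences(sentences, position_decay=0.9):
    g = nx.DiGraph()
    toks = [set(s.lower().split()) for s in sentences]
    for i in range(len(sentences)):
        for j in range(len(sentences)):
            if i == j:
                continue
            overlap = len(toks[i] & toks[j]) / (len(toks[i] | toks[j]) or 1)
            # Asymmetry: edges pointing toward earlier sentences carry more weight,
            # encoding the positional cue that document-initial content matters.
            bias = position_decay ** j
            if overlap > 0:
                g.add_edge(i, j, weight=overlap * bias)
    scores = nx.pagerank(g, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)

sents = ["We study summarization of long documents.",
         "Long documents have hierarchical discourse structure.",
         "We evaluate on PubMed and arXiv."]
print(rank_sentences(sents))  # sentence indices, most central first
```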
Why Overfitting Isn't Always Bad: Retrofitting Cross-Lingual Word
Embeddings to Dictionaries | Cross-lingual word embeddings (CLWE) are often evaluated on bilingual lexicon
induction (BLI). Recent CLWE methods use linear projections, which underfit the
training dictionary, to generalize on BLI. However, underfitting can hinder
generalization to other downstream tasks that rely on words from the training
dictionary. We address this limitation by retrofitting CLWE to the training
dictionary, which pulls training translation pairs closer in the embedding
space and overfits the training dictionary. This simple post-processing step
often improves accuracy on two downstream tasks, despite lowering BLI test
accuracy. We also retrofit to both the training dictionary and a synthetic
dictionary induced from CLWE, which sometimes generalizes even better on
downstream tasks. Our results confirm the importance of fully exploiting
the training dictionary in downstream tasks and explain why BLI is a flawed CLWE
evaluation.
| 2020 | Computation and Language |
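The retrofitting step itself is simple enough to sketch: iteratively pull each training translation pair toward a shared point, deliberately overfitting the dictionary. The interpolation weight and iteration count below are assumed knobs, not values from the paper.

```python
import numpy as np

def retrofit(src_emb, tgt_emb, pairs, alpha=0.5, iters=5):
    """pairs: list of (src_word_index, tgt_word_index) from the training dictionary."""
    src, tgt = src_emb.copy(), tgt_emb.copy()
    for _ in range(iters):
        for i, j in pairs:
            mid = 0.5 * (src[i] + tgt[j])
            # Move both words of the translation pair toward their shared midpoint.
            src[i] = (1 - alpha) * src[i] + alpha * mid
            tgt[j] = (1 - alpha) * tgt[j] + alpha * mid
    return src, tgt

rng = np.random.default_rng(0)
src_emb, tgt_emb = rng.normal(size=(100, 50)), rng.normal(size=(100, 50))
src_r, tgt_r = retrofit(src_emb, tgt_emb, pairs=[(0, 3), (1, 7)])
print(np.linalg.norm(src_r[0] - tgt_r[3]))  # far smaller than before retrofitting
```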
GoEmotions: A Dataset of Fine-Grained Emotions | Understanding emotion expressed in language has a wide range of applications,
from building empathetic chatbots to detecting harmful online behavior.
Progress in this area can be accelerated by large-scale datasets with a
fine-grained typology, adaptable to multiple downstream tasks. We introduce
GoEmotions, the largest manually annotated dataset of 58k English Reddit
comments, labeled for 27 emotion categories or Neutral. We demonstrate the high
quality of the annotations via Principal Preserved Component Analysis. We
conduct transfer learning experiments with existing emotion benchmarks to show
that our dataset generalizes well to other domains and different emotion
taxonomies. Our BERT-based model achieves an average F1-score of .46 across our
proposed taxonomy, leaving much room for improvement.
| 2020 | Computation and Language |
POINTER: Constrained Progressive Text Generation via Insertion-based
Generative Pre-training | Large-scale pre-trained language models, such as BERT and GPT-2, have
achieved excellent performance in language representation learning and
free-form text generation. However, these models cannot be directly employed to
generate text under specified lexical constraints. To address this challenge,
we present POINTER (PrOgressive INsertion-based TransformER), a simple yet
novel insertion-based approach for hard-constrained text generation. The
proposed method operates by progressively inserting new tokens between existing
tokens in a parallel manner. This procedure is recursively applied until a
sequence is completed. The resulting coarse-to-fine hierarchy makes the
generation process intuitive and interpretable. We pre-train our model with the
proposed progressive insertion-based objective on a 12GB Wikipedia dataset, and
fine-tune it on downstream hard-constrained generation tasks.
Non-autoregressive decoding yields an empirically logarithmic time complexity
during inference. Experimental results on both News and Yelp datasets
demonstrate that POINTER achieves state-of-the-art performance on constrained
text generation. We released the pre-trained models and the source code to
facilitate future research (https://github.com/dreasysnail/POINTER).
| 2020 | Computation and Language |
When BERT Plays the Lottery, All Tickets Are Winning | Large Transformer-based models have been shown to be reducible to a smaller number
of self-attention heads and layers. We consider this phenomenon from the
perspective of the lottery ticket hypothesis, using both structured and
magnitude pruning. For fine-tuned BERT, we show that (a) it is possible to find
subnetworks achieving performance that is comparable with that of the full
model, and (b) similarly-sized subnetworks sampled from the rest of the model
perform worse. Strikingly, with structured pruning even the worst possible
subnetworks remain highly trainable, indicating that most pre-trained BERT
weights are potentially useful. We also study the "good" subnetworks to see if
their success can be attributed to superior linguistic knowledge, but find them
unstable, and not explained by meaningful self-attention patterns.
| 2020 | Computation and Language |
Exploring Pre-training with Alignments for RNN Transducer based
End-to-End Speech Recognition | Recently, the recurrent neural network transducer (RNN-T) architecture has
become an emerging trend in end-to-end automatic speech recognition research
due to its capability for online streaming speech recognition.
However, RNN-T training is made difficult by its huge memory requirements and
complicated neural structure. A common solution to ease RNN-T training is
to employ a connectionist temporal classification (CTC) model along with an RNN
language model (RNNLM) to initialize the RNN-T parameters. In this work, we
conversely leverage external alignments to seed the RNN-T model. Two different
pre-training solutions are explored, referred to as encoder pre-training, and
whole-network pre-training respectively. Evaluated on Microsoft 65,000 hours
anonymized production data with personally identifiable information removed,
our proposed methods can obtain significant improvement. In particular, the
encoder pre-training solution achieved a 10% and an 8% relative word error rate
reduction when compared with random initialization and the widely used
CTC+RNNLM initialization strategy, respectively. Our solutions also
significantly reduce the RNN-T model latency compared to the baseline.
| 2020 | Computation and Language |
Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset | Machine reading comprehension has made great progress in recent years owing
to large-scale annotated datasets. In the clinical domain, however, creating
such datasets is quite difficult due to the domain expertise required for
annotation. Recently, Pampari et al. (EMNLP'18) tackled this issue by using
expert-annotated question templates and existing i2b2 annotations to create
emrQA, the first large-scale dataset for question answering (QA) based on
clinical notes. In this paper, we provide an in-depth analysis of this dataset
and the clinical reading comprehension (CliniRC) task. From our qualitative
analysis, we find that (i) emrQA answers are often incomplete, and (ii) emrQA
questions are often answerable without using domain knowledge. From our
quantitative experiments, surprising results include that (iii) using a small
sampled subset (5%-20%), we can obtain roughly equal performance compared to
the model trained on the entire dataset, (iv) this performance is close to
human expert's performance, and (v) BERT models do not beat the best performing
base model. Following our analysis of the emrQA, we further explore two desired
aspects of CliniRC systems: the ability to utilize clinical domain knowledge
and to generalize to unseen questions and contexts. We argue that both should
be considered when creating future datasets.
| 2020 | Computation and Language |
Evaluating Robustness to Input Perturbations for Neural Machine
Translation | Neural Machine Translation (NMT) models are sensitive to small perturbations
in the input. Robustness to such perturbations is typically measured using
translation quality metrics such as BLEU on the noisy input. This paper
proposes additional metrics which measure the relative degradation and changes
in translation when small perturbations are added to the input. We focus on a
class of models employing subword regularization to address robustness and
perform extensive evaluations of these models using the robustness measures
proposed. Results show that our proposed metrics reveal a clear trend of
improved robustness to perturbations when subword regularization methods are
used.
| 2020 | Computation and Language |
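A minimal version of a relative-degradation measure can be computed with sacrebleu: score the system on clean and on perturbed inputs, then report the fraction of BLEU lost. The toy hypotheses below are stand-ins; the paper's metric suite also measures changes in the translations themselves, which this single ratio does not capture.

```python
import sacrebleu

refs = [["the cat sat on the mat", "a quick brown fox jumps"]]
hyp_clean = ["the cat sat on the mat", "a quick brown fox jumps"]
hyp_noisy = ["the cat sit on the mat", "a quick brown fox jump"]  # outputs on perturbed input

bleu_clean = sacrebleu.corpus_bleu(hyp_clean, refs).score
bleu_noisy = sacrebleu.corpus_bleu(hyp_noisy, refs).score
# Relative degradation: fraction of translation quality lost under perturbation.
print(f"clean={bleu_clean:.1f} noisy={bleu_noisy:.1f} "
      f"rel. degradation={(bleu_clean - bleu_noisy) / bleu_clean:.2%}")
```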
Multi-scale Transformer Language Models | We investigate multi-scale transformer language models that learn
representations of text at multiple scales, and present three different
architectures that have an inductive bias to handle the hierarchical nature of
language. Experiments on large-scale language modeling benchmarks empirically
demonstrate favorable likelihood vs memory footprint trade-offs, e.g. we show
that it is possible to train a 30-layer hierarchical variant with a 23%
smaller memory footprint and better perplexity, compared to a vanilla
transformer with less than half the number of layers, on the Toronto
BookCorpus. We analyze the advantages of learned representations at multiple
scales in terms of memory footprint, compute time, and perplexity, which are
particularly appealing given the quadratic scaling of transformers' run time
and memory usage with respect to sequence length.
| 2020 | Computation and Language |
Learning an Unreferenced Metric for Online Dialogue Evaluation | Evaluating the quality of a dialogue interaction between two agents is a
difficult task, especially in open-domain chit-chat style dialogue. There have
been recent efforts to develop automatic dialogue evaluation metrics, but most
of them do not generalize to unseen datasets and/or need a human-generated
reference response during inference, making it infeasible for online
evaluation. Here, we propose an unreferenced automated evaluation metric that
uses large pre-trained language models to extract latent representations of
utterances, and leverages the temporal transitions that exist between them. We
show that our model achieves higher correlation with human annotations in an
online setting, while not requiring true responses for comparison during
inference.
| 2020 | Computation and Language |
A Controllable Model of Grounded Response Generation | Current end-to-end neural conversation models inherently lack the flexibility
to impose semantic control in the response generation process, often resulting
in uninteresting responses. Attempts to boost informativeness alone come at the
expense of factual accuracy, as attested by pretrained language models'
propensity to "hallucinate" facts. While this may be mitigated by access to
background knowledge, there is scant guarantee of relevance and informativeness
in generated responses. We propose a framework that we call controllable
grounded response generation (CGRG), in which lexical control phrases are
either provided by a user or automatically extracted by a control phrase
predictor from dialogue context and grounding knowledge. Quantitative and
qualitative results show that, using this framework, a transformer based model
with a novel inductive attention mechanism, trained on a conversation-like
Reddit dataset, outperforms strong generation baselines.
| 2021 | Computation and Language |
Multi-Dimensional Gender Bias Classification | Machine learning models are trained to find patterns in data. NLP models can
inadvertently learn socially undesirable patterns when training on gender
biased text. In this work, we propose a general framework that decomposes
gender bias in text along several pragmatic and semantic dimensions: bias from
the gender of the person being spoken about, bias from the gender of the person
being spoken to, and bias from the gender of the speaker. Using this
fine-grained framework, we automatically annotate eight large scale datasets
with gender information. In addition, we collect a novel, crowdsourced
evaluation benchmark of utterance-level gender rewrites. Distinguishing between
gender bias along multiple dimensions is important, as it enables us to train
finer-grained gender bias classifiers. We show our classifiers prove valuable
for a variety of important applications, such as controlling for gender bias in
generative models, detecting gender bias in arbitrary text, and shedding light on
offensive language in terms of genderedness.
| 2020 | Computation and Language |
Probing Contextual Language Models for Common Ground with Visual
Representations | The success of large-scale contextual language models has attracted great
interest in probing what is encoded in their representations. In this work, we
consider a new question: to what extent are contextual representations of
concrete nouns aligned with corresponding visual representations? We design a
probing model that evaluates how effective text-only representations are in
distinguishing between matching and non-matching visual representations. Our
findings show that language representations alone provide a strong signal for
retrieving image patches from the correct object categories. Moreover, they are
effective in retrieving specific instances of image patches; textual context
plays an important role in this process. Visually grounded language models
slightly outperform text-only language models in instance retrieval, but
greatly under-perform humans. We hope our analyses inspire future research in
understanding and improving the visual capabilities of language models.
| 2021 | Computation and Language |
Minimally Supervised Categorization of Text with Metadata | Document categorization, which aims to assign a topic label to each document,
plays a fundamental role in a wide variety of applications. Despite the success
of existing studies in conventional supervised document classification, they
are less concerned with two real problems: (1) the presence of metadata: in
many domains, text is accompanied by various additional information such as
authors and tags. Such metadata serve as compelling topic indicators and should
be leveraged into the categorization framework; (2) label scarcity: labeled
training samples are expensive to obtain in some cases, where categorization
needs to be performed using only a small set of annotated data. In recognition
of these two challenges, we propose MetaCat, a minimally supervised framework
to categorize text with metadata. Specifically, we develop a generative process
describing the relationships between words, documents, labels, and metadata.
Guided by the generative model, we embed text and metadata into the same
semantic space to encode heterogeneous signals. Then, based on the same
generative process, we synthesize training samples to address the bottleneck of
label scarcity. We conduct a thorough evaluation on a wide range of datasets.
Experimental results prove the effectiveness of MetaCat over many competitive
baselines.
| 2023 | Computation and Language |
Predicting Declension Class from Form and Meaning | The noun lexica of many natural languages are divided into several declension
classes with characteristic morphological properties. Class membership is far
from deterministic, but the phonological form of a noun and/or its meaning can
often provide imperfect clues. Here, we investigate the strength of those
clues. More specifically, we operationalize this by measuring how much
information, in bits, we can glean about declension class from knowing the form
and/or meaning of nouns. We know that form and meaning are often also
indicative of grammatical gender (which, as we quantitatively verify, can
itself share information with declension class), so we also control for gender.
We find for two Indo-European languages (Czech and German) that form and
meaning respectively share significant amounts of information with class (and
contribute additional information above and beyond gender). The three-way
interaction between class, form, and meaning (given gender) is also
significant. Our study is important for two reasons: First, we introduce a new
method that provides additional quantitative support for a classic linguistic
finding that form and meaning are relevant for the classification of nouns into
declensions. Second, we show not only that individual declension classes
vary in the strength of their clues within a language, but also that these
variations themselves vary across languages.
| 2020 | Computation and Language |
Intermediate-Task Transfer Learning with Pretrained Models for Natural
Language Understanding: When and Why Does It Work? | While pretrained models such as BERT have shown large gains across natural
language understanding tasks, their performance can be improved by further
training the model on a data-rich intermediate task, before fine-tuning it on a
target task. However, it is still poorly understood when and why
intermediate-task training is beneficial for a given target task. To
investigate this, we perform a large-scale study on the pretrained RoBERTa
model with 110 intermediate-target task combinations. We further evaluate all
trained models with 25 probing tasks meant to reveal the specific skills that
drive transfer. We observe that intermediate tasks requiring high-level
inference and reasoning abilities tend to work best. We also observe that
target task performance is strongly correlated with higher-level abilities such
as coreference resolution. However, we fail to observe more granular
correlations between probing and target task performance, highlighting the need
for further work on broad-coverage probing benchmarks. We also observe evidence
that the forgetting of knowledge learned during pretraining may limit our
analysis, highlighting the need for further work on transfer learning methods
in these settings.
| 2020 | Computation and Language |
KLEJ: Comprehensive Benchmark for Polish Language Understanding | In recent years, a series of Transformer-based models unlocked major
improvements in general natural language understanding (NLU) tasks. Such a fast
pace of research would not be possible without general NLU benchmarks, which
allow for a fair comparison of the proposed methods. However, such benchmarks
are available only for a handful of languages. To alleviate this issue, we
introduce a comprehensive multi-task benchmark for Polish language
understanding, accompanied by an online leaderboard. It consists of a diverse
set of tasks, adopted from existing datasets for named entity recognition,
question-answering, textual entailment, and others. We also introduce a new
sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR).
To ensure a common evaluation scheme and promote models that generalize to
different NLU tasks, the benchmark includes datasets from varying domains and
applications. Additionally, we release HerBERT, a Transformer-based model
trained specifically for the Polish language, which has the best average
performance and obtains the best results for three out of nine tasks. Finally,
we provide an extensive evaluation, including several standard baselines and
recently proposed, multilingual Transformer-based models.
| 2020 | Computation and Language |
From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual
Transfer with Multilingual Transformers | Massively multilingual transformers pretrained with language modeling
objectives (e.g., mBERT, XLM-R) have become a de facto default transfer
paradigm for zero-shot cross-lingual transfer in NLP, offering unmatched
transfer performance. Current downstream evaluations, however, verify their
efficacy predominantly in transfer settings involving languages with sufficient
amounts of pretraining data, and with lexically and typologically close
languages. In this work, we analyze their limitations and show that
cross-lingual transfer via massively multilingual transformers, much like
transfer via cross-lingual word embeddings, is substantially less effective in
resource-lean scenarios and for distant languages. Our experiments,
encompassing three lower-level tasks (POS tagging, dependency parsing, NER), as
well as two high-level semantic tasks (NLI, QA), empirically correlate transfer
performance with linguistic similarity between the source and target languages,
but also with the size of pretraining corpora of target languages. We also
demonstrate a surprising effectiveness of inexpensive few-shot transfer (i.e.,
fine-tuning on a few target-language instances after fine-tuning in the source)
across the board. This suggests that additional research efforts should be
invested to reach beyond the limiting zero-shot conditions.
| 2020 | Computation and Language |
Using Noisy Self-Reports to Predict Twitter User Demographics | Computational social science studies often contextualize content analysis
within standard demographics. Since demographics are unavailable on many social
media platforms (e.g., Twitter), numerous studies have inferred demographics
automatically. Despite many studies presenting proof of concept inference of
race and ethnicity, training of practical systems remains elusive since there
are few annotated datasets. Existing datasets are small, inaccurate, or fail to
cover the four most common racial and ethnic groups in the United States. We
present a method to identify self-reports of race and ethnicity from Twitter
profile descriptions. Despite errors inherent in automated supervision, we
produce models with good performance when measured on gold standard self-report
survey data. The result is a reproducible method for creating large-scale
training resources for race and ethnicity.
| 2021 | Computation and Language |
We Need to Talk About Random Splits | Gorman and Bedrick (2019) argued for using random splits rather than standard
splits in NLP experiments. We argue that random splits, like standard splits,
lead to overly optimistic performance estimates. We can also split data in
biased or adversarial ways, e.g., training on short sentences and evaluating on
long ones. Biased sampling has been used in domain adaptation to simulate
real-world drift; this is known as the covariate shift assumption. In NLP,
however, even worst-case splits, maximizing bias, often under-estimate the
error observed on new samples of in-domain data, i.e., the data that models
should minimally generalize to at test time. This invalidates the covariate
shift assumption. Rather than using multiple random splits, future benchmarks
should ideally include multiple, independent test sets; if that is infeasible,
we argue that multiple biased splits lead to more realistic performance
estimates than multiple random splits.
| 2021 | Computation and Language |
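The contrast between a random split and a biased split is easy to make concrete: sort by sentence length and hold out the longest examples, so the test set is maximally unlike the training set. A toy sketch (the data and split fraction are illustrative):

```python
import random

def random_split(examples, test_frac=0.2, seed=0):
    ex = examples[:]
    random.Random(seed).shuffle(ex)
    cut = int(len(ex) * (1 - test_frac))
    return ex[:cut], ex[cut:]

def length_biased_split(examples, test_frac=0.2):
    ex = sorted(examples, key=len)        # shortest first
    cut = int(len(ex) * (1 - test_frac))
    return ex[:cut], ex[cut:]             # test set = the longest sentences

sentences = ["short one .", "a slightly longer sentence here .",
             "tiny .", "this is a much much longer sentence than the others ."] * 5
train, test = length_biased_split(sentences)
print(max(map(len, train)) <= min(map(len, test)))  # True: maximal length bias
```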
Explainable Link Prediction for Emerging Entities in Knowledge Graphs | Despite their large-scale coverage, cross-domain knowledge graphs invariably
suffer from inherent incompleteness and sparsity. Link prediction can alleviate
this by inferring a target entity, given a source entity and a query relation.
Recent embedding-based approaches operate in an uninterpretable latent semantic
vector space of entities and relations, while path-based approaches operate in
the symbolic space, making the inference process explainable. However, these
approaches typically consider static snapshots of the knowledge graphs,
severely restricting their applicability for evolving knowledge graphs with
newly emerging entities. To overcome this issue, we propose an inductive
representation learning framework that is able to learn representations of
previously unseen entities. Our method finds reasoning paths between source and
target entities, thereby making the link prediction for unseen entities
interpretable and providing support evidence for the inferred link.
| 2020 | Computation and Language |
Spatial Dependency Parsing for Semi-Structured Document Information
Extraction | Information Extraction (IE) for semi-structured document images is often
approached as a sequence tagging problem by classifying each recognized input
token into one of the IOB (Inside, Outside, and Beginning) categories. However,
such a problem setup has two inherent limitations: (1) it cannot easily
handle complex spatial relationships, and (2) it is not suitable for highly
structured information, both of which are nevertheless frequently observed in
real-world document images. To tackle these issues, we first formulate the IE
task as a spatial dependency parsing problem that focuses on the relationships
among text tokens in the documents. Under this setup, we then propose SPADE
(SPAtial DEpendency parser) that models highly complex spatial relationships
and an arbitrary number of information layers in the documents in an end-to-end
manner. We evaluate it on various kinds of documents such as receipts, name
cards, forms, and invoices, and show that it achieves a similar or better
performance compared to strong baselines, including a BERT-based IOB tagger.
| 2021 | Computation and Language |
Syntactic Question Abstraction and Retrieval for Data-Scarce Semantic
Parsing | Deep learning approaches to semantic parsing require a large amount of
labeled data, but annotating complex logical forms is costly. Here, we propose
Syntactic Question Abstraction and Retrieval (SQAR), a method to build a neural
semantic parser that translates a natural language (NL) query to a SQL logical
form (LF) with less than 1,000 annotated examples. SQAR first retrieves a
logical pattern from the training data by computing the similarity between NL
queries, and then grounds lexical information in the retrieved pattern in
order to generate the final LF. We validate SQAR by training models using
various small subsets of WikiSQL train data achieving up to 4.9% higher LF
accuracy compared to the previous state-of-the-art models on WikiSQL test set.
We also show that by using query similarity to retrieve logical patterns, SQAR
can leverage a paraphrasing dataset, achieving up to 5.9% higher LF accuracy
compared to the case where SQAR is trained by using only WikiSQL data. In
contrast to a simple pattern classification approach, SQAR can generate unseen
logical patterns upon the addition of new examples without re-training the
model. We also discuss an ideal way to create cost-efficient and robust training
datasets when the data distribution can be approximated under a data-hungry
setting.
| 2020 | Computation and Language |
Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question
Answering | Existing work on augmenting question answering (QA) models with external
knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations
efficiently or lacks transparency into the model's prediction rationale. In
this paper, we propose a novel knowledge-aware approach that equips pre-trained
language models (PTLMs) with a multi-hop relational reasoning module, named
multi-hop graph relation network (MHGRN). It performs multi-hop,
multi-relational reasoning over subgraphs extracted from external knowledge
graphs. The proposed reasoning module unifies path-based reasoning methods and
graph neural networks to achieve better interpretability and scalability. We
also empirically show its effectiveness and scalability on CommonsenseQA and
OpenbookQA datasets, and interpret its behaviors with case studies.
| 2020 | Computation and Language |
Text and Causal Inference: A Review of Using Text to Remove Confounding
from Causal Estimates | Many applications of computational social science aim to infer causal
conclusions from non-experimental data. Such observational data often contains
confounders, variables that influence both potential causes and potential
effects. Unmeasured or latent confounders can bias causal estimates, and this
has motivated interest in measuring potential confounders from observed text.
For example, an individual's entire history of social media posts or the
content of a news article could provide a rich measurement of multiple
confounders. Yet, methods and applications for this problem are scattered
across different communities and evaluation practices are inconsistent. This
review is the first to gather and categorize these examples and provide a guide
to data-processing and evaluation decisions. Despite increased attention on
adjusting for confounding using text, there are still many open problems, which
we highlight in this paper.
| 2020 | Computation and Language |
An Information Bottleneck Approach for Controlling Conciseness in
Rationale Extraction | Decisions of complex language understanding models can be rationalized by
limiting their inputs to a relevant subsequence of the original text. A
rationale should be as concise as possible without significantly degrading task
performance, but this balance can be difficult to achieve in practice. In this
paper, we show that it is possible to better manage this trade-off by
optimizing a bound on the Information Bottleneck (IB) objective. Our fully
unsupervised approach jointly learns an explainer that predicts sparse binary
masks over sentences, and an end-task predictor that considers only the
extracted rationale. Using IB, we derive a learning objective that allows
direct control of mask sparsity levels through a tunable sparse prior.
Experiments on ERASER benchmark tasks demonstrate significant gains over
norm-minimization techniques for both task performance and agreement with human
rationales. Furthermore, we find that in the semi-supervised setting, a modest
amount of gold rationales (25% of training examples) closes the gap with a
model that uses the full input.
| 2020 | Computation and Language |
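The sparsity control can be sketched as a regularizer: penalize the divergence between the explainer's per-sentence keep-probabilities and a sparse Bernoulli prior, added to the end-task loss. This is a simplified stand-in for the paper's bound on the IB objective; `prior` and `beta` are assumed tuning knobs.

```python
import torch

def bernoulli_kl(p, prior):
    """KL divergence between Bernoulli(p) and Bernoulli(prior), averaged over sentences."""
    p = p.clamp(1e-6, 1 - 1e-6)
    return (p * torch.log(p / prior) + (1 - p) * torch.log((1 - p) / (1 - prior))).mean()

def ib_loss(task_loss, mask_probs, prior=0.2, beta=1.0):
    """task_loss: scalar end-task loss; mask_probs: explainer's P(keep sentence)."""
    return task_loss + beta * bernoulli_kl(mask_probs, torch.tensor(prior))

mask_probs = torch.sigmoid(torch.randn(8))   # one keep-probability per sentence
task_loss = torch.tensor(0.7)                # stand-in for the end-task predictor's loss
print(ib_loss(task_loss, mask_probs))        # lowering `prior` pushes the masks sparser
```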
GenericsKB: A Knowledge Base of Generic Statements | We present a new resource for the NLP community, namely a large (3.5M+
sentence) knowledge base of *generic statements*, e.g., "Trees remove carbon
dioxide from the atmosphere", collected from multiple corpora. This is the
first large resource to contain *naturally occurring* generic sentences, as
opposed to extracted or crowdsourced triples, and thus is rich in high-quality,
general, semantically complete statements. All GenericsKB sentences are
annotated with their topical term, surrounding context (sentences), and a
(learned) confidence. We also release GenericsKB-Best (1M+ sentences),
containing the best-quality generics in GenericsKB augmented with selected,
synthesized generics from WordNet and ConceptNet. In tests on two existing
datasets requiring multihop reasoning (OBQA and QASC), we find using GenericsKB
can result in higher scores and better explanations than using a much larger
corpus. This demonstrates that GenericsKB can be a useful resource for NLP
applications, as well as providing data for linguistic studies of generics and
their semantics. GenericsKB is available at
https://allenai.org/data/genericskb.
| 2020 | Computation and Language |
On Faithfulness and Factuality in Abstractive Summarization | It is well known that the standard likelihood training and approximate
decoding objectives in neural text generation models lead to less human-like
responses for open-ended tasks such as language modeling and story generation.
In this paper we have analyzed limitations of these models for abstractive
document summarization and found that these models are highly prone to
hallucinate content that is unfaithful to the input document. We conducted a
large scale human evaluation of several neural abstractive summarization
systems to better understand the types of hallucinations they produce. Our
human annotators found substantial amounts of hallucinated content in all model
generated summaries. However, our analysis does show that pretrained models are
better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in
generating faithful and factual summaries as evaluated by humans. Furthermore,
we show that textual entailment measures better correlate with faithfulness
than standard metrics, potentially leading the way to automatic evaluation
metrics as well as training and decoding criteria.
| 2020 | Computation and Language |
Benchmarking Multimodal Regex Synthesis with Complex Structures | Existing datasets for regular expression (regex) generation from natural
language are limited in complexity; compared to regex tasks that users post on
StackOverflow, the regexes in these datasets are simple, and the language used
to describe them is not diverse. We introduce StructuredRegex, a new regex
synthesis dataset differing from prior ones in three aspects. First, to obtain
structurally complex and realistic regexes, we generate the regexes using a
probabilistic grammar with pre-defined macros observed from real-world
StackOverflow posts. Second, to obtain linguistically diverse natural language
descriptions, we show crowdworkers abstract depictions of the underlying regex
and ask them to describe the pattern they see, rather than having them
paraphrase synthetic language. Third, we augment each regex example with a
collection of strings that are and are not matched by the ground truth regex,
similar to how real users give examples. Our quantitative and qualitative
analysis demonstrates the advantages of StructuredRegex over prior datasets.
Further experimental results using various multimodal synthesis techniques
highlight the challenge presented by our dataset, including non-local
constraints and multi-modal inputs.
| 2020 | Computation and Language |
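Each StructuredRegex example pairs a description with positive and negative strings, the way real users give examples. A tiny illustration of that format (the regex, description, and strings are invented for display):

```python
import re

# "Starts with one capital, then two to five lowercase letters, ends with two digits."
regex = r"^[A-Z][a-z]{2,5}\d{2}$"
positives = ["Abc12", "Hello99"]
negatives = ["abc12", "Hello999", "A1"]
for s in positives + negatives:
    print(s, bool(re.fullmatch(regex, s)))
```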
Contrastive Self-Supervised Learning for Commonsense Reasoning | We propose a self-supervised method to solve Pronoun Disambiguation and
Winograd Schema Challenge problems. Our approach exploits the characteristic
structure of training corpora related to so-called "trigger" words, which are
responsible for flipping the answer in pronoun disambiguation. We achieve such
commonsense reasoning by constructing pair-wise contrastive auxiliary
predictions. To this end, we leverage a mutual exclusive loss regularized by a
contrastive margin. Our architecture is based on the recently introduced
transformer network BERT, which exhibits strong performance on many NLP
benchmarks. Empirical results show that our method alleviates the limitation of
current supervised approaches for commonsense reasoning. This study opens up
avenues for exploiting inexpensive self-supervision to achieve performance gain
in commonsense reasoning tasks.
| 2020 | Computation and Language |
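The pairwise contrastive construction can be sketched as a margin loss over candidate scores: the language-model score of the correct resolution should exceed the score of the trigger-flipped alternative by a margin. The scores below are placeholders, and this plain hinge formulation is a simplified stand-in for the paper's mutual-exclusive loss with a contrastive margin.

```python
import torch
import torch.nn.functional as F

def contrastive_margin_loss(score_correct, score_wrong, margin=0.5):
    # Hinge on the score gap: zero loss once the correct candidate wins by >= margin.
    return F.relu(margin - (score_correct - score_wrong)).mean()

score_correct = torch.tensor([1.2, 0.8])  # e.g., LM scores of the right resolution
score_wrong = torch.tensor([0.9, 1.1])    # scores after swapping the trigger word
print(contrastive_margin_loss(score_correct, score_wrong))
```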
DagoBERT: Generating Derivational Morphology with a Pretrained Language
Model | Can pretrained language models (PLMs) generate derivationally complex words?
We present the first study investigating this question, taking BERT as the
example PLM. We examine BERT's derivational capabilities in different settings,
ranging from using the unmodified pretrained model to full finetuning. Our best
model, DagoBERT (Derivationally and generatively optimized BERT), clearly
outperforms the previous state of the art in derivation generation (DG).
Furthermore, our experiments show that the input segmentation crucially impacts
BERT's derivational knowledge, suggesting that the performance of PLMs could be
further improved if a morphologically informed vocabulary of units were used.
| 2020 | Computation and Language |
Opportunistic Decoding with Timely Correction for Simultaneous
Translation | Simultaneous translation has many important application scenarios and
has recently attracted much attention from both academia and industry. Most existing
frameworks, however, have difficulties in balancing between the translation
quality and latency, i.e., the decoding policy is usually either too aggressive
or too conservative. We propose an opportunistic decoding technique with timely
correction ability, which always (over-)generates a certain amount of extra
words at each step to keep the audience on track with the latest information.
At the same time, it also corrects, in a timely fashion, the mistakes in the
former overgenerated words when observing more source context to ensure high
translation quality. Experiments show our technique achieves substantial
reduction in latency and up to +3.1 increase in BLEU, with revision rate under
8% in Chinese-to-English and English-to-Chinese translation.
| 2020 | Computation and Language |
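The decoding policy can be mimicked with a stub incremental translator: at each step, decode from the source prefix read so far, over-generate a few extra target words, and silently revise earlier output if the new hypothesis disagrees. Everything below (the stub model and its revision rule) is an invented illustration of the policy, not the paper's system.

```python
def simultaneous_decode(source_words, translate_prefix, extra=2):
    committed = []
    for t in range(1, len(source_words) + 1):
        # Over-generate: decode a few words beyond the safely supported prefix.
        hypothesis = translate_prefix(source_words[:t], t + extra)
        # Timely correction: earlier output may be revised, not only extended.
        committed = hypothesis
        print(f"after reading {t} source word(s): {' '.join(committed)}")
    return committed

def toy_model(prefix, max_len):
    # Stub translator: uppercase each word; revises its 2nd guess at step 3.
    out = [w.upper() for w in prefix]
    if len(prefix) >= 3:
        out[1] += "'"
    return out[:max_len]

simultaneous_decode(["guten", "morgen", "welt"], toy_model)
```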
Birds have four legs?! NumerSense: Probing Numerical Commonsense
Knowledge of Pre-trained Language Models | Recent works show that pre-trained language models (PTLMs), such as BERT,
possess certain commonsense and factual knowledge. They suggest that it is
promising to use PTLMs as "neural knowledge bases" via predicting masked words.
Surprisingly, we find that this may not work for numerical commonsense
knowledge (e.g., a bird usually has two legs). In this paper, we investigate
whether and to what extent we can induce numerical commonsense knowledge from
PTLMs as well as the robustness of this process. To study this, we introduce a
novel probing task with a diagnostic dataset, NumerSense, containing 13.6k
masked-word-prediction probes (10.5k for fine-tuning and 3.1k for testing). Our
analysis reveals that: (1) BERT and its stronger variant RoBERTa perform poorly
on the diagnostic dataset prior to any fine-tuning; (2) fine-tuning with
distant supervision brings some improvement; (3) the best supervised model
still performs poorly as compared to human performance (54.06% vs 96.3% in
accuracy).
| 2020 | Computation and Language |
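The probing setup is reproducible in a few lines with a fill-mask pipeline; the prompt format below follows the example in the abstract and is an assumption rather than the dataset's exact template.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("birds have [MASK] legs.", top_k=3):
    print(f"{pred['token_str']:>10s}  {pred['score']:.3f}")
# BERT often ranks "four" above "two" here, the failure mode the paper probes.
```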
An Imitation Game for Learning Semantic Parsers from User Interaction | Despite their widely successful applications, bootstrapping and fine-tuning
semantic parsers are still a tedious process with challenges such as costly
data annotation and privacy risks. In this paper, we suggest an alternative,
human-in-the-loop methodology for learning semantic parsers directly from
users. A semantic parser should be introspective of its uncertainties and
prompt for user demonstration when uncertain. In doing so it also gets to
imitate the user behavior and continue improving itself autonomously with the
hope that eventually it may become as good as the user in interpreting their
questions. To combat the sparsity of demonstration, we propose a novel
annotation-efficient imitation learning algorithm, which iteratively collects
new datasets by mixing demonstrated states and confident predictions and
re-trains the semantic parser in a Dataset Aggregation fashion (Ross et al.,
2011). We provide a theoretical analysis of its cost bound and also empirically
demonstrate its promising performance on the text-to-SQL problem. Code will be
available at https://github.com/sunlab-osu/MISP.
| 2,020 | Computation and Language |
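The Dataset Aggregation loop described above might look roughly like the following sketch; the `parser` and `user` interfaces are hypothetical stand-ins, not the MISP API:

    def imitation_training(parser, user, seed_data, rounds=5):
        # DAgger-style loop (Ross et al., 2011): mix confident predictions
        # with user demonstrations requested only when the parser is uncertain.
        dataset = list(seed_data)
        for _ in range(rounds):
            parser.train(dataset)
            for question in user.sample_questions():
                parse, confidence = parser.predict(question)
                if confidence < parser.threshold:
                    parse = user.demonstrate(question)  # prompt the user
                dataset.append((question, parse))       # aggregate
        parser.train(dataset)
        return parser
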
Connecting the Dots: A Knowledgeable Path Generator for Commonsense
Question Answering | Commonsense question answering (QA) requires background knowledge which is
not explicitly stated in a given context. Prior works use commonsense knowledge
graphs (KGs) to obtain this knowledge for reasoning. However, relying entirely
on these KGs may not suffice, considering their limited coverage and the
contextual dependence of their knowledge. In this paper, we augment a general
commonsense QA framework with a knowledgeable path generator. By extrapolating
over existing paths in a KG with a state-of-the-art language model, our
generator learns to connect a pair of entities in text with a dynamic, and
potentially novel, multi-hop relational path. Such paths can provide structured
evidence for solving commonsense questions without fine-tuning the path
generator. Experiments on two datasets show the superiority of our method over
previous works which fully rely on knowledge from KGs (with up to 6%
improvement in accuracy), across various amounts of training data. Further
evaluation suggests that the generated paths are typically interpretable,
novel, and relevant to the task.
| 2,020 | Computation and Language |
Design Challenges in Low-resource Cross-lingual Entity Linking | Cross-lingual Entity Linking (XEL), the problem of grounding mentions of
entities in a foreign language text into an English knowledge base such as
Wikipedia, has seen a lot of research in recent years, with a range of
promising techniques. However, current techniques do not rise to the challenges
introduced by text in low-resource languages (LRL) and, surprisingly, fail to
generalize to text not taken from Wikipedia, on which they are usually trained.
This paper provides a thorough analysis of low-resource XEL techniques,
focusing on the key step of identifying candidate English Wikipedia titles that
correspond to a given foreign language mention. Our analysis indicates that
current methods are limited by their reliance on Wikipedia's interlanguage
links and thus suffer when the foreign language's Wikipedia is small. We
conclude that the LRL setting requires the use of outside-Wikipedia
cross-lingual resources and present a simple yet effective zero-shot XEL
system, QuEL, that utilizes search engine query logs. With experiments on 25
languages, QuEL shows an average increase of 25% in gold candidate recall and
of 13% in end-to-end linking accuracy over state-of-the-art baselines.
| 2,020 | Computation and Language |
Are Emojis Emotional? A Study to Understand the Association between
Emojis and Emotions | Given the growing ubiquity of emojis in language, there is a need for methods
and resources that shed light on their meaning and communicative role. One
conspicuous aspect of emojis is their use to convey affect in ways that may
otherwise be non-trivial to achieve. In this paper, we seek to explore the
connection between emojis and emotions by means of a new dataset consisting of
human-solicited association ratings. We additionally conduct experiments to
assess to what extent such associations can be inferred from existing data,
such that similar associations can be predicted for a larger set of emojis. Our
experiments show that this succeeds when high-quality word-level information is
available.
| 2,020 | Computation and Language |
Robust and Interpretable Grounding of Spatial References with Relation
Networks | Learning representations of spatial references in natural language is a key
challenge in tasks like autonomous navigation and robotic manipulation. Recent
work has investigated various neural architectures for learning multi-modal
representations for spatial concepts. However, the lack of explicit reasoning
over entities makes such approaches vulnerable to noise in input text or state
observations. In this paper, we develop effective models for understanding
spatial references in text that are robust and interpretable, without
sacrificing performance. We design a text-conditioned relation network
whose parameters are dynamically computed with a cross-modal attention module
to capture fine-grained spatial relations between entities. This design choice
provides interpretability of learned intermediate outputs. Experiments across
three tasks demonstrate that our model achieves superior performance, with a
17% improvement in predicting goal locations and a 15% improvement in
robustness compared to state-of-the-art systems.
| 2,020 | Computation and Language |
DeFormer: Decomposing Pre-trained Transformers for Faster Question
Answering | Transformer-based QA models use input-wide self-attention -- i.e. across both
the question and the input passage -- at all layers, causing them to be slow
and memory-intensive. It turns out that we can get by without input-wide
self-attention at all layers, especially in the lower layers. We introduce
DeFormer, a decomposed transformer, which substitutes the full self-attention
with question-wide and passage-wide self-attentions in the lower layers. This
allows for question-independent processing of the input text representations,
which in turn enables pre-computing passage representations, reducing runtime
compute drastically. Furthermore, because DeFormer is largely similar to the
original model, we can initialize DeFormer with the pre-training weights of a
standard transformer, and directly fine-tune on the target QA dataset. We show
DeFormer versions of BERT and XLNet can be used to speed up QA by over 4.3x and
with simple distillation-based losses they incur only a 1% drop in accuracy. We
open source the code at https://github.com/StonyBrookNLP/deformer.
| 2,020 | Computation and Language |
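A toy PyTorch sketch of the decomposition idea; sharing one encoder for both sides and the layer sizes are simplifications (DeFormer initializes its lower layers from a pretrained transformer):

    import torch
    import torch.nn as nn

    layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
    lower = nn.TransformerEncoder(layer, num_layers=3)

    def encode_passage_offline(passage_emb):
        # Lower layers see only the passage, so this output can be
        # precomputed and cached once per passage.
        return lower(passage_emb)

    def lower_forward(question_emb, cached_passage):
        q = lower(question_emb)                   # question-wide attention only
        return torch.cat([q, cached_passage], 1)  # upper layers attend jointly

    cached = encode_passage_offline(torch.randn(1, 100, 256))
    joint = lower_forward(torch.randn(1, 12, 256), cached)
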
Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer | Multilingual representations embed words from many languages into a single
semantic space such that words with similar meanings are close to each other
regardless of the language. These embeddings have been widely used in various
settings, such as cross-lingual transfer, where a natural language processing
(NLP) model trained on one language is deployed to another language. While the
cross-lingual transfer techniques are powerful, they carry gender bias from the
source to target languages. In this paper, we study gender bias in multilingual
embeddings and how it affects transfer learning for NLP applications. We create
a multilingual dataset for bias analysis and propose several ways for
quantifying bias in multilingual representations from both the intrinsic and
extrinsic perspectives. Experimental results show that the magnitude of bias in
the multilingual representations changes differently when we align the
embeddings to different target spaces and that the alignment direction can also
have an influence on the bias in transfer learning. We further provide
recommendations for using the multilingual word representations for downstream
tasks.
| 2,020 | Computation and Language |
UnifiedQA: Crossing Format Boundaries With a Single QA System | Question answering (QA) tasks have been posed using a variety of formats,
such as extractive span selection, multiple choice, etc. This has led to
format-specialized models, and even to an implicit division in the QA
community. We argue that such boundaries are artificial and perhaps
unnecessary, given the reasoning abilities we seek to teach are not governed by
the format. As evidence, we use the latest advances in language modeling to
build a single pre-trained QA model, UnifiedQA, that performs surprisingly well
across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par
with 9 different models that were trained on individual datasets themselves.
Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs
surprisingly well, showing strong generalization from its out-of-format
training data. Finally, simply fine-tuning this pre-trained QA model into
specialized models results in a new state of the art on 6 datasets,
establishing UnifiedQA as a strong starting point for building QA systems.
| 2,020 | Computation and Language |
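Crossing format boundaries amounts to serializing every QA format into one text-to-text input; a sketch in the spirit of UnifiedQA's encoding (the separator and lowercasing conventions below approximate, but may not exactly match, the released format):

    def to_text2text(question, context=None, choices=None):
        # One input string regardless of format: extractive, multiple
        # choice, or open-ended.
        parts = [question.lower()]
        if choices:
            letters = "abcdefgh"
            parts.append(" ".join(
                "({}) {}".format(letters[i], c) for i, c in enumerate(choices)))
        if context:
            parts.append(context.lower())
        return " \\n ".join(parts)

    print(to_text2text("What is the capital of France?",
                       choices=["Paris", "Lyon"]))
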
Expertise Style Transfer: A New Task Towards Better Communication
between Experts and Laymen | The curse of knowledge can impede communication between experts and laymen.
We propose a new task of expertise style transfer and contribute a manually
annotated dataset with the goal of alleviating such cognitive biases. Solving
this task not only simplifies the professional language, but also improves the
accuracy and expertise level of laymen descriptions using simple words. This is
a challenging task, unaddressed in previous work, as it requires the models to
have expert intelligence in order to modify text with a deep understanding of
domain knowledge and structures. We establish the benchmark performance of five
state-of-the-art models for style transfer and text simplification. The results
demonstrate a significant gap between machine and human performance. We also
discuss the challenges of automatic evaluation, to provide insights into future
research directions. The dataset is publicly available at
https://srhthu.github.io/expertise-style-transfer.
| 2,020 | Computation and Language |
A Girl Has A Name: Detecting Authorship Obfuscation | Authorship attribution aims to identify the author of a text based on the
stylometric analysis. Authorship obfuscation, on the other hand, aims to
protect against authorship attribution by modifying a text's style. In this
paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation
methods under an adversarial threat model. An obfuscator is stealthy to the
extent an adversary finds it challenging to detect whether or not a text
modified by the obfuscator is obfuscated - a decision that is key to the
adversary interested in authorship attribution. We show that the existing
authorship obfuscation methods are not stealthy as their obfuscated texts can
be identified with an average F1 score of 0.87. The reason for the lack of
stealthiness is that these obfuscators degrade text smoothness, as ascertained
by neural language models, in a detectable manner. Our results highlight the
need to develop stealthy authorship obfuscation methods that can better protect
the identity of an author seeking anonymity.
| 2,020 | Computation and Language |
AVA: an Automatic eValuation Approach to Question Answering Systems | We introduce AVA, an automatic evaluation approach for Question Answering,
which, given a set of questions associated with Gold Standard answers, can
estimate system Accuracy. AVA uses Transformer-based language models to encode
question, answer, and reference text. This allows for effectively measuring the
similarity between the reference and an automatic answer, biased towards the
question semantics. To design, train and test AVA, we built multiple large
training, development, and test sets on both public and industrial benchmarks.
Our innovative solutions achieve up to 74.7% in F1 score in predicting human
judgement for single answers. Additionally, AVA can be used to evaluate the
overall system Accuracy with an RMSE, ranging from 0.02 to 0.09, depending on
the availability of multiple references.
| 2,021 | Computation and Language |
A Benchmark for Structured Procedural Knowledge Extraction from Cooking
Videos | Watching instructional videos is a common way to learn about procedures.
Video captioning is one way of automatically collecting such knowledge. However,
it provides only an indirect, overall evaluation of multimodal models, with no
finer-grained quantitative measure of what they have learned. We propose,
instead, a benchmark of structured procedural knowledge extracted from cooking
videos. This work is complementary to existing tasks, but requires models to
produce interpretable structured knowledge in the form of verb-argument tuples.
Our manually annotated open-vocabulary resource includes 356 instructional
cooking videos and 15,523 video clip/sentence-level annotations. Our analysis
shows that the proposed task is challenging and standard modeling approaches
like unsupervised segmentation, semantic role labeling, and visual action
detection perform poorly when forced to predict every action of a procedure in
a structured form.
| 2,020 | Computation and Language |
Probing the Probing Paradigm: Does Probing Accuracy Entail Task
Relevance? | Although neural models have achieved impressive results on several NLP
benchmarks, little is understood about the mechanisms they use to perform
language tasks. Thus, much recent attention has been devoted to analyzing the
sentence representations learned by neural encoders, through the lens of
`probing' tasks. However, to what extent was the information encoded in
sentence representations, as discovered through a probe, actually used by the
model to perform its task? In this work, we examine this probing paradigm
through a case study in Natural Language Inference, showing that models can
learn to encode linguistic properties even if they are not needed for the task
on which the model was trained. We further identify that pretrained word
embeddings play a considerable role in encoding these properties rather than
the training task itself, highlighting the importance of careful controls when
designing probing experiments. Finally, through a set of controlled synthetic
tasks, we demonstrate models can encode these properties considerably above
chance-level even when distributed in the data as random noise, calling into
question the interpretation of absolute claims on probing tasks.
| 2,021 | Computation and Language |
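The probing paradigm under scrutiny here is typically just a shallow classifier over frozen representations; a minimal scikit-learn sketch (random stand-ins replace real encoder outputs and linguistic labels):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # High probe accuracy shows a property is decodable from the
    # representations -- not, as the paper argues, that the model uses it.
    reps = np.random.randn(1000, 768)       # frozen sentence representations
    labels = np.random.randint(0, 2, 1000)  # some linguistic property
    probe = LogisticRegression(max_iter=1000).fit(reps[:800], labels[:800])
    print("probe accuracy:", probe.score(reps[800:], labels[800:]))
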
Obtaining Faithful Interpretations from Compositional Neural Networks | Neural module networks (NMNs) are a popular approach for modeling
compositionality: they achieve high accuracy when applied to problems in
language and vision, while reflecting the compositional structure of the
problem in the network architecture. However, prior work implicitly assumed
that the structure of the network modules, describing the abstract reasoning
process, provides a faithful explanation of the model's reasoning; that is,
that all modules perform their intended behaviour. In this work, we propose and
conduct a systematic evaluation of the intermediate outputs of NMNs on NLVR2
and DROP, two datasets which require composing multiple reasoning steps. We
find that the intermediate outputs differ from the expected output,
illustrating that the network structure does not provide a faithful explanation
of model behaviour. To remedy that, we train the model with auxiliary
supervision and propose particular choices for module architecture that yield
much better faithfulness, at a minimal cost to accuracy.
| 2,020 | Computation and Language |
RMM: A Recursive Mental Model for Dialog Navigation | Language-guided robots must be able to both ask humans questions and
understand answers. Much existing work focuses only on the latter. In this
paper, we go beyond instruction following and introduce a two-agent task where
one agent navigates and asks questions that a second, guiding agent answers.
Inspired by theory of mind, we propose the Recursive Mental Model (RMM). The
navigating agent models the guiding agent to simulate answers given candidate
generated questions. The guiding agent in turn models the navigating agent to
simulate navigation steps it would take to generate answers. We use the
progress agents make towards the goal as a reinforcement learning reward signal
to directly inform not only navigation actions, but also both question and
answer generation. We demonstrate that RMM enables better generalization to
novel environments. Interlocutor modelling may be a way forward for human-agent
dialogue where robots need to both ask and answer questions.
| 2,020 | Computation and Language |
ESPRIT: Explaining Solutions to Physical Reasoning Tasks | Neural networks lack the ability to reason about qualitative physics and so
cannot generalize to scenarios and tasks unseen during training. We propose
ESPRIT, a framework for commonsense reasoning about qualitative physics in
natural language that generates interpretable descriptions of physical events.
We use a two-step approach of first identifying the pivotal physical events in
an environment and then generating natural language descriptions of those
events using a data-to-text approach. Our framework learns to generate
explanations of how the physical simulation will causally evolve so that an
agent or a human can easily reason about a solution using those interpretable
descriptions. Human evaluations indicate that ESPRIT produces crucial
fine-grained details and has high coverage of physical concepts compared to
even human annotations. Dataset, code and documentation are available at
https://github.com/salesforce/esprit.
| 2,020 | Computation and Language |
Hard-Coded Gaussian Attention for Neural Machine Translation | Recent work has questioned the importance of the Transformer's multi-headed
attention for achieving high translation quality. We push further in this
direction by developing a "hard-coded" attention variant without any learned
parameters. Surprisingly, replacing all learned self-attention heads in the
encoder and decoder with fixed, input-agnostic Gaussian distributions minimally
impacts BLEU scores across four different language pairs. However, additionally
hard-coding cross attention (which connects the decoder to the encoder)
significantly lowers BLEU, suggesting that it is more important than
self-attention. Much of this BLEU drop can be recovered by adding just a single
learned cross attention head to an otherwise hard-coded Transformer. Taken as a
whole, our results offer insight into which components of the Transformer are
actually important, which we hope will guide future work into the development
of simpler and more efficient attention-based models.
| 2,020 | Computation and Language |
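A sketch of what an input-agnostic attention head looks like; the offset and standard deviation are illustrative choices, not the paper's tuned values:

    import torch

    def hardcoded_gaussian_attention(seq_len, offset=1, std=1.0):
        # Token i attends with a fixed Gaussian centered at position
        # i + offset: no learned parameters, no dependence on content.
        pos = torch.arange(seq_len, dtype=torch.float)
        centers = pos + offset
        logits = -((pos.unsqueeze(0) - centers.unsqueeze(1)) ** 2) / (2 * std ** 2)
        return torch.softmax(logits, dim=-1)   # (seq_len, seq_len)

    attn = hardcoded_gaussian_attention(6)
    context = attn @ torch.randn(6, 16)        # mix values with fixed weights
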
Synthesizer: Rethinking Self-Attention in Transformer Models | The dot product self-attention is known to be central and indispensable to
state-of-the-art Transformer models. But is it really required? This paper
investigates the true importance and contribution of the dot product-based
self-attention mechanism on the performance of Transformer models. Via
extensive experiments, we find that (1) random alignment matrices surprisingly
perform quite competitively and (2) learning attention weights from token-token
(query-key) interactions is useful but not that important after all. To this
end, we propose Synthesizer, a model that learns synthetic attention
weights without token-token interactions. In our experiments, we first show
that simple Synthesizers achieve highly competitive performance when compared
against vanilla Transformer models across a range of tasks, including machine
translation, language modeling, text generation and GLUE/SuperGLUE benchmarks.
When composed with dot product attention, we find that Synthesizers
consistently outperform Transformers. Moreover, we conduct additional
comparisons of Synthesizers against Dynamic Convolutions, showing that simple
Random Synthesizer is not only 60% faster but also improves perplexity by a
relative 3.5%. Finally, we show that simple factorized Synthesizers can
outperform Linformers on encoding only tasks.
| 2,021 | Computation and Language |
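A minimal Random Synthesizer head in PyTorch; the dimensions are illustrative:

    import torch
    import torch.nn as nn

    class RandomSynthesizer(nn.Module):
        # Synthetic attention without token-token interactions: the
        # alignment matrix is a learned parameter, independent of content.
        def __init__(self, max_len, d_model):
            super().__init__()
            self.align = nn.Parameter(torch.randn(max_len, max_len))
            self.value = nn.Linear(d_model, d_model)

        def forward(self, x):                    # x: (batch, len, d_model)
            n = x.size(1)
            weights = torch.softmax(self.align[:n, :n], dim=-1)
            return weights @ self.value(x)

    out = RandomSynthesizer(128, 64)(torch.randn(2, 10, 64))
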
BERT-kNN: Adding a kNN Search Component to Pretrained Language Models
for Better QA | Khandelwal et al. (2020) use a k-nearest-neighbor (kNN) component to improve
language model performance. We show that this idea is beneficial for
open-domain question answering (QA). To improve the recall of facts encountered
during training, we combine BERT (Devlin et al., 2019) with a traditional
information retrieval step (IR) and a kNN search over a large datastore of an
embedded text collection. Our contributions are as follows: i) BERT-kNN
outperforms BERT on cloze-style QA by large margins without any further
training. ii) We show that BERT often identifies the correct response category
(e.g., US city), but only kNN recovers the factually correct answer (e.g.,
"Miami"). iii) Compared to BERT, BERT-kNN excels for rare facts. iv) BERT-kNN
can easily handle facts not covered by BERT's training set, e.g., recent
events.
| 2,020 | Computation and Language |
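The kNN component can be sketched as retrieving the nearest stored contexts and interpolating their answer distribution with the model's own; this NumPy sketch assumes a flat datastore and illustrative hyperparameters:

    import numpy as np

    def knn_interpolated_probs(query, lm_probs, keys, values,
                               k=8, lam=0.5, tau=1.0):
        # keys: stored context embeddings; values: their target token ids.
        dists = np.linalg.norm(keys - query, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = np.exp(-dists[nearest] / tau)
        knn_probs = np.zeros_like(lm_probs)
        np.add.at(knn_probs, values[nearest], weights)
        knn_probs /= knn_probs.sum()
        return lam * knn_probs + (1.0 - lam) * lm_probs
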
Exploring and Predicting Transferability across NLP Tasks | Recent advances in NLP demonstrate the effectiveness of training large-scale
language models and transferring them to downstream tasks. Can fine-tuning
these models on tasks other than language modeling further improve performance?
In this paper, we conduct an extensive study of the transferability between 33
NLP tasks across three broad classes of problems (text classification, question
answering, and sequence labeling). Our results show that transfer learning is
more beneficial than previously thought, especially when target task data is
scarce, and can improve performance even when the source task is small or
differs substantially from the target task (e.g., part-of-speech tagging
transfers well to the DROP QA dataset). We also develop task embeddings that
can be used to predict the most transferable source tasks for a given target
task, and we validate their effectiveness in experiments controlled for source
and target data size. Overall, our experiments reveal that factors such as
source data size, task and domain similarity, and task complexity all play a
role in determining transferability.
| 2,020 | Computation and Language |
ProtoQA: A Question Answering Dataset for Prototypical Common-Sense
Reasoning | Given questions regarding some prototypical situation, such as "Name something
that people usually do before they leave the house for work?", a human can easily
answer them via acquired experience. There can be multiple right answers for
such questions, with some more common for a situation than others. This paper
introduces a new question answering dataset for training and evaluating common
sense reasoning capabilities of artificial intelligence systems in such
prototypical situations. The training set is gathered from an existing set of
questions played in a long-running international game show FAMILY- FEUD. The
hidden evaluation set is created by gathering answers for each question from
100 crowd-workers. We also propose a generative evaluation task where a model
has to output a ranked list of answers, ideally covering all prototypical
answers for a question. After presenting multiple competitive baseline models,
we find that human performance still exceeds model scores on all evaluation
metrics with a meaningful gap, supporting the challenging nature of the task.
| 2,020 | Computation and Language |
RICA: Evaluating Robust Inference Capabilities Based on Commonsense
Axioms | Pre-trained language models (PTLMs) have achieved impressive performance on
commonsense inference benchmarks, but their ability to employ commonsense to
make robust inferences, which is crucial for effective communications with
humans, is debated. In the pursuit of advancing fluid human-AI communication,
we propose a new challenge, RICA: Robust Inference capability based on
Commonsense Axioms, that evaluates robust commonsense inference despite textual
perturbations. To generate data for this challenge, we develop a systematic and
scalable procedure using commonsense knowledge bases and probe PTLMs across two
different evaluation settings. Extensive experiments on our generated probe
sets with more than 10k statements show that PTLMs perform no better than
random guessing on the zero-shot setting, are heavily impacted by statistical
biases, and are not robust to perturbation attacks. We also find that
fine-tuning on similar statements offers limited gains, as PTLMs still fail to
generalize to unseen inferences. Our new large-scale benchmark exposes a
significant gap between PTLMs and human-level language understanding and offers
a new challenge for PTLMs to demonstrate commonsense.
| 2,021 | Computation and Language |
Visually Grounded Continual Learning of Compositional Phrases | Humans acquire language continually with much more limited access to data
samples at a time, as compared to contemporary NLP systems. To study this
human-like language acquisition ability, we present VisCOLL, a visually
grounded language learning task, which simulates the continual acquisition of
compositional phrases from streaming visual scenes. In the task, models are
trained on a paired image-caption stream which has shifting object
distribution; while being constantly evaluated by a visually-grounded masked
language prediction task on held-out test sets. VisCOLL compounds the
challenges of continual learning (i.e., learning from continuously shifting
data distribution) and compositional generalization (i.e., generalizing to
novel compositions). To facilitate research on VisCOLL, we construct two
datasets, COCO-shift and Flickr-shift, and benchmark them using different
continual learning methods. Results reveal that SoTA continual learning
approaches provide little to no improvements on VisCOLL, since storing examples
of all possible compositions is infeasible. We conduct further ablations and
analysis to guide future work.
| 2,020 | Computation and Language |
Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected
Reasoning | Has there been real progress in multi-hop question-answering? Models often
exploit dataset artifacts to produce correct answers, without connecting
information across multiple supporting facts. This limits our ability to
measure true progress and defeats the purpose of building multi-hop QA
datasets. We make three contributions towards addressing this. First, we
formalize such undesirable behavior as disconnected reasoning across subsets of
supporting facts. This allows developing a model-agnostic probe for measuring
how much any model can cheat via disconnected reasoning. Second, using a notion
of contrastive support sufficiency, we introduce an automatic
transformation of existing datasets that reduces the amount of disconnected
reasoning. Third, our experiments suggest that there hasn't been much progress
in multi-hop QA in the reading comprehension setting. For a recent large-scale
model (XLNet), we show that only 18 points out of its answer F1 score of 72 on
HotpotQA are obtained through multifact reasoning, roughly the same as that of
a simpler RNN baseline. Our transformation substantially reduces disconnected
reasoning (19 points in answer F1). It is complementary to adversarial
approaches, yielding further reductions in conjunction.
| 2,020 | Computation and Language |
KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis | Cross-domain sentiment analysis has received significant attention in recent
years, prompted by the need to combat the domain gap between different
applications that make use of sentiment analysis. In this paper, we take a
novel perspective on this task by exploring the role of external commonsense
knowledge. We introduce a new framework, KinGDOM, which utilizes the ConceptNet
knowledge graph to enrich the semantics of a document by providing both
domain-specific and domain-general background concepts. These concepts are
learned by training a graph convolutional autoencoder that leverages
inter-domain concepts in a domain-invariant manner. Conditioning a popular
domain-adversarial baseline method with these learned concepts helps improve
its performance over state-of-the-art approaches, demonstrating the efficacy of
our proposed framework.
| 2,020 | Computation and Language |
A Simple Language Model for Task-Oriented Dialogue | Task-oriented dialogue is often decomposed into three tasks: understanding
user input, deciding actions, and generating a response. While such
decomposition might suggest a dedicated model for each sub-task, we find a
simple, unified approach leads to state-of-the-art performance on the MultiWOZ
dataset. SimpleTOD is a simple approach to task-oriented dialogue that uses a
single, causal language model trained on all sub-tasks recast as a single
sequence prediction problem. This allows SimpleTOD to fully leverage transfer
learning from pre-trained, open domain, causal language models such as GPT-2.
SimpleTOD improves over the prior state-of-the-art in joint goal accuracy for
dialogue state tracking, and our analysis reveals robustness to noisy
annotations in this setting. SimpleTOD also improves the main metrics used to
evaluate action decisions and response generation in an end-to-end setting:
inform rate by 8.1 points, success rate by 9.7 points, and combined score by
7.2 points.
| 2,022 | Computation and Language |
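Recasting the sub-tasks as one sequence can be sketched as simple string serialization; the delimiter tokens below are illustrative, not necessarily the paper's exact special tokens:

    def simpletod_style_sequence(user_turns, belief, actions, response):
        # One training sequence for a causal LM: dialogue context, then
        # belief state, then system actions, then the response.
        context = " ".join("<user> " + u for u in user_turns)
        return (context
                + " <belief> " + ", ".join(belief)
                + " <action> " + ", ".join(actions)
                + " <response> " + response)

    print(simpletod_style_sequence(
        ["i need a cheap hotel in the north"],
        ["hotel pricerange cheap", "hotel area north"],
        ["hotel inform choice"],
        "there are two cheap hotels in the north ."))
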
Treebank Embedding Vectors for Out-of-domain Dependency Parsing | A recent advance in monolingual dependency parsing is the idea of a treebank
embedding vector, which allows all treebanks for a particular language to be
used as training data while at the same time allowing the model to prefer
training data from one treebank over others and to select the preferred
treebank at test time. We build on this idea by 1) introducing a method to
predict a treebank vector for sentences that do not come from a treebank used
in training, and 2) exploring what happens when we move away from predefined
treebank embedding vectors during test time and instead devise tailored
interpolations. We show that 1) there are interpolated vectors that are
superior to the predefined ones, and 2) treebank vectors can be predicted with
sufficient accuracy, for nine out of ten test languages, to match the
performance of an oracle approach that knows the most suitable predefined
treebank embedding for the test set.
| 2,020 | Computation and Language |
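Tailored interpolation between predefined treebank embeddings can be sketched in a few lines; a NumPy sketch under the assumption that each treebank has a learned vector:

    import numpy as np

    def interpolate_treebank_vector(treebank_vectors, weights):
        # Convex combination of predefined treebank embeddings, used in
        # place of committing to a single treebank at test time.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return (np.stack(treebank_vectors) * w[:, None]).sum(axis=0)

    v = interpolate_treebank_vector(
        [np.random.randn(12), np.random.randn(12)], [0.7, 0.3])
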
Teaching Machine Comprehension with Compositional Explanations | Advances in machine reading comprehension (MRC) rely heavily on the
collection of large scale human-annotated examples in the form of (question,
paragraph, answer) triples. In contrast, humans are typically able to
generalize with only a few examples, relying on deeper underlying world
knowledge, linguistic sophistication, and/or simply superior deductive powers.
In this paper, we focus on "teaching" machines reading comprehension, using a
small number of semi-structured explanations that explicitly inform machines
why answer spans are correct. We extract structured variables and rules from
explanations and compose neural module teachers that annotate instances for
training downstream MRC models. We use learnable neural modules and soft logic
to handle linguistic variation and overcome sparse coverage; the modules are
jointly optimized with the MRC model to improve final performance. On the SQuAD
dataset, our proposed method achieves 70.14% F1 score with supervision from 26
explanations, comparable to plain supervised learning using 1,100 labeled
instances, yielding a 12x speed up.
| 2,020 | Computation and Language |
MultiQT: Multimodal Learning for Real-Time Question Tracking in Speech | We address a challenging and practical task of labeling questions in speech
in real time during telephone calls to emergency medical services in English,
which embeds within a broader decision support system for emergency
call-takers. We propose a novel multimodal approach to real-time sequence
labeling in speech. Our model treats speech and its own textual representation
as two separate modalities or views, as it jointly learns from streamed audio
and its noisy transcription into text via automatic speech recognition. Our
results show significant gains of jointly learning from the two modalities when
compared to text or audio only, under adverse noise and limited volume of
training data. The results generalize to medical symptoms detection where we
observe a similar pattern of improvements with multimodal learning.
| 2,020 | Computation and Language |
Social Biases in NLP Models as Barriers for Persons with Disabilities | Building equitable and inclusive NLP technologies demands consideration of
whether and how social attitudes are represented in ML models. In particular,
representations encoded in models often inadvertently perpetuate undesirable
social biases from the data on which they are trained. In this paper, we
present evidence of such undesirable biases towards mentions of disability in
two different English language models: toxicity prediction and sentiment
analysis. Next, we demonstrate that the neural embeddings that are the critical
first step in most NLP pipelines similarly contain undesirable biases towards
mentions of disability. We end by highlighting topical biases in the discourse
about disability which may contribute to the observed model biases; for
instance, gun violence, homelessness, and drug addiction are over-represented
in texts discussing mental illness.
| 2,020 | Computation and Language |
DQI: Measuring Data Quality in NLP | Neural language models have achieved human level performance across several
NLP datasets. However, recent studies have shown that these models are not
truly learning the desired task; rather, their high performance is attributed
to overfitting using spurious biases, which suggests that the capabilities of
AI systems have been over-estimated. We introduce a generic formula for Data
Quality Index (DQI) to help dataset creators create datasets free of such
unwanted biases. We evaluate this formula using a recently proposed approach
for adversarial filtering, AFLite. We propose a new data creation paradigm
using DQI to create higher quality data. The data creation paradigm consists of
several data visualizations to help data creators (i) understand the quality of
data and (ii) visualize the impact of the created data instance on the overall
quality. It also has a couple of automation methods to (i) assist data creators
and (ii) make the model more robust to adversarial attacks. We use DQI along
with these automation methods to renovate biased examples in SNLI. We show that
models trained on the renovated SNLI dataset generalize better to out of
distribution tasks. Renovation results in reduced model performance, exposing a
large gap with respect to human performance. DQI systematically helps in
creating harder benchmarks using active learning. Our work takes the process of
dynamic dataset creation forward, wherein datasets evolve together with the
evolving state of the art, therefore serving as a means of benchmarking the
true progress of AI.
| 2,020 | Computation and Language |
Generalized Entropy Regularization or: There's Nothing Special about
Label Smoothing | Prior work has explored directly regularizing the output distributions of
probabilistic models to alleviate peaky (i.e. over-confident) predictions, a
common sign of overfitting. This class of techniques, of which label smoothing
is one, has a connection to entropy regularization. Despite the consistent
success of label smoothing across architectures and data sets in language
generation tasks, two problems remain open: (1) there is little understanding
of the underlying effects entropy regularizers have on models, and (2) the full
space of entropy regularization techniques is largely unexplored. We introduce
a parametric family of entropy regularizers, which includes label smoothing as
a special case, and use it to gain a better understanding of the relationship
between the entropy of a model and its performance on language generation
tasks. We also find that variance in model performance can be explained largely
by the resulting entropy of the model. Lastly, we find that label smoothing
provably does not allow for sparsity in an output distribution, an undesirable
property for language generation models, and therefore advise the use of other
entropy regularization methods in its place.
| 2,020 | Computation and Language |
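Label smoothing's place in this family can be made concrete: up to an additive constant it is cross-entropy plus a KL penalty toward the uniform distribution. A PyTorch sketch, with an illustrative smoothing weight:

    import math
    import torch
    import torch.nn.functional as F

    def label_smoothing_loss(logits, target, beta=0.1):
        # (1 - beta) * NLL + beta * KL(u || p), with u uniform; this equals
        # standard label smoothing up to a constant that does not affect
        # gradients.
        log_p = F.log_softmax(logits, dim=-1)
        nll = F.nll_loss(log_p, target)
        kl_to_uniform = (-log_p.mean(dim=-1) - math.log(logits.size(-1))).mean()
        return (1 - beta) * nll + beta * kl_to_uniform

    loss = label_smoothing_loss(torch.randn(4, 10), torch.tensor([1, 2, 3, 4]))
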
Language Models as an Alternative Evaluator of Word Order Hypotheses: A
Case Study in Japanese | We examine a methodology using neural language models (LMs) for analyzing the
word order of language. This LM-based method has the potential to overcome the
difficulties existing methods face, such as the propagation of preprocessor
errors in count-based methods. In this study, we explore whether the LM-based
method is valid for analyzing word order. As a case study, we focus on
Japanese due to its complex and flexible word order. To validate the
LM-based method, we test (i) parallels between LMs and human word order
preference, and (ii) consistency of the results obtained using the LM-based
method with previous linguistic studies. Through our experiments, we
tentatively conclude that LMs display sufficient word order knowledge for usage
as an analysis tool. Finally, using the LM-based method, we demonstrate the
relationship between the canonical word order and topicalization, which had yet
to be analyzed by large-scale experiments.
| 2,020 | Computation and Language |
Sources of Transfer in Multilingual Named Entity Recognition | Named-entities are inherently multilingual, and annotations in any given
language may be limited. This motivates us to consider polyglot named-entity
recognition (NER), where one model is trained using annotated data drawn from
more than one language. However, a straightforward implementation of this
simple idea does not always work in practice: naive training of NER models
using annotated data drawn from multiple languages consistently underperforms
models trained on monolingual data alone, despite having access to more
training data. The starting point of this paper is a simple solution to this
problem, in which polyglot models are fine-tuned on monolingual data to
consistently and significantly outperform their monolingual counterparts. To
explain this phenomenon, we explore the sources of multilingual transfer in
polyglot NER models and examine the weight structure of polyglot models
compared to their monolingual counterparts. We find that polyglot models
efficiently share many parameters across languages and that fine-tuning may
utilize a large number of those parameters.
| 2,020 | Computation and Language |
ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine
Translation | We propose to train a non-autoregressive machine translation model to
minimize the energy defined by a pretrained autoregressive model. In
particular, we view our non-autoregressive translation system as an inference
network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher
energy. This contrasts with the popular approach of training a
non-autoregressive model on a distilled corpus consisting of the beam-searched
outputs of such a teacher model. Our approach, which we call ENGINE
(ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive
results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the
performance of autoregressive models.
| 2,020 | Computation and Language |
A language score based output selection method for multilingual speech
recognition | The quality of a multilingual speech recognition system can be improved by
adaptation methods if the input language is specified. For systems that can
accept multilingual inputs, the popular approach is to apply a language
identifier to the input and then switch or configure decoders accordingly, or
to use an additional downstream model to select the output from a set of
candidates. Motivated by the goal of reducing latency for real-time
applications, in this paper a language model rescoring method is first applied
to produce all possible candidates for the target languages; a simple score is
then proposed to select the output automatically, without any identifier model
or language specification of the input. The main point is that this score can
be estimated simply and automatically on the fly, so that the whole decoding
pipeline is simpler and more compact. Experimental results show that this
method achieves the same quality as when the input language is specified. In
addition, we design an English-Vietnamese end-to-end model that not only
handles cross-lingual speakers but also improves accuracy on English words
borrowed into Vietnamese.
| 2,020 | Computation and Language |
Predicting Performance for Natural Language Processing Tasks | Given the complexity of combinations of tasks, languages, and domains in
natural language processing (NLP) research, it is computationally prohibitive
to exhaustively test newly proposed models on each possible experimental
setting. In this work, we attempt to explore the possibility of gaining
plausible judgments of how well an NLP model can perform under an experimental
setting, without actually training or testing the model. To do so, we build
regression models to predict the evaluation score of an NLP experiment given
the experimental settings as input. Experimenting on 9 different NLP tasks, we
find that our predictors can produce meaningful predictions over unseen
languages and different modeling architectures, outperforming reasonable
baselines as well as human experts. Going further, we outline how our predictor
can be used to find a small subset of representative experiments that should be
run in order to obtain plausible predictions for all other experimental
settings.
| 2,020 | Computation and Language |
Single Model Ensemble using Pseudo-Tags and Distinct Vectors | Model ensemble techniques often increase task performance in neural networks;
however, they require increased time, memory, and management effort. In this
study, we propose a novel method that replicates the effects of a model
ensemble with a single model. Our approach creates K virtual models within a
single parameter space using K distinct pseudo-tags and K distinct vectors.
Experiments on text classification and sequence labeling tasks on several
datasets demonstrate that our method emulates or outperforms a traditional
model ensemble with only 1/K of the parameters.
| 2,020 | Computation and Language |
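The K-virtual-model idea can be sketched as follows; the `embed`/`encode`/`head` interface is a hypothetical stand-in for whatever encoder is actually used:

    import torch

    def virtual_ensemble_logits(model, tokens, pseudo_tags, tag_vectors):
        # One parameter space, K virtual models: prepend a distinct
        # pseudo-tag and shift the embeddings by a distinct vector per
        # virtual model, then average the K predictions.
        outputs = []
        for tag, vec in zip(pseudo_tags, tag_vectors):
            emb = model.embed(torch.cat([tag, tokens]))
            outputs.append(model.head(model.encode(emb + vec)))
        return torch.stack(outputs).mean(dim=0)
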
Improving Truthfulness of Headline Generation | Most studies on abstractive summarization report ROUGE scores between system
and reference summaries. However, we have a concern about the truthfulness of
generated summaries: whether all facts of a generated summary are mentioned in
the source text. This paper explores improving the truthfulness in headline
generation on two popular datasets. Analyzing headlines generated by the
state-of-the-art encoder-decoder model, we show that the model sometimes
generates untruthful headlines. We conjecture that one of the reasons lies in
untruthful supervision data used for training the model. In order to quantify
the truthfulness of article-headline pairs, we cast the problem as textual
entailment: whether an article entails its headline. After confirming quite a few
untruthful instances in the datasets, this study hypothesizes that removing
untruthful instances from the supervision data may remedy the problem of the
untruthful behaviors of the model. Building a binary classifier that predicts
an entailment relation between an article and its headline, we filter out
untruthful instances from the supervision data. Experimental results
demonstrate that the headline generation model trained on filtered supervision
data shows no clear difference in ROUGE scores but remarkable improvements in
automatic and manual evaluations of the generated headlines.
| 2,020 | Computation and Language |
Rationalizing Medical Relation Prediction from Corpus-level Statistics | Nowadays, the interpretability of machine learning models is becoming
increasingly important, especially in the medical domain. Aiming to shed some
light on how to rationalize medical relation prediction, we present a new
interpretable framework inspired by existing theories on how human memory
works, e.g., theories of recall and recognition. Given the corpus-level
statistics, i.e., a global co-occurrence graph of a clinical text corpus, to
predict the relations between two entities, we first recall rich contexts
associated with the target entities, and then recognize relational interactions
between these contexts to form model rationales, which will contribute to the
final prediction. We conduct experiments on a real-world public clinical
dataset and show that our framework can not only achieve competitive predictive
performance against a comprehensive list of neural baseline models, but also
present rationales to justify its prediction. We further collaborate closely
with medical experts to verify the usefulness of our model rationales for
clinical decision making.
| 2,020 | Computation and Language |
Zero-Shot Transfer Learning with Synthesized Data for Multi-Domain
Dialogue State Tracking | Zero-shot transfer learning for multi-domain dialogue state tracking can
allow us to handle new domains without incurring the high cost of data
acquisition. This paper proposes a new zero-shot transfer learning technique for
dialogue state tracking where the in-domain training data are all synthesized
from an abstract dialogue model and the ontology of the domain. We show that
data augmentation through synthesized data can improve the accuracy of
zero-shot learning for both the TRADE model and the BERT-based SUMBT model on
the MultiWOZ 2.1 dataset. We show that training the SUMBT model with only
synthesized in-domain data can reach about 2/3 of the accuracy obtained with the full
training dataset. We improve the zero-shot learning state of the art on average
across domains by 21%.
| 2,020 | Computation and Language |
Clue: Cross-modal Coherence Modeling for Caption Generation | We use coherence relations inspired by computational models of discourse to
study the information needs and goals of image captioning. Using an annotation
protocol specifically devised for capturing image--caption coherence relations,
we annotate 10,000 instances from publicly-available image--caption pairs. We
introduce a new task for learning inferences in imagery and text, coherence
relation prediction, and show that these coherence annotations can be exploited
to learn relation classifiers as an intermediary step, and also train
coherence-aware, controllable image captioning models. The results show a
dramatic improvement in the consistency and quality of the generated captions
with respect to information needs specified via coherence relations.
| 2,022 | Computation and Language |
Improving Non-autoregressive Neural Machine Translation with Monolingual
Data | Non-autoregressive (NAR) neural machine translation is usually done via
knowledge distillation from an autoregressive (AR) model. Under this framework,
we leverage large monolingual corpora to improve the NAR model's performance,
with the goal of transferring the AR model's generalization ability while
preventing overfitting. On top of a strong NAR baseline, our experimental
results on the WMT14 En-De and WMT16 En-Ro news translation tasks confirm that
monolingual data augmentation consistently improves the performance of the NAR
model to approach the teacher AR model's performance, yields comparable or
better results than the best non-iterative NAR methods in the literature and
helps reduce overfitting in the training process.
| 2,020 | Computation and Language |
How Can We Accelerate Progress Towards Human-like Linguistic
Generalization? | This position paper describes and critiques the Pretraining-Agnostic
Identically Distributed (PAID) evaluation paradigm, which has become a central
tool for measuring progress in natural language understanding. This paradigm
consists of three stages: (1) pre-training of a word prediction model on a
corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set
representing a classification task; (3) evaluation on a test set drawn from the
same distribution as that training set. This paradigm favors simple, low-bias
architectures, which, first, can be scaled to process vast amounts of data, and
second, can capture the fine-grained statistical properties of a particular
data set, regardless of whether those properties are likely to generalize to
examples of the task outside the data set. This contrasts with humans, who
learn language from several orders of magnitude less data than the systems
favored by this evaluation paradigm, and generalize to new tasks in a
consistent way. We advocate for supplementing or replacing PAID with paradigms
that reward architectures that generalize as quickly and robustly as humans.
| 2,020 | Computation and Language |
Bootstrapping Techniques for Polysynthetic Morphological Analysis | Polysynthetic languages have exceptionally large and sparse vocabularies,
thanks to the number of morpheme slots and combinations in a word. This
complexity, together with a general scarcity of written data, poses a challenge
to the development of natural language technologies. To address this challenge,
we offer linguistically-informed approaches for bootstrapping a neural
morphological analyzer, and demonstrate its application to Kunwinjku, a
polysynthetic Australian language. We generate data from a finite state
transducer to train an encoder-decoder model. We improve the model by
"hallucinating" missing linguistic structure into the training data, and by
resampling from a Zipf distribution to simulate a more natural distribution of
morphemes. The best model accounts for all instances of reduplication in the
test set and achieves an accuracy of 94.7% overall, a 10 percentage point
improvement over the FST baseline. This process demonstrates the feasibility of
bootstrapping a neural morphological analyzer from minimal resources.
| 2,020 | Computation and Language |
On the Inference Calibration of Neural Machine Translation | Confidence calibration, which aims to make model predictions equal to the
true correctness measures, is important for neural machine translation (NMT)
because it is able to offer useful indicators of translation errors in the
generated output. While prior studies have shown that NMT models trained with
label smoothing are well-calibrated on the ground-truth training data, we find
that miscalibration still remains a severe challenge for NMT during inference
due to the discrepancy between training and inference. By carefully designing
experiments on three language pairs, our work provides in-depth analyses of the
correlation between calibration and translation performance as well as
linguistic properties of miscalibration and reports a number of interesting
findings that might help humans better analyze, understand and improve NMT
models. Based on these observations, we further propose a new graduated label
smoothing method that can improve both inference calibration and translation
performance.
| 2,020 | Computation and Language |
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation | Word embeddings derived from human-generated corpora inherit strong gender
bias which can be further amplified by downstream models. Some commonly adopted
debiasing approaches, including the seminal Hard Debias algorithm, apply
post-processing procedures that project pre-trained word embeddings into a
subspace orthogonal to an inferred gender subspace. We discover that
semantic-agnostic corpus regularities such as word frequency captured by the
word embeddings negatively impact the performance of these algorithms. We
propose a simple but effective technique, Double Hard Debias, which purifies
the word embeddings against such corpus regularities prior to inferring and
removing the gender subspace. Experiments on three bias mitigation benchmarks
show that our approach preserves the distributional semantics of the
pre-trained word embeddings while reducing gender bias to a significantly
larger degree than prior approaches.
| 2,020 | Computation and Language |
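The projection at the heart of Hard Debias (and of the double variant, which first removes frequency-related directions) is a one-liner; a NumPy sketch with a random stand-in for the inferred gender direction:

    import numpy as np

    def project_out(embeddings, direction):
        # Remove each embedding's component along the (unit-normalized)
        # bias direction.
        d = direction / np.linalg.norm(direction)
        return embeddings - np.outer(embeddings @ d, d)

    E = np.random.randn(5, 300)          # pre-trained word vectors
    g = np.random.randn(300)             # e.g., averaged he-she differences
    E_debiased = project_out(E, g)
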
Towards Faithful Neural Table-to-Text Generation with Content-Matching
Constraints | Text generation from a knowledge base aims to translate knowledge triples to
natural language descriptions. Most existing methods ignore the faithfulness
between a generated text description and the original table, leading to
generated information that goes beyond the content of the table. In this paper,
for the first time, we propose a novel Transformer-based generation framework
to achieve the goal. The core techniques in our method to enforce faithfulness
include a new table-text optimal-transport matching loss and a table-text
embedding similarity loss based on the Transformer model. Furthermore, to
evaluate faithfulness, we propose a new automatic metric specialized to the
table-to-text generation problem. We also provide detailed analysis on each
component of our model in our experiments. Automatic and human evaluations show
that our framework outperforms the state of the art by a large margin.
| 2,020 | Computation and Language |
Unsupervised Morphological Paradigm Completion | We propose the task of unsupervised morphological paradigm completion. Given
only raw text and a lemma list, the task consists of generating the
morphological paradigms, i.e., all inflected forms, of the lemmas. From a
natural language processing (NLP) perspective, this is a challenging
unsupervised task, and high-performing systems have the potential to improve
tools for low-resource languages or to assist linguistic annotators. From a
cognitive science perspective, this can shed light on how children acquire
morphological knowledge. We further introduce a system for the task, which
generates morphological paradigms via the following steps: (i) EDIT TREE
retrieval, (ii) additional lemma retrieval, (iii) paradigm size discovery, and
(iv) inflection generation. We perform an evaluation on 14 typologically
diverse languages. Our system outperforms trivial baselines with ease and, for
some languages, even obtains a higher accuracy than minimally supervised
systems.
| 2,020 | Computation and Language |
Efficient Second-Order TreeCRF for Neural Dependency Parsing | In the deep learning (DL) era, parsing models are extremely simplified with
little loss in performance, thanks to the remarkable capability of multi-layer
BiLSTMs in context representation. As the most popular graph-based dependency
parser due to its high efficiency and performance, the biaffine parser directly
scores single dependencies under the arc-factorization assumption, and adopts a
very simple local token-wise cross-entropy training loss. This paper for the
first time presents a second-order TreeCRF extension to the biaffine parser.
For a long time, the complexity and inefficiency of the inside-outside
algorithm hinder the popularity of TreeCRF. To address this issue, we propose
an effective way to batchify the inside and Viterbi algorithms for direct large
matrix operation on GPUs, and to avoid the complex outside algorithm via
efficient back-propagation. Experiments and analysis on 27 datasets from 13
languages clearly show that techniques developed before the DL era, such as
structural learning (global TreeCRF loss) and high-order modeling are still
useful, and can further boost parsing performance over the state-of-the-art
biaffine parser, especially for partially annotated training data. We release
our code at https://github.com/yzhangcs/crfpar.
| 2,020 | Computation and Language |