Titles | Abstracts | Years | Categories |
---|---|---|---|
PRover: Proof Generation for Interpretable Reasoning over Rules | Recent work by Clark et al. (2020) shows that transformers can act as 'soft
theorem provers' by answering questions over explicitly provided knowledge in
natural language. In our work, we take a step closer to emulating formal
theorem provers, by proposing PROVER, an interpretable transformer-based model
that jointly answers binary questions over rule-bases and generates the
corresponding proofs. Our model learns to predict nodes and edges corresponding
to proof graphs in an efficient constrained training paradigm. During
inference, a valid proof, satisfying a set of global constraints, is generated.
We conduct experiments on synthetic, hand-authored, and human-paraphrased
rule-bases to show promising results for QA and proof generation, with strong
generalization performance. First, PROVER generates proofs with an accuracy of
87%, while retaining or improving performance on the QA task, compared to
RuleTakers (up to 6% improvement on zero-shot evaluation). Second, when trained
on questions requiring lower depths of reasoning, it generalizes significantly
better to higher depths (up to 15% improvement). Third, PROVER obtains
near-perfect QA accuracy of 98% using only 40% of the training data. However,
generating proofs for questions requiring higher depths of reasoning becomes
challenging, and the accuracy drops to 65% for 'depth 5', indicating
significant scope for future work. Our code and models are publicly available
at https://github.com/swarnaHub/PRover
| 2020 | Computation and Language |
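A minimal sketch of the kind of global validity check implied by "a valid proof, satisfying a set of global constraints, is generated" in the abstract above, assuming a proof is a set of selected nodes and directed edges; the node names and the connectivity constraint are illustrative, not the authors' exact ILP formulation.

```python
# Hypothetical check that a predicted proof graph satisfies global constraints:
# every edge must join two selected nodes, and the proof must form one
# connected graph rather than disjoint fragments. Not PRover's inference code.
from collections import deque

def is_valid_proof(nodes: set, edges: set) -> bool:
    if not nodes:
        return False
    if any(u not in nodes or v not in nodes for u, v in edges):
        return False  # an edge touches an unselected node
    adj = {n: set() for n in nodes}
    for u, v in edges:  # check weak connectivity over the undirected graph
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for m in adj[queue.popleft()]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen == nodes

print(is_valid_proof({"fact2", "rule1"}, {("fact2", "rule1")}))  # True
print(is_valid_proof({"fact2", "rule1"}, {("fact2", "rule3")}))  # False
```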
Semantic Evaluation for Text-to-SQL with Distilled Test Suites | We propose test suite accuracy to approximate semantic accuracy for
Text-to-SQL models. Our method distills a small test suite of databases that
achieves high code coverage for the gold query from a large number of randomly
generated databases. At evaluation time, it computes the denotation accuracy of
the predicted queries on the distilled test suite, hence calculating a tight
upper-bound for semantic accuracy efficiently. We use our proposed method to
evaluate 21 models submitted to the Spider leaderboard and manually verify
that our method is always correct on 100 examples. In contrast, the current
Spider metric leads to a 2.5% false negative rate on average and 8.1% in the
worst case, indicating that test suite accuracy is needed. Our implementation,
along with distilled test suites for eleven Text-to-SQL datasets, is publicly
available.
| 2020 | Computation and Language |
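The core computation in the abstract above is easy to illustrate. Below is a toy version of denotation comparison over a suite, assuming each database is a tiny in-memory SQLite instance; the real method distills the suite for high code coverage, which is omitted here.

```python
# Toy test-suite accuracy check: a predicted SQL query counts as correct only
# if its denotation matches the gold query's on every database in the suite.
import sqlite3

def denotation(rows, query):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t(name TEXT, age INT)")
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    result = sorted(conn.execute(query).fetchall())
    conn.close()
    return result

suite = [
    [("ann", 20), ("bob", 30)],
    [("ann", 26), ("bob", 30)],   # distinguishes "> 25" from "= 30"
]
gold = "SELECT name FROM t WHERE age > 25"
pred = "SELECT name FROM t WHERE age = 30"

print(all(denotation(db, gold) == denotation(db, pred) for db in suite))
# False: the second database exposes the semantic difference.
```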
Robustness and Reliability of Gender Bias Assessment in Word Embeddings:
The Role of Base Pairs | It has been shown that word embeddings can exhibit gender bias, and various
methods have been proposed to quantify this. However, the extent to which the
methods are capturing social stereotypes inherited from the data has been
debated. Bias is a complex concept and there exist multiple ways to define it.
Previous work has leveraged gender word pairs to measure bias and extract
biased analogies. We show that the reliance on these gendered pairs has strong
limitations: bias measures based on them are not robust and cannot identify
common types of real-world bias, whilst analogies utilising them are unsuitable
indicators of bias. In particular, the well-known analogy "man is to
computer-programmer as woman is to homemaker" is due to word similarity rather
than societal bias. This has important implications for work on measuring bias
in embeddings and for related work on debiasing embeddings.
| 2020 | Computation and Language |
A Novel Challenge Set for Hebrew Morphological Disambiguation and
Diacritics Restoration | One of the primary tasks of morphological parsers is the disambiguation of
homographs. Particularly difficult are cases of unbalanced ambiguity, where one
of the possible analyses is far more frequent than the others. In such cases,
there may not exist sufficient examples of the minority analyses in order to
properly evaluate performance, nor to train effective classifiers. In this
paper we address the issue of unbalanced morphological ambiguities in Hebrew.
We offer a challenge set for Hebrew homographs -- the first of its kind --
containing substantial attestation of each analysis of 21 Hebrew homographs. We
show that the current SOTA of Hebrew disambiguation performs poorly on cases of
unbalanced ambiguity. Leveraging our new dataset, we achieve a new
state-of-the-art for all 21 words, improving the overall average F1 score from
0.67 to 0.95. Our resulting annotated datasets are made publicly available for
further research.
| 2020 | Computation and Language |
LOGAN: Local Group Bias Detection by Clustering | Machine learning techniques have been widely used in natural language
processing (NLP). However, as revealed by many recent studies, machine learning
models often inherit and amplify the societal biases in data. Various metrics
have been proposed to quantify biases in model predictions. In particular,
several of them evaluate disparity in model performance between protected
groups and advantaged groups in the test corpus. However, we argue that
evaluating bias at the corpus level is not enough for understanding how biases
are embedded in a model. In fact, a model with similar aggregated performance
between different groups on the entire data may behave differently on instances
in a local region. To analyze and detect such local bias, we propose LOGAN, a
new bias detection technique based on clustering. Experiments on toxicity
classification and object classification tasks show that LOGAN identifies bias
in a local region and allows us to better analyze the biases in model
predictions.
| 2020 | Computation and Language |
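A small sketch of the clustering idea from the abstract above, assuming instance embeddings `X`, gold and predicted labels, and a binary protected-group indicator; all arrays here are synthetic stand-ins, not LOGAN's actual pipeline.

```python
# Cluster instances, then look for clusters where the performance gap between
# groups is large even when the corpus-level gap is small. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # instance embeddings (stand-in)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
group = rng.integers(0, 2, 200)        # protected-attribute indicator

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
for c in range(5):
    m = clusters == c
    g0 = y_true[m & (group == 0)] == y_pred[m & (group == 0)]
    g1 = y_true[m & (group == 1)] == y_pred[m & (group == 1)]
    if len(g0) and len(g1):
        print(f"cluster {c}: accuracy gap = {g0.mean() - g1.mean():+.3f}")
```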
COD3S: Diverse Generation with Discrete Semantic Signatures | We present COD3S, a novel method for generating semantically diverse
sentences using neural sequence-to-sequence (seq2seq) models. Conditioned on an
input, seq2seq models typically produce semantically and syntactically
homogeneous sets of sentences and thus perform poorly on one-to-many sequence
generation tasks. Our two-stage approach improves output diversity by
conditioning generation on locality-sensitive hash (LSH)-based semantic
sentence codes whose Hamming distances highly correlate with human judgments of
semantic textual similarity. Though it is generally applicable, we apply COD3S
to causal generation, the task of predicting a proposition's plausible causes
or effects. We demonstrate through automatic and human evaluation that
responses produced using our method exhibit improved diversity without
degrading task performance.
| 2020 | Computation and Language |
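The LSH component described above is standard signed random projection, sketched below with random vectors in place of sentence-encoder outputs; the bit count and dimension are arbitrary choices.

```python
# Signed-random-projection LSH: each hyperplane contributes one sign bit, and
# the Hamming distance between codes tracks the angle (cosine) between vectors.
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 128, 16
planes = rng.normal(size=(n_bits, d))         # random hyperplanes

def code(v):
    return (planes @ v > 0).astype(np.uint8)  # one sign bit per hyperplane

a = rng.normal(size=d)
b = a + 0.1 * rng.normal(size=d)              # near-duplicate of a
c = rng.normal(size=d)                        # unrelated vector

hamming = lambda p, q: int((code(p) != code(q)).sum())
print(hamming(a, b), hamming(a, c))           # small vs. roughly n_bits / 2
```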
Keep CALM and Explore: Language Models for Action Generation in
Text-based Games | Text-based games present a unique challenge for autonomous agents to operate
in natural language and handle enormous action spaces. In this paper, we
propose the Contextual Action Language Model (CALM) to generate a compact set
of action candidates at each game state. Our key insight is to train language
models on human gameplay, where people demonstrate linguistic priors and a
general game sense for promising actions conditioned on game history. We
combine CALM with a reinforcement learning agent which re-ranks the generated
action candidates to maximize in-game rewards. We evaluate our approach using
the Jericho benchmark, on games unseen by CALM during training. Our method
obtains a 69% relative improvement in average game score over the previous
state-of-the-art model. Surprisingly, on half of these games, CALM is
competitive with or better than other models that have access to ground truth
admissible actions. Code and data are available at
https://github.com/princeton-nlp/calm-textgame.
| 2020 | Computation and Language |
Supervised Seeded Iterated Learning for Interactive Language Learning | Language drift has been one of the major obstacles to training language
models through interaction. When word-based conversational agents are trained
to complete a task, they tend to invent their own language rather than leverage
natural language. In recent literature, two general methods partially counter
this phenomenon: Supervised Selfplay (S2P) and Seeded Iterated Learning (SIL).
While S2P jointly trains interactive and supervised losses to counter the
drift, SIL changes the training dynamics to prevent language drift from
occurring. In this paper, we first highlight their respective weaknesses, i.e.,
late-stage training collapse and higher negative likelihood when evaluated on
a human corpus. Given these observations, we introduce Supervised Seeded Iterated
Learning to combine both methods to minimize their respective weaknesses. We
then show the effectiveness of Supervised Seeded Iterated Learning in the language-drift translation game.
| 2020 | Computation and Language |
Are "Undocumented Workers" the Same as "Illegal Aliens"? Disentangling
Denotation and Connotation in Vector Spaces | In politics, neologisms are frequently invented for partisan objectives. For
example, "undocumented workers" and "illegal aliens" refer to the same group of
people (i.e., they have the same denotation), but they carry clearly different
connotations. Examples like these have traditionally posed a challenge to
reference-based semantic theories and led to increasing acceptance of
alternative theories (e.g., Two-Factor Semantics) among philosophers and
cognitive scientists. In NLP, however, popular pretrained models encode both
denotation and connotation as one entangled representation. In this study, we
propose an adversarial neural network that decomposes a pretrained
representation as independent denotation and connotation representations. For
intrinsic interpretability, we show that words with the same denotation but
different connotations (e.g., "immigrants" vs. "aliens", "estate tax" vs.
"death tax") move closer to each other in denotation space while moving further
apart in connotation space. For extrinsic application, we train an information
retrieval system with our disentangled representations and show that the
denotation vectors improve the viewpoint diversity of document rankings.
| 2020 | Computation and Language |
Plug and Play Autoencoders for Conditional Text Generation | Text autoencoders are commonly used for conditional generation tasks such as
style transfer. We propose methods which are plug and play, where any
pretrained autoencoder can be used, and only require learning a mapping within
the autoencoder's embedding space, training embedding-to-embedding (Emb2Emb).
This reduces the need for labeled training data for the task and makes the
training procedure more efficient. Crucial to the success of this method is a
loss term for keeping the mapped embedding on the manifold of the autoencoder
and a mapping which is trained to navigate the manifold by learning offset
vectors. Evaluations on style transfer tasks both with and without
sequence-to-sequence supervision show that our method performs better than or
comparable to strong baselines while being up to four times faster.
| 2020 | Computation and Language |
Compositional Demographic Word Embeddings | Word embeddings are usually derived from corpora containing text from many
individuals, thus leading to general purpose representations rather than
individually personalized representations. While personalized embeddings can be
useful to improve language model performance and other language processing
tasks, they can only be computed for people with a large amount of longitudinal
data, which is not the case for new users. We propose a new form of
personalized word embeddings that use demographic-specific word representations
derived compositionally from full or partial demographic information for a user
(i.e., gender, age, location, religion). We show that the resulting
demographic-aware word representations outperform generic word representations
on two tasks for English: language modeling and word associations. We further
explore the trade-off between the number of available attributes and their
relative effectiveness and discuss the ethical implications of using them.
| 2020 | Computation and Language |
A Review on Fact Extraction and Verification | We study the fact checking problem, which aims to identify the veracity of a
given claim. Specifically, we focus on the task of Fact Extraction and
VERification (FEVER) and its accompanied dataset. The task consists of the
subtasks of retrieving the relevant documents (and sentences) from Wikipedia
and validating whether the information in the documents supports or refutes a
given claim. This task is essential and can be the building block of
applications such as fake news detection and medical claim verification. In
this paper, we aim at a better understanding of the challenges of the task by
presenting the literature in a structured and comprehensive way. We describe
the proposed methods by analyzing the technical perspectives of the different
approaches and discussing the performance results on the FEVER dataset, which
is the most well-studied and formally structured dataset on the fact extraction
and verification task. We also conduct the largest experimental study to date
on identifying beneficial loss functions for the sentence retrieval component.
Our analysis indicates that sampling negative sentences is important for
improving the performance and decreasing the computational complexity. Finally,
we describe open issues and future challenges, and we motivate future research
in the task.
| 2021 | Computation and Language |
GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and
Event Extraction | Recent progress in cross-lingual relation and event extraction uses graph
convolutional networks (GCNs) with universal dependency parses to learn
language-agnostic sentence representations such that models trained on one
language can be applied to other languages. However, GCNs struggle to model
words with long-range dependencies or words that are not directly connected in
the dependency tree. To address these challenges, we propose to utilize the
self-attention mechanism where we explicitly fuse structural information to
learn the dependencies between words with different syntactic distances. We
introduce GATE, a **G**raph **A**ttention **T**ransformer **E**ncoder,
and test its cross-lingual transferability on relation and event extraction
tasks. We perform experiments on the ACE05 dataset that includes three
typologically different languages: English, Chinese, and Arabic. The evaluation
results show that GATE outperforms three recently proposed methods by a large
margin. Our detailed analysis reveals that due to the reliance on syntactic
dependencies, GATE produces robust representations that facilitate transfer
across languages.
| 2021 | Computation and Language |
Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic
Priming | Models trained to estimate word probabilities in context have become
ubiquitous in natural language processing. How do these models use lexical cues
in context to inform their word probabilities? To answer this question, we
present a case study analyzing the pre-trained BERT model with tests informed
by semantic priming. Using English lexical stimuli that show priming in humans,
we find that BERT too shows "priming," predicting a word with greater
probability when the context includes a related word versus an unrelated one.
This effect decreases as the amount of information provided by the context
increases. Follow-up analysis shows BERT to be increasingly distracted by
related prime words as context becomes more informative, assigning lower
probabilities to related words. Our findings highlight the importance of
considering contextual constraint effects when studying word prediction in
these models, and highlight possible parallels with human processing.
| 2021 | Computation and Language |
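This kind of probe is easy to replicate at small scale with the Hugging Face fill-mask pipeline; the sentence pair below is an illustrative stand-in, not one of the paper's stimuli.

```python
# Compare the probability BERT assigns a target word when the context contains
# a related vs. an unrelated "prime" word. Illustrative stimuli only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

related = "The doctor greeted the visitors. Then he examined the [MASK]."
unrelated = "The lawyer greeted the visitors. Then he examined the [MASK]."

for context in (related, unrelated):
    (result,) = unmasker(context, targets=["patient"])
    print(f"P(patient) = {result['score']:.4f} | {context}")
```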
On Negative Interference in Multilingual Models: Findings and A
Meta-Learning Treatment | Modern multilingual models are trained on concatenated text from multiple
languages in hopes of conferring benefits to each (positive transfer), with the
most pronounced benefits accruing to low-resource languages. However, recent
work has shown that this approach can degrade performance on high-resource
languages, a phenomenon known as negative interference. In this paper, we
present the first systematic study of negative interference. We show that,
contrary to previous belief, negative interference also impacts low-resource
languages. While parameters are maximally shared to learn language-universal
structures, we demonstrate that language-specific parameters do exist in
multilingual models and they are a potential cause of negative interference.
Motivated by these observations, we also present a meta-learning algorithm that
obtains better cross-lingual transferability and alleviates negative
interference, by adding language-specific layers as meta-parameters and
training them in a manner that explicitly improves shared layers'
generalization on all languages. Overall, our results show that negative
interference is more common than previously known, suggesting new directions
for improving multilingual representations.
| 2020 | Computation and Language |
Resource-Enhanced Neural Model for Event Argument Extraction | Event argument extraction (EAE) aims to identify the arguments of an event
and classify the roles that those arguments play. Despite great efforts made in
prior work, there remain many challenges: (1) Data scarcity. (2) Capturing the
long-range dependency, specifically, the connection between an event trigger
and a distant event argument. (3) Integrating event trigger information into
candidate argument representation. For (1), we explore using unlabeled data in
different ways. For (2), we propose to use a syntax-attending Transformer that
can utilize dependency parses to guide the attention mechanism. For (3), we
propose a trigger-aware sequence encoder with several types of
trigger-dependent sequence representations. We also support argument extraction
either from text annotated with gold entities or from plain text. Experiments
on the English ACE2005 benchmark show that our approach achieves a new
state-of-the-art.
| 2020 | Computation and Language |
Why Skip If You Can Combine: A Simple Knowledge Distillation Technique
for Intermediate Layers | With the growth of computing power, neural machine translation (NMT)
models have also grown and improved. However, they have also become harder to
deploy on edge devices due to memory constraints. To cope with this problem, a
common practice is to distill knowledge from a large and accurately-trained
teacher network (T) into a compact student network (S). Although knowledge
distillation (KD) is useful in most cases, our study shows that existing KD
techniques might not be suitable enough for deep NMT engines, so we propose a
novel alternative. In our model, besides matching T and S predictions we have a
combinatorial mechanism to inject layer-level supervision from T to S. In this
paper, we target low-resource settings and evaluate our translation engines for
Portuguese--English, Turkish--English, and English--German directions. Students
trained using our technique have 50% fewer parameters and can still deliver
comparable results to those of 12-layer teachers.
| 2020 | Computation and Language |
A Survey on Recognizing Textual Entailment as an NLP Evaluation | Recognizing Textual Entailment (RTE) was proposed as a unified evaluation
framework to compare semantic understanding of different NLP systems. In this
survey paper, we provide an overview of different approaches for evaluating and
understanding the reasoning capabilities of NLP systems. We then focus our
discussion on RTE by highlighting prominent RTE datasets as well as advances in
RTE datasets that focus on specific linguistic phenomena that can be used to
evaluate NLP systems at a fine-grained level. We conclude by arguing that when
evaluating NLP systems, the community should utilize newly introduced RTE
datasets that focus on specific linguistic phenomena.
| 2020 | Computation and Language |
Anubhuti -- An annotated dataset for emotional analysis of Bengali short
stories | Thousands of short stories and articles are being written in many different
languages all around the world today. Bengali, or Bangla, is the second most
widely spoken language in India after Hindi and is the national language of the
country of Bangladesh. This work reports in detail the creation of Anubhuti --
the first and largest text corpus for analyzing emotions expressed by writers
of Bengali short stories. We explain the data collection methods, the manual
annotation process and the resulting high inter-annotator agreement of the
dataset due to the linguistic expertise of the annotators and the clear
methodology of labelling followed. We also address some of the challenges faced
in the collection of raw data and annotation process of a low resource language
like Bengali. We have verified the performance of our dataset with baseline
Machine Learning as well as a Deep Learning model for emotion classification
and have found that these standard models achieve high accuracy and select
relevant features on Anubhuti. In addition, we also explain how this dataset
can be of interest to linguists and data analysts to study the flow of emotions
as expressed by writers of Bengali literature.
| 2020 | Computation and Language |
RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text | In recent years, large neural networks for natural language generation (NLG)
have made leaps and bounds in their ability to generate fluent text. However,
the tasks of evaluating quality differences between NLG systems and
understanding how humans perceive the generated text remain both crucial and
difficult. In this system demonstration, we present Real or Fake Text (RoFT), a
website that tackles both of these challenges by inviting users to try their
hand at detecting machine-generated text in a variety of domains. We introduce
a novel evaluation task based on detecting the boundary at which a text passage
that starts off human-written transitions to being machine-generated. We show
preliminary results of using RoFT to evaluate detection of machine-generated
news articles.
| 2020 | Computation and Language |
Beyond [CLS] through Ranking by Generation | Generative models for Information Retrieval, where ranking of documents is
viewed as the task of generating a query from a document's language model, were
very successful in various IR tasks in the past. However, with the advent of
modern deep neural networks, attention has shifted to discriminative ranking
functions that model the semantic similarity of documents and queries instead.
Recently, deep generative models such as GPT2 and BART have been shown to be
excellent text generators, but their effectiveness as rankers has not been
demonstrated yet. In this work, we revisit the generative framework for
information retrieval and show that our generative approaches are as effective
as state-of-the-art semantic similarity-based discriminative models for the
answer selection task. Additionally, we demonstrate the effectiveness of
unlikelihood losses for IR.
| 2020 | Computation and Language |
Is the Best Better? Bayesian Statistical Model Comparison for Natural
Language Processing | Recent work raises concerns about the use of standard splits to compare
natural language processing models. We propose a Bayesian statistical model
comparison technique which uses k-fold cross-validation across multiple data
sets to estimate the likelihood that one model will outperform the other, or
that the two will produce practically equivalent results. We use this technique
to rank six English part-of-speech taggers across two data sets and three
evaluation metrics.
| 2020 | Computation and Language |
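Not the paper's exact model, but the flavor of such a comparison can be sketched with a Bayesian bootstrap over paired per-fold score differences and a region of practical equivalence (ROPE); the scores and the 0.5-point ROPE below are made up.

```python
# Bayesian-bootstrap sketch: posterior over the mean score difference between
# two models from paired k-fold results, split into three verdicts.
import numpy as np

rng = np.random.default_rng(0)
scores_a = np.array([96.1, 95.8, 96.4, 96.0, 95.9])  # per-fold accuracy, A
scores_b = np.array([95.7, 95.9, 96.1, 95.6, 95.8])  # per-fold accuracy, B
diff = scores_a - scores_b

rope = 0.5  # differences within +/-0.5 points count as practically equivalent
posterior = rng.dirichlet(np.ones(len(diff)), size=100_000) @ diff
print("P(A > B)      =", np.mean(posterior > rope))
print("P(equivalent) =", np.mean(np.abs(posterior) <= rope))
print("P(B > A)      =", np.mean(posterior < -rope))
```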
WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive
Summarization | We introduce WikiLingua, a large-scale, multilingual dataset for the
evaluation of cross-lingual abstractive summarization systems. We extract
article and summary pairs in 18 languages from WikiHow, a high-quality,
collaborative resource of how-to guides on a diverse set of topics written by
human authors. We create gold-standard article-summary alignments across
languages by aligning the images that are used to describe each how-to step in
an article. As a set of baselines for further studies, we evaluate the
performance of existing cross-lingual abstractive summarization methods on our
dataset. We further propose a method for direct cross-lingual summarization
(i.e., without requiring translation at inference time) by leveraging synthetic
data and Neural Machine Translation as a pre-training step. Our method
significantly outperforms the baseline approaches, while being more cost
efficient during inference.
| 2020 | Computation and Language |
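The alignment trick above is simple to picture: WikiHow steps in different languages share the illustration image, so the image filename can act as a cross-lingual key. A toy version with made-up data:

```python
# Align how-to steps across languages by their shared image identifiers.
en_steps = {"img_001.jpg": "Chop the onions finely.",
            "img_002.jpg": "Saute them until golden."}
es_steps = {"img_001.jpg": "Pica las cebollas finamente.",
            "img_002.jpg": "Sofrielas hasta que esten doradas."}

aligned = [(en_steps[k], es_steps[k]) for k in en_steps.keys() & es_steps.keys()]
for en, es in sorted(aligned):
    print(f"{en}  <->  {es}")
```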
Knowledge-aware Method for Confusing Charge Prediction | The automatic charge prediction task aims to determine the final charges based on
fact descriptions of criminal cases, which is a vital application of legal
assistant systems. Conventional works usually depend on fact descriptions to
predict charges while ignoring the legal schematic knowledge, which makes it
difficult to distinguish confusing charges. In this paper, we propose a
knowledge-attentive neural network model, which introduces legal schematic
knowledge about charges and exploits the hierarchical knowledge representation
as discriminative features to differentiate confusing charges. Our model
takes the textual fact description as the input and learns fact representation
through a graph convolutional network. A legal schematic knowledge transformer
is utilized to generate crucial knowledge representations oriented to the legal
schematic knowledge at both the schema and charge levels. We apply a knowledge
matching network for effectively incorporating charge information into the fact
to learn knowledge-aware fact representation. Finally, we use the
knowledge-aware fact representation for charge prediction. We create two
real-world datasets and experimental results show that our proposed model can
outperform other state-of-the-art baselines on accuracy and F1 score,
especially on dealing with confusing charges.
| 2020 | Computation and Language |
DiPair: Fast and Accurate Distillation for Trillion-Scale Text Matching
and Pair Modeling | Pre-trained models like BERT (Devlin et al., 2018) have dominated NLP / IR
applications such as single sentence classification, text pair classification,
and question answering. However, deploying these models in real systems is
highly non-trivial due to their exorbitant computational costs. A common remedy
to this is knowledge distillation (Hinton et al., 2015), leading to faster
inference. However -- as we show here -- existing works are not optimized for
dealing with pairs (or tuples) of texts. Consequently, they are either not
scalable or demonstrate subpar performance. In this work, we propose DiPair --
a novel framework for distilling fast and accurate models on text pair tasks.
Coupled with an end-to-end training strategy, DiPair is both highly scalable
and offers improved quality-speed tradeoffs. Empirical studies conducted on
both academic and real-world e-commerce benchmarks demonstrate the efficacy of
the proposed approach with speedups of over 350x and minimal quality drop
relative to the cross-attention teacher BERT model.
| 2021 | Computation and Language |
VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word
Representations for Improved Definition Modeling | In this paper, we tackle the task of definition modeling, where the goal is
to learn to generate definitions of words and phrases. Existing approaches for
this task are discriminative, combining distributional and lexical semantics in
an implicit rather than direct way. To tackle this issue we propose a
generative model for the task, introducing a continuous latent variable to
explicitly model the underlying relationship between a phrase used within a
context and its definition. We rely on variational inference for estimation and
leverage contextualized word embeddings for improved performance. Our approach
is evaluated on four existing challenging benchmarks with the addition of two
new datasets, "Cambridge" and the first non-English corpus "Robert", which we
release to complement our empirical study. Our Variational Contextual
Definition Modeler (VCDM) achieves state-of-the-art performance in terms of
automatic and human evaluation metrics, demonstrating the effectiveness of our
approach.
| 2020 | Computation and Language |
A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial
Expressions | Recent models achieve promising results in visually grounded dialogues.
However, existing datasets often contain undesirable biases and lack
sophisticated linguistic analyses, which make it difficult to understand how
well current models recognize their precise linguistic structures. To address
this problem, we make two design choices: first, we focus on OneCommon Corpus
(Udagawa and Aizawa, 2019, 2020), a simple yet challenging
common grounding dataset which contains minimal bias by design. Second, we
analyze their linguistic structures based on *spatial expressions* and
provide comprehensive and reliable annotation for 600 dialogues. We show that
our annotation captures important linguistic structures including
predicate-argument structure, modification and ellipsis. In our experiments, we
assess the model's understanding of these structures through reference
resolution. We demonstrate that our annotation can reveal both the strengths
and weaknesses of baseline models in essential levels of detail. Overall, we
propose a novel framework and resource for investigating fine-grained language
understanding in visually grounded dialogues.
| 2020 | Computation and Language |
Improving Context Modeling in Neural Topic Segmentation | Topic segmentation is critical in key NLP tasks and recent works favor highly
effective neural supervised approaches. However, current neural solutions are
arguably limited in how they model context. In this paper, we enhance a
segmenter based on a hierarchical attention BiLSTM network to better model
context, by adding a coherence-related auxiliary task and restricted
self-attention. Our optimized segmenter outperforms SOTA approaches when
trained and tested on three datasets. We also show the robustness of our proposed
model in a domain transfer setting by training a model on a large-scale dataset
and testing it on four challenging real-world benchmarks. Furthermore, we apply
our proposed strategy to two other languages (German and Chinese), and show its
effectiveness in multilingual scenarios.
| 2020 | Computation and Language |
Pre-training Multilingual Neural Machine Translation by Leveraging
Alignment Information | We investigate the following question for machine translation (MT): can we
develop a single universal MT model to serve as the common seed and obtain
derivative and improved models on arbitrary language pairs? We propose mRASP,
an approach to pre-train a universal multilingual neural machine translation
model. Our key idea in mRASP is its novel technique of random aligned
substitution, which brings words and phrases with similar meanings across
multiple languages closer in the representation space. We pre-train a mRASP
model on 32 language pairs jointly with only public datasets. The model is then
fine-tuned on downstream language pairs to obtain specialized MT models. We
carry out extensive experiments on 42 translation directions across diverse
settings, including low-, medium-, and rich-resource directions, as well as
transfer to exotic language pairs. Experimental results demonstrate that mRASP achieves
significant performance improvement compared to directly training on those
target pairs. This is the first work to verify that multiple low-resource
language pairs can be utilized to improve rich-resource MT. Surprisingly, mRASP
is even able to improve the translation quality on exotic languages that never
occur in the pre-training corpus. Code, data, and pre-trained models are
available at https://github.com/linzehui/mRASP.
| 2021 | Computation and Language |
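A toy version of the random aligned substitution idea, with a three-word bilingual lexicon standing in for the real dictionaries: with some probability, a source word is swapped for its translation so that words with similar meanings across languages share contexts.

```python
# Random aligned substitution (toy): stochastically replace source tokens with
# dictionary translations during pre-training. Lexicon below is illustrative.
import random

random.seed(0)
lexicon = {"hello": "bonjour", "world": "monde", "good": "bon"}

def random_aligned_substitution(tokens, p=0.3):
    return [lexicon[t] if t in lexicon and random.random() < p else t
            for t in tokens]

print(random_aligned_substitution("hello world this is a good day".split()))
```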
Unsupervised Parsing via Constituency Tests | We propose a method for unsupervised parsing based on the linguistic notion
of a constituency test. One type of constituency test involves modifying the
sentence via some transformation (e.g. replacing the span with a pronoun) and
then judging the result (e.g. checking if it is grammatical). Motivated by this
idea, we design an unsupervised parser by specifying a set of transformations
and using an unsupervised neural acceptability model to make grammaticality
decisions. To produce a tree given a sentence, we score each span by
aggregating its constituency test judgments, and we choose the binary tree with
the highest total score. While this approach already achieves performance in
the range of current methods, we further improve accuracy by fine-tuning the
grammaticality model through a refinement procedure, where we alternate between
improving the estimated trees and improving the grammaticality model. The
refined model achieves 62.8 F1 on the Penn Treebank test set, an absolute
improvement of 7.6 points over the previous best published result.
| 2020 | Computation and Language |
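The tree-selection step described above has a compact dynamic-programming form. In the sketch below, a stub span scorer stands in for the aggregated constituency-test judgments, and CKY picks the binary tree whose spans maximize the total score.

```python
# Find the maximum-scoring binary tree over a sentence given span scores.
import functools

def best_tree(n, span_score):
    @functools.lru_cache(maxsize=None)
    def best(i, j):                        # best-scoring tree over span [i, j)
        if j - i == 1:
            return span_score(i, j), (i, j)
        splits = []
        for k in range(i + 1, j):
            left_score, left = best(i, k)
            right_score, right = best(k, j)
            splits.append((left_score + right_score, (left, right)))
        score, tree = max(splits, key=lambda s: s[0])
        return score + span_score(i, j), tree
    return best(0, n)

sentence = "the cat sat on the mat".split()
stub = lambda i, j: float(j - i == 2)      # stand-in for test-based judgments
print(best_tree(len(sentence), stub))
```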
OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open
Information Extraction | A recent state-of-the-art neural open information extraction (OpenIE) system
generates extractions iteratively, requiring repeated encoding of partial
outputs. This comes at a significant computational cost. On the other hand,
sequence labeling approaches for OpenIE are much faster, but worse in
extraction quality. In this paper, we bridge this trade-off by presenting an
iterative labeling-based system that establishes a new state of the art for
OpenIE, while extracting 10x faster. This is achieved through a novel Iterative
Grid Labeling (IGL) architecture, which treats OpenIE as a 2-D grid labeling
task. We improve its performance further by applying coverage (soft)
constraints on the grid at training time.
Moreover, on observing that the best OpenIE systems falter at handling
coordination structures, our OpenIE system also incorporates a new coordination
analyzer built with the same IGL architecture. This IGL based coordination
analyzer helps our OpenIE system handle complicated coordination structures,
while also establishing a new state of the art on the task of coordination
analysis, with a 12.3 pts improvement in F1 over previous analyzers. Our OpenIE
system, OpenIE6, beats the previous systems by as much as 4 pts in F1, while
being much faster.
| 2020 | Computation and Language |
Fortifying Toxic Speech Detectors Against Veiled Toxicity | Modern toxic speech detectors are poor at recognizing disguised
offensive language, such as adversarial attacks that deliberately avoid known
toxic lexicons, or manifestations of implicit bias. Building a large annotated
dataset for such veiled toxicity can be very expensive. In this work, we
propose a framework aimed at fortifying existing toxic speech detectors without
a large labeled corpus of veiled toxicity. Just a handful of probing examples
are used to surface orders of magnitude more disguised offenses. We augment the
toxic speech detector's training data with these discovered offensive examples,
thereby making it more robust to veiled toxicity while preserving its utility
in detecting overt toxicity.
| 2020 | Computation and Language |
A Self-Refinement Strategy for Noise Reduction in Grammatical Error
Correction | Existing approaches for grammatical error correction (GEC) largely rely on
supervised learning with manually created GEC datasets. However, there has been
little focus on verifying and ensuring the quality of the datasets, and on how
lower-quality data might affect GEC performance. We indeed found that there is
a non-negligible amount of "noise" where errors were inappropriately edited or
left uncorrected. To address this, we designed a self-refinement method where
the key idea is to denoise these datasets by leveraging the prediction
consistency of existing models; this method outperformed strong denoising
baseline methods. We further applied task-specific techniques and achieved
state-of-the-art performance on the CoNLL-2014, JFLEG, and BEA-2019 benchmarks.
We then analyzed the effect of the proposed denoising method, and found that
our approach leads to improved coverage of corrections and facilitated fluency
edits which are reflected in higher recall and overall performance.
| 2020 | Computation and Language |
Knowledge-enriched, Type-constrained and Grammar-guided Question
Generation over Knowledge Bases | Question generation over knowledge bases (KBQG) aims at generating
natural-language questions about a subgraph, i.e. a set of (connected) triples.
Two main challenges still face the current crop of encoder-decoder-based
methods, especially on small subgraphs: (1) low diversity and poor fluency due
to the limited information contained in the subgraphs, and (2) semantic drift
due to the decoder's oblivion of the semantics of the answer entity. We propose
an innovative knowledge-enriched, type-constrained and grammar-guided KBQG
model, named KTG, to address the above challenges. In our model, the encoder
is equipped with auxiliary information from the KB, and the decoder is
constrained with word types during QG. Specifically, entity domain and
description, as well as relation hierarchy information are considered to
construct question contexts, while a conditional copy mechanism is incorporated
to modulate question semantics according to current word types. Besides, a
novel reward function featuring grammatical similarity is designed to improve
both generative richness and syntactic correctness via reinforcement learning.
Extensive experiments show that our proposed model outperforms existing methods
by a significant margin on two widely-used benchmark datasets SimpleQuestion
and PathQuestion.
| 2020 | Computation and Language |
Multilingual Knowledge Graph Completion via Ensemble Knowledge Transfer | Predicting missing facts in a knowledge graph (KG) is a crucial task in
knowledge base construction and reasoning, and it has been the subject of much
research in recent works using KG embeddings. While existing KG embedding
approaches mainly learn and predict facts within a single KG, a more plausible
solution would benefit from the knowledge in multiple language-specific KGs,
considering that different KGs have their own strengths and limitations on data
quality and coverage. This is quite challenging, since the transfer of
knowledge among multiple independently maintained KGs is often hindered by the
insufficiency of alignment information and the inconsistency of described
facts. In this paper, we propose KEnS, a novel framework for embedding learning
and ensemble knowledge transfer across a number of language-specific KGs. KEnS
embeds all KGs in a shared embedding space, where the association of entities
is captured based on self-learning. Then, KEnS performs ensemble inference to
combine prediction results from embeddings of multiple language-specific KGs,
for which multiple ensemble techniques are investigated. Experiments on five
real-world language-specific KGs show that KEnS consistently improves
state-of-the-art methods on KG completion, via effectively identifying and
leveraging complementary knowledge.
| 2020 | Computation and Language |
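The ensemble-inference step can be pictured with a simple rank-combination scheme such as Borda voting over per-KG candidate rankings; the entities and rankings below are made up, and the paper investigates several such techniques rather than this one specifically.

```python
# Combine candidate rankings from several language-specific KG models.
from collections import defaultdict

rankings = {                     # per-KG ranked tail-entity predictions (toy)
    "en": ["Paris", "Lyon", "Nice"],
    "fr": ["Paris", "Nice", "Lyon"],
    "ja": ["Lyon", "Paris", "Nice"],
}

borda = defaultdict(int)
for ranked in rankings.values():
    for pos, entity in enumerate(ranked):
        borda[entity] += len(ranked) - pos   # higher rank earns more points

for entity, points in sorted(borda.items(), key=lambda kv: -kv[1]):
    print(entity, points)                    # Paris wins the ensemble vote
```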
Transfer Learning and Distant Supervision for Multilingual Transformer
Models: A Study on African Languages | Multilingual transformer models like mBERT and XLM-RoBERTa have obtained
great improvements for many NLP tasks on a variety of languages. However,
recent works also showed that results from high-resource languages could not be
easily transferred to realistic, low-resource scenarios. In this work, we study
trends in performance for different amounts of available resources for the
three African languages Hausa, isiXhosa and Yorùbá on both NER and topic
classification. We show that in combination with transfer learning or distant
supervision, these models can achieve with as little as 10 or 100 labeled
sentences the same performance as baselines with much more supervised training
data. However, we also find settings where this does not hold. Our discussions
and additional experiments on assumptions such as time and hardware
restrictions highlight challenges and opportunities in low-resource learning.
| 2020 | Computation and Language |
Theedhum Nandrum@Dravidian-CodeMix-FIRE2020: A Sentiment Polarity
Classifier for YouTube Comments with Code-switching between Tamil, Malayalam
and English | Theedhum Nandrum is a sentiment polarity detection system using two
approaches--a Stochastic Gradient Descent (SGD) based classifier and a Long
Short-term Memory (LSTM) based classifier. Our approach utilises language
features like use of emoji, choice of scripts and code mixing which appeared
quite marked in the datasets specified for the Dravidian Codemix - FIRE 2020
task. The hyperparameters for the SGD were tuned using GridSearchCV. Our system
was ranked 4th in Tamil-English with a weighted average F1 score of 0.62 and
9th in Malayalam-English with a score of 0.65. We achieved a weighted average
F1 score of 0.77 for Tamil-English using a Logistic Regression based model
after the task deadline. This performance betters the top ranked classifier on
this dataset by a wide margin. Our use of language-specific Soundex to
harmonise the spelling variants in code-mixed data appears to be a novel
application of Soundex. Our complete code is published on GitHub at
https://github.com/oligoglot/theedhum-nandrum.
| 2020 | Computation and Language |
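For readers unfamiliar with Soundex, here is the plain English variant; the system above uses language-specific adaptations for romanized Tamil and Malayalam, which this sketch does not reproduce.

```python
# Classic English Soundex: spelling variants collapse to the same 4-char code.
def soundex(word: str) -> str:
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":        # h and w do not separate repeated codes
            prev = code
    return (out + "000")[:4]

print(soundex("vanakkam"), soundex("vanakam"))  # V525 V525: variants collide
```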
Rank and run-time aware compression of NLP Applications | Sequence model based NLP applications can be large. Yet, many applications
that benefit from them run on small devices with very limited compute and
storage capabilities, while still having run-time constraints. As a result,
there is a need for a compression technique that can achieve significant
compression without negatively impacting inference run-time and task accuracy.
This paper proposes a new compression technique called Hybrid Matrix
Factorization (HMF) that achieves this dual objective. HMF improves low-rank matrix
factorization (LMF) techniques by doubling the rank of the matrix using an
intelligent hybrid-structure leading to better accuracy than LMF. Further, by
preserving dense matrices, it leads to faster inference run-time than pruning
or structured-matrix-based compression techniques. We evaluate the impact of this
technique on 5 NLP benchmarks across multiple tasks (Translation, Intent
Detection, Language Modeling) and show that for similar accuracy values and
compression factors, HMF can achieve more than 2.32x faster inference run-time
than pruning and 16.77% better accuracy than LMF.
| 2020 | Computation and Language |
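For context, the LMF baseline that HMF improves on is a truncated factorization of each weight matrix; a numpy sketch follows, with a random matrix standing in for trained weights (which compress far better, since their spectra decay).

```python
# Low-rank factorization: replace W (m x n) with A (m x r) @ B (r x n),
# cutting parameters from m*n to r*(m + n).
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 512, 512, 64
W = rng.normal(size=(m, n))         # stand-in for a trained weight matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :r] * s[:r], Vt[:r, :]  # best rank-r approximation: W ~ A @ B

err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params {m * n} -> {r * (m + n)}, relative error {err:.3f}")
```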
Like hiking? You probably enjoy nature: Persona-grounded Dialog with
Commonsense Expansions | Existing persona-grounded dialog models often fail to capture simple
implications of given persona descriptions, something which humans are able to
do seamlessly. For example, state-of-the-art models cannot infer that interest
in hiking might imply love for nature or longing for a break. In this paper, we
propose to expand available persona sentences using existing commonsense
knowledge bases and paraphrasing resources to imbue dialog models with access
to an expanded and richer set of persona descriptions. Additionally, we
introduce fine-grained grounding on personas by encouraging the model to make a
discrete choice among persona sentences while synthesizing a dialog response.
Since such a choice is not observed in the data, we model it using a discrete
latent random variable and use variational learning to sample from hundreds of
persona expansions. Our model outperforms competitive baselines on the
PersonaChat dataset in terms of dialog quality and diversity while achieving
persona-consistent and controllable dialog generation.
| 2020 | Computation and Language |
Unsupervised Evaluation for Question Answering with Transformers | It is challenging to automatically evaluate the answer of a QA model at
inference time. Although many models provide confidence scores, and simple
heuristics can go a long way towards indicating answer correctness, such
measures are heavily dataset-dependent and are unlikely to generalize. In this
work, we begin by investigating the hidden representations of questions,
answers, and contexts in transformer-based QA architectures. We observe a
consistent pattern in the answer representations, which we show can be used to
automatically evaluate whether or not a predicted answer span is correct. Our
method does not require any labeled data and outperforms strong heuristic
baselines, across 2 datasets and 7 domains. We are able to predict whether or
not a model's answer is correct with 91.37% accuracy on SQuAD, and 80.7%
accuracy on SubjQA. We expect that this method will have broad applications,
e.g., in the semi-automatic development of QA datasets.
| 2020 | Computation and Language |
Transformer-GCRF: Recovering Chinese Dropped Pronouns with General
Conditional Random Fields | Pronouns are often dropped in Chinese conversations and recovering the
dropped pronouns is important for NLP applications such as Machine Translation.
Existing approaches usually formulate this as a sequence labeling task of
predicting whether there is a dropped pronoun before each token and its type.
Each utterance is considered to be a sequence and labeled independently.
Although these approaches have shown promise, labeling each utterance
independently ignores the dependencies between pronouns in neighboring
utterances. Modeling these dependencies is critical to improving the
performance of dropped pronoun recovery. In this paper, we present a novel
framework that combines the strength of Transformer network with General
Conditional Random Fields (GCRF) to model the dependencies between pronouns in
neighboring utterances. Results on three Chinese conversation datasets show
that the Transformer-GCRF model outperforms the state-of-the-art dropped
pronoun recovery models. Exploratory analysis also demonstrates that the GCRF
did help to capture the dependencies between pronouns in neighboring
utterances, thus contributing to the performance improvements.
| 2020 | Computation and Language |
Exploring and Evaluating Attributes, Values, and Structures for Entity
Alignment | Entity alignment (EA) aims at building a unified Knowledge Graph (KG) of rich
content by linking the equivalent entities from various KGs. GNN-based EA
methods present promising performances by modeling the KG structure defined by
relation triples. However, attribute triples can also provide a crucial alignment
signal but have not been well explored yet. In this paper, we propose to
utilize an attributed value encoder and partition the KG into subgraphs to
model the various types of attribute triples efficiently. Besides, the
performances of current EA methods are overestimated because of the name-bias
of existing EA datasets. To make an objective evaluation, we propose a hard
experimental setting where we select equivalent entity pairs with very
different names as the test set. Under both the regular and hard settings, our
method achieves significant improvements (5.10% average Hits@1 on
DBP15k) over 12 baselines in cross-lingual and monolingual datasets.
Ablation studies on different subgraphs and a case study about attribute types
further demonstrate the effectiveness of our method. Source code and data can
be found at https://github.com/thunlp/explore-and-evaluate.
| 2021 | Computation and Language |
Improving the Efficiency of Grammatical Error Correction with Erroneous
Span Detection and Correction | We propose a novel language-independent approach to improve the efficiency
of Grammatical Error Correction (GEC) by dividing the task into two subtasks:
Erroneous Span Detection (ESD) and Erroneous Span Correction (ESC). ESD
identifies grammatically incorrect text spans with an efficient sequence
tagging model. Then, ESC leverages a seq2seq model to take the sentence with
annotated erroneous spans as input and only outputs the corrected text for
these spans. Experiments show our approach performs comparably to conventional
seq2seq approaches in both English and Chinese GEC benchmarks with less than
50% time cost for inference.
| 2020 | Computation and Language |
Narrative Text Generation with a Latent Discrete Plan | Past work on story generation has demonstrated the usefulness of conditioning
on a generation plan to generate coherent stories. However, these approaches
have used heuristics or off-the-shelf models to first tag training stories with
the desired type of plan, and then train generation models in a supervised
fashion. In this paper, we propose a deep latent variable model that first
samples a sequence of anchor words, one per sentence in the story, as part of
its generative process. During training, our model treats the sequence of
anchor words as a latent variable and attempts to induce anchoring sequences
that help guide generation in an unsupervised fashion. We conduct experiments
with several types of sentence decoder distributions: left-to-right and
non-monotonic, with different degrees of restriction. Further, since we use
amortized variational inference to train our model, we introduce two
corresponding types of inference network for predicting the posterior on anchor
words. We conduct human evaluations which demonstrate that the stories produced
by our model are rated better in comparison with baselines which do not
consider story plans, and are similar or better in quality relative to
baselines which use external supervision for plans. Additionally, the proposed
model gets favorable scores when evaluated on perplexity, diversity, and
control of story via discrete plan.
| 2020 | Computation and Language |
Learning to Explain: Datasets and Models for Identifying Valid Reasoning
Chains in Multihop Question-Answering | Despite the rapid progress in multihop question-answering (QA), models still
have trouble explaining why an answer is correct, with limited explanation
training data available to learn from. To address this, we introduce three
explanation datasets in which explanations formed from corpus facts are
annotated. Our first dataset, eQASC, contains over 98K explanation annotations
for the multihop question answering dataset QASC, and is the first that
annotates multiple candidate explanations for each answer. The second dataset
eQASC-perturbed is constructed by crowd-sourcing perturbations (while
preserving their validity) of a subset of explanations in QASC, to test
consistency and generalization of explanation prediction models. The third
dataset eOBQA is constructed by adding explanation annotations to the OBQA
dataset to test generalization of models trained on eQASC. We show that this
data can be used to significantly improve explanation quality (+14% absolute F1
over a strong retrieval baseline) using a BERT-based classifier, but still
behind the upper bound, offering a new challenge for future research. We also
explore a delexicalized chain representation in which repeated noun phrases are
replaced by variables, thus turning them into generalized reasoning chains (for
example: "X is a Y" AND "Y has Z" IMPLIES "X has Z"). We find that generalized
chains maintain performance while also being more robust to certain
perturbations.
| 2020 | Computation and Language |
ZEST: Zero-shot Learning from Text Descriptions using Textual Similarity
and Visual Summarization | We study the problem of recognizing visual entities from the textual
descriptions of their classes. Specifically, given images of birds with free-text
descriptions of their species, we learn to classify images of previously-unseen
species based on species descriptions. This setup has been studied in the vision
community under the name zero-shot learning from text, focusing on learning to
transfer knowledge about visual aspects of birds from seen classes to
previously-unseen ones. Here, we suggest focusing on the textual description
and distilling from the description the most relevant information to
effectively match visual features to the parts of the text that discuss them.
Specifically, (1) we propose to leverage the similarity between species,
reflected in the similarity between text descriptions of the species. (2) we
derive visual summaries of the texts, i.e., extractive summaries that focus on
the visual features that tend to be reflected in images. We propose a simple
attention-based model augmented with the similarity and visual summaries
components. Our empirical results consistently and significantly outperform the
state-of-the-art on the largest benchmarks for text-based zero-shot learning,
illustrating the critical importance of texts for zero-shot image-recognition.
| 2020 | Computation and Language |
COMETA: A Corpus for Medical Entity Linking in the Social Media | Whilst there has been growing progress in Entity Linking (EL) for general
language, existing datasets fail to address the complex nature of health
terminology in layman's language. Meanwhile, there is a growing need for
applications that can understand the public's voice in the health domain. To
address this we introduce a new corpus called COMETA, consisting of 20k English
biomedical entity mentions from Reddit expert-annotated with links to SNOMED
CT, a widely-used medical knowledge graph. Our corpus satisfies a combination
of desirable properties, from scale and coverage to diversity and quality, that
to the best of our knowledge has not been met by any of the existing resources
in the field. Through benchmark experiments on 20 EL baselines from string- to
neural-based models we shed light on the ability of these systems to perform
complex inference on entities and concepts under 2 challenging evaluation
scenarios. Our experimental results on COMETA illustrate that no silver bullet
exists and even the best mainstream techniques still have a significant
performance gap to fill, while the best solution relies on combining different
views of data.
| 2020 | Computation and Language |
Improving QA Generalization by Concurrent Modeling of Multiple Biases | Existing NLP datasets contain various biases that models can easily exploit
to achieve high performances on the corresponding evaluation sets. However,
focusing on dataset-specific biases limits their ability to learn more
generalizable knowledge about the task from more general data patterns. In this
paper, we investigate the impact of debiasing methods for improving
generalization and propose a general framework for improving the performance on
both in-domain and out-of-domain datasets by concurrent modeling of multiple
biases in the training data. Our framework weights each example based on the
biases it contains and the strength of those biases in the training data. It
then uses these weights in the training objective so that the model relies less
on examples with high bias weights. We extensively evaluate our framework on
extractive question answering with training data from various domains with
multiple biases of different strengths. We perform the evaluations in two
different settings, in which the model is trained on a single domain or
multiple domains simultaneously, and show its effectiveness in both settings
compared to state-of-the-art debiasing methods.
| 2020 | Computation and Language |
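A sketch of the weighting idea from the abstract above, assuming we already have per-example confidences from one bias-only model per bias type; the product form and the numbers are illustrative, not the paper's exact weighting function.

```python
# Down-weight training examples that bias-only models solve confidently,
# then apply the weights in the training objective. Illustrative numbers.
import numpy as np

# bias_probs[i, j]: probability bias model j assigns example i's gold label
bias_probs = np.array([
    [0.95, 0.90],   # solvable by both biases -> small weight
    [0.50, 0.55],
    [0.10, 0.20],   # counter-bias example  -> large weight
])
weights = np.prod(1.0 - bias_probs, axis=1)
weights /= weights.mean()                 # preserve the average loss scale

task_losses = np.array([0.7, 0.4, 0.9])   # per-example losses (stub)
print("weighted loss:", float(np.mean(weights * task_losses)))
```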
Toward Stance-based Personas for Opinionated Dialogues | In the context of chit-chat dialogues it has been shown that endowing systems
with a persona profile is important to produce more coherent and meaningful
conversations. Still, the representation of such personas has thus far been
limited to a fact-based representation (e.g. "I have two cats."). We argue that
these representations remain superficial w.r.t. the complexity of human
personality. In this work, we propose to take a step forward and investigate
stance-based personas, trying to grasp more profound characteristics, such as
opinions, values, and beliefs, to drive language generation. To this end, we
introduce a novel dataset that allows exploring different stance-based persona
representations and their impact on claim generation, showing that they are
able to grasp abstract and profound aspects of the author persona.
| 2,020 | Computation and Language |
Why do you think that? Exploring Faithful Sentence-Level Rationales
Without Supervision | Evaluating the trustworthiness of a model's prediction is essential for
differentiating between `right for the right reasons' and `right for the wrong
reasons'. Identifying textual spans that determine the target label, known as
faithful rationales, usually relies on pipeline approaches or reinforcement
learning. However, such methods either require supervision and thus costly
annotation of the rationales or employ non-differentiable models. We propose a
differentiable training framework to create models that output faithful
rationales at the sentence level, applying supervision solely on the target
task. To achieve this, our model solves the task based on each rationale
individually and learns to assign high scores to those which solved the task
best. Our evaluation on three different datasets shows competitive results
compared to a standard BERT blackbox while exceeding a pipeline counterpart's
performance in two cases. We further exploit the transparent decision-making
process of these models to prefer selecting the correct rationales by applying
direct supervision, thereby boosting the performance on the rationale-level.
| 2,020 | Computation and Language |
Dual Reconstruction: a Unifying Objective for Semi-Supervised Neural
Machine Translation | While Iterative Back-Translation and Dual Learning effectively incorporate
monolingual training data in neural machine translation, they use different
objectives and heuristic gradient approximation strategies, and have not been
extensively compared. We introduce a novel dual reconstruction objective that
provides a unified view of Iterative Back-Translation and Dual Learning. It
motivates a theoretical analysis and controlled empirical study on
German-English and Turkish-English tasks, which both suggest that Iterative
Back-Translation is more effective than Dual Learning despite its relative
simplicity.
| 2,020 | Computation and Language |
Cross-lingual Extended Named Entity Classification of Wikipedia Articles | The FPT.AI team participated in the SHINRA2020-ML subtask of the NTCIR-15
SHINRA task. This paper describes our method for solving the problem and
discusses the official results. Our method focuses on learning cross-lingual
representations, both on the word level and document level for page
classification. We propose a three-stage approach including multilingual model
pre-training, monolingual model fine-tuning and cross-lingual voting. Our
system is able to achieve the best scores for 25 out of 30 languages, and its
accuracy gaps to the best-performing systems for the other five languages are
relatively small.
| 2,020 | Computation and Language |
WER we are and WER we think we are | Natural language processing of conversational speech requires the
availability of high-quality transcripts. In this paper, we express our
skepticism towards the recent reports of very low Word Error Rates (WERs)
achieved by modern Automatic Speech Recognition (ASR) systems on benchmark
datasets. We outline several problems with popular benchmarks and compare three
state-of-the-art commercial ASR systems on an internal dataset of real-life
spontaneous human conversations and the HUB'05 public benchmark. We show that WERs
are significantly higher than the best reported results. We formulate a set of
guidelines which may aid in the creation of real-life, multi-domain datasets
with high quality annotations for training and testing of robust ASR systems.
| 2,020 | Computation and Language |
Analogies minus analogy test: measuring regularities in word embeddings | Vector space models of words have long been claimed to capture linguistic
regularities as simple vector translations, but problems have been raised with
this claim. We decompose and empirically analyze the classic arithmetic word
analogy test, to motivate two new metrics that address the issues with the
standard test, and which distinguish between class-wise offset concentration
(similar directions between pairs of words drawn from different broad classes,
such as France--London, China--Ottawa, ...) and pairing consistency (the
existence of a regular transformation between correctly-matched pairs such as
France:Paris::China:Beijing). We show that, while the standard analogy test is
flawed, several popular word embeddings do nevertheless encode linguistic
regularities.
| 2,020 | Computation and Language |
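For readers unfamiliar with the classic arithmetic analogy test the abstract above decomposes, the toy sketch below shows it in full (offset translation plus nearest neighbor, with the input words excluded); the vectors are made up for illustration, and the exclusion step is itself one of the contested aspects of the standard test.

```python
# The classic arithmetic analogy test (a : b :: c : ?) that the abstract
# decomposes, on toy vectors; real evaluations use trained embeddings.
import numpy as np

emb = {
    "france":  np.array([1.0, 0.0, 0.2]),
    "paris":   np.array([1.0, 1.0, 0.2]),
    "china":   np.array([0.0, 0.1, 1.0]),
    "beijing": np.array([0.0, 1.1, 1.0]),
    "london":  np.array([0.9, 1.0, 0.0]),
}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    target = emb[b] - emb[a] + emb[c]        # the vector-translation claim
    candidates = {w: v for w, v in emb.items() if w not in {a, b, c}}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("france", "paris", "china"))   # -> 'beijing'
```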
"I'd rather just go to bed": Understanding Indirect Answers | We revisit a pragmatic inference problem in dialog: understanding indirect
responses to questions. Humans can interpret 'I'm starving.' in response to
'Hungry?', even without direct cue words such as 'yes' and 'no'. In dialog
systems, allowing natural responses rather than closed vocabularies would be
similarly beneficial. However, today's systems are only as sensitive to these
pragmatic moves as their language model allows. We create and release the first
large-scale English language corpus 'Circa' with 34,268 (polar question,
indirect answer) pairs to enable progress on this task. The data was collected
via elaborate crowdsourcing, and contains utterances with yes/no meaning, as
well as uncertain, middle-ground, and conditional responses. We also present
BERT-based neural models to predict such categories for a question-answer pair.
We find that while transfer learning from entailment works reasonably,
performance is not yet sufficient for robust dialog. Our models reach 82-88%
accuracy for a 4-class distinction, and 74-85% for 6 classes.
| 2,020 | Computation and Language |
Learning a Cost-Effective Annotation Policy for Question Answering | State-of-the-art question answering (QA) relies upon large amounts of
training data for which labeling is time consuming and thus expensive. For this
reason, customizing QA systems is challenging. As a remedy, we propose a novel
framework for annotating QA datasets that entails learning a cost-effective
annotation policy and a semi-supervised annotation scheme. The latter reduces
the human effort: it leverages the underlying QA system to suggest potential
candidate annotations. Human annotators then simply provide binary feedback on
these candidates. Our system is designed such that past annotations
continuously improve future performance and thus reduce overall annotation cost.
To the best of our knowledge, this is the first paper to address the problem of
annotating questions with minimal annotation cost. We compare our framework
against traditional manual annotations in an extensive set of experiments. We
find that our approach can reduce annotation cost by up to 21.1%.
| 2,020 | Computation and Language |
ELMo and BERT in semantic change detection for Russian | We study the effectiveness of contextualized embeddings for the task of
diachronic semantic change detection for Russian language data. Evaluation test
sets consist of Russian nouns and adjectives annotated based on their
occurrences in texts created in pre-Soviet, Soviet and post-Soviet time
periods. ELMo and BERT architectures are compared on the task of ranking
Russian words according to the degree of their semantic change over time. We
use several methods for aggregation of contextualized embeddings from these
architectures and evaluate their performance. Finally, we compare unsupervised
and supervised techniques in this task.
| 2,020 | Computation and Language |
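The abstract above mentions "several methods for aggregation of contextualized embeddings" without detail; one common aggregation strategy in this line of work — average a word's token vectors within each period and score change as the cosine distance between period averages — is sketched below, with toy vectors standing in for ELMo/BERT output.

```python
# One common aggregation strategy for diachronic change detection: average a
# word's contextualized vectors per time period and score change as the
# cosine distance between period averages. Toy vectors used here.
import numpy as np

def change_score(vecs_a, vecs_b):
    a, b = np.mean(vecs_a, axis=0), np.mean(vecs_b, axis=0)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

soviet_era = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
post_soviet = [np.array([0.1, 1.0]), np.array([0.2, 0.8])]
print(change_score(soviet_era, post_soviet))   # higher -> stronger change
```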
Improving Sentiment Analysis over non-English Tweets using Multilingual
Transformers and Automatic Translation for Data-Augmentation | Tweets are a specific kind of text data compared to general text. Although
sentiment analysis over tweets has become very popular in the last decade for
English, it is still difficult to find large annotated corpora for non-English
languages. The recent rise of transformer models in Natural Language
Processing has enabled unparalleled performance on many tasks, but these
models need a substantial quantity of text to adapt to the tweet domain. We
propose the use of a multilingual transformer model, that we pre-train over
English tweets and apply data-augmentation using automatic translation to adapt
the model to non-English languages. Our experiments in French, Spanish, German
and Italian suggest that the proposed technique is an efficient way to improve
the results of the transformers over small corpora of tweets in a non-English
language.
| 2,020 | Computation and Language |
TeaForN: Teacher-Forcing with N-grams | Sequence generation models trained with teacher-forcing suffer from issues
related to exposure bias and lack of differentiability across timesteps. Our
proposed method, Teacher-Forcing with N-grams (TeaForN), addresses both these
problems directly, through the use of a stack of N decoders trained to decode
along a secondary time axis that allows model parameter updates based on N
prediction steps. TeaForN can be used with a wide class of decoder
architectures and requires minimal modifications from a standard
teacher-forcing setup. Empirically, we show that TeaForN boosts generation
quality on one Machine Translation benchmark, WMT 2014 English-French, and two
News Summarization benchmarks, CNN/Dailymail and Gigaword.
| 2,020 | Computation and Language |
Inductive Entity Representations from Text via Link Prediction | Knowledge Graphs (KG) are of vital importance for multiple applications on
the web, including information retrieval, recommender systems, and metadata
annotation. Regardless of whether they are built manually by domain experts or
with automatic pipelines, KGs are often incomplete. Recent work has begun to
explore the use of textual descriptions available in knowledge graphs to learn
vector representations of entities in order to perform link prediction.
However, the extent to which these representations learned for link prediction
generalize to other tasks is unclear. This is important given the cost of
learning such representations. Ideally, we would prefer representations that do
not need to be trained again when transferring to a different task, while
retaining reasonable performance.
In this work, we propose a holistic evaluation protocol for entity
representations learned via a link prediction objective. We consider the
inductive link prediction and entity classification tasks, which involve
entities not seen during training. We also consider an information retrieval
task for entity-oriented search. We evaluate an architecture based on a
pretrained language model, that exhibits strong generalization to entities not
observed during training, and outperforms related state-of-the-art methods (22%
MRR improvement in link prediction on average). We further provide evidence
that the learned representations transfer well to other tasks without
fine-tuning. In the entity classification task we obtain an average improvement
of 16% in accuracy compared with baselines that also employ pre-trained models.
In the information retrieval task, we obtain significant improvements of up to
8.8% in NDCG@10 for natural language queries. We thus show that the learned
representations are not limited to KG-specific tasks, and have greater
generalization properties than evaluated in previous work.
| 2,021 | Computation and Language |
What Can We Learn from Collective Human Opinions on Natural Language
Inference Data? | Despite the subjective nature of many NLP tasks, most NLU evaluations have
focused on using the majority label with presumably high agreement as the
ground truth. Less attention has been paid to the distribution of human
opinions. We collect ChaosNLI, a dataset with a total of 464,500 annotations to
study Collective HumAn OpinionS in oft-used NLI evaluation sets. This dataset
is created by collecting 100 annotations per example for 3,113 examples in SNLI
and MNLI and 1,532 examples in Abductive-NLI. Analysis reveals that: (1) high
human disagreement exists in a noticeable number of examples in these datasets;
(2) the state-of-the-art models lack the ability to recover the distribution
over human labels; (3) models achieve near-perfect accuracy on the subset of
data with a high level of human agreement, whereas they can barely beat a
random guess on the data with low levels of human agreement, which accounts for most
of the common errors made by state-of-the-art models on the evaluation sets.
This questions the validity of improving model performance on old metrics for
the low-agreement part of evaluation datasets. Hence, we argue for a detailed
examination of human agreement in future data collection efforts, and
evaluating model outputs against the distribution over collective human
opinions. The ChaosNLI dataset and experimental scripts are available at
https://github.com/easonnie/ChaosNLI
| 2,020 | Computation and Language |
Exploring the Role of Argument Structure in Online Debate Persuasion | Online debate forums provide users a platform to express their opinions on
controversial topics while being exposed to opinions from a diverse set of
viewpoints. Existing work in Natural Language Processing (NLP) has shown that
linguistic features extracted from the debate text and features encoding the
characteristics of the audience are both critical in persuasion studies. In
this paper, we aim to further investigate the role of discourse structure of
the arguments from online debates in their persuasiveness. In particular, we
use the factor graph model to obtain features for the argument structure of
debates from an online debating platform and incorporate these features into an
LSTM-based model to predict the debater that makes the most convincing
arguments. We find that incorporating argument structure features plays an
essential role in achieving better predictive performance in assessing the
persuasiveness of the arguments in online debates.
| 2,020 | Computation and Language |
Galileo at SemEval-2020 Task 12: Multi-lingual Learning for Offensive
Language Identification using Pre-trained Language Models | This paper describes Galileo's performance in SemEval-2020 Task 12 on
detecting and categorizing offensive language in social media. For Offensive
Language Identification, we proposed a multi-lingual method using Pre-trained
Language Models, ERNIE and XLM-R. For offensive language categorization, we
proposed a knowledge distillation method trained on soft labels generated by
several supervised models. Our team participated in all three sub-tasks. In
Sub-task A - Offensive Language Identification, we ranked first in terms of
average F1 scores in all languages. We are also the only team which ranked
among the top three across all languages. We also took the first place in
Sub-task B - Automatic Categorization of Offense Types and Sub-task C - Offense
Target Identification.
| 2,020 | Computation and Language |
Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic
Parsing | Task-oriented semantic parsing is a critical component of virtual assistants,
which is responsible for understanding the user's intents (set reminder, play
music, etc.). Recent advances in deep learning have enabled several approaches
to successfully parse more complex queries (Gupta et al., 2018; Rongali et
al., 2020), but these models require a large amount of annotated training data
to parse queries on new domains (e.g. reminder, music).
In this paper, we focus on adapting task-oriented semantic parsers to
low-resource domains, and propose a novel method that outperforms a supervised
neural model at a 10-fold data reduction. In particular, we identify two
fundamental factors for low-resource domain adaptation: better representation
learning and better training techniques. Our representation learning uses BART
(Lewis et al., 2019) to initialize our model which outperforms encoder-only
pre-trained representations used in previous work. Furthermore, we train with
optimization-based meta-learning (Finn et al., 2017) to improve generalization
to low-resource domains. This approach significantly outperforms all baseline
methods in the experiments on a newly collected multi-domain task-oriented
semantic parsing dataset (TOPv2), which we release to the public.
| 2,020 | Computation and Language |
Probabilistic Case-based Reasoning for Open-World Knowledge Graph
Completion | A case-based reasoning (CBR) system solves a new problem by retrieving
`cases' that are similar to the given problem. If such a system can achieve
high accuracy, it is appealing owing to its simplicity, interpretability, and
scalability. In this paper, we demonstrate that such a system is achievable for
reasoning in knowledge-bases (KBs). Our approach predicts attributes for an
entity by gathering reasoning paths from similar entities in the KB. Our
probabilistic model estimates the likelihood that a path is effective at
answering a query about the given entity. The parameters of our model can be
efficiently computed using simple path statistics and require no iterative
optimization. Our model is non-parametric, growing dynamically as new entities
and relations are added to the KB. On several benchmark datasets our approach
significantly outperforms other rule learning approaches and performs
comparably to state-of-the-art embedding-based approaches. Furthermore, we
demonstrate the effectiveness of our model in an "open-world" setting where new
entities arrive in an online fashion, significantly outperforming
state-of-the-art approaches and nearly matching the best offline method. Code
available at https://github.com/ameyagodbole/Prob-CBR
| 2,020 | Computation and Language |
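The "simple path statistics" mentioned in the abstract above can be pictured concretely; the sketch below is a hypothetical reading, where the counts and the smoothing prior are illustrative, not the paper's actual estimator.

```python
# Hypothetical sketch of path statistics: estimate how often a reasoning
# path, followed from entities similar to the query entity, lands on a
# correct answer. Counts and the smoothing prior are illustrative.
from collections import Counter

path_followed = Counter({("born_in", "capital_of"): 50})  # times path applies
path_correct  = Counter({("born_in", "capital_of"): 38})  # times it was right

def path_precision(path, alpha=1.0, beta=2.0):
    # Smoothed precision; alpha/beta act as a weak prior for rare paths.
    return (path_correct[path] + alpha) / (path_followed[path] + beta)

print(path_precision(("born_in", "capital_of")))          # ~0.75
```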
Understanding Clinical Trial Reports: Extracting Medical Entities and
Their Relations | The best evidence concerning comparative treatment effectiveness comes from
clinical trials, the results of which are reported in unstructured articles.
Medical experts must manually extract information from articles to inform
decision-making, which is time-consuming and expensive. Here we consider the
end-to-end task of both (a) extracting treatments and outcomes from full-text
articles describing clinical trials (entity identification) and (b) inferring
the reported results for the former with respect to the latter (relation
extraction). We introduce new data for this task, and evaluate models that have
recently achieved state-of-the-art results on similar tasks in Natural Language
Processing. We then propose a new method motivated by how trial results are
typically presented that outperforms these purely data-driven baselines.
Finally, we run a fielded evaluation of the model with a non-profit seeking to
identify existing drugs that might be re-purposed for cancer, showing the
potential utility of end-to-end evidence extraction systems.
| 2,022 | Computation and Language |
Characterizing the Value of Information in Medical Notes | Machine learning models depend on the quality of input data. As electronic
health records are widely adopted, the amount of data in health care is
growing, along with complaints about the quality of medical notes. We use two
prediction tasks, readmission prediction and in-hospital mortality prediction,
to characterize the value of information in medical notes. We show that as a
whole, medical notes only provide additional predictive power over structured
information in readmission prediction. We further propose a probing framework
to select parts of notes that enable more accurate predictions than using all
notes, even though the selected information leads to a distribution shift from
the training data ("all notes"). Finally, we demonstrate that models trained on
the selected valuable information achieve even better predictive performance,
with only 6.8% of all the tokens for readmission prediction.
| 2,020 | Computation and Language |
SRLGRN: Semantic Role Labeling Graph Reasoning Network | This work deals with the challenge of learning and reasoning over multi-hop
question answering (QA). We propose a graph reasoning network based on the
semantic structure of the sentences to learn cross paragraph reasoning paths
and find the supporting facts and the answer jointly. The proposed graph is a
heterogeneous document-level graph that contains nodes of type sentence
(question, title, and other sentences), and semantic role labeling sub-graphs
per sentence that contain arguments as nodes and predicates as edges.
Incorporating the argument types, the argument phrases, and the semantics of
the edges originating from SRL predicates into the graph encoder helps in
finding the reasoning paths and also improves their explainability. Our proposed
approach shows competitive performance on the HotpotQA distractor setting
benchmark compared to the recent state-of-the-art models.
| 2,020 | Computation and Language |
Combining Deep Learning and String Kernels for the Localization of Swiss
German Tweets | In this work, we introduce the methods proposed by the UnibucKernel team in
solving the Social Media Variety Geolocation task featured in the 2020 VarDial
Evaluation Campaign. We address only the second subtask, which targets a data
set composed of nearly 30 thousand Swiss German Jodels. The dialect
identification task is about accurately predicting the latitude and longitude
of test samples. We frame the task as a double regression problem, employing a
variety of machine learning approaches to predict both latitude and longitude.
From simple models for regression, such as Support Vector Regression, to deep
neural networks, such as Long Short-Term Memory networks and character-level
convolutional neural networks, and, finally, to ensemble models based on
meta-learners, such as XGBoost, our interest is focused on approaching the
problem from a few different perspectives, in an attempt to minimize the
prediction error. With the same goal in mind, we also considered many types of
features, from high-level features, such as BERT embeddings, to low-level
features, such as character n-grams, which are known to provide good results
in dialect identification. Our empirical results indicate that the handcrafted
model based on string kernels outperforms the deep learning approaches.
Nevertheless, our best performance is given by the ensemble model that combines
both handcrafted and deep learning models.
| 2,020 | Computation and Language |
MuSeM: Detecting Incongruent News Headlines using Mutual Attentive
Semantic Matching | Measuring the congruence between two texts has several useful applications,
such as detecting the prevalent deceptive and misleading news headlines on the
web. Many works have proposed machine learning based solutions such as text
similarity between the headline and body text to detect the incongruence. Text
similarity based methods fail to perform well due to different inherent
challenges such as relative length mismatch between the news headline and its
body content and non-overlapping vocabulary. On the other hand, more recent
works that use headline-guided attention to learn a headline-derived contextual
representation of the news body also produce a convoluted overall
representation due to the length of the news body. This paper proposes a method
that uses inter-mutual attention-based semantic matching between the original
and synthetically generated headlines, which utilizes the difference between
all pairs of word embeddings of words involved. The paper also investigates two
more variations of our method, which use concatenation and dot-products of word
embeddings of the words of the original and synthetic headlines. We observe that
the proposed method significantly outperforms prior art on two publicly
available datasets.
| 2,020 | Computation and Language |
MOCHA: A Dataset for Training and Evaluating Generative Reading
Comprehension Metrics | Posing reading comprehension as a generation problem provides a great deal of
flexibility, allowing for open-ended questions with few restrictions on
possible answers. However, progress is impeded by existing generation metrics,
which rely on token overlap and are agnostic to the nuances of reading
comprehension. To address this, we introduce a benchmark for training and
evaluating generative reading comprehension metrics: MOdeling Correctness with
Human Annotations. MOCHA contains 40K human judgement scores on model outputs
from 6 diverse question answering datasets and an additional set of minimal
pairs for evaluation. Using MOCHA, we train a Learned Evaluation metric for
Reading Comprehension, LERC, to mimic human judgement scores. LERC outperforms
baseline metrics by 10 to 36 absolute Pearson points on held-out annotations.
When we evaluate robustness on minimal pairs, LERC achieves 80% accuracy,
outperforming baselines by 14 to 26 absolute percentage points while leaving
significant room for improvement. MOCHA presents a challenging problem for
developing accurate and robust generative reading comprehension metrics.
| 2,020 | Computation and Language |
Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic
Representations | Stance detection is an important component of understanding hidden influences
in everyday life. Since there are thousands of potential topics to take a
stance on, most with little to no training data, we focus on zero-shot stance
detection: classifying stance from no training examples. In this paper, we
present a new dataset for zero-shot stance detection that captures a wider
range of topics and lexical variation than in previous datasets. Additionally,
we propose a new model for stance detection that implicitly captures
relationships between topics using generalized topic representations and show
that this model improves performance on a number of challenging linguistic
phenomena.
| 2,020 | Computation and Language |
Towards Understanding Sample Variance in Visually Grounded Language
Generation: Evaluations and Observations | A major challenge in visually grounded language generation is to build robust
benchmark datasets and models that can generalize well in real-world settings.
To do this, it is critical to ensure that our evaluation protocols are correct,
and benchmarks are reliable. In this work, we set forth to design a set of
experiments to understand an important but often ignored problem in visually
grounded language generation: given that humans have different utilities and
visual attention, how will the sample variance in multi-reference datasets
affect the models' performance? Empirically, we study several multi-reference
datasets and corresponding vision-and-language tasks. We show that it is of
paramount importance to report variance in experiments; that human-generated
references could vary drastically in different datasets/tasks, revealing the
nature of each task; that metric-wise, CIDEr has shown systematically larger
variances than others. Our evaluations of per-instance reference variance shed
light on the design of reliable datasets in the future.
| 2,020 | Computation and Language |
A Mathematical Exploration of Why Language Models Help Solve Downstream
Tasks | Autoregressive language models, pretrained using large text corpora to do
well on next word prediction, have been successful at solving many downstream
tasks, even with zero-shot usage. However, there is little theoretical
understanding of this success. This paper initiates a mathematical study of
this phenomenon for the downstream task of text classification by considering
the following questions: (1) What is the intuitive connection between the
pretraining task of next word prediction and text classification? (2) How can
we mathematically formalize this connection and quantify the benefit of
language modeling? For (1), we hypothesize, and verify empirically, that
classification tasks of interest can be reformulated as sentence completion
tasks, thus making language modeling a meaningful pretraining task. With a
mathematical formalization of this hypothesis, we make progress towards (2) and
show that language models that are $\epsilon$-optimal in cross-entropy
(log-perplexity) learn features that can linearly solve such classification
tasks with $\mathcal{O}(\sqrt{\epsilon})$ error, thus demonstrating that doing
well on language modeling can be beneficial for downstream tasks. We
experimentally verify various assumptions and theoretical findings, and also
use insights from the analysis to design a new objective function that performs
well on some classification tasks.
| 2,021 | Computation and Language |
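The headline bound in the abstract above can be stated compactly; the display below paraphrases it, where only $\epsilon$ and the $\mathcal{O}(\sqrt{\epsilon})$ rate come from the text and the remaining symbols are filled-in notation.

```latex
% Paraphrase of the abstract's main claim; only \epsilon and the
% O(\sqrt{\epsilon}) rate come from the text, the rest is illustrative.
\[
  \mathcal{L}_{\mathrm{LM}}(f) \le \mathcal{L}_{\mathrm{LM}}^{*} + \epsilon
  \quad\Longrightarrow\quad
  \operatorname{err}\bigl(\text{linear classifier on features of } f\bigr)
  \le \mathcal{O}\bigl(\sqrt{\epsilon}\bigr).
\]
```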
Cross-Thought for Sentence Encoder Pre-training | In this paper, we propose Cross-Thought, a novel approach to pre-training
a sequence encoder, which is instrumental in building reusable sequence
embeddings for large-scale NLP tasks such as question answering. Instead of
using the original signals of full sentences, we train a Transformer-based
sequence encoder over a large set of short sequences, which allows the model to
automatically select the most useful information for predicting masked words.
Experiments on question answering and textual entailment tasks demonstrate that
our pre-trained encoder can outperform state-of-the-art encoders trained with
continuous sentence signals as well as traditional masked language modeling
baselines. Our proposed approach also achieves new state of the art on HotpotQA
(full-wiki setting) by improving intermediate information retrieval
performance.
| 2,020 | Computation and Language |
Exposing Shallow Heuristics of Relation Extraction Models with Challenge
Data | The process of collecting and annotating training data may introduce
distribution artifacts which may limit the ability of models to learn correct
generalization behavior. We identify failure modes of SOTA relation extraction
(RE) models trained on TACRED, which we attribute to limitations in the data
annotation process. We collect and annotate a challenge-set we call Challenging
RE (CRE), based on naturally occurring corpus examples, to benchmark this
behavior. Our experiments with four state-of-the-art RE models show that they
have indeed adopted shallow heuristics that do not generalize to the
challenge-set data. Further, we find that alternative question answering
modeling performs significantly better than the SOTA models on the
challenge-set, despite worse overall TACRED performance. By adding some of the
challenge data as training examples, the performance of the model improves.
Finally, we provide concrete suggestions on how to improve RE data collection to
alleviate this behavior.
| 2,020 | Computation and Language |
Detecting Fine-Grained Cross-Lingual Semantic Divergences without
Supervision by Learning to Rank | Detecting fine-grained differences in content conveyed in different languages
matters for cross-lingual NLP and multilingual corpora analysis, but it is a
challenging machine learning problem since annotation is expensive and hard to
scale. This work improves the prediction and annotation of fine-grained
semantic divergences. We introduce a training strategy for multilingual BERT
models by learning to rank synthetic divergent examples of varying granularity.
We evaluate our models on the Rationalized English-French Semantic Divergences,
a new dataset released with this work, consisting of English-French
sentence-pairs annotated with semantic divergence classes and token-level
rationales. Learning to rank helps detect fine-grained sentence-level
divergences more accurately than a strong sentence-level similarity model,
while token-level predictions have the potential of further distinguishing
between coarse and fine-grained divergences.
| 2,020 | Computation and Language |
Adaptive Self-training for Few-shot Neural Sequence Labeling | Sequence labeling is an important technique employed for many Natural
Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot
tagging for dialog systems and semantic parsing. Large-scale pre-trained
language models obtain very good performance on these tasks when fine-tuned on
large amounts of task-specific labeled data. However, such large-scale labeled
datasets are difficult to obtain for several tasks and domains due to the high
cost of human annotation as well as privacy and data access constraints for
sensitive user applications. This is exacerbated for sequence labeling tasks
requiring such annotations at the token level. In this work, we develop techniques
to address the label scarcity challenge for neural sequence labeling models.
Specifically, we develop self-training and meta-learning techniques for
training neural sequence taggers with few labels. While self-training serves as
an effective mechanism to learn from large amounts of unlabeled data,
meta-learning helps in adaptive sample re-weighting to mitigate error
propagation from noisy pseudo-labels. Extensive experiments on six benchmark
datasets including two for massive multilingual NER and four slot tagging
datasets for task-oriented dialog systems demonstrate the effectiveness of our
method. With only 10 labeled examples for each class for each task, our method
obtains 10% improvement over state-of-the-art systems demonstrating its
effectiveness for the low-resource setting.
| 2,020 | Computation and Language |
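The interaction of self-training and re-weighting described above lends itself to a compact sketch; the confidence-based weight below is an illustrative stand-in for the meta-learned re-weighting in the abstract, and all shapes and names are assumptions.

```python
# Compact sketch of self-training for sequence tagging with sample
# re-weighting. The confidence-based weight is an illustrative stand-in for
# the meta-learned re-weighting the abstract describes.
import torch
import torch.nn.functional as F

def pseudo_label_loss(student_logits, teacher_logits):
    probs = F.softmax(teacher_logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)               # teacher pseudo-labels
    per_token = F.cross_entropy(student_logits.transpose(1, 2),
                                pseudo, reduction="none")
    w = conf.detach()                              # down-weight noisy labels
    return (w * per_token).sum() / w.sum()

student = torch.randn(4, 10, 7)   # [batch, tokens, tag classes]
teacher = torch.randn(4, 10, 7)
print(pseudo_label_loss(student, teacher))
```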
AxFormer: Accuracy-driven Approximation of Transformers for Faster,
Smaller and more Accurate NLP Models | Transformers have greatly advanced the state-of-the-art in Natural Language
Processing (NLP) in recent years, but present very large computation and
storage requirements. We observe that the design process of Transformers
(pre-train a foundation model on a large dataset in a self-supervised manner,
and subsequently fine-tune it for different downstream tasks) leads to
task-specific models that are highly over-parameterized, adversely impacting
both accuracy and inference efficiency. We propose AxFormer, a systematic
framework that applies accuracy-driven approximations to create optimized
transformer models for a given downstream task. AxFormer combines two key
optimizations -- accuracy-driven pruning and selective hard attention.
Accuracy-driven pruning identifies and removes parts of the fine-tuned
transformer that hinder performance on the given downstream task. Selective
hard attention optimizes attention blocks in selected layers by eliminating
irrelevant word aggregations, thereby helping the model focus only on the
relevant parts of the input. In effect, AxFormer leads to models that are more
accurate, while also being faster and smaller. Our experiments on GLUE and
SQuAD tasks show that AxFormer models are up to 4.5% more accurate, while also
being up to 2.5X faster and up to 3.2X smaller than conventional fine-tuned
models. In addition, we demonstrate that AxFormer can be combined with previous
efforts such as distillation or quantization to achieve further efficiency
gains.
| 2,022 | Computation and Language |
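Accuracy-driven pruning, as summarized above, is essentially a search over components guided by a validation metric; the greedy loop below is a hypothetical rendering of that idea, where `evaluate` and the candidate set stand in for a fine-tuned transformer's blocks.

```python
# Hypothetical greedy loop in the spirit of accuracy-driven pruning: drop any
# component whose removal does not hurt validation accuracy. `evaluate` and
# the candidate set are placeholders for a fine-tuned transformer's blocks.
def accuracy_driven_prune(candidates, evaluate):
    keep = set(candidates)
    best = evaluate(keep)
    improved = True
    while improved:
        improved = False
        for c in sorted(keep):
            trial = keep - {c}
            acc = evaluate(trial)
            if acc >= best:            # removal helps (or is harmless)
                keep, best, improved = trial, acc, True
                break
    return keep, best

# Toy demo: pretend every extra component costs a little accuracy.
print(accuracy_driven_prune({1, 2, 3}, lambda s: 0.9 - 0.01 * len(s)))
```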
Learning to Recombine and Resample Data for Compositional Generalization | Flexible neural sequence models outperform grammar- and automaton-based
counterparts on a variety of tasks. However, neural models perform poorly in
settings requiring compositional generalization beyond the training data --
particularly to rare or unseen subsequences. Past work has found symbolic
scaffolding (e.g. grammars or automata) essential in these settings. We
describe R&R, a learned data augmentation scheme that enables a large category
of compositional generalizations without appeal to latent symbolic structure.
R&R has two components: recombination of original training examples via a
prototype-based generative model and resampling of generated examples to
encourage extrapolation. Training an ordinary neural sequence model on a
dataset augmented with recombined and resampled examples significantly improves
generalization in two language processing problems -- instruction following
(SCAN) and morphological analysis (SIGMORPHON 2018) -- where R&R enables
learning of new constructions and tenses from as few as eight initial examples.
| 2,021 | Computation and Language |
Don't Parse, Insert: Multilingual Semantic Parsing with Insertion Based
Decoding | Semantic parsing is one of the key components of natural language
understanding systems. A successful parse transforms an input utterance to an
action that is easily understood by the system. Many algorithms have been
proposed to solve this problem, from conventional rule-based or statistical
slot-filling systems to shift-reduce based neural parsers. For complex parsing
tasks, the state-of-the-art method is based on autoregressive sequence to
sequence models to generate the parse directly. This model is slow at inference
time, generating parses in O(n) decoding steps (n is the length of the target
sequence). In addition, we demonstrate that this method performs poorly in
zero-shot cross-lingual transfer learning settings. In this paper, we propose a
non-autoregressive parser which is based on the insertion transformer to
overcome these two issues. Our approach 1) speeds up decoding by 3x while
outperforming the autoregressive model and 2) significantly improves
cross-lingual transfer in the low-resource setting by 37% compared to the
autoregressive baseline. We test our approach on three well-known monolingual
datasets: ATIS, SNIPS and TOP. For cross-lingual semantic parsing, we use the
MultiATIS++ and the multilingual TOP datasets.
| 2,020 | Computation and Language |
A Cascade Approach to Neural Abstractive Summarization with Content
Selection and Fusion | We present an empirical study in favor of a cascade architecture for neural
text summarization. Summarization practices vary widely, but few domains other
than news can provide enough training data to meet the requirements of
end-to-end neural abstractive systems, which perform content selection and
surface realization jointly to generate abstracts. Such systems
also pose a challenge to summarization evaluation, as they force content
selection to be evaluated along with text generation, yet evaluation of the
latter remains an unsolved problem. In this paper, we present empirical results
showing that the performance of a cascaded pipeline that separately identifies
important content pieces and stitches them together into a coherent text is
comparable to, or better than, that of end-to-end systems, whereas a pipeline
architecture allows for flexible content selection. We finally discuss how we
can take advantage of a cascaded pipeline in neural text summarization and shed
light on important directions for future research.
| 2,020 | Computation and Language |
PARADE: A New Dataset for Paraphrase Identification Requiring Computer
Science Domain Knowledge | We present a new benchmark dataset called PARADE for paraphrase
identification that requires specialized domain knowledge. PARADE contains
paraphrases that overlap very little at the lexical and syntactic level but are
semantically equivalent based on computer science domain knowledge, as well as
non-paraphrases that overlap greatly at the lexical and syntactic level but are
not semantically equivalent based on this domain knowledge. Experiments show
that both state-of-the-art neural models and non-expert human annotators have
poor performance on PARADE. For example, BERT after fine-tuning achieves an F1
score of 0.709, which is much lower than its performance on other paraphrase
identification datasets. PARADE can serve as a resource for researchers
interested in testing models that incorporate domain knowledge. We make our
data and code freely available.
| 2,020 | Computation and Language |
Learning to Fuse Sentences with Transformers for Summarization | The ability to fuse sentences is highly attractive for summarization systems
because it is an essential step to produce succinct abstracts. However, to
date, summarizers often fail at fusing sentences: they tend to produce few
summary sentences by fusion or generate incorrect fusions that cause the summary
to lose the original meaning. In this paper, we explore the ability
of Transformers to fuse sentences and propose novel algorithms to enhance their
ability to perform sentence fusion by leveraging the knowledge of points of
correspondence between sentences. Through extensive experiments, we investigate
the effects of different design choices on Transformer's performance. Our
findings highlight the importance of modeling points of correspondence between
sentences for effective sentence fusion.
| 2,020 | Computation and Language |
Leveraging Discourse Rewards for Document-Level Neural Machine
Translation | Document-level machine translation focuses on the translation of entire
documents from a source to a target language. It is widely regarded as a
challenging task since the translation of the individual sentences in the
document needs to retain aspects of the discourse at document level. However,
document-level translation models are usually not trained to explicitly ensure
discourse quality. Therefore, in this paper we propose a training approach that
explicitly optimizes two established discourse metrics, lexical cohesion (LC)
and coherence (COH), by using a reinforcement learning objective. Experiments
over four different language pairs and three translation domains have shown
that our training approach has been able to achieve more cohesive and coherent
document translations than other competitive approaches, yet without
compromising the faithfulness to the reference translation. In the case of the
Zh-En language pair, our method has achieved an improvement of 2.46 percentage
points (pp) in LC and 1.17 pp in COH over the runner-up, while at the same time
improving BLEU by 0.63 pp and F_BERT by 0.47 pp.
| 2,020 | Computation and Language |
Shallow-to-Deep Training for Neural Machine Translation | Deep encoders have been proven to be effective in improving neural machine
translation (NMT) systems, but training an extremely deep encoder is time
consuming. Moreover, why deep models help NMT is an open question. In this
paper, we investigate the behavior of a well-tuned deep Transformer system. We
find that stacking layers is helpful in improving the representation ability of
NMT models and adjacent layers perform similarly. This inspires us to develop a
shallow-to-deep training method that learns deep models by stacking shallow
models. In this way, we successfully train a Transformer system with a 54-layer
encoder. Experimental results on WMT'16 English-German and WMT'14
English-French translation tasks show that it is $1.4\times$ faster than
training from scratch, and achieves a BLEU score of $30.33$ and $43.29$ on two
tasks. The code is publicly available at
https://github.com/libeineu/SDT-Training/.
| 2,020 | Computation and Language |
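The stacking idea in the abstract above can be pictured with a few lines of code; the sketch below assumes one plausible copy scheme (duplicate the top layer), which is illustrative rather than the paper's exact procedure.

```python
# Minimal sketch of shallow-to-deep stacking: initialize extra encoder layers
# from the current top layer (motivated by adjacent layers behaving
# similarly). The exact copy scheme here is illustrative.
import copy
import torch.nn as nn

def grow_encoder(layers: nn.ModuleList, extra: int) -> nn.ModuleList:
    grown = list(layers)
    for _ in range(extra):
        grown.append(copy.deepcopy(grown[-1]))   # duplicate the top layer
    return nn.ModuleList(grown)

shallow = nn.ModuleList([nn.TransformerEncoderLayer(d_model=512, nhead=8)
                         for _ in range(6)])
deep = grow_encoder(shallow, extra=6)            # train, grow, repeat
print(len(deep))                                 # 12
```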
Multi-hop Inference for Question-driven Summarization | Question-driven summarization has been recently studied as an effective
approach to summarizing the source document to produce concise but informative
answers for non-factoid questions. In this work, we propose a novel
question-driven abstractive summarization method, Multi-hop Selective Generator
(MSG), to incorporate multi-hop reasoning into question-driven summarization
and, meanwhile, provide justifications for the generated summaries.
Specifically, we jointly model the relevance to the question and the
interrelation among different sentences via a human-like multi-hop inference
module, which captures important sentences for justifying the summarized
answer. A gated selective pointer generator network with a multi-view coverage
mechanism is designed to integrate diverse information from different
perspectives. Experimental results show that the proposed method consistently
outperforms state-of-the-art methods on two non-factoid QA datasets, namely
WikiHow and PubMedQA.
| 2,020 | Computation and Language |
Infusing Disease Knowledge into BERT for Health Question Answering,
Medical Inference and Disease Name Recognition | Knowledge of a disease includes information of various aspects of the
disease, such as signs and symptoms, diagnosis and treatment. This disease
knowledge is critical for many health-related and biomedical tasks, including
consumer health question answering, medical language inference and disease name
recognition. While pre-trained language models like BERT have shown success in
capturing syntactic, semantic, and world knowledge from text, we find they can
be further complemented by specific information like knowledge of symptoms,
diagnoses, treatments, and other disease aspects. Hence, we integrate BERT with
disease knowledge for improving these important tasks. Specifically, we propose
a new disease knowledge infusion training procedure and evaluate it on a suite
of BERT models including BERT, BioBERT, SciBERT, ClinicalBERT, BlueBERT, and
ALBERT. Experiments over the three tasks show that these models can be enhanced
in nearly all cases, demonstrating the viability of disease knowledge infusion.
For example, accuracy of BioBERT on consumer health question answering is
improved from 68.29% to 72.09%, while new SOTA results are observed on two
datasets. We make our data and code freely available.
| 2,020 | Computation and Language |
Generalizable and Explainable Dialogue Generation via Explicit Action
Learning | Response generation for task-oriented dialogues implicitly optimizes two
objectives at the same time: task completion and language quality. Conditioned
response generation serves as an effective approach to separately and better
optimize these two objectives. Such an approach relies on system action
annotations which are expensive to obtain. To alleviate the need for action
annotations, latent action learning is introduced to map each utterance to a
latent representation. However, this approach is prone to over-dependence on
the training data, and the generalization capability is thus restricted. To
address this issue, we propose to learn natural language actions that represent
utterances as a span of words. This explicit action representation promotes
generalization via the compositional structure of language. It also enables an
explainable generation process. Our proposed unsupervised approach learns a
memory component to summarize system utterances into a short span of words. To
further promote a compact action representation, we propose an auxiliary task
that restores state annotations as the summarized dialogue context using the
memory component. Our proposed approach outperforms latent action baselines on
MultiWOZ, a benchmark multi-domain dataset.
| 2,020 | Computation and Language |
Discriminatively-Tuned Generative Classifiers for Robust Natural
Language Inference | While discriminative neural network classifiers are generally preferred,
recent work has shown advantages of generative classifiers in terms of data
efficiency and robustness. In this paper, we focus on natural language
inference (NLI). We propose GenNLI, a generative classifier for NLI tasks, and
empirically characterize its performance by comparing it to five baselines,
including discriminative models and large-scale pretrained language
representation models like BERT. We explore training objectives for
discriminative fine-tuning of our generative classifiers, showing improvements
over log loss fine-tuning from prior work. In particular, we find strong
results with a simple unbounded modification to log loss, which we call the
"infinilog loss". Our experiments show that GenNLI outperforms both
discriminative and pretrained baselines across several challenging NLI
experimental settings, including small training sets, imbalanced label
distributions, and label noise.
| 2,020 | Computation and Language |
Assessing Phrasal Representation and Composition in Transformers | Deep transformer models have pushed performance on NLP tasks to new limits,
suggesting sophisticated treatment of complex linguistic inputs, such as
phrases. However, we have limited understanding of how these models handle
representation of phrases, and whether this reflects sophisticated composition
of phrase meaning like that done by humans. In this paper, we present
systematic analysis of phrasal representations in state-of-the-art pre-trained
transformers. We use tests leveraging human judgments of phrase similarity and
meaning shift, and compare results before and after control of word overlap, to
tease apart lexical effects versus composition effects. We find that phrase
representation in these models relies heavily on word content, with little
evidence of nuanced composition. We also identify variations in phrase
representation quality across models, layers, and representation types, and
make corresponding recommendations for usage of representations from these
models.
| 2,020 | Computation and Language |
Improving Attention Mechanism with Query-Value Interaction | The attention mechanism has played a critical role in various state-of-the-art NLP
models such as Transformer and BERT. It can be formulated as a ternary function
that maps the input queries, keys and values into an output by using a
summation of values weighted by the attention weights derived from the
interactions between queries and keys. Similar to query-key interactions,
there is also inherent relatedness between queries and values, and
incorporating query-value interactions has the potential to enhance the output
by learning customized values according to the characteristics of queries.
However, the query-value interactions are ignored by existing attention
methods, which may not be optimal. In this paper, we propose to improve the
existing attention mechanism by incorporating query-value interactions. We
propose a query-value interaction function which can learn query-aware
attention values, and combine them with the original values and attention
weights to form the final output. Extensive experiments on four datasets for
different tasks show that our approach can consistently improve the performance
of many attention-based models by incorporating query-value interactions.
| 2,020 | Computation and Language |
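The ternary formulation in the abstract above maps directly to code; the sketch below shows standard scaled dot-product attention extended with a hypothetical query-value interaction, where the sigmoid gating form is an assumption, not the paper's exact interaction function.

```python
# Sketch of scaled dot-product attention extended with a hypothetical
# query-value interaction; the sigmoid gating form is illustrative.
import torch
import torch.nn.functional as F

def qv_attention(Q, K, V, W_qv):
    d = Q.size(-1)
    weights = F.softmax(Q @ K.transpose(-2, -1) / d ** 0.5, dim=-1)
    context = weights @ V                   # standard attention output
    gate = torch.sigmoid(Q @ W_qv)          # query-conditioned value gate
    return gate * context + (1 - gate) * V.mean(dim=-2, keepdim=True)

Q, K, V = (torch.randn(2, 5, 64) for _ in range(3))
W_qv = torch.randn(64, 64)
print(qv_attention(Q, K, V, W_qv).shape)    # torch.Size([2, 5, 64])
```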
ALFWorld: Aligning Text and Embodied Environments for Interactive
Learning | Given a simple request like Put a washed apple in the kitchen fridge, humans
can reason in purely abstract terms by imagining action sequences and scoring
their likelihood of success, prototypicality, and efficiency, all without
moving a muscle. Once we see the kitchen in question, we can update our
abstract plans to fit the scene. Embodied agents require the same abilities,
but existing work does not yet provide the infrastructure necessary for both
reasoning abstractly and executing concretely. We address this limitation by
introducing ALFWorld, a simulator that enables agents to learn abstract, text
based policies in TextWorld (Côté et al., 2018) and then execute goals from
the ALFRED benchmark (Shridhar et al., 2020) in a rich visual environment.
ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge,
learned in TextWorld, corresponds directly to concrete, visually grounded
actions. In turn, as we demonstrate empirically, this fosters better agent
generalization than training only in the visually grounded environment.
BUTLER's simple, modular design factors the problem to allow researchers to
focus on models for improving every piece of the pipeline (language
understanding, planning, navigation, and visual scene understanding).
| 2,021 | Computation and Language |
Improving Long-Tail Relation Extraction with Collaborating
Relation-Augmented Attention | The wrong-labeling problem and long-tail relations are two main challenges caused
by distant supervision in relation extraction. Recent works alleviate the wrong
labeling by selective attention via multi-instance learning, but cannot handle
long-tail relations well even when hierarchies of the relations are introduced
to share knowledge. In this work, we propose a novel neural network,
Collaborating Relation-augmented Attention (CoRA), to handle both the wrong
labeling and long-tail relations. In particular, we first propose a
relation-augmented attention network as the base model. It operates on a sentence
bag with sentence-to-relation attention to minimize the effect of wrong labeling.
Then, facilitated by the proposed base model, we introduce collaborating
relation features shared among relations in the hierarchies to promote the
relation-augmenting process and balance the training data for long-tail
relations. Besides the main training objective to predict the relation of a
sentence bag, an auxiliary objective is utilized to guide the
relation-augmenting process for a more accurate bag-level representation. In
the experiments on the popular benchmark dataset NYT, the proposed CoRA
improves the prior state-of-the-art performance by a large margin in terms of
Precision@N, AUC and Hits@K. Further analyses verify its superior capability in
handling long-tail relations in contrast to the competitors.
| 2,020 | Computation and Language |
Detect All Abuse! Toward Universal Abusive Language Detection Models | Online abusive language detection (ALD) has become a societal issue of
increasing importance in recent years. Several previous works in online ALD
focused on solving a single abusive language problem in a single domain, like
Twitter, and have not transferred successfully to the general ALD task or other
domains. In this paper, we introduce a new generic ALD framework, MACAS, which
is capable of addressing several types of ALD tasks across different domains.
Our generic framework covers multi-aspect abusive language embeddings that
represent the target and content aspects of abusive language and applies a
textual graph embedding that analyses the user's linguistic behaviour. Then, we
propose and use the cross-attention gate flow mechanism to embrace multiple
aspects of abusive language. Quantitative and qualitative evaluation results
show that our ALD algorithm rivals or exceeds the six state-of-the-art ALD
algorithms across seven ALD datasets covering multiple aspects of abusive
language and different online community domains.
| 2,020 | Computation and Language |
An Empirical Study on Model-agnostic Debiasing Strategies for Robust
Natural Language Inference | The prior work on natural language inference (NLI) debiasing mainly targets
one or a few known biases while not necessarily making the models more robust.
In this paper, we focus on the model-agnostic debiasing strategies and explore
how to (or is it possible to) make the NLI models robust to multiple distinct
adversarial attacks while keeping or even strengthening the models'
generalization power. We first benchmark prevailing neural NLI models
including pretrained ones on various adversarial datasets. We then try to
combat distinct known biases by modifying a mixture of experts (MoE) ensemble
method and show that it's nontrivial to mitigate multiple NLI biases at the
same time, and that model-level ensemble method outperforms MoE ensemble
method. We also perform data augmentation, including text swap, word
substitution and paraphrasing, and show its effectiveness in combating various
(though not all) adversarial attacks at the same time. Finally, we investigate
several methods to merge heterogeneous training data (1.35M) and perform model
ensembling, which are straightforward but effective to strengthen NLI models.
| 2,020 | Computation and Language |
TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling | We present a novel approach to the problem of text style transfer. Unlike
previous approaches requiring style-labeled training data, our method makes use
of readily-available unlabeled text by relying on the implicit connection in
style between adjacent sentences, and uses labeled data only at inference time.
We adapt T5 (Raffel et al., 2020), a strong pretrained text-to-text model, to
extract a style vector from text and use it to condition the decoder to perform
style transfer. As our label-free training results in a style vector space
encoding many facets of style, we recast transfers as "targeted restyling"
vector operations that adjust specific attributes of the input while preserving
others. We demonstrate that training on unlabeled Amazon reviews data results
in a model that is competitive on sentiment transfer, even compared to models
trained fully on labeled data. Furthermore, applying our novel method to a
diverse corpus of unlabeled web text results in a single model capable of
transferring along multiple dimensions of style (dialect, emotiveness,
formality, politeness, sentiment) despite no additional training and using only
a handful of exemplars at inference time.
| 2,021 | Computation and Language |
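The "targeted restyling" operation in the abstract above is, at heart, vector arithmetic on style vectors; the sketch below is a hypothetical rendering of it, where `extract_style` and the scale `lam` stand in for the trained style extractor and tuning knob.

```python
# Hypothetical sketch of "targeted restyling" as vector arithmetic on style
# vectors; extract_style and lam stand in for the trained style extractor
# and tuning knob described in the abstract.
import numpy as np

def restyle_delta(src_exemplars, tgt_exemplars, extract_style, lam=1.0):
    # Direction moving a style vector from the source attribute toward the
    # target attribute, ideally leaving other style facets untouched.
    src = np.mean([extract_style(t) for t in src_exemplars], axis=0)
    tgt = np.mean([extract_style(t) for t in tgt_exemplars], axis=0)
    return lam * (tgt - src)

# Toy stand-in for a style extractor; a real one comes from the adapted T5.
toy_style = lambda t: np.array([float(len(t)), float(t.count("!"))])
delta = restyle_delta(["it is acceptable."], ["wow, amazing!!"], toy_style)
print(delta)   # add this to the input's style vector before decoding
```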
On the importance of pre-training data volume for compact language
models | Recent advances in language modeling have led to computationally intensive
and resource-demanding state-of-the-art models. In an effort towards
sustainable practices, we study the impact of pre-training data volume on
compact language models. Multiple BERT-based models are trained on gradually
increasing amounts of French text. Through fine-tuning on the French Question
Answering Dataset (FQuAD), we observe that well-performing models are obtained
with as little as 100 MB of text. In addition, we show that, beyond critically
low amounts of pre-training data, an intermediate pre-training step on the
task-specific corpus does not yield substantial improvements.
| 2,020 | Computation and Language |
Extracting a Knowledge Base of Mechanisms from COVID-19 Papers | The COVID-19 pandemic has spawned a diverse body of scientific literature
that is challenging to navigate, stimulating interest in automated tools to
help find useful knowledge. We pursue the construction of a knowledge base (KB)
of mechanisms -- a fundamental concept across the sciences encompassing
activities, functions and causal relations, ranging from cellular processes to
economic impacts. We extract this information from the natural language of
scientific papers by developing a broad, unified schema that strikes a balance
between relevance and breadth. We annotate a dataset of mechanisms with our
schema and train a model to extract mechanism relations from papers. Our
experiments demonstrate the utility of our KB in supporting interdisciplinary
scientific search over COVID-19 literature, outperforming the prominent PubMed
search in a study with clinical experts.
| 2,021 | Computation and Language |
Two are Better than One: Joint Entity and Relation Extraction with
Table-Sequence Encoders | Named entity recognition and relation extraction are two important
fundamental problems. Joint learning algorithms have been proposed to solve
both tasks simultaneously, and many of them cast the joint task as a
table-filling problem. However, they typically focused on learning a single
encoder (usually learning representation in the form of a table) to capture
information required for both tasks within the same space. We argue that it can
be beneficial to design two distinct encoders to capture such two different
types of information in the learning process. In this work, we propose the
novel {\em table-sequence encoders} where two different encoders -- a table
encoder and a sequence encoder are designed to help each other in the
representation learning process. Our experiments confirm the advantages of
having {\em two} encoders over {\em one} encoder. On several standard datasets,
our model shows significant improvements over existing approaches.
| 2,020 | Computation and Language |