Titles | Abstracts | Years | Categories |
---|---|---|---|
Rewarding Coreference Resolvers for Being Consistent with World
Knowledge | Unresolved coreference is a bottleneck for relation extraction, and
high-quality coreference resolvers may produce an output that makes it a lot
easier to extract knowledge triples. We show how to improve coreference
resolvers by forwarding their input to a relation extraction system and rewarding
the resolvers for producing triples that are found in knowledge bases. Since
relation extraction systems can rely on different forms of supervision and be
biased in different ways, we obtain the best performance, improving over the
state of the art, using multi-task reinforcement learning.
| 2,019 | Computation and Language |
Reading Comprehension Ability Test-A Turing Test for Reading
Comprehension | Reading comprehension is an important ability of human intelligence. Literacy
and numeracy are the two most essential foundations for people to succeed at
study, at work, and in life. Reading comprehension ability is a core component
of literacy. In most education systems, developing reading comprehension
ability is compulsory in the curriculum from year one to year 12. It is an
indispensable ability in the dissemination of knowledge. With emerging
artificial intelligence, computers are starting to read and understand like
people in some contexts. They can even read better than human beings on some
tasks, yet have little clue on others. It would be very beneficial if we could
identify the levels of machine comprehension ability, which would direct
further improvement. The Turing test is a well-known test of the difference
between computer intelligence and human intelligence. In order to compare how
people and machines read, we propose a test called the (reading) Comprehension
Ability Test (CAT). CAT is similar to the Turing test: passing it means we
cannot differentiate people from algorithms in terms of their comprehension
ability. CAT has multiple levels reflecting different abilities in reading
comprehension, from identifying basic facts and performing inference to
understanding intent and sentiment.
| 2,019 | Computation and Language |
A Discussion on Influence of Newspaper Headlines on Social Media | Newspaper headlines contribute substantially to, and have an influence on,
social media. This work studies the durability of the impact of verbs and
adjectives in headlines and determines the factors responsible for the nature
of their influence on social media. Each headline has been categorized as
positive, negative, or neutral based on its sentiment score. Initial results
show that the intensity of a headline's sentiment is positively correlated
with its social media impressions. Additionally, verbs and adjectives show a
relation with the sentiment scores.
| 2,019 | Computation and Language |
FlowSeq: Non-Autoregressive Conditional Sequence Generation with
Generative Flow | Most sequence-to-sequence (seq2seq) models are autoregressive; they generate
each token by conditioning on previously generated tokens. In contrast,
non-autoregressive seq2seq models generate all tokens in one pass, which leads
to increased efficiency through parallel processing on hardware such as GPUs.
However, directly modeling the joint distribution of all tokens simultaneously
is challenging, and even with increasingly complex model structures accuracy
lags significantly behind autoregressive models. In this paper, we propose a
simple, efficient, and effective model for non-autoregressive sequence
generation using latent variable models. Specifically, we turn to generative
flow, an elegant technique to model complex distributions using neural
networks, and design several layers of flow tailored for modeling the
conditional density of sequential latent variables. We evaluate this model on
three neural machine translation (NMT) benchmark datasets, achieving comparable
performance with state-of-the-art non-autoregressive NMT models and almost
constant decoding time w.r.t. the sequence length.
| 2,019 | Computation and Language |
Robustness to Modification with Shared Words in Paraphrase
Identification | Revealing the robustness issues of natural language processing models and
improving their robustness is important to their performance under difficult
situations. In this paper, we study the robustness of paraphrase identification
models from a new perspective -- via modification with shared words, and we
show that the models have significant robustness issues when facing such
modifications. To modify an example consisting of a sentence pair, we either
replace some words shared by both sentences or introduce new shared words. We
aim to construct a valid new example such that a target model makes a wrong
prediction. To find a modification solution, we use beam search constrained by
heuristic rules, and we leverage a BERT masked language model for generating
substitution words compatible with the context. Experiments show that the
performance of the target models has a dramatic drop on the modified examples,
thereby revealing the robustness issue. We also show that adversarial training
can mitigate this issue.
| 2,020 | Computation and Language |
Investigating BERT's Knowledge of Language: Five Analysis Methods with
NPIs | Though state-of-the-art sentence representation models can perform tasks
requiring significant knowledge of grammar, it is an open question how best to
evaluate their grammatical knowledge. We explore five experimental methods
inspired by prior work evaluating pretrained sentence representation models. We
use a single linguistic phenomenon, negative polarity item (NPI) licensing in
English, as a case study for our experiments. NPIs like "any" are grammatical
only if they appear in a licensing environment like negation ("Sue doesn't have
any cats" vs. "Sue has any cats"). This phenomenon is challenging because of
the variety of NPI licensing environments that exist. We introduce an
artificially generated dataset that manipulates key features of NPI licensing
for the experiments. We find that BERT has significant knowledge of these
features, but its success varies widely across different experimental methods.
We conclude that a variety of methods is necessary to reveal all relevant
aspects of a model's grammatical knowledge in a given domain.
| 2,019 | Computation and Language |
Syntax-Aware Aspect Level Sentiment Classification with Graph Attention
Networks | Aspect level sentiment classification aims to identify the sentiment
expressed towards an aspect given a context sentence. Previous neural-network-based
methods largely ignore the syntactic structure of the sentence. In this
paper, we propose a novel target-dependent graph attention network (TD-GAT) for
aspect level sentiment classification, which explicitly utilizes the dependency
relationship among words. Using the dependency graph, it propagates sentiment
features directly from the syntactic context of an aspect target. In our
experiments, we show our method outperforms multiple baselines with GloVe
embeddings. We also demonstrate that using BERT representations further
substantially boosts the performance.
| 2,019 | Computation and Language |
Broad-Coverage Semantic Parsing as Transduction | We unify different broad-coverage semantic parsing tasks under a transduction
paradigm, and propose an attention-based neural framework that incrementally
builds a meaning representation via a sequence of semantic relations. By
leveraging multiple attention mechanisms, the transducer can be effectively
trained without relying on a pre-trained aligner. Experiments conducted on
three separate broad-coverage semantic parsing tasks -- AMR, SDP and UCCA --
demonstrate that our attention-based neural transducer improves the state of
the art on both AMR and UCCA, and is competitive with the state of the art on
SDP.
| 2,019 | Computation and Language |
TEASPN: Framework and Protocol for Integrated Writing Assistance
Environments | Language technologies play a key role in assisting people with their writing.
Although there has been steady progress in, e.g., grammatical error correction
(GEC), human writers are yet to benefit from this progress due to the high
development cost of integrating with writing software. We propose TEASPN, a
protocol and an open-source framework for achieving integrated writing
assistance environments. The protocol standardizes the way writing software
communicates with servers that implement such technologies, allowing developers
and researchers to integrate the latest developments in natural language
processing (NLP) at low cost. As a result, users can enjoy an integrated
experience in their favorite writing software. The results from experiments
with human participants show that users use a wide range of technologies and
rate their writing experience favorably, allowing them to write more fluent
text.
| 2,019 | Computation and Language |
MoverScore: Text Generation Evaluating with Contextualized Embeddings
and Earth Mover Distance | A robust evaluation metric has a profound impact on the development of text
generation systems. A desirable metric compares system output against
references based on their semantics rather than surface forms. In this paper we
investigate strategies to encode system and reference texts to devise a metric
that shows a high correlation with human judgment of text quality. We validate
our new metric, namely MoverScore, on a number of text generation tasks
including summarization, machine translation, image captioning, and
data-to-text generation, where the outputs are produced by a variety of neural
and non-neural systems. Our findings suggest that metrics combining
contextualized representations with a distance measure perform the best. Such
metrics also demonstrate strong generalization capability across tasks. For
ease of use, we make our metrics available as a web service.
| 2,019 | Computation and Language |
Effective Use of Transformer Networks for Entity Tracking | Tracking entities in procedural language requires understanding the
transformations arising from actions on entities as well as those entities'
interactions. While self-attention-based pre-trained language encoders like GPT
and BERT have been successfully applied across a range of natural language
understanding tasks, their ability to handle the nuances of procedural texts is
still untested. In this paper, we explore the use of pre-trained transformer
networks for entity tracking tasks in procedural text. First, we test standard
lightweight approaches for prediction with pre-trained transformers, and find
that these approaches underperform even simple baselines. We show that much
stronger results can be attained by restructuring the input to guide the
transformer model to focus on a particular entity. Second, we assess the degree
to which transformer networks capture the process dynamics, investigating such
factors as merged entities and oblique entity references. On two different
tasks, ingredient detection in recipes and QA over scientific processes, we
achieve state-of-the-art results, but our models still largely attend to
shallow context clues and do not form complex representations of intermediate
entity or process state.
| 2,019 | Computation and Language |
In Plain Sight: Media Bias Through the Lens of Factual Reporting | The increasing prevalence of political bias in news media calls for greater
public awareness of it, as well as robust methods for its detection. While
prior work in NLP has primarily focused on the lexical bias captured by
linguistic attributes such as word choice and syntax, other types of bias stem
from the actual content selected for inclusion in the text. In this work, we
investigate the effects of informational bias: factual content that can
nevertheless be deployed to sway reader opinion. We first produce a new
dataset, BASIL, of 300 news articles annotated with 1,727 bias spans and find
evidence that informational bias appears in news articles more frequently than
lexical bias. We further study our annotations to observe how informational
bias surfaces in news articles by different media outlets. Lastly, a baseline
model for informational bias prediction is presented by fine-tuning BERT on our
labeled data, indicating the challenges of the task and future directions.
| 2,019 | Computation and Language |
Incorporating External Knowledge into Machine Reading for Generative
Question Answering | Commonsense and background knowledge is required for a QA model to answer
many nontrivial questions. Different from existing work on knowledge-aware QA,
we focus on a more challenging task of leveraging external knowledge to
generate answers in natural language for a given question with context.
In this paper, we propose a new neural model, Knowledge-Enriched Answer
Generator (KEAG), which is able to compose a natural answer by exploiting and
aggregating evidence from all four information sources available: question,
passage, vocabulary and knowledge. During the process of answer generation,
KEAG adaptively determines when to utilize symbolic knowledge and which fact
from the knowledge is useful. This allows the model to exploit external
knowledge that is not explicitly stated in the given text, but that is relevant
for generating an answer. The empirical study on public benchmark of answer
generation demonstrates that KEAG improves answer quality over models without
knowledge and existing knowledge-aware models, confirming its effectiveness in
leveraging knowledge.
| 2,019 | Computation and Language |
Effective Search of Logical Forms for Weakly Supervised Knowledge-Based
Question Answering | Many algorithms for Knowledge-Based Question Answering (KBQA) depend on
semantic parsing, which translates a question to its logical form. When only
weak supervision is provided, it is usually necessary to search valid logical
forms for model training. However, a complex question typically involves a huge
search space, which creates two main problems: 1) the solutions limited by
computation time and memory usually reduce the success rate of the search, and
2) spurious logical forms in the search results degrade the quality of training
data. These two problems lead to a poorly-trained semantic parsing model. In
this work, we propose an effective search method for weakly supervised KBQA
based on operator prediction for questions. With search space constrained by
predicted operators, sufficient search paths can be explored, more valid
logical forms can be derived, and operators possibly causing spurious logical
forms can be avoided. As a result, a larger proportion of questions in a weakly
supervised training set are equipped with logical forms, and fewer spurious
logical forms are generated. Such high-quality training data directly
contributes to a better semantic parsing model. Experimental results on one of
the largest KBQA datasets (i.e., CSQA) verify the effectiveness of our
approach: improving the precision from 67% to 72% and the recall from 67% to
72% in terms of the overall score.
| 2,019 | Computation and Language |
Towards Multimodal Emotion Recognition in German Speech Events in Cars
using Transfer Learning | The recognition of emotions by humans is a complex process which considers
multiple interacting signals such as facial expressions and both prosody and
semantic content of utterances. Commonly, research on automatic recognition of
emotions is, with few exceptions, limited to one modality. We describe an
in-car experiment for emotion recognition from speech interactions for three
modalities: the audio signal of a spoken interaction, the visual signal of the
driver's face, and the manually transcribed content of utterances of the
driver. We use off-the-shelf tools for emotion detection in audio and face and
compare that to a neural transfer learning approach for emotion recognition
from text which utilizes existing resources from other domains. We see that
transfer learning enables models based on out-of-domain corpora to perform
well. This method contributes up to 10 percentage points in F1, with a
micro-average F1 of up to 76 across the emotions joy, annoyance, and insecurity.
Our findings also indicate that off-the-shelf tools analyzing face and audio are
not ready yet for emotion detection in in-car speech interactions without
further adjustments.
| 2,019 | Computation and Language |
Giveme5W1H: A Universal System for Extracting Main Events from News
Articles | Event extraction from news articles is a commonly required prerequisite for
various tasks, such as article summarization, article clustering, and news
aggregation. Due to the lack of universally applicable and publicly available
methods tailored to news datasets, many researchers redundantly implement event
extraction methods for their own projects. The journalistic 5W1H questions are
capable of describing the main event of an article, i.e., by answering who did
what, when, where, why, and how. We provide an in-depth description of an
improved version of Giveme5W1H, a system that uses syntactic and
domain-specific rules to automatically extract the relevant phrases from
English news articles to provide answers to these 5W1H questions. Given the
answers to these questions, the system determines an article's main event. In
an expert evaluation with three assessors and 120 articles, we determined an
overall precision of p=0.73, and p=0.82 for answering the first four W
questions, which alone can sufficiently summarize the main event reported on in
a news article. We recently made our system publicly available, and it remains
the only universal open-source 5W1H extractor capable of being applied to a
wide range of use cases in news analysis.
| 2,019 | Computation and Language |
Features in Extractive Supervised Single-document Summarization: Case of
Persian News | Text summarization has been one of the most challenging areas of research in
NLP. Much effort has been made to overcome this challenge by using either the
abstractive or extractive methods. Extractive methods are more popular, due to
their simplicity compared with the more elaborate abstractive methods. In
extractive approaches, the system will not generate sentences. Instead, it
learns how to score sentences within the text by using some textual features
and subsequently selecting those with the highest rank. Therefore, the core
objective is ranking, and it depends highly on the document. This dependency has
gone unnoticed in many state-of-the-art solutions. In this work, the features
of the document are integrated into vectors of every sentence. In this way, the
system becomes informed about the context, increases the precision of the
learned model and consequently produces comprehensive and brief summaries.
| 2,019 | Computation and Language |
#MeTooMaastricht: Building a chatbot to assist survivors of sexual
harassment | Inspired by the recent social movement of #MeToo, we are building a chatbot
to assist survivors of sexual harassment (designed for the city of Maastricht
but easily extensible). The motivation behind this work is twofold: to
properly assist survivors of such events by directing them to appropriate
institutions that can offer them help, and to increase incident documentation
so as to gather more data about harassment cases, which are currently
underreported. We break down the problem into three data science/machine
learning components: harassment type identification (treated as a
classification problem), spatio-temporal information extraction (treated as a
Named Entity Recognition problem), and dialogue with the users (treated as a
slot-filling based chatbot). We achieve a success rate of more than 98% for
identifying whether a case constitutes harassment and around 80% for
identifying the specific type of harassment. Locations and dates are
identified with more than 90% accuracy, while time occurrences prove more
challenging, at almost 80%. Finally, initial validation of the chatbot shows
great potential for the further development and deployment of such a socially
beneficial tool.
| 2,019 | Computation and Language |
Don't Forget the Long Tail! A Comprehensive Analysis of Morphological
Generalization in Bilingual Lexicon Induction | Human translators routinely have to translate rare inflections of words - due
to the Zipfian distribution of words in a language. When translating from
Spanish, a good translator would have no problem identifying the proper
translation of a statistically rare inflection such as habláramos. Note that
the lexeme itself, hablar, is relatively common. In this work, we investigate
whether state-of-the-art bilingual lexicon inducers are capable of learning
this kind of generalization. We introduce 40 morphologically complete
dictionaries in 10 languages and evaluate three of the state-of-the-art models
on the task of translation of less frequent morphological forms. We demonstrate
that the performance of state-of-the-art models drops considerably when
evaluated on infrequent morphological inflections and then show that adding a
simple morphological constraint at training time improves the performance,
proving that the bilingual lexicon inducers can benefit from better encoding of
morphology.
| 2,019 | Computation and Language |
A systematic comparison of methods for low-resource dependency parsing
on genuinely low-resource languages | Parsers are available for only a handful of the world's languages, since they
require lots of training data. How far can we get with just a small amount of
training data? We systematically compare a set of simple strategies for
improving low-resource parsers: data augmentation, which has not been tested
before; cross-lingual training; and transliteration. Experimenting on three
typologically diverse low-resource languages---North Sámi, Galician, and
Kazakh---we find that (1) when only the low-resource treebank is available, data
augmentation is very helpful; (2) when a related high-resource treebank is
available, cross-lingual training is helpful and complements data augmentation;
and (3) when the high-resource treebank uses a different writing system,
transliteration into a shared orthographic space is also very helpful.
| 2,019 | Computation and Language |
Supervised Multimodal Bitransformers for Classifying Images and Text | Self-supervised bidirectional transformer models such as BERT have led to
dramatic improvements in a wide variety of textual classification tasks. The
modern digital world is increasingly multimodal, however, and textual
information is often accompanied by other modalities such as images. We
introduce a supervised multimodal bitransformer model that fuses information
from text and image encoders, and obtain state-of-the-art performance on
various multimodal classification benchmark tasks, outperforming strong
baselines, including on hard test sets specifically designed to measure
multimodal performance.
| 2,020 | Computation and Language |
Extracting and Learning a Dependency-Enhanced Type Lexicon for Dutch | This thesis is concerned with type-logical grammars and their practical
applicability as tools of reasoning about sentence syntax and semantics. The
focal point is narrowed to Dutch, a language exhibiting a large degree of word
order variability. In order to overcome difficulties arising as a result of
that variability, the thesis explores and expands upon a type grammar based on
Multiplicative Intuitionistic Linear Logic, agnostic to word order but enriched
with decorations that aim to reduce its proof-theoretic complexity. An
algorithm for the conversion of dependency-annotated sentences into type
sequences is then implemented, populating the type logic with concrete,
data-driven lexical types. Two experiments are run on the resulting grammar
instantiation. The first pertains to the learnability of the type-assignment
process by a neural architecture. A novel application of a self-attentive
sequence transduction model is proposed; contrary to established practices, it
constructs types inductively by internalizing the type-formation syntax, thus
exhibiting generalizability beyond a pre-specified type vocabulary. The second
revolves around a deductive parsing system that can resolve structural
ambiguities by consulting both word and type information; preliminary results
suggest both excellent computational efficiency and performance.
| 2,019 | Computation and Language |
User Evaluation of a Multi-dimensional Statistical Dialogue System | We present the first complete spoken dialogue system driven by a
multi-dimensional statistical dialogue manager. This framework has been shown
to substantially reduce data needs by leveraging domain-independent dimensions,
such as social obligations or feedback, which (as we show) can be transferred
between domains. In this paper, we conduct a user study and show that the
performance of a multi-dimensional system, which can be adapted from a source
domain, is equivalent to that of a one-dimensional baseline, which can only be
trained from scratch.
| 2,019 | Computation and Language |
RNN Architecture Learning with Sparse Regularization | Neural models for NLP typically use large numbers of parameters to reach
state-of-the-art performance, which can lead to excessive memory usage and
increased runtime. We present a structure learning method for learning sparse,
parameter-efficient NLP models. Our method applies group lasso to rational RNNs
(Peng et al., 2018), a family of models that is closely connected to weighted
finite-state automata (WFSAs). We take advantage of rational RNNs' natural
grouping of the weights, so the group lasso penalty directly removes WFSA
states, substantially reducing the number of parameters in the model. Our
experiments on a number of sentiment analysis datasets, using both GloVe and
BERT embeddings, show that our approach learns neural structures which have
fewer parameters without sacrificing performance relative to parameter-rich
baselines. Our method also highlights the interpretable properties of rational
RNNs. We show that sparsifying such models makes them easier to visualize, and
we present models that rely exclusively on as few as three WFSAs after pruning
more than 90% of the weights. We publicly release our code.
| 2,019 | Computation and Language |
Argument Component Classification for Classroom Discussions | This paper focuses on argument component classification for transcribed
spoken classroom discussions, with the goal of automatically classifying
student utterances into claims, evidence, and warrants. We show that an
existing method for argument component classification developed for another
educationally-oriented domain performs poorly on our dataset. We then show that
feature sets from prior work on argument mining for student essays and online
dialogues can be used to improve performance considerably. We also provide a
comparison between convolutional neural networks and recurrent neural networks
when trained under different conditions to classify argument components in
classroom discussions. While neural network models are not always able to
outperform a logistic regression model, we were able to gain some useful
insights: convolutional networks are more robust than recurrent networks both
at the character and at the word level, and specificity information can help
boost performance in multi-task training.
| 2,018 | Computation and Language |
Annotating Student Talk in Text-based Classroom Discussions | Classroom discussions in English Language Arts have a positive effect on
students' reading, writing and reasoning skills. Although prior work has
largely focused on teacher talk and student-teacher interactions, we focus on
three theoretically-motivated aspects of high-quality student talk:
argumentation, specificity, and knowledge domain. We introduce an annotation
scheme, then show that the scheme can be used to produce reliable annotations
and that the annotations are predictive of discussion quality. We also
highlight opportunities provided by our scheme for education and natural
language processing research.
| 2,018 | Computation and Language |
Uncertain Natural Language Inference | We introduce Uncertain Natural Language Inference (UNLI), a refinement of
Natural Language Inference (NLI) that shifts away from categorical labels,
targeting instead the direct prediction of subjective probability assessments.
We demonstrate the feasibility of collecting annotations for UNLI by relabeling
a portion of the SNLI dataset under a probabilistic scale, where items even
with the same categorical label differ in how likely people judge them to be
true given a premise. We describe a direct scalar regression modeling approach,
and find that existing categorically labeled NLI data can be used in
pre-training. Our best models approach human performance, demonstrating that models
may be capable of more subtle inferences than the categorical bin assignment
employed in current NLI tasks.
| 2,020 | Computation and Language |
Deep learning with sentence embeddings pre-trained on biomedical corpora
improves the performance of finding similar sentences in electronic medical
records | Capturing sentence semantics plays a vital role in a range of text mining
applications. Despite continuous efforts on the development of related datasets
and models in the general domain, both datasets and models are limited in
biomedical and clinical domains. The BioCreative/OHNLP organizers have made the
first attempt to annotate 1,068 sentence pairs from clinical notes and have
called for a community effort to tackle the Semantic Textual Similarity
(BioCreative/OHNLP STS) challenge. We developed models using traditional
machine learning and deep learning approaches. For the post challenge, we focus
on two models: the Random Forest and the Encoder Network. We applied sentence
embeddings pre-trained on PubMed abstracts and MIMIC-III clinical notes and
updated the Random Forest and the Encoder Network accordingly. The official
results demonstrated our best submission was the ensemble of eight models. It
achieved a Pearson correlation coefficient of 0.8328, the highest performance
among 13 submissions from 4 teams. For the post challenge, the performance of
both Random Forest and the Encoder Network was improved; in particular, the
correlation of the Encoder Network was improved by ~13%. During the challenge
task, no end-to-end deep learning models had better performance than machine
learning models that take manually-crafted features. In contrast, with the
sentence embeddings pre-trained on biomedical corpora, the Encoder Network now
achieves a correlation of ~0.84, which is higher than the original best model.
The ensembled model taking the improved versions of the Random Forest and
Encoder Network as inputs further increased performance to 0.8528. Deep
learning models with sentence embeddings pre-trained on biomedical corpora
achieve the highest performance on the test set.
| 2,019 | Computation and Language |
"Going on a vacation" takes longer than "Going for a walk": A Study of
Temporal Commonsense Understanding | Understanding time is crucial for understanding events expressed in natural
language. Because people rarely say the obvious, it is often necessary to have
commonsense knowledge about various temporal aspects of events, such as
duration, frequency, and temporal order. However, this important problem has so
far received limited attention. This paper systematically studies this temporal
commonsense problem. Specifically, we define five classes of temporal
commonsense, and use crowdsourcing to develop a new dataset, MCTACO, that
serves as a test set for this task. We find that the best current methods used
on MCTACO are still far behind human performance, by about 20%, and discuss
several directions for improvement. We hope that the new dataset and our study
here can foster more future research on this topic.
| 2,019 | Computation and Language |
Learning to Discriminate Perturbations for Blocking Adversarial Attacks
in Text Classification | Adversarial attacks against machine learning models have threatened various
real-world applications such as spam filtering and sentiment analysis. In this
paper, we propose a novel framework, learning to DIScriminate Perturbations
(DISP), to identify and adjust malicious perturbations, thereby blocking
adversarial attacks for text classification models. To identify adversarial
attacks, a perturbation discriminator validates how likely a token in the text
is perturbed and provides a set of potential perturbations. For each potential
perturbation, an embedding estimator learns to restore the embedding of the
original word based on the context and a replacement token is chosen based on
approximate kNN search. DISP can block adversarial attacks for any NLP model
without modifying the model structure or training procedure. Extensive
experiments on two benchmark datasets demonstrate that DISP significantly
outperforms baseline methods in blocking adversarial attacks for text
classification. In addition, in-depth analysis shows the robustness of DISP
across different situations.
| 2,019 | Computation and Language |
ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and
Multi-turn Comparisons | While dialogue remains an important end-goal of natural language research,
the difficulty of evaluation is an oft-quoted reason why it remains troublesome
to make real progress towards its solution. Evaluation difficulties are
actually two-fold: not only do automatic metrics not correlate well with human
judgments, but also human judgments themselves are in fact difficult to
measure. The two most used human judgment tests, single-turn pairwise
evaluation and multi-turn Likert scores, both have serious flaws as we discuss
in this work.
We instead provide a novel procedure involving comparing two full dialogues,
where a human judge is asked to pay attention to only one speaker within each,
and make a pairwise judgment. The questions themselves are optimized to
maximize the robustness of judgments across different annotators, resulting in
better tests. We also show how these tests work in self-play model chat setups,
resulting in faster, cheaper tests. We hope these tests become the de facto
standard, and will release open-source code to that end.
| 2,019 | Computation and Language |
Abductive Reasoning as Self-Supervision for Common Sense Question
Answering | Question answering has seen significant advances in recent times, especially
with the introduction of increasingly bigger transformer-based models
pre-trained on massive amounts of data. While achieving impressive results on
many benchmarks, their performances appear to be proportional to the amount of
training data available in the target domain. In this work, we explore the
ability of current question-answering models to generalize, both to other
domains and with restricted training data. We find that large amounts of
training data are necessary, both for pre-training as well as fine-tuning to a
task, for the models to perform well on the designated task. We introduce a
novel abductive reasoning approach based on Grenander's Pattern Theory
framework to provide self-supervised domain adaptation cues or "pseudo-labels,"
which can be used instead of expensive human annotations. The proposed
self-supervised training regimen allows for effective domain adaptation without
losing performance compared to fully supervised baselines. Extensive
experiments on two publicly available benchmarks show the efficacy of the
proposed approach. We show that neural network models trained using
self-labeled data can retain up to $75\%$ of the performance of models trained
on large amounts of human-annotated training data.
| 2,019 | Computation and Language |
Attending the Emotions to Detect Online Abusive Language | In recent years, abusive behavior has become a serious issue in online social
networks. In this paper, we present a new corpus from a semi-anonymous social
media platform, which contains instances of offensive and neutral classes.
We introduce a single deep neural architecture that considers both local and
sequential information from the text in order to detect abusive language. Along
with this model, we introduce a new attention mechanism called emotion-aware
attention. This mechanism utilizes the emotions behind the text to find the
most important words within that text. We experiment with this model on our
dataset and later present the analysis. Additionally, we evaluate our proposed
method on different corpora and show new state-of-the-art results with respect
to offensive language detection.
| 2,019 | Computation and Language |
Efficient Sentence Embedding using Discrete Cosine Transform | Vector averaging remains one of the most popular sentence embedding methods
in spite of its obvious disregard for syntactic structure. While more complex
sequential or convolutional networks potentially yield superior classification
performance, the improvements in classification accuracy are typically mediocre
compared to the simple vector averaging. As an efficient alternative, we
propose the use of discrete cosine transform (DCT) to compress word sequences
in an order-preserving manner. The lower order DCT coefficients represent the
overall feature patterns in sentences, which results in suitable embeddings for
tasks that could benefit from syntactic features. Our results in semantic
probing tasks demonstrate that DCT embeddings indeed preserve more syntactic
information compared with vector averaging. With practically equivalent
complexity, the model yields better overall performance in downstream
classification tasks that correlate with syntactic features, which illustrates
the capacity of DCT to preserve word order information.
| 2,019 | Computation and Language |
To lemmatize or not to lemmatize: how word normalisation affects ELMo
performance in word sense disambiguation | We critically evaluate the widespread assumption that deep learning NLP
models do not require lemmatized input. To test this, we trained versions of
contextualised word embedding ELMo models on raw tokenized corpora and on the
corpora with word tokens replaced by their lemmas. Then, these models were
evaluated on the word sense disambiguation task. This was done for the English
and Russian languages.
The experiments showed that while lemmatization is indeed not necessary for
English, the situation is different for Russian. It seems that for
rich-morphology languages, using lemmatized training and testing data yields
small but consistent improvements: at least for word sense disambiguation. This
means that the decisions about text pre-processing before training ELMo should
consider the linguistic nature of the language in question.
| 2,019 | Computation and Language |
Enhancing Machine Translation with Dependency-Aware Self-Attention | Most neural machine translation models only rely on pairs of parallel
sentences, assuming syntactic information is automatically learned by an
attention mechanism. In this work, we investigate different approaches to
incorporate syntactic knowledge in the Transformer model and also propose a
novel, parameter-free, dependency-aware self-attention mechanism that improves
its translation quality, especially for long sentences and in low-resource
scenarios. We show the efficacy of each approach on WMT English-German and
English-Turkish, and WAT English-Japanese translation tasks.
| 2,020 | Computation and Language |
On Extractive and Abstractive Neural Document Summarization with
Transformer Language Models | We present a method to produce abstractive summaries of long documents that
exceed several thousand words via neural abstractive summarization. We perform
a simple extractive step before generating a summary, which is then used to
condition the transformer language model on relevant information before being
tasked with generating a summary. We show that this extractive step
significantly improves summarization results. We also show that this approach
produces more abstractive summaries compared to prior work that employs a copy
mechanism while still achieving higher ROUGE scores. Note: The abstract above
was not written by the authors, it was generated by one of the models presented
in this paper.
| 2,020 | Computation and Language |
KG-BERT: BERT for Knowledge Graph Completion | Knowledge graphs are important resources for many artificial intelligence
tasks but often suffer from incompleteness. In this work, we propose to use
pre-trained language models for knowledge graph completion. We treat triples in
knowledge graphs as textual sequences and propose a novel framework named
Knowledge Graph Bidirectional Encoder Representations from Transformer
(KG-BERT) to model these triples. Our method takes entity and relation
descriptions of a triple as input and computes the scoring function of the triple
with the KG-BERT language model. Experimental results on multiple benchmark
knowledge graphs show that our method can achieve state-of-the-art performance
in triple classification, link prediction and relation prediction tasks.
| 2,019 | Computation and Language |
Deleter: Leveraging BERT to Perform Unsupervised Successive Text
Compression | Text compression has diverse applications such as Summarization, Reading
Comprehension and Text Editing. However, almost all existing approaches require
either hand-crafted features, syntactic labels or parallel data. Even for one
that achieves this task in an unsupervised setting, its architecture
necessitates a task-specific autoencoder. Moreover, these models only generate
one compressed sentence for each source input, so that adapting to different
style requirements (e.g. length) for the final output usually implies
retraining the model from scratch. In this work, we propose a fully
unsupervised model, Deleter, that is able to discover an "optimal deletion
path" for an arbitrary sentence, where each intermediate sequence along the
path is a coherent subsequence of the previous one. This approach relies
exclusively on a pretrained bidirectional language model (BERT) to score each
candidate deletion based on the average perplexity of the resulting sentence
and performs progressive greedy lookahead search to select the best deletion
at each step. We apply Deleter to the task of extractive sentence compression
and find that our model is competitive with state-of-the-art supervised models
trained on 1.02 million in-domain examples, with a similar compression ratio.
Qualitative analysis, as well as automatic and human evaluations both verify
that our model produces high-quality compression.
| 2,019 | Computation and Language |
A Novel Cascade Binary Tagging Framework for Relational Triple
Extraction | Extracting relational triples from unstructured text is crucial for
large-scale knowledge graph construction. However, few existing works excel in
solving the overlapping triple problem where multiple relational triples in the
same sentence share the same entities. In this work, we introduce a fresh
perspective to revisit the relational triple extraction task and propose a
novel cascade binary tagging framework (CasRel) derived from a principled
problem formulation. Instead of treating relations as discrete labels as in
previous works, our new framework models relations as functions that map
subjects to objects in a sentence, which naturally handles the overlapping
problem. Experiments show that the CasRel framework already outperforms
state-of-the-art methods even when its encoder module uses a randomly
initialized BERT encoder, showing the power of the new tagging framework. It
enjoys further performance boost when employing a pre-trained BERT encoder,
outperforming the strongest baseline by absolute gains of 17.5 and 30.2 in F1-score
on two public datasets NYT and WebNLG, respectively. In-depth analysis on
different scenarios of overlapping triples shows that the method delivers
consistent performance gain across all these scenarios. The source code and
data are released online.
| 2,020 | Computation and Language |
MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact
Checking of Claims | We contribute the largest publicly available dataset of naturally occurring
factual claims for the purpose of automatic claim verification. It is collected
from 26 fact checking websites in English, paired with textual sources and rich
metadata, and labelled for veracity by human expert journalists. We present an
in-depth analysis of the dataset, highlighting characteristics and challenges.
Further, we present results for automatic veracity prediction, both with
established baselines and with a novel method for joint ranking of evidence
pages and predicting veracity that outperforms all baselines. Significant
performance increases are achieved by encoding evidence, and by modelling
metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that
this is a challenging testbed for claim veracity prediction.
| 2,019 | Computation and Language |
Semantic Role Labeling with Iterative Structure Refinement | Modern state-of-the-art Semantic Role Labeling (SRL) methods rely on
expressive sentence encoders (e.g., multi-layer LSTMs) but tend to model only
local (if any) interactions between individual argument labeling decisions.
This contrasts with earlier work and also with the intuition that the labels of
individual arguments are strongly interdependent. We model interactions between
argument labeling decisions through {\it iterative refinement}. Starting with
an output produced by a factorized model, we iteratively refine it using a
refinement network. Instead of modeling arbitrary interactions among roles and
words, we encode prior knowledge about the SRL problem by designing a
restricted network architecture capturing non-local interactions. This modeling
choice prevents overfitting and results in an effective model, outperforming
strong factorized baseline models on all 7 CoNLL-2009 languages, and achieving
state-of-the-art results on 5 of them, including English.
| 2,019 | Computation and Language |
Relationships from Entity Stream | Relational reasoning is a central component of intelligent behavior, but has
proven difficult for neural networks to learn. The Relation Network (RN) module
was recently proposed by DeepMind to solve such problems, and demonstrated
state-of-the-art results on a number of datasets. However, the RN module scales
quadratically in the size of the input, since it calculates relationship
factors between every patch in the visual field, including those that do not
correspond to entities. In this paper, we describe an architecture that enables
relationships to be determined from a stream of entities obtained by an
attention mechanism over the input field. The model is trained end-to-end, and
demonstrates equivalent performance with greater interpretability while
requiring only a fraction of the model parameters of the original RN module.
| 2,019 | Computation and Language |
Dependency Parsing for Spoken Dialog Systems | Dependency parsing of conversational input can play an important role in
language understanding for dialog systems by identifying the relationships
between entities extracted from user utterances. Additionally, effective
dependency parsing can elucidate differences in language structure and usage
for discourse analysis of human-human versus human-machine dialogs. However,
models trained on datasets based on news articles and web data do not perform
well on spoken human-machine dialog, and currently available annotation schemes
do not adapt well to dialog data. Therefore, we propose the Spoken Conversation
Universal Dependencies (SCUD) annotation scheme that extends the Universal
Dependencies (UD) (Nivre et al., 2016) guidelines to spoken human-machine
dialogs. We also provide ConvBank, a conversation dataset between humans and an
open-domain conversational dialog system with SCUD annotation. Finally, to
demonstrate the utility of the dataset, we train a dependency parser on the
ConvBank dataset. We demonstrate that by pre-training a dependency parser on a
set of larger public datasets and fine-tuning on ConvBank data, we achieved the
best result, 85.05% unlabeled and 77.82% labeled attachment accuracy.
| 2,019 | Computation and Language |
LAMOL: LAnguage MOdeling for Lifelong Language Learning | Most research on lifelong learning applies to images or games, but not
language. We present LAMOL, a simple yet effective method for lifelong language
learning (LLL) based on language modeling. LAMOL replays pseudo-samples of
previous tasks while requiring no extra memory or model capacity. Specifically,
LAMOL is a language model that simultaneously learns to solve the tasks and
generate training samples. When the model is trained for a new task, it
generates pseudo-samples of previous tasks for training alongside data for the
new task. The results show that LAMOL prevents catastrophic forgetting without
any sign of intransigence and can perform five very different language tasks
sequentially with only one model. Overall, LAMOL outperforms previous methods
by a considerable margin and is only 2-3% worse than multitasking, which is
usually considered the LLL upper bound. The source code is available at
https://github.com/jojotenya/LAMOL.
| 2,019 | Computation and Language |
Neural Machine Translation with Byte-Level Subwords | Almost all existing machine translation models are built on top of
character-based vocabularies: characters, subwords, or words. Rare characters
from noisy text or character-rich languages such as Japanese and Chinese,
however, can unnecessarily take up vocabulary slots and limit the vocabulary's
compactness. Representing text at the level of bytes and using the 256-byte set
as the vocabulary is a potential solution to this issue. High computational
cost has, however, prevented it from being widely deployed or used in practice.
In this paper, we investigate byte-level subwords, specifically byte-level BPE
(BBPE), which is more compact than a character vocabulary and has no
out-of-vocabulary tokens, but is more efficient than using pure bytes only. We
claim that
contextualizing BBPE embeddings is necessary, which can be implemented by a
convolutional or recurrent layer. Our experiments show that BBPE has comparable
performance to BPE while its size is only 1/8 of that for BPE. In the
multilingual setting, BBPE maximizes vocabulary sharing across many languages
and achieves better translation quality. Moreover, we show that BBPE enables
transferring models between languages with non-overlapping character sets.
| 2,019 | Computation and Language |
Investigating Sports Commentator Bias within a Large Corpus of American
Football Broadcasts | Sports broadcasters inject drama into play-by-play commentary by building
team and player narratives through subjective analyses and anecdotes. Prior
studies based on small datasets and manual coding show that such theatrics
evince commentator bias in sports broadcasts. To examine this phenomenon, we
assemble FOOTBALL, which contains 1,455 broadcast transcripts from American
football games across six decades that are automatically annotated with 250K
player mentions and linked with racial metadata. We identify major confounding
factors for researchers examining racial bias in FOOTBALL, and perform a
computational analysis that supports conclusions from prior social science
studies.
| 2,019 | Computation and Language |
Designing and Interpreting Probes with Control Tasks | Probes, supervised models trained to predict properties (like
parts-of-speech) from representations (like ELMo), have achieved high accuracy
on a range of linguistic tasks. But does this mean that the representations
encode linguistic structure or just that the probe has learned the linguistic
task? In this paper, we propose control tasks, which associate word types with
random outputs, to complement linguistic tasks. By construction, these tasks
can only be learned by the probe itself. So a good probe (one that reflects
the representation) should be selective, achieving high linguistic task
accuracy and low control task accuracy. The selectivity of a probe puts
linguistic task accuracy in context with the probe's capacity to memorize from
word types. We construct control tasks for English part-of-speech tagging and
dependency edge prediction, and show that popular probes on ELMo
representations are not selective. We also find that dropout, commonly used to
control probe complexity, is ineffective for improving selectivity of MLPs, but
that other forms of regularization are effective. Finally, we find that while
probes on the first layer of ELMo yield slightly better part-of-speech tagging
accuracy than the second, probes on the second layer are substantially more
selective, which raises the question of which layer better represents
parts-of-speech.
| 2,019 | Computation and Language |
Quality Estimation for Image Captions Based on Large-scale Human
Evaluations | Automatic image captioning has improved significantly over the last few
years, but the problem is far from being solved, with state of the art models
still often producing low quality captions when used in the wild. In this
paper, we focus on the task of Quality Estimation (QE) for image captions,
which attempts to model the caption quality from a human perspective and
without access to ground-truth references, so that it can be applied at
prediction time to detect low-quality captions produced on previously unseen
images. For this task, we develop a human evaluation process that collects
coarse-grained caption annotations from crowdsourced users, which is then used
to collect a large scale dataset spanning more than 600k caption quality
ratings. We then carefully validate the quality of the collected ratings and
establish baseline models for this new QE task. Finally, we further collect
fine-grained caption quality annotations from trained raters, and use them to
demonstrate that QE models trained over the coarse ratings can effectively
detect and filter out low-quality image captions, thereby improving the user
experience from captioning systems.
| 2,021 | Computation and Language |
Symmetric Regularization based BERT for Pair-wise Semantic Reasoning | The ability of semantic reasoning over the sentence pair is essential for
many natural language understanding tasks, e.g., natural language inference and
machine reading comprehension. A recent significant improvement in these tasks
comes from BERT. As reported, the next sentence prediction (NSP) in BERT, which
learns the contextual relationship between two sentences, is of great
significance for downstream problems with sentence-pair input. Despite the
effectiveness of NSP, we suggest that NSP still lacks the essential signal to
distinguish between entailment and shallow correlation. To remedy this, we
propose to augment the NSP task to a 3-class categorization task, which
includes a category for previous sentence prediction (PSP). The involvement of
PSP encourages the model to focus on the informative semantics to determine the
sentence order, thereby improving its semantic understanding ability. This
simple modification yields remarkable improvement against vanilla BERT. To
further incorporate the document-level information, the scope of NSP and PSP is
expanded into a broader range, i.e., NSP and PSP also include close but
nonsuccessive sentences, the noise of which is mitigated by the label-smoothing
technique. Both qualitative and quantitative experimental results demonstrate
the effectiveness of the proposed method. Our method consistently improves the
performance on the NLI and MRC benchmarks, including the challenging HANS
dataset \cite{hans}, suggesting that the document-level task is still promising
for pre-training.
| 2,021 | Computation and Language |
Conditional Text Generation for Harmonious Human-Machine Interaction | In recent years, with the development of deep learning, text generation
technology has undergone great changes and provided many kinds of services for
human beings, such as restaurant reservation and daily communication. The
automatically generated text is becoming more and more fluent, so researchers
have begun to consider more anthropomorphic text generation technology, that
is, conditional text generation, including emotional text generation,
personalized text generation, and so on. Conditional Text Generation (CTG) has
thus become a research hotspot. We find that much effort has been devoted to
exploring this promising research field. Therefore, we aim to give a
comprehensive review of the new research trends of CTG. We first summarize
several key techniques and illustrate the technical evolution route in the
field of neural text generation, based on the concept model of CTG. We further
investigate existing CTG fields and propose several general learning models for CTG.
Finally, we discuss the open issues and promising research directions of CTG.
| 2,020 | Computation and Language |
Commonsense Knowledge + BERT for Level 2 Reading Comprehension Ability
Test | Commonsense knowledge plays an important role when we read. The performance
of BERT on SQuAD dataset shows that the accuracy of BERT can be better than
human users. However, this does not mean that computers can surpass human
beings in reading comprehension. CommonsenseQA is a large-scale dataset which is
designed based on commonsense knowledge. BERT only achieved an accuracy of
55.9% on it. The result shows that computers cannot apply commonsense knowledge
like human beings to answer questions. The Comprehension Ability Test (CAT) divides
reading comprehension ability into four levels, so human-like comprehension
ability can be achieved level by level. BERT has performed well at level 1, which
does not require common knowledge. In this research, we propose a system which
aims to allow computers to read articles and answer related questions with
commonsense knowledge, like a human being, for CAT level 2. This system consists
of three parts. First, we build a commonsense knowledge graph and then
automatically construct a commonsense knowledge question dataset from it.
Finally, BERT is combined with the commonsense knowledge to achieve the
reading comprehension ability at CAT level 2. Experiments show that it can pass
the CAT as long as the required common knowledge is included in the knowledge
base.
| 2,019 | Computation and Language |
May I Check Again? -- A simple but efficient way to generate and use
contextual dictionaries for Named Entity Recognition. Application to French
Legal Texts | In this paper we present a new method to learn a model robust to typos for a
Named Entity Recognition task. Our improvement over existing methods helps the
model to take into account the context of the sentence inside a court decision
in order to recognize an entity with a typo. We used state-of-the-art models
and enriched the last layer of the neural network with high-level information
linked with the potential of the word to be a certain type of entity. More
precisely, we utilized the similarities between the word and the potential
entity candidates in the tagged sentence context. The experiments on a dataset
of French court decisions show a 32% reduction in relative F1-score error,
raising the score obtained with the most competitive fine-tuned
state-of-the-art system from 94.85% to 96.52%.
| 2,019 | Computation and Language |
Back to the Future -- Sequential Alignment of Text Representations | Language evolves over time in many ways relevant to natural language
processing tasks. For example, recent occurrences of tokens 'BERT' and 'ELMO'
in publications refer to neural network architectures rather than persons. This
type of temporal signal is typically overlooked, but is important if one aims
to deploy a machine learning model over an extended period of time. In
particular, language evolution causes data drift between time-steps in
sequential decision-making tasks. Examples of such tasks include prediction of
paper acceptance for yearly conferences (regular intervals) or author stance
prediction for rumours on Twitter (irregular intervals). Inspired by successes
in computer vision, we tackle data drift by sequentially aligning learned
representations. We evaluate on three challenging tasks varying in terms of
time-scales, linguistic units, and domains. These tasks show our method
outperforming several strong baselines, including using all available data. We
argue that, due to its low computational expense, sequential alignment is a
practical solution to dealing with language evolution.
| 2,019 | Computation and Language |
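The abstract does not spell out the alignment procedure; one common way to sequentially align representations from consecutive time steps, borrowed from subspace-alignment work in computer vision, is orthogonal Procrustes. The sketch below illustrates that idea under those assumptions and is not taken from the paper.

```python
import numpy as np

def procrustes_align(source, target):
    """Orthogonal Procrustes: find the rotation W minimizing ||source @ W - target||_F.
    `source` and `target` are (n, d) matrices of representations for the same
    anchor items (e.g., shared vocabulary) at two consecutive time steps."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

# Hypothetical usage: map 2018-era embeddings into the 2019 space before
# applying a classifier trained on 2019 data.
emb_2018 = np.random.randn(1000, 300)
emb_2019 = np.random.randn(1000, 300)
W = procrustes_align(emb_2018, emb_2019)
aligned_2018 = emb_2018 @ W
```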
Aspect-based Sentiment Classification with Aspect-specific Graph
Convolutional Networks | Due to their inherent capability in semantic alignment of aspects and their
context words, attention mechanism and Convolutional Neural Networks (CNNs) are
widely applied for aspect-based sentiment classification. However, these models
lack a mechanism to account for relevant syntactical constraints and long-range
word dependencies, and hence may mistakenly recognize syntactically irrelevant
contextual words as clues for judging aspect sentiment. To tackle this problem,
we propose to build a Graph Convolutional Network (GCN) over the dependency
tree of a sentence to exploit syntactical information and word dependencies.
Based on it, a novel aspect-specific sentiment classification framework is
raised. Experiments on three benchmarking collections illustrate that our
proposed model has comparable effectiveness to a range of state-of-the-art
models, and further demonstrate that both syntactical information and
long-range word dependencies are properly captured by the graph convolution
structure.
| 2,019 | Computation and Language |
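A minimal sketch of the core operation described above, a graph convolution over a sentence's dependency tree, is shown below. It is illustrative only; the layer sizes, degree normalization, and toy dependency arcs are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gcn_layer(hidden, adj, weight):
    """One graph-convolution step over a sentence's dependency tree.
    hidden: (n_words, d_in) contextual word states, adj: (n_words, n_words)
    adjacency from dependency arcs plus self-loops, weight: (d_in, d_out)."""
    # Normalize by node degree so high-degree words do not dominate.
    degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
    return F.relu((adj @ hidden @ weight) / degree)

# Toy example: 5-word sentence, 3 dependency arcs, 2 stacked GCN layers.
n, d = 5, 16
adj = torch.eye(n)
for head, dep in [(1, 0), (1, 3), (3, 4)]:
    adj[head, dep] = adj[dep, head] = 1.0
h = torch.randn(n, d)
w1, w2 = torch.randn(d, d), torch.randn(d, d)
h = gcn_layer(gcn_layer(h, adj, w1), adj, w2)
```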
Story Realization: Expanding Plot Events into Sentences | Neural network based approaches to automated story plot generation attempt to
learn how to generate novel plots from a corpus of natural language plot
summaries. Prior work has shown that a semantic abstraction of sentences called
events improves neural plot generation and allows one to decompose the
problem into: (1) the generation of a sequence of events (event-to-event) and
(2) the transformation of these events into natural language sentences
(event-to-sentence). However, typical neural language generation approaches to
event-to-sentence can ignore the event details and produce
grammatically-correct but semantically-unrelated sentences. We present an
ensemble-based model that generates natural language guided by events. We
provide results---including a human subjects study---for a full end-to-end
automated story generation system showing that our method generates more
coherent and plausible stories than baseline approaches.
| 2,020 | Computation and Language |
Evaluating Topic Quality with Posterior Variability | Probabilistic topic models such as latent Dirichlet allocation (LDA) are
popularly used with Bayesian inference methods such as Gibbs sampling to learn
posterior distributions over topic model parameters. We derive a novel measure
of LDA topic quality using the variability of the posterior distributions.
Compared to several existing baselines for automatic topic evaluation, the
proposed metric achieves state-of-the-art correlations with human judgments of
topic quality in experiments on three corpora. We additionally demonstrate that
topic quality estimation can be further improved using a supervised estimator
that combines multiple metrics.
| 2,019 | Computation and Language |
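The abstract does not give the exact formula, so the snippet below only illustrates the general idea: score a topic by how stable its word distribution is across retained Gibbs samples. The coefficient-of-variation proxy, the array shapes, and the synthetic data are assumptions for illustration.

```python
import numpy as np

def topic_variability_score(samples, topic, top_n=20):
    """Illustrative topic-quality proxy based on posterior variability.
    samples: array of shape (n_gibbs_samples, n_topics, vocab_size) holding the
    topic-word distribution estimated at each retained Gibbs sample."""
    phi = samples[:, topic, :]                       # (n_samples, vocab)
    mean, std = phi.mean(axis=0), phi.std(axis=0)
    top_words = np.argsort(-mean)[:top_n]            # most probable words on average
    # Lower relative variability of the top words -> more stable topic.
    cv = std[top_words] / (mean[top_words] + 1e-12)
    return -cv.mean()

# Synthetic posterior samples: 50 Gibbs samples, 10 topics, vocabulary of 500.
samples = np.random.dirichlet(np.ones(500), size=(50, 10))
score = topic_variability_score(samples, topic=3)
```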
Multi-Task Bidirectional Transformer Representations for Irony Detection | Supervised deep learning requires large amounts of training data. In the
context of the FIRE2019 Arabic irony detection shared task (IDAT@FIRE2019), we
show how we mitigate this need by fine-tuning the pre-trained bidirectional
encoders from transformers (BERT) on gold data in a multi-task setting. We
further improve our models by additionally pre-training BERT on `in-domain' data,
thus alleviating an issue of dialect mismatch in the Google-released BERT
model. Our best model achieves an 82.4 macro F1 score, and has the unique
advantage of being feature-engineering free (i.e., based exclusively on deep
learning).
| 2,019 | Computation and Language |
Large Scale Question Answering using Tourism Data | We introduce the novel task of answering entity-seeking recommendation
questions using a collection of reviews that describe candidate answer
entities. We harvest a QA dataset that contains 47,124 paragraph-sized real
user questions from travelers seeking recommendations for hotels, attractions
and restaurants. Each question can have thousands of candidate answers to
choose from and each candidate is associated with a collection of unstructured
reviews. This dataset is especially challenging because commonly used neural
architectures for reasoning and QA are prohibitively expensive for a task of
this scale. As a solution, we design a scalable cluster-select-rerank approach.
It first clusters text for each entity to identify exemplar sentences
describing an entity. It then uses a scalable neural information retrieval (IR)
module to select a set of potential entities from the large candidate set. A
reranker uses a deeper attention-based architecture to pick the best answers
from the selected entities. This strategy performs better than a pure IR or a
pure attention-based reasoning approach yielding nearly 25% relative
improvement in Accuracy@3 over both approaches.
| 2,020 | Computation and Language |
Czech Text Processing with Contextual Embeddings: POS Tagging,
Lemmatization, Parsing and NER | Contextualized embeddings, which capture appropriate word meaning depending
on context, have recently been proposed. We evaluate two methods for
precomputing such embeddings, BERT and Flair, on four Czech text processing
tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and
named entity recognition (NER). The first three tasks, POS tagging,
lemmatization and dependency parsing, are evaluated on two corpora: the Prague
Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity
recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We
report state-of-the-art results for the above-mentioned tasks and corpora.
| 2,021 | Computation and Language |
Entity, Relation, and Event Extraction with Contextualized Span
Representations | We examine the capabilities of a unified, multi-task framework for three
information extraction tasks: named entity recognition, relation extraction,
and event extraction. Our framework (called DyGIE++) accomplishes all tasks by
enumerating, refining, and scoring text spans designed to capture local
(within-sentence) and global (cross-sentence) context. Our framework achieves
state-of-the-art results across all tasks, on four datasets from a variety of
domains. We perform experiments comparing different techniques to construct
span representations. Contextualized embeddings like BERT perform well at
capturing relationships among entities in the same or adjacent sentences, while
dynamic span graph updates model long-range cross-sentence relationships. For
instance, propagating span representations via predicted coreference links can
enable the model to disambiguate challenging entity mentions. Our code is
publicly available at https://github.com/dwadden/dygiepp and can be easily
adapted for new tasks or datasets.
| 2,019 | Computation and Language |
QuaRTz: An Open-Domain Dataset of Qualitative Relationship Questions | We introduce the first open-domain dataset, called QuaRTz, for reasoning
about textual qualitative relationships. QuaRTz contains general qualitative
statements, e.g., "A sunscreen with a higher SPF protects the skin longer.",
twinned with 3864 crowdsourced situated questions, e.g., "Billy is wearing
sunscreen with a lower SPF than Lucy. Who will be best protected from the
sun?", plus annotations of the properties being compared. Unlike previous
datasets, the general knowledge is textual and not tied to a fixed set of
relationships, and tests a system's ability to comprehend and apply textual
qualitative knowledge in a novel setting. We find state-of-the-art results are
substantially (20%) below human performance, presenting an open challenge to
the NLP community.
| 2,019 | Computation and Language |
Neural Gaussian Copula for Variational Autoencoder | Variational language models seek to estimate the posterior of latent
variables with an approximated variational posterior. The model often assumes
the variational posterior to be factorized even when the true posterior is not.
The learned variational posterior under this assumption does not capture the
dependency relationships over latent variables. We argue that this would cause
a typical training problem called posterior collapse observed in all other
variational language models. We propose Gaussian Copula Variational Autoencoder
(VAE) to avert this problem. Copula is widely used to model correlation and
dependencies of high-dimensional random variables, and therefore it is helpful
to maintain the dependency relationships that are lost in VAE. The empirical
results show that by modeling the correlation of latent variables explicitly
using a neural parametric copula, we can avert this training difficulty while
getting competitive results among all other VAE approaches.
| 2,019 | Computation and Language |
Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement
Learning | Sensational headlines are headlines that capture people's attention and
generate reader interest. Conventional abstractive headline generation methods,
unlike human writers, do not optimize for maximal reader attention. In this
paper, we propose a model that generates sensational headlines without labeled
data. We first train a sensationalism scorer by classifying online headlines
with many comments ("clickbait") against a baseline of headlines generated from
a summarization model. The score from the sensationalism scorer is used as the
reward for a reinforcement learner. However, maximizing the noisy
sensationalism reward will generate unnatural phrases instead of sensational
headlines. To effectively leverage this noisy reward, we propose a novel loss
function, Auto-tuned Reinforcement Learning (ARL), to dynamically balance
reinforcement learning (RL) with maximum likelihood estimation (MLE). Human
evaluation shows that 60.8% of samples generated by our model are sensational,
which is significantly better than the Pointer-Gen baseline and other RL
models.
| 2,019 | Computation and Language |
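As a rough sketch of balancing an RL reward with MLE for headline generation, the snippet below mixes a REINFORCE-style term with a teacher-forced loss through a single weight. The paper's auto-tuning rule for that weight is not reproduced here; the fixed `weight` hyperparameter and the baseline choice are assumptions.

```python
import torch

def mixed_rl_mle_loss(log_probs_sampled, reward, baseline, nll_mle, weight):
    """Blend a REINFORCE-style sensationalism reward with maximum likelihood.
    log_probs_sampled: summed log-probs of a sampled headline; reward: scalar
    sensationalism score; baseline: reward baseline (e.g., from a greedy decode);
    nll_mle: teacher-forced negative log-likelihood; weight: mixing coefficient
    (the paper tunes this automatically, here it is a plain hyperparameter)."""
    rl_term = -(reward - baseline) * log_probs_sampled
    return weight * rl_term + (1.0 - weight) * nll_mle

loss = mixed_rl_mle_loss(
    log_probs_sampled=torch.tensor(-12.3, requires_grad=True),
    reward=0.8, baseline=0.5,
    nll_mle=torch.tensor(9.1, requires_grad=True),
    weight=0.3,
)
```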
Unsupervised Paraphrasing by Simulated Annealing | Unsupervised paraphrase generation is a promising and important research
topic in natural language processing. We propose UPSA, a novel approach that
accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model
paraphrase generation as an optimization problem and propose a sophisticated
objective function, involving semantic similarity, expression diversity, and
language fluency of paraphrases. Then, UPSA searches the sentence space towards
this objective by performing a sequence of local edits. Our method is
unsupervised and does not require parallel corpora for training, so it could be
easily applied to different domains. We evaluate our approach on a variety of
benchmark datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. Extensive
results show that UPSA achieves the state-of-the-art performance compared with
previous unsupervised methods in terms of both automatic and human evaluations.
Further, our approach outperforms most existing domain-adapted supervised
models, showing the generalizability of UPSA.
| 2,019 | Computation and Language |
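A compact sketch of the search loop implied by simulated-annealing paraphrasing is given below. The edit proposal and the objective (semantic similarity, diversity, fluency) are left as caller-supplied stubs, and the cooling schedule is an assumption, not the paper's setting.

```python
import math
import random

def simulated_annealing_paraphrase(sentence, propose_edit, objective,
                                   t_init=1.0, t_min=0.01, decay=0.95):
    """Hill-climb with occasional downhill moves: propose a local edit
    (word substitution / insertion / deletion), accept it with probability
    exp(delta / T), and cool the temperature. `propose_edit` and `objective`
    (semantic similarity + diversity + fluency) are caller-supplied stubs."""
    current, current_score, temperature = sentence, objective(sentence), t_init
    while temperature > t_min:
        candidate = propose_edit(current)
        delta = objective(candidate) - current_score
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current, current_score = candidate, current_score + delta
        temperature *= decay
    return current
```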
Does Order Matter? An Empirical Study on Generating Multiple Keyphrases
as a Sequence | Recently, concatenating multiple keyphrases as a target sequence has been
proposed as a new learning paradigm for keyphrase generation. Existing studies
concatenate target keyphrases in different orders but no study has examined the
effects of ordering on models' behavior. In this paper, we propose several
orderings for concatenation and inspect the important factors for training a
successful keyphrase generation model. By running comprehensive comparisons, we
observe one preferable ordering and summarize a number of empirical findings
and challenges, which can shed light on future research on this line of work.
| 2,022 | Computation and Language |
What Matters for Neural Cross-Lingual Named Entity Recognition: An
Empirical Analysis | Building named entity recognition (NER) models for languages that do not have
much training data is a challenging task. While recent work has shown promising
results on cross-lingual transfer from high-resource languages to low-resource
languages, it is unclear what knowledge is transferred. In this paper, we first
propose a simple and efficient neural architecture for cross-lingual NER.
Experiments show that our model achieves competitive performance with the
state-of-the-art. We further analyze how transfer learning works for
cross-lingual NER on two transferable factors: sequential order and
multilingual embeddings, and investigate how model performance varies across
entity lengths. Finally, we conduct a case-study on a non-Latin language,
Bengali, which suggests that leveraging knowledge from Wikipedia will be a
promising direction to further improve model performance. Our results can
shed light on future research for improving cross-lingual NER.
| 2,019 | Computation and Language |
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known
Dataset Biases | State-of-the-art models often make use of superficial patterns in the data
that do not generalize well to out-of-domain or adversarial settings. For
example, textual entailment models often learn that particular key words imply
entailment, irrespective of context, and visual question answering models learn
to predict prototypical answers, without considering evidence in the image. In
this paper, we show that if we have prior knowledge of such biases, we can
train a model to be more robust to domain shift. Our method has two stages: we
(1) train a naive model that makes predictions exclusively based on dataset
biases, and (2) train a robust model as part of an ensemble with the naive one
in order to encourage it to focus on other patterns in the data that are more
likely to generalize. Experiments on five datasets with out-of-domain test sets
show significantly improved robustness in all settings, including a 12 point
gain on a changing priors visual question answering dataset and a 9 point gain
on an adversarial question answering test set.
| 2,019 | Computation and Language |
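One common instantiation of training a robust model in an ensemble with a bias-only model is a product of experts, sketched below. This is illustrative and assumes a frozen bias model whose log-probabilities are added to the main model's logits during training only; at test time the main model is used alone.

```python
import torch
import torch.nn.functional as F

def debiased_ensemble_loss(main_logits, bias_log_probs, labels):
    """Product-of-experts style training: the robust model is optimized jointly
    with a frozen bias-only model, so it only gets credit for signal the bias
    model cannot already explain."""
    ensemble_log_probs = F.log_softmax(main_logits, dim=-1) + bias_log_probs
    # cross_entropy renormalizes, so the sum of log-probs behaves like PoE logits.
    return F.cross_entropy(ensemble_log_probs, labels)

# Toy batch: 4 examples, 3 classes (e.g., entailment / neutral / contradiction).
main_logits = torch.randn(4, 3, requires_grad=True)
bias_log_probs = F.log_softmax(torch.randn(4, 3), dim=-1).detach()
labels = torch.tensor([0, 1, 2, 1])
loss = debiased_ensemble_loss(main_logits, bias_log_probs, labels)
```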
Improving Neural Question Generation using World Knowledge | In this paper, we propose a method for incorporating world knowledge (linked
entities and fine-grained entity types) into a neural question generation
model. This world knowledge helps to encode additional information related to
the entities present in the passage required to generate human-like questions.
We evaluate our models on both SQuAD and MS MARCO to demonstrate the usefulness
of the world knowledge features. The proposed world knowledge enriched question
generation model is able to outperform the vanilla neural question generation
model by 1.37 and 1.59 absolute BLEU 4 score on SQuAD and MS MARCO test dataset
respectively.
| 2,019 | Computation and Language |
Reasoning Over Semantic-Level Graph for Fact Checking | Fact checking is a challenging task because verifying the truthfulness of a
claim requires reasoning over multiple pieces of retrievable evidence. In this work, we
present a method suitable for reasoning about the semantic-level structure of
evidence. Unlike most previous works, which typically represent evidence
sentences with either string concatenation or fusing the features of isolated
evidence sentences, our approach operates on rich semantic structures of
evidence obtained by semantic role labeling. We propose two mechanisms to
exploit the structure of evidence while leveraging the advances of pre-trained
models like BERT, GPT or XLNet. Specifically, using XLNet as the backbone, we
first utilize the graph structure to re-define the relative distances of words,
with the intuition that semantically related words should have short distances.
Then, we adopt graph convolutional network and graph attention network to
propagate and aggregate information from neighboring nodes on the graph. We
evaluate our system on FEVER, a benchmark dataset for fact checking, and find
that rich structural information is helpful and both our graph-based mechanisms
improve the accuracy. Our model is the state-of-the-art system in terms of both
official evaluation metrics, namely claim verification accuracy and FEVER
score.
| 2,020 | Computation and Language |
Combining SMT and NMT Back-Translated Data for Efficient NMT | Neural Machine Translation (NMT) models achieve their best performance when
large sets of parallel data are used for training. Consequently, techniques for
augmenting the training set have become popular recently. One of these methods
is back-translation (Sennrich et al., 2016), which consists of generating
synthetic sentences by translating a set of monolingual, target-language
sentences using a Machine Translation (MT) model.
Generally, NMT models are used for back-translation. In this work, we analyze
the performance of models when the training data is extended with synthetic
data using different MT approaches. In particular we investigate
back-translated data generated not only by NMT but also by Statistical Machine
Translation (SMT) models and combinations of both. The results reveal that the
models achieve the best performances when the training set is augmented with
back-translated data created by merging different MT approaches.
| 2,019 | Computation and Language |
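A minimal sketch of the back-translation data pipeline described above follows. The `target_to_source_model.translate` interface is hypothetical; in practice it could be backed by an NMT system, an SMT system, or a combination of both as this work investigates.

```python
def build_backtranslated_corpus(monolingual_target, target_to_source_model,
                                parallel_pairs):
    """Augment a parallel corpus with synthetic pairs: translate monolingual
    target-language sentences back into the source language and pair each
    synthetic source sentence with its original target sentence.
    `target_to_source_model.translate` is a hypothetical MT interface."""
    synthetic_pairs = [
        (target_to_source_model.translate(tgt), tgt)
        for tgt in monolingual_target
    ]
    # Training then proceeds on authentic and synthetic data together.
    return parallel_pairs + synthetic_pairs
```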
Neural Conversational QA: Learning to Reason v.s. Exploiting Patterns | Neural Conversational QA tasks like ShARC require systems to answer questions
based on the contents of a given passage. On studying recent state-of-the-art
models on the ShARC QA task, we found indications that the models learn spurious
clues/patterns in the dataset. Furthermore, we show that a heuristic-based
program designed to exploit these patterns can have performance comparable to
that of the neural models. In this paper we share our findings about four types
of patterns found in the ShARC corpus and describe how neural models exploit
them. Motivated by the aforementioned findings, we create and share a modified
dataset that has fewer spurious patterns, consequently allowing models to learn
better.
| 2,020 | Computation and Language |
Language learning using Speech to Image retrieval | Humans learn language by interaction with their environment and listening to
other humans. It should also be possible for computational models to learn
language directly from speech but so far most approaches require text. We
improve on existing neural network approaches to create visually grounded
embeddings for spoken utterances. Using a combination of a multi-layer GRU,
importance sampling, cyclic learning rates, ensembling and vectorial
self-attention our results show a remarkable increase in image-caption
retrieval performance over previous work. Furthermore, we investigate which
layers in the model learn to recognise words in the input. We find that deeper
network layers are better at encoding word presence, although the final layer
has slightly lower performance. This shows that our visually grounded sentence
encoder learns to recognise words from the input even though it is not
explicitly trained for word recognition.
| 2,019 | Computation and Language |
Out-of-domain Detection for Natural Language Understanding in Dialog
Systems | Natural Language Understanding (NLU) is a vital component of dialogue
systems, and its ability to detect Out-of-Domain (OOD) inputs is critical in
practical applications, since the acceptance of the OOD input that is
unsupported by the current system may lead to catastrophic failure. However,
most existing OOD detection methods rely heavily on manually labeled OOD
samples and cannot take full advantage of unlabeled data. This limits the
feasibility of these models in practical applications.
In this paper, we propose a novel model to generate high-quality pseudo OOD
samples that are akin to IN-Domain (IND) input utterances, thereby improving
the performance of OOD detection. To this end, an autoencoder is trained to map
an input utterance into a latent code, and the codes of IND and OOD samples are
trained to be indistinguishable by utilizing a generative adversarial network.
To provide more supervision signals, an auxiliary classifier is introduced to
regularize the generated OOD samples to have indistinguishable intent labels.
Experiments show that these pseudo OOD samples generated by our model can be
used to effectively improve OOD detection in NLU. Besides, we also demonstrate
that the effectiveness of these pseudo OOD data can be further improved by
efficiently utilizing unlabeled data.
| 2,022 | Computation and Language |
Recommendation as a Communication Game: Self-Supervised Bot-Play for
Goal-oriented Dialogue | Traditional recommendation systems produce static rather than interactive
recommendations invariant to a user's specific requests, clarifications, or
current mood, and can suffer from the cold-start problem if their tastes are
unknown. These issues can be alleviated by treating recommendation as an
interactive dialogue task instead, where an expert recommender can sequentially
ask about someone's preferences, react to their requests, and recommend more
appropriate items. In this work, we collect a goal-driven recommendation
dialogue dataset (GoRecDial), which consists of 9,125 dialogue games and 81,260
conversation turns between pairs of human workers recommending movies to each
other. The task is specifically designed as a cooperative game between two
players working towards a quantifiable common goal. We leverage the dataset to
develop an end-to-end dialogue system that can simultaneously converse and
recommend. Models are first trained to imitate the behavior of human players
without considering the task goal itself (supervised training). We then
finetune our models on simulated bot-bot conversations between two paired
pre-trained models (bot-play), in order to achieve the dialogue goal. Our
experiments show that models finetuned with bot-play learn improved dialogue
strategies, reach the dialogue goal more often when paired with a human, and
are rated as more consistent by humans compared to models trained without
bot-play. The dataset and code are publicly available through the ParlAI
framework.
| 2,019 | Computation and Language |
The Trumpiest Trump? Identifying a Subject's Most Characteristic Tweets | The sequence of documents produced by any given author varies in style and
content, but some documents are more typical or representative of the source
than others. We quantify the extent to which a given short text is
characteristic of a specific person, using a dataset of tweets from fifteen
celebrities. Such analysis is useful for generating excerpts of high-volume
Twitter profiles, and understanding how representativeness relates to tweet
popularity. We first consider the related task of binary author detection (is x
the author of text T?), and report a test accuracy of 90.37% for the best of
five approaches to this problem. We then use these models to compute
characterization scores among all of an author's texts. A user study shows
human evaluators agree with our characterization model for all 15 celebrities
in our dataset, each with p-value < 0.05. We use these classifiers to show
surprisingly strong correlations between characterization scores and the
popularity of the associated texts. Indeed, we demonstrate a statistically
significant correlation between this score and tweet popularity
(likes/replies/retweets) for 13 of the 15 celebrities in our study.
| 2,019 | Computation and Language |
Countering the Effects of Lead Bias in News Summarization via
Multi-Stage Training and Auxiliary Losses | Sentence position is a strong feature for news summarization, since the lead
often (but not always) summarizes the key points of the article. In this paper,
we show that recent neural systems excessively exploit this trend, which
although powerful for many inputs, is also detrimental when summarizing
documents where important content should be extracted from later parts of the
article. We propose two techniques to make systems sensitive to the importance
of content in different parts of the article. The first technique employs
'unbiased' data; i.e., randomly shuffled sentences of the source document, to
pretrain the model. The second technique uses an auxiliary ROUGE-based loss
that encourages the model to distribute importance scores throughout a document
by mimicking sentence-level ROUGE scores on the training data. We show that
these techniques significantly improve the performance of a competitive
reinforcement learning based extractive system, with the auxiliary loss being
more powerful than pretraining.
| 2,019 | Computation and Language |
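The auxiliary ROUGE-based loss can be sketched as a divergence between the extractor's importance scores and the distribution of sentence-level ROUGE values. The snippet below is illustrative; the softmax over ROUGE values, the KL direction, and the precomputed ROUGE inputs are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rouge_distribution_loss(sentence_scores, sentence_rouge):
    """Auxiliary loss nudging the extractor's importance scores toward the
    distribution of sentence-level ROUGE scores, so importance is spread over
    the whole document rather than concentrated on the lead.
    sentence_rouge: precomputed ROUGE of each source sentence against the
    reference summary (any ROUGE implementation can supply these numbers)."""
    target = F.softmax(sentence_rouge, dim=-1)
    predicted = F.log_softmax(sentence_scores, dim=-1)
    return F.kl_div(predicted, target, reduction="batchmean")

# Toy document with 6 sentences: model scores vs. oracle ROUGE values.
scores = torch.randn(1, 6, requires_grad=True)
rouge = torch.tensor([[0.42, 0.10, 0.05, 0.31, 0.08, 0.04]])
aux_loss = rouge_distribution_loss(scores, rouge)
```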
Pretrained Language Models for Sequential Sentence Classification | As a step toward better document-level understanding, we explore
classification of a sequence of sentences into their corresponding categories,
a task that requires understanding sentences in context of the document. Recent
successful models for this task have used hierarchical models to contextualize
sentence representations, and Conditional Random Fields (CRFs) to incorporate
dependencies between subsequent labels. In this work, we show that pretrained
language models, BERT (Devlin et al., 2018) in particular, can be used for this
task to capture contextual dependencies without the need for hierarchical
encoding nor a CRF. Specifically, we construct a joint sentence representation
that allows BERT Transformer layers to directly utilize contextual information
from all words in all sentences. Our approach achieves state-of-the-art results
on four datasets, including a new dataset of structured scientific abstracts.
| 2,019 | Computation and Language |
Counterfactual Story Reasoning and Generation | Counterfactual reasoning requires predicting how alternative events, contrary
to what actually happened, might have resulted in different outcomes. Despite
being considered a necessary component of AI-complete systems, few resources
have been developed for evaluating counterfactual reasoning in narratives.
In this paper, we propose Counterfactual Story Rewriting: given an original
story and an intervening counterfactual event, the task is to minimally revise
the story to make it compatible with the given counterfactual event. Solving
this task will require deep understanding of causal narrative chains and
counterfactual invariance, and integration of such story reasoning capabilities
into conditional language generation models.
We present TimeTravel, a new dataset of 29,849 counterfactual rewritings,
each with the original story, a counterfactual event, and human-generated
revision of the original story compatible with the counterfactual event.
Additionally, we include 80,115 counterfactual "branches" without a rewritten
storyline to support future work on semi- or un-supervised approaches to
counterfactual story rewriting.
Finally, we evaluate the counterfactual rewriting capacities of several
competitive baselines based on pretrained language models, and assess whether
common overlap and model-based automatic metrics for text generation correlate
well with human scores for counterfactual rewriting.
| 2,019 | Computation and Language |
Neural Naturalist: Generating Fine-Grained Image Comparisons | We introduce the new Birds-to-Words dataset of 41k sentences describing
fine-grained differences between photographs of birds. The language collected
is highly detailed, while remaining understandable to the everyday observer
(e.g., "heart-shaped face," "squat body"). Paragraph-length descriptions
naturally adapt to varying levels of taxonomic and visual distance---drawn from
a novel stratified sampling approach---with the appropriate level of detail. We
propose a new model called Neural Naturalist that uses a joint image encoding
and comparative module to generate comparative language, and evaluate the
results with humans who must use the descriptions to distinguish real images.
Our results indicate promising potential for neural models to explain
differences in visual embedding space using natural language, as well as a
concrete path for machine learning to aid citizen scientists in their effort to
preserve biodiversity.
| 2,019 | Computation and Language |
Span Selection Pre-training for Question Answering | BERT (Bidirectional Encoder Representations from Transformers) and related
pre-trained Transformers have provided large gains across many language
understanding tasks, achieving a new state-of-the-art (SOTA). BERT is
pre-trained on two auxiliary tasks: Masked Language Model and Next Sentence
Prediction. In this paper we introduce a new pre-training task inspired by
reading comprehension to better align the pre-training from memorization to
understanding. Span Selection Pre-Training (SSPT) poses cloze-like training
instances, but rather than draw the answer from the model's parameters, it is
selected from a relevant passage. We find significant and consistent
improvements over both BERT-BASE and BERT-LARGE on multiple reading
comprehension (MRC) datasets. Specifically, our proposed model has strong
empirical evidence as it obtains SOTA results on Natural Questions, a new
benchmark MRC dataset, outperforming BERT-LARGE by 3 F1 points on short answer
prediction. We also show significant impact in HotpotQA, improving answer
prediction F1 by 4 points and supporting fact prediction F1 by 1 point and
outperforming the previous best system. Moreover, we show that our pre-training
approach is particularly effective when training data is limited, improving the
learning curve by a large amount.
| 2,020 | Computation and Language |
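A sketch of how a cloze-like span-selection instance might be constructed is given below. It is illustrative only: the field names and the character-offset convention are assumptions, and retrieval of the relevant passage is assumed to happen upstream.

```python
def make_span_selection_instance(query_sentence, passage, answer_span,
                                 mask_token="[BLANK]"):
    """Build one cloze-like pre-training instance: blank out a span in the query
    sentence and require the model to find that span in a relevant passage,
    instead of recalling it from its parameters. `answer_span` holds character
    offsets into `query_sentence`."""
    start, end = answer_span
    answer = query_sentence[start:end]
    cloze = query_sentence[:start] + mask_token + query_sentence[end:]
    assert answer in passage, "passage must actually contain the blanked span"
    return {"question": cloze, "context": passage, "answer": answer}

instance = make_span_selection_instance(
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "In 1903 the Nobel Prize in Physics was awarded jointly to Becquerel, "
    "Pierre Curie and Marie Curie.",
    answer_span=(46, 50),   # character offsets of the span "1903"
)
```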
Reverse Transfer Learning: Can Word Embeddings Trained for Different NLP
Tasks Improve Neural Language Models? | Natural language processing (NLP) tasks tend to suffer from a paucity of
suitably annotated training data, hence the recent success of transfer learning
across a wide variety of them. The typical recipe involves: (i) training a
deep, possibly bidirectional, neural network with an objective related to
language modeling, for which training data is plentiful; and (ii) using the
trained network to derive contextual representations that are far richer than
standard linear word embeddings such as word2vec, and thus result in important
gains. In this work, we wonder whether the opposite perspective is also true:
can contextual representations trained for different NLP tasks improve language
modeling itself? Since language models (LMs) are predominantly locally
optimized, other NLP tasks may help them make better predictions based on the
entire semantic fabric of a document. We test the performance of several types
of pre-trained embeddings in neural LMs, and we investigate whether it is
possible to make the LM more aware of global semantic information through
embeddings pre-trained with a domain classification model. Initial experiments
suggest that as long as the proper objective criterion is used during training,
pre-trained embeddings are likely to be beneficial for neural language
modeling.
| 2,019 | Computation and Language |
Knowledge Enhanced Contextual Word Representations | Contextual word representations, typically trained on unstructured, unlabeled
text, do not contain any explicit grounding to real world entities and are
often unable to remember facts about those entities. We propose a general
method to embed multiple knowledge bases (KBs) into large scale models, and
thereby enhance their representations with structured, human-curated knowledge.
For each KB, we first use an integrated entity linker to retrieve relevant
entity embeddings, then update contextual word representations via a form of
word-to-entity attention. In contrast to previous approaches, the entity
linkers and self-supervised language modeling objective are jointly trained
end-to-end in a multitask setting that combines a small amount of entity
linking supervision with a large amount of raw text. After integrating WordNet
and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert)
demonstrates improved perplexity, ability to recall facts as measured in a
probing task and downstream performance on relationship extraction, entity
typing, and word sense disambiguation. KnowBert's runtime is comparable to
BERT's and it scales to large KBs.
| 2,019 | Computation and Language |
Learning Semantic Parsers from Denotations with Latent Structured
Alignments and Abstract Programs | Semantic parsing aims to map natural language utterances onto machine
interpretable meaning representations, aka programs whose execution against a
real-world environment produces a denotation. Weakly-supervised semantic
parsers are trained on utterance-denotation pairs treating programs as latent.
The task is challenging due to the large search space and spuriousness of
programs which may execute to the correct answer but do not generalize to
unseen examples. Our goal is to instill an inductive bias in the parser to help
it distinguish between spurious and correct programs. We capitalize on the
intuition that correct programs would likely respect certain structural
constraints were they to be aligned to the question (e.g., program fragments
are unlikely to align to overlapping text spans) and propose to model
alignments as structured latent variables. In order to make the
latent-alignment framework tractable, we decompose the parsing task into (1)
predicting a partial "abstract program" and (2) refining it while modeling
structured alignments with differential dynamic programming. We obtain
state-of-the-art performance on the WIKITABLEQUESTIONS and WIKISQL datasets.
When compared to a standard attention baseline, we observe that the proposed
structured-alignment mechanism is highly beneficial.
| 2,019 | Computation and Language |
Learning to Learn and Predict: A Meta-Learning Approach for Multi-Label
Classification | Many tasks in natural language processing can be viewed as multi-label
classification problems. However, most of the existing models are trained with
the standard cross-entropy loss function and use a fixed prediction policy
(e.g., a threshold of 0.5) for all the labels, which completely ignores the
complexity and dependencies among different labels. In this paper, we propose a
meta-learning method to capture these complex label dependencies. More
specifically, our method utilizes a meta-learner to jointly learn the training
policies and prediction policies for different labels. The training policies
are then used to train the classifier with the cross-entropy loss function, and
the prediction policies are further implemented for prediction. Experimental
results on fine-grained entity typing and text classification demonstrate that
our proposed method can obtain more accurate multi-label classification
results.
| 2,019 | Computation and Language |
BERT-Based Arabic Social Media Author Profiling | We report our models for detecting age, language variety, and gender from
social media data in the context of the Arabic author profiling and deception
detection shared task (APDA). We build simple models based on pre-trained
bidirectional encoders from transformers (BERT). We first fine-tune the
pre-trained BERT model on each of the three datasets with shared task released
data. Then we augment shared task data with in-house data for gender and
dialect, showing the utility of augmenting training data. Our best models on
the shared task test data are acquired with a majority voting of various BERT
models trained under different data conditions. We acquire 54.72% accuracy for
age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across
the three tasks.
| 2,019 | Computation and Language |
Follow the Leader: Documents on the Leading Edge of Semantic Change Get
More Citations | Diachronic word embeddings -- vector representations of words over time --
offer remarkable insights into the evolution of language and provide a tool for
quantifying sociocultural change from text documents. Prior work has used such
embeddings to identify shifts in the meaning of individual words. However,
simply knowing that a word has changed in meaning is insufficient to identify
the instances of word usage that convey the historical or the newer meaning. In
this paper, we link diachronic word embeddings to documents, by situating those
documents as leaders or laggards with respect to ongoing semantic changes.
Specifically, we propose a novel method to quantify the degree of semantic
progressiveness in each word usage, and then show how these usages can be
aggregated to obtain scores for each document. We analyze two large collections
of documents, representing legal opinions and scientific articles. Documents
that are scored as semantically progressive receive a larger number of
citations, indicating that they are especially influential. Our work thus
provides a new technique for identifying lexical semantic leaders and
demonstrates a new link between progressive use of language and influence in a
citation network.
| 2,020 | Computation and Language |
Improving the Explainability of Neural Sentiment Classifiers via Data
Augmentation | Sentiment analysis has been widely used by businesses for social media
opinion mining, especially in the financial services industry, where customers'
feedback is critical for companies. Recent progress in neural network models
has achieved remarkable performance on sentiment classification, while the lack
of classification interpretation may raise trustworthiness and many other
issues in practice. In this work, we study the problem of improving the
explainability of existing sentiment classifiers. We propose two data
augmentation methods that create additional training examples to help improve
model explainability: one method with a predefined sentiment word list as
external knowledge and the other with adversarial examples. We test the
proposed methods on both CNN and RNN classifiers with three benchmark sentiment
datasets. The model explainability is assessed by both human evaluators and a
simple automatic evaluation measurement. Experiments show the proposed data
augmentation methods significantly improve the explainability of both neural
classifiers.
| 2,020 | Computation and Language |
Mitigating Annotation Artifacts in Natural Language Inference Datasets
to Improve Cross-dataset Generalization Ability | Natural language inference (NLI) aims at predicting the relationship between
a given pair of premise and hypothesis. However, several works have found that
a bias pattern called annotation artifacts widely exists in NLI datasets,
making it possible to identify the label only by looking at the hypothesis.
This irregularity makes the evaluation results over-estimated and affects
models' generalization ability. In this paper, we consider a more trustworthy
setting, i.e., cross-dataset evaluation. We explore the impacts of annotation
artifacts in cross-dataset testing. Furthermore, we propose a training
framework to mitigate the impacts of the bias pattern. Experimental results
demonstrate that our methods can alleviate the negative effect of the artifacts
and improve the generalization ability of models.
| 2,019 | Computation and Language |
A Benchmark Dataset for Learning to Intervene in Online Hate Speech | Countering online hate speech is a critical yet challenging task, but one
which can be aided by the use of Natural Language Processing (NLP) techniques.
Previous research has primarily focused on the development of NLP methods to
automatically and effectively detect online hate speech while disregarding
further action needed to calm and discourage individuals from using hate speech
in the future. In addition, most existing hate speech datasets treat each post
as an isolated instance, ignoring the conversational context. In this paper, we
propose a novel task of generative hate speech intervention, where the goal is
to automatically generate responses to intervene during online conversations
that contain hate speech. As a part of this work, we introduce two
fully-labeled large-scale hate speech intervention datasets collected from Gab
and Reddit. These datasets provide conversation segments, hate speech labels,
as well as intervention responses written by Mechanical Turk Workers. In this
paper, we also analyze the datasets to understand the common intervention
strategies and explore the performance of common automatic response generation
methods on these new datasets to provide a benchmark for future research.
| 2,019 | Computation and Language |
Joint Extraction of Entities and Relations Based on a Novel
Decomposition Strategy | Joint extraction of entities and relations aims to detect entity pairs along
with their relations using a single model. Prior work typically solves this
task in the extract-then-classify or unified labeling manner. However, these
methods either suffer from redundant entity pairs or ignore the important
inner structure in the process of extracting entities and relations. To address
these limitations, in this paper, we first decompose the joint extraction task
into two interrelated subtasks, namely HE extraction and TER extraction. The
former subtask is to distinguish all head-entities that may be involved with
target relations, and the latter is to identify corresponding tail-entities and
relations for each extracted head-entity. Next, these two subtasks are further
deconstructed into several sequence labeling problems based on our proposed
span-based tagging scheme, which are conveniently solved by a hierarchical
boundary tagger and a multi-span decoding algorithm. Owing to the reasonable
decomposition strategy, our model can fully capture the semantic
interdependency between different steps, as well as reduce noise from
irrelevant entity pairs. Experimental results show that our method outperforms
previous work by 5.2%, 5.9% and 21.5% (F1 score), achieving a new
state-of-the-art on three public datasets.
| 2,020 | Computation and Language |
Multimodal Embeddings from Language Models | Word embeddings such as ELMo have recently been shown to model word semantics
with greater efficacy through contextualized learning on large-scale language
corpora, resulting in significant improvement in state of the art across many
natural language tasks. In this work we integrate acoustic information into
contextualized lexical embeddings through the addition of multimodal inputs to
a pretrained bidirectional language model. The language model is trained on
spoken language that includes text and audio modalities. The resulting
representations from this model are multimodal and contain paralinguistic
information which can modify word meanings and provide affective information.
We show that these multimodal embeddings can be used to improve over previous
state of the art multimodal models in emotion recognition on the CMU-MOSEI
dataset.
| 2,019 | Computation and Language |
Core Semantic First: A Top-down Approach for AMR Parsing | We introduce a novel scheme for parsing a piece of text into its Abstract
Meaning Representation (AMR): Graph Spanning based Parsing (GSP). One novel
characteristic of GSP is that it constructs a parse graph incrementally in a
top-down fashion. Starting from the root, at each step, a new node and its
connections to existing nodes will be jointly predicted. The output graph spans
the nodes by the distance to the root, following the intuition of first
grasping the main ideas then digging into more details. The \textit{core
semantic first} principle emphasizes capturing the main ideas of a sentence,
which is of great interest. We evaluate our model on the latest AMR sembank and
achieve the state-of-the-art performance in the sense that no heuristic graph
re-categorization is adopted. More importantly, the experiments show that our
parser is especially good at obtaining the core semantics.
| 2,019 | Computation and Language |
Fine-grained Knowledge Fusion for Sequence Labeling Domain Adaptation | In sequence labeling, previous domain adaptation methods focus on the
adaptation from the source domain to the entire target domain without
considering the diversity of individual target domain samples, which may lead
to negative transfer results for certain samples. Besides, an important
characteristic of sequence labeling tasks is that different elements within a
given sample may also have diverse domain relevance, which requires further
consideration. To take the multi-level domain relevance discrepancy into
account, in this paper, we propose a fine-grained knowledge fusion model with
the domain relevance modeling scheme to control the balance between learning
from the target domain data and learning from the source domain model.
Experiments on three sequence labeling tasks show that our fine-grained
knowledge fusion model outperforms strong baselines and other state-of-the-art
sequence labeling domain adaptation methods.
| 2,019 | Computation and Language |
Extending the Service Composition Formalism with Relational Parameters | Web Service Composition deals with the (re)use of Web Services to provide
complex functionality, inexistent in any single service. Over the
state-of-the-art, we introduce a new type of modeling, based on ontologies and
relations between objects, which allows us to extend the expressiveness of
problems that can be solved automatically.
| 2,019 | Computation and Language |
A Corpus-free State2Seq User Simulator for Task-oriented Dialogue | Recent reinforcement learning algorithms for task-oriented dialogue systems
have attracted a lot of interest. However, an unavoidable obstacle for training such
algorithms is that annotated dialogue corpora are often unavailable. One of the
popular approaches addressing this is to train a dialogue agent with a user
simulator. Traditional user simulators are built upon a set of dialogue rules
and therefore lack response diversity. This severely limits the simulated cases
for agent training. Later data-driven user models work better in diversity but
suffer from the data scarcity problem. To remedy this, we design a new corpus-free
framework that takes advantage of the benefits of both. The framework builds a user
simulator by first generating diverse dialogue data from templates and then
building a new State2Seq user simulator on that data. To enhance the performance,
we propose the State2Seq user simulator model to efficiently leverage dialogue
state and history. Experiment results on an open dataset show that our user
simulator helps agents achieve an improvement of 6.36% on success rate.
The State2Seq model outperforms the seq2seq baseline by 1.9 F-score.
| 2,019 | Computation and Language |
Select and Attend: Towards Controllable Content Selection in Text
Generation | Many text generation tasks naturally contain two steps: content selection and
surface realization. Current neural encoder-decoder models conflate both steps
into a black-box architecture. As a result, the content to be described in the
text cannot be explicitly controlled. This paper tackles this problem by
decoupling content selection from the decoder. The decoupled content selection
is human interpretable, whose value can be manually manipulated to control the
content of generated text. The model can be trained end-to-end without human
annotations by maximizing a lower bound of the marginal likelihood. We further
propose an effective way to trade-off between performance and controllability
with a single adjustable hyperparameter. In both data-to-text and headline
generation tasks, our model achieves promising results, paving the way for
controllable content selection in text generation.
| 2,019 | Computation and Language |
Learning review representations from user and product level information
for spam detection | Opinion spam has become a widespread problem in social media, where hired
spammers write deceptive reviews to promote or demote products to mislead the
consumers for profit or fame. Existing works mainly focus on manually designing
discrete textual or behavior features, which cannot capture complex semantics
of reviews. Although recent works apply deep learning methods to learn
review-level semantic features, their models ignore the impact of the
user-level and product-level information on learning review semantics and the
inherent user-review-product relationship information. In this paper, we
propose a Hierarchical Fusion Attention Network (HFAN) to automatically learn
the semantics of reviews from the user and product level. Specifically, we
design a multi-attention unit to extract user(product)-related review
information. Then, we use orthogonal decomposition and fusion attention to
learn a user, review, and product representation from the review information.
Finally, we take the review as a relation between user and product entity and
apply TransH to jointly encode this relationship into review representation.
Experimental results show more than a 10\% absolute precision improvement
over state-of-the-art performance on four real-world datasets, demonstrating
the effectiveness and versatility of the model.
| 2,019 | Computation and Language |
Jointly embedding the local and global relations of heterogeneous graph
for rumor detection | The development of social media has revolutionized the way people
communicate, share information and make decisions, but it also provides an
ideal platform for publishing and spreading rumors. Existing rumor detection
methods focus on finding clues from text content, user profiles, and
propagation patterns. However, the local semantic relation and global
structural information in the message propagation graph have not been well
utilized by previous works.
In this paper, we present a novel global-local attention network (GLAN) for
rumor detection, which jointly encodes the local semantic and global structural
information. We first generate a better integrated representation for each
source tweet by fusing the semantic information of related retweets with the
attention mechanism. Then, we model the global relationships among all source
tweets, retweets, and users as a heterogeneous graph to capture the rich
structural information for rumor detection. We conduct experiments on three
real-world datasets, and the results demonstrate that GLAN significantly
outperforms the state-of-the-art models in both rumor detection and early
detection scenarios.
| 2,019 | Computation and Language |
Countering Language Drift via Visual Grounding | Emergent multi-agent communication protocols are very different from natural
language and not easily interpretable by humans. We find that agents that were
initially pretrained to produce natural language can also experience
detrimental language drift: when a non-linguistic reward is used in a
goal-based task, e.g. some scalar success metric, the communication protocol
may easily and radically diverge from natural language. We recast translation
as a multi-agent communication game and examine auxiliary training constraints
for their effectiveness in mitigating language drift. We show that a
combination of syntactic (language model likelihood) and semantic (visual
grounding) constraints gives the best communication performance, allowing
pre-trained agents to retain English syntax while learning to accurately convey
the intended meaning.
| 2,019 | Computation and Language |