Titles | Abstracts | Years | Categories
---|---|---|---|
Conversation Model Fine-Tuning for Classifying Client Utterances in
Counseling Dialogues | The recent surge of text-based online counseling applications enables us to
collect and analyze interactions between counselors and clients. A dataset of
those interactions can be used to learn to automatically classify the client
utterances into categories that help counselors in diagnosing client status and
predicting counseling outcome. With proper anonymization, we collect
counselor-client dialogues, define meaningful categories of client utterances
with professional counselors, and develop a novel neural network model for
classifying the client utterances. The central idea of our model, ConvMFiT, is
a pre-trained conversation model which consists of a general language model
built from an out-of-domain corpus and two role-specific language models built
from unlabeled in-domain dialogues. The classification result shows that
ConvMFiT outperforms state-of-the-art comparison models. Further, the attention
weights in the learned model confirm that the model finds expected linguistic
patterns for each category.
| 2019 | Computation and Language |
SART - Similarity, Analogies, and Relatedness for Tatar Language: New
Benchmark Datasets for Word Embeddings Evaluation | There is a huge imbalance between languages currently spoken and
corresponding resources to study them. Most of the attention naturally goes to
the "big" languages: those which have the largest presence in terms of media
and number of speakers. Other less represented languages sometimes do not even
have a good quality corpus to study them. In this paper, we tackle this
imbalance by presenting a new set of evaluation resources for Tatar, a language
of the Turkic language family which is mainly spoken in the Republic of Tatarstan,
Russia.
We present three datasets: Similarity and Relatedness datasets that consist
of human-scored word pairs and can be used to evaluate semantic models; and an
Analogies dataset that comprises analogy questions and allows exploration of
semantic, syntactic, and morphological aspects of language modeling. All three
datasets build upon existing datasets for the English language and follow the
same structure. However, they are not mere translations. They take into account
specifics of the Tatar language and expand beyond the original datasets. We
evaluate state-of-the-art word embedding models for two languages using our
proposed datasets for Tatar and the original datasets for English and report
our findings on performance comparison.
| 2019 | Computation and Language |
Using Similarity Measures to Select Pretraining Data for NER | Word vectors and Language Models (LMs) pretrained on a large amount of
unlabelled data can dramatically improve various Natural Language Processing
(NLP) tasks. However, the measure and impact of similarity between pretraining
data and target task data are left to intuition. We propose three
cost-effective measures to quantify different aspects of similarity between
source pretraining and target task data. We demonstrate that these measures are
good predictors of the usefulness of pretrained models for Named Entity
Recognition (NER) over 30 data pairs. Results also suggest that pretrained LMs
are more effective and more predictable than pretrained word vectors, but
pretrained word vectors are better when pretraining data is dissimilar.
| 2019 | Computation and Language |
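The three proposed measures are not spelled out in the abstract above; as a hedged illustration only, the sketch below computes two generic corpus-similarity signals (vocabulary overlap and Jensen-Shannon divergence over unigram distributions) that could play the same role. All names and the toy corpora are hypothetical.

```python
# Hedged sketch: generic corpus-similarity measures, not the paper's exact ones.
import math
from collections import Counter

def unigram_dist(tokens):
    """Normalized unigram distribution of a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def vocab_overlap(src_tokens, tgt_tokens):
    """Fraction of the target vocabulary also present in the pretraining corpus."""
    src_vocab, tgt_vocab = set(src_tokens), set(tgt_tokens)
    return len(src_vocab & tgt_vocab) / len(tgt_vocab)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two unigram distributions."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a):
        return sum(a[w] * math.log2(a[w] / m[w]) for w in a if a[w] > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

pretraining = "patients were given aspirin daily in the trial".split()
target = "the patient received aspirin and a placebo".split()
print(vocab_overlap(pretraining, target))
print(js_divergence(unigram_dist(pretraining), unigram_dist(target)))
```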
Discontinuous Constituency Parsing with a Stack-Free Transition System
and a Dynamic Oracle | We introduce a novel transition system for discontinuous constituency
parsing. Instead of storing subtrees in a stack --i.e. a data structure with
linear-time sequential access-- the proposed system uses a set of parsing
items, with constant-time random access. This change makes it possible to
construct any discontinuous constituency tree in exactly $4n - 2$ transitions
for a sentence of length $n$. At each parsing step, the parser considers every
item in the set to be combined with a focus item and to construct a new
constituent in a bottom-up fashion. The parsing strategy is based on the
assumption that most syntactic structures can be parsed incrementally and that
the set --the memory of the parser-- remains reasonably small on average.
Moreover, we introduce a provably correct dynamic oracle for the new transition
system, and present the first experiments in discontinuous constituency parsing
using a dynamic oracle. Our parser obtains state-of-the-art results on three
English and German discontinuous treebanks.
| 2019 | Computation and Language |
Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine
translation (NMT). It helps computers to deeply understand visual objects and
their relations with natural languages. However, multimodal NMT systems suffer
from a shortage of available training data, resulting in poor performance for
translating rare words. Pretrained word embeddings have been shown to improve
NMT in low-resource domains, and a search-based approach has been proposed to
address the rare word problem. In this study, we effectively combine these two
approaches in the context of multimodal NMT and explore how we can take full
advantage of pretrained word embeddings to better translate rare words. We
report overall performance improvements of 1.24 METEOR and 2.49 BLEU and
achieve an improvement of 7.67 F-score for rare word translation.
| 2019 | Computation and Language |
Recognizing Musical Entities in User-generated Content | Recognizing Musical Entities is important for Music Information Retrieval
(MIR) since it can improve the performance of several tasks such as music
recommendation, genre classification or artist similarity. However, most entity
recognition systems in the music domain have concentrated on formal texts (e.g.
artists' biographies, encyclopedic articles, etc.), ignoring rich and noisy
user-generated content. In this work, we present a novel method to recognize
musical entities in Twitter content generated by users following a classical
music radio channel. Our approach takes advantage of both the formal radio schedule
and users' tweets to improve entity recognition. We instantiate several machine
learning algorithms to perform entity recognition combining task-specific and
corpus-based features. We also show how to improve recognition results by
jointly considering formal and user-generated content.
| 2019 | Computation and Language |
Syntactic Interchangeability in Word Embedding Models | Nearest neighbors in word embedding models are commonly observed to be
semantically similar, but the relations between them can vary greatly. We
investigate the extent to which word embedding models preserve syntactic
interchangeability, as reflected by distances between word vectors, and the
effect of hyper-parameters---context window size in particular. We use part of
speech (POS) as a proxy for syntactic interchangeability, as generally
speaking, words with the same POS are syntactically valid in the same contexts.
We also investigate the relationship between interchangeability and similarity
as judged by commonly-used word similarity benchmarks, and correlate the result
with the performance of word embedding models on these benchmarks. Our results
will inform future research and applications in the selection of word embedding
models, suggesting a principle for choosing the context
window size parameter depending on the use-case.
| 2019 | Computation and Language |
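A minimal sketch of the kind of measurement the abstract above describes: tag a query word and its nearest embedding neighbours with POS and count agreement as a rough interchangeability proxy. The pretrained model, tagger, and query words are illustrative assumptions, not the paper's setup, and tagging isolated words is itself only an approximation.

```python
# Hedged sketch: POS agreement among nearest neighbours as an interchangeability proxy.
# Requires: pip install gensim nltk ; model and tagger choices are illustrative only.
import gensim.downloader as api
import nltk

nltk.download("averaged_perceptron_tagger", quiet=True)  # resource name may vary by NLTK version
vectors = api.load("glove-wiki-gigaword-100")  # small pretrained model for illustration

def pos_of(word):
    # Tagging a word out of context is a crude proxy, acceptable for a sketch.
    return nltk.pos_tag([word])[0][1]

def neighbour_pos_agreement(query, topn=10):
    """Fraction of the query's nearest neighbours carrying the same POS tag."""
    query_pos = pos_of(query)
    neighbours = [w for w, _ in vectors.most_similar(query, topn=topn)]
    return sum(pos_of(w) == query_pos for w in neighbours) / topn

for word in ["quickly", "running", "house"]:
    print(word, neighbour_pos_agreement(word))
```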
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human
Needs | To make machines better understand sentiments, research needs to move from
polarity identification to understanding the reasons that underlie the
expression of sentiment. Categorizing the goals or needs of humans is one way
to explain the expression of sentiment in text. Humans are good at
understanding situations described in natural language and can easily connect
them to the character's psychological needs using commonsense knowledge. We
present a novel method to extract, rank, filter and select multi-hop relation
paths from a commonsense knowledge resource to interpret the expression of
sentiment in terms of their underlying human needs. We efficiently integrate
the acquired knowledge paths in a neural model that interfaces context
representations with knowledge using a gated attention mechanism. We assess the
model's performance on a recently published dataset for categorizing human
needs. Selectively integrating knowledge paths boosts performance and
establishes a new state-of-the-art. Our model offers interpretability through
the learned attention map over commonsense knowledge paths. Human evaluation
highlights the relevance of the encoded knowledge.
| 2019 | Computation and Language |
Automatic text summarization: What has been done and what has to be done | Summaries are important when it comes to processing huge amounts of information.
Their most important benefit is saving time, which we do not have much of
nowadays. Therefore, a summary must be short, representative and readable.
Generating summaries automatically can be beneficial for humans, since it saves
time and helps in selecting relevant documents. Automatic summarization and, in
particular, automatic text summarization (ATS) is not a new research field; it
has been studied since the 1950s. Since then, researchers have been actively
searching for the perfect summarization method. In this article, we discuss
different works in automatic summarization, especially recent ones. We present
some problems and limits which prevent this work from moving forward. Most of
these challenges are closely tied to the nature of the languages being
processed. These challenges are of interest to academics and developers, as a
path to follow in this field.
| 2019 | Computation and Language |
Neural Speed Reading with Structural-Jump-LSTM | Recurrent neural networks (RNNs) can model natural language by sequentially
'reading' input tokens and outputting a distributed representation of each
token. Due to the sequential nature of RNNs, inference time is linearly
dependent on the input length, and all inputs are read regardless of their
importance. Efforts to speed up this inference, known as 'neural speed
reading', either ignore or skim over part of the input. We present
Structural-Jump-LSTM: the first neural speed reading model to both skip and
jump text during inference. The model consists of a standard LSTM and two
agents: one capable of skipping single words when reading, and one capable of
exploiting punctuation structure (sub-sentence separators (,:), sentence end
symbols (.!?), or end of text markers) to jump ahead after reading a word. A
comprehensive experimental evaluation of our model against all five
state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves
the best overall floating point operations (FLOP) reduction (hence is faster),
while keeping the same accuracy or even improving it compared to a vanilla LSTM
that reads the whole text.
| 2019 | Computation and Language |
A Survey of Code-switched Speech and Language Processing | Code-switching, the alternation of languages within a conversation or
utterance, is a common communicative phenomenon that occurs in multilingual
communities across the world. This survey reviews computational approaches for
code-switched Speech and Natural Language Processing. We motivate why
processing code-switched text and speech is essential for building intelligent
agents and systems that interact with users in multilingual communities. As
code-switching data and resources are scarce, we list what is available in
various code-switched language pairs with the language processing tasks they
can be used for. We review code-switching research in various Speech and NLP
applications, including language processing tools and end-to-end systems. We
conclude with future directions and open problems in the field.
| 2020 | Computation and Language |
Question Embeddings Based on Shannon Entropy: Solving intent
classification task in goal-oriented dialogue system | Question-answering systems and voice assistants are becoming a major part of
the client service departments of many organizations, helping them to reduce
labor costs. In many such systems there is a natural language understanding
module that solves the intent classification task. This task is complicated by
its case-dependency: every subject area has its own semantic kernel. The
state-of-the-art approaches to intent classification are various machine
learning and deep learning methods that use text vector representations as
input. Basic vector representation models such as bag-of-words and TF-IDF
generate sparse matrices, which become very large as the amount of input data
grows. Modern methods such as word2vec and FastText use neural networks to
compute word embeddings of fixed dimension. While developing a
question-answering system for students and enrollees of the Perm National
Research Polytechnic University, we faced the problem of detecting user intent.
The subject area of our system is very specific, which is why training data is
scarce; this makes the intent classification task more challenging for
state-of-the-art deep learning methods. In this paper, we propose an approach
to question embedding representation based on the calculation of Shannon
entropy. The goal of the approach is to produce low-dimensional question
vectors, as neural approaches do, and to outperform the related methods
described above on a small dataset. We evaluate and compare our model with
existing ones using logistic regression and a dataset of questions asked by
students and enrollees, labeled into six classes. The experimental comparison
revealed that the proposed model performs better than the other models on the
given task.
| 2019 | Computation and Language |
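The abstract above does not specify how Shannon entropy enters the representation; the sketch below shows one plausible, hedged reading in which each word is weighted by the inverse entropy of its class distribution and a question is mapped to a per-class score vector. All names, the weighting scheme, and the toy data are assumptions.

```python
# Hedged sketch: one plausible entropy-based question representation.
import math
from collections import Counter, defaultdict

def class_distributions(questions, labels):
    """Word-to-class counts estimated from a small labelled question set."""
    word_class = defaultdict(Counter)
    for text, label in zip(questions, labels):
        for w in text.lower().split():
            word_class[w][label] += 1
    return word_class

def entropy_weight(counts):
    """Low class entropy (intent-specific word) -> high weight."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    return 1.0 / (1.0 + h)

def embed(question, word_class, classes):
    """Low-dimensional vector: one entropy-weighted score per intent class."""
    vec = [0.0] * len(classes)
    for w in question.lower().split():
        if w in word_class:
            weight = entropy_weight(word_class[w])
            total = sum(word_class[w].values())
            for i, c in enumerate(classes):
                vec[i] += weight * word_class[w][c] / total
    return vec

questions = ["how do i apply", "when is the exam", "how to apply online"]
labels = ["admission", "schedule", "admission"]
classes = sorted(set(labels))
wc = class_distributions(questions, labels)
print(embed("how can i apply", wc, classes))
```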
Neural Abstractive Text Summarization and Fake News Detection | In this work, we study abstractive text summarization by exploring different
models such as LSTM-encoder-decoder with attention, pointer-generator networks,
coverage mechanisms, and transformers. Upon extensive and careful
hyperparameter tuning we compare the proposed architectures against each other
for the abstractive text summarization task. Finally, as an extension of our
work, we apply our text summarization model as a feature extractor for a fake
news detection task where the news articles prior to classification will be
summarized and the results are compared against the classification using only
the original news text.
keywords: LSTM, encoder-decoder, abstractive text summarization,
pointer-generator, coverage mechanism, transformers, fake news detection
| 2019 | Computation and Language |
Making Neural Machine Reading Comprehension Faster | This study aims at solving the Machine Reading Comprehension problem where
questions have to be answered given a context passage. The challenge is to
develop a computationally faster model which will have improved inference time.
BERT, a state-of-the-art model in many natural language understanding tasks, has
been used, and a knowledge distillation method has been applied to train two
smaller models. The developed models are compared with other models which have
been developed with the same intention.
| 2019 | Computation and Language |
A Convolutional Neural Network for Language-Agnostic Source Code
Summarization | Descriptive comments play a crucial role in the software engineering process.
They decrease development time, enable better bug detection, and facilitate the
reuse of previously written code. However, comments are commonly the last of a
software developer's priorities and are thus either insufficient or missing
entirely. Automatic source code summarization may therefore have the ability to
significantly improve the software development process. We introduce a novel
encoder-decoder model that summarizes source code, effectively writing a
comment to describe the code's functionality. We make two primary innovations
beyond current source code summarization models. First, our encoder is fully
language-agnostic and requires no complex input preprocessing. Second, our
decoder has an open vocabulary, enabling it to predict any word, even ones not
seen in training. We demonstrate results comparable to state-of-the-art methods
on a single-language data set and provide the first results on a data set
consisting of multiple programming languages.
| 2019 | Computation and Language |
Polysemy and brevity versus frequency in language | The pioneering research of G. K. Zipf on the relationship between word
frequency and other word features led to the formulation of various linguistic
laws. The most popular is Zipf's law for word frequencies. Here we focus on two
laws that have been studied less intensively: the meaning-frequency law, i.e.
the tendency of more frequent words to be more polysemous, and the law of
abbreviation, i.e. the tendency of more frequent words to be shorter. In a
previous work, we tested the robustness of these Zipfian laws for English,
roughly measuring word length in number of characters and distinguishing adult
from child speech. In the present article, we extend our study to other
languages (Dutch and Spanish) and introduce two additional measures of length:
syllabic length and phonemic length. Our correlation analysis indicates that
both the meaning-frequency law and the law of abbreviation hold overall in all
the analyzed languages.
| 2019 | Computation and Language |
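A hedged sketch of the correlation analysis behind the two laws, using WordNet synset counts as a rough polysemy proxy and character length as one of the length measures; a real corpus (and the paper's syllabic and phonemic lengths) would be needed for meaningful estimates.

```python
# Hedged sketch: Spearman correlations for the meaning-frequency law and the
# law of abbreviation on a toy token list. Requires: pip install nltk scipy
from collections import Counter
import nltk
from nltk.corpus import wordnet as wn
from scipy.stats import spearmanr

nltk.download("wordnet", quiet=True)

tokens = "the cat saw the small cat near the big house the house".split()
freq = Counter(tokens)
words = list(freq)

polysemy = [len(wn.synsets(w)) for w in words]   # rough polysemy proxy
length = [len(w) for w in words]                 # character length
frequency = [freq[w] for w in words]

# Meaning-frequency law: frequency should correlate positively with polysemy.
print("meaning-frequency:", spearmanr(frequency, polysemy))
# Law of abbreviation: frequency should correlate negatively with length.
print("abbreviation:", spearmanr(frequency, length))
```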
Unsupervised Abbreviation Disambiguation Contextual disambiguation using
word embeddings | Abbreviations often have several distinct meanings, which frequently makes their use in
text ambiguous. Expanding them to their intended meaning in context is
important for Machine Reading tasks such as document search, recommendation and
question answering. Existing approaches mostly rely on manually labeled
examples of abbreviations and their correct long-forms. Such data sets are
costly to create and result in trained models with limited applicability and
flexibility. Importantly, most current methods must be subjected to a full
empirical evaluation in order to understand their limitations, which is
cumbersome in practice.
In this paper, we present an entirely unsupervised abbreviation
disambiguation method (called UAD) that picks up abbreviation definitions from
unstructured text. Creating distinct tokens per meaning, we learn context
representations as word vectors. We demonstrate how to further boost
abbreviation disambiguation performance by obtaining better context
representations using additional unstructured text. Our method is the first
abbreviation disambiguation approach with a transparent model that allows
performance analysis without requiring full-scale evaluation, making it highly
relevant for real-world deployments.
In our thorough empirical evaluation, UAD achieves high performance on large
real-world data sets from different domains and outperforms both baseline and
state-of-the-art methods. UAD scales well and supports thousands of
abbreviations with multiple different meanings within a single model.
In order to spur more research into abbreviation disambiguation, we publish a
new data set, which we also use in our experiments.
| 2019 | Computation and Language |
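A hedged sketch of the recipe the abstract above outlines: give each abbreviation meaning its own token, train ordinary word vectors over the modified corpus, and resolve new occurrences by cosine similarity between the context vector and each sense token. The token naming, toy corpus, and hyperparameters are illustrative, not UAD's actual pipeline.

```python
# Hedged sketch: sense-specific tokens plus word2vec for abbreviation disambiguation.
# Requires: pip install gensim numpy
import numpy as np
from gensim.models import Word2Vec

sentences = [
    "the ct scan showed no lesion".split(),
    "ct__computed_tomography scan of the chest".split(),
    "she moved from ct__connecticut to boston".split(),
    "ct__connecticut borders new york".split(),
] * 50  # repeat so the toy model has something to learn from

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=20, seed=1)

def disambiguate(context_words, senses):
    """Pick the sense token whose vector is closest to the averaged context vector."""
    ctx = np.mean([model.wv[w] for w in context_words if w in model.wv], axis=0)
    sims = {s: float(np.dot(ctx, model.wv[s]) /
                     (np.linalg.norm(ctx) * np.linalg.norm(model.wv[s])))
            for s in senses}
    return max(sims, key=sims.get)

print(disambiguate("scan of the chest".split(),
                   ["ct__computed_tomography", "ct__connecticut"]))
```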
Lost in Interpretation: Predicting Untranslated Terminology in
Simultaneous Interpretation | Simultaneous interpretation, the translation of speech from one language to
another in real-time, is an inherently difficult and strenuous task. One of the
greatest challenges faced by interpreters is the accurate translation of
difficult terminology like proper names, numbers, or other entities.
Intelligent computer-assisted interpreting (CAI) tools that could analyze the
spoken word and detect terms likely to be untranslated by an interpreter could
reduce translation error and improve interpreter performance. In this paper, we
propose a task of predicting which terminology simultaneous interpreters will
leave untranslated, and examine methods that perform this task using supervised
sequence taggers. We describe a number of task-specific features explicitly
designed to indicate when an interpreter may struggle with translating a word.
Experimental results on a newly-annotated version of the NAIST Simultaneous
Translation Corpus (Shimizu et al., 2014) indicate the promise of our proposed
method.
| 2019 | Computation and Language |
Sentiment analysis with genetically evolved Gaussian kernels | Sentiment analysis consists of evaluating opinions or statements from the
analysis of text. Among the methods used to estimate the degree to which a text
expresses a given sentiment are those based on Gaussian Processes. However,
traditional Gaussian Process methods use a predefined kernel with
hyperparameters that can be tuned but whose structure cannot be adapted. In
this paper, we propose the application of Genetic Programming for evolving
Gaussian Process kernels that are more precise for sentiment analysis. We use
a very flexible representation of kernels combined with a multi-objective
approach that simultaneously considers two quality metrics and the
computational time spent by the kernels. Our results show that the algorithm
can outperform Gaussian Processes with traditional kernels for some of the
sentiment analysis tasks considered.
| 2019 | Computation and Language |
Learning to Stop in Structured Prediction for Neural Machine Translation | Beam search optimization resolves many issues in neural machine translation.
However, this method lacks a principled stopping criterion and does not learn
when to stop during training; in practice, the model naturally prefers longer
hypotheses at test time since it uses the raw score instead of a
probability-based score. We propose a novel ranking method which enables an
optimal beam search stopping criterion. We further introduce a structured
prediction loss function which penalizes suboptimal finished candidates
produced by beam search during training. Experiments of neural machine
translation on both synthetic data and real languages (German-to-English and
Chinese-to-English) demonstrate that our proposed methods lead to better lengths
and BLEU scores.
| 2019 | Computation and Language |
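A small numeric illustration, with made-up scores, of why the choice between raw score sums and probability-style length-normalized scores matters for length and stopping: the two schemes can rank the same finished hypotheses differently.

```python
# Hedged illustration: raw sums versus length-normalized scores rank candidates differently.
hypotheses = {
    "short translation </s>": [-0.9, -1.1, -0.2],
    "a much longer translation that keeps going </s>": [-0.4] * 8,
}

def raw_score(logprobs):
    return sum(logprobs)

def normalized_score(logprobs):
    return sum(logprobs) / len(logprobs)

for text, lps in hypotheses.items():
    print(f"{text!r:52} raw={raw_score(lps):6.2f} norm={normalized_score(lps):6.2f}")
```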
fairseq: A Fast, Extensible Toolkit for Sequence Modeling | fairseq is an open-source sequence modeling toolkit that allows researchers
and developers to train custom models for translation, summarization, language
modeling, and other text generation tasks. The toolkit is based on PyTorch and
supports distributed training across multiple GPUs and machines. We also
support fast mixed-precision training and inference on modern GPUs. A demo
video can be found at https://www.youtube.com/watch?v=OtgDdWtHvto
| 2019 | Computation and Language |
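A minimal usage sketch, assuming the torch.hub entry points published in the fairseq repository; the model name, tokenizer, and BPE arguments may differ across releases.

```python
# Hedged sketch: loading a pretrained fairseq translation model via torch.hub
# (entry point names are an assumption based on the repository's documentation).
import torch

en2de = torch.hub.load('pytorch/fairseq',
                       'transformer.wmt19.en-de.single_model',
                       tokenizer='moses', bpe='fastbpe')
print(en2de.translate('Machine translation is fun!'))
```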
ASSERT: Anti-Spoofing with Squeeze-Excitation and Residual neTworks | We present JHU's system submission to the ASVspoof 2019 Challenge:
Anti-Spoofing with Squeeze-Excitation and Residual neTworks (ASSERT).
Anti-spoofing has gathered more and more attention since the inauguration of
the ASVspoof Challenges, and ASVspoof 2019 is dedicated to addressing attacks of
all three major types: text-to-speech, voice conversion, and replay. Building
upon previous research on Deep Neural Networks (DNNs), ASSERT is a pipeline for
a DNN-based approach to anti-spoofing. ASSERT has four components: feature
engineering, DNN models, network optimization, and system combination, where the
DNN models are variants of squeeze-excitation and residual networks. We
conducted an ablation study of the effectiveness of each component on the
ASVspoof 2019 corpus, and experimental results showed that ASSERT obtained more
than 93% and 17% relative improvements over the baseline systems in the two
sub-challenges of ASVspoof 2019, ranking ASSERT among the top performing
systems. Code and pretrained models will be made publicly available.
| 2019 | Computation and Language |
PAWS: Paraphrase Adversaries from Word Scrambling | Existing paraphrase identification datasets lack sentence pairs that have
high lexical overlap without being paraphrases. Models trained on such data
fail to distinguish pairs like flights from New York to Florida and flights
from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries
from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and
non-paraphrase pairs with high lexical overlap. Challenging pairs are generated
by controlled word swapping and back translation, followed by fluency and
paraphrase judgments by human raters. State-of-the-art models trained on
existing datasets have dismal performance on PAWS (<40% accuracy); however,
including PAWS training data for these models improves their accuracy to 85%
while maintaining performance on existing tasks. In contrast, models that do
not capture non-local contextual information fail even with PAWS training
examples. As such, PAWS provides an effective instrument for driving further
progress on models that better exploit structure, context, and pairwise
comparisons.
| 2019 | Computation and Language |
Benchmarking Approximate Inference Methods for Neural Structured
Prediction | Exact structured inference with neural network scoring functions is
computationally challenging but several methods have been proposed for
approximating inference. One approach is to perform gradient descent with
respect to the output structure directly (Belanger and McCallum, 2016). Another
approach, proposed recently, is to train a neural network (an "inference
network") to perform inference (Tu and Gimpel, 2018). In this paper, we compare
these two families of inference methods on three sequence labeling datasets. We
choose sequence labeling because it permits us to use exact inference as a
benchmark in terms of speed, accuracy, and search error. Across datasets, we
demonstrate that inference networks achieve a better speed/accuracy/search
error trade-off than gradient descent, while also being faster than exact
inference at similar accuracy levels. We find further benefit by combining
inference networks and gradient descent, using the former to provide a warm
start for the latter.
| 2019 | Computation and Language |
Recent Advances in Natural Language Inference: A Survey of Benchmarks,
Resources, and Approaches | In the NLP community, recent years have seen a surge of research activities
that address machines' ability to perform deep language understanding which
goes beyond what is explicitly stated in text, relying instead on reasoning and
knowledge of the world. Many benchmark tasks and datasets have been created to
support the development and evaluation of such natural language inference
ability. As these benchmarks become instrumental and a driving force for the
NLP research community, this paper aims to provide an overview of recent
benchmarks, relevant knowledge resources, and state-of-the-art learning and
inference approaches in order to support a better understanding of this growing
field.
| 2020 | Computation and Language |
A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence
Representations | We propose a generative model for a sentence that uses two latent variables,
with one intended to represent the syntax of the sentence and the other to
represent its semantics. We show we can achieve better disentanglement between
semantic and syntactic representations by training with multiple losses,
including losses that exploit aligned paraphrastic sentences and word-order
information. We also investigate the effect of moving from bag-of-words to
recurrent neural network modules. We evaluate our models as well as several
popular pretrained embeddings on standard semantic similarity tasks and novel
syntactic similarity tasks. Empirically, we find that the model with the best
performing syntactic and semantic representations also gives rise to the most
disentangled representations.
| 2019 | Computation and Language |
UHop: An Unrestricted-Hop Relation Extraction Framework for
Knowledge-Based Question Answering | In relation extraction for knowledge-based question answering, searching from
one entity to another entity via a single relation is called "one hop". In
related work, an exhaustive search from all one-hop relations, two-hop
relations, and so on to the max-hop relations in the knowledge graph is
necessary but expensive. Therefore, the number of hops is generally restricted
to two or three. In this paper, we propose UHop, an unrestricted-hop framework
which relaxes this restriction by use of a transition-based search framework to
replace the relation-chain-based search one. We conduct experiments on
conventional 1- and 2-hop questions as well as lengthy questions, including
datasets such as WebQSP, PathQuestion, and Grid World. Results show that the
proposed framework gives the model the ability to halt, works well with
state-of-the-art models, achieves competitive performance without exhaustive
searches, and opens the performance gap for long relation paths.
| 2019 | Computation and Language |
Temporal and Aspectual Entailment | Inferences regarding "Jane's arrival in London" from predications such as
"Jane is going to London" or "Jane has gone to London" depend on tense and
aspect of the predications. Tense determines the temporal location of the
predication in the past, present or future of the time of utterance. The
aspectual auxiliaries on the other hand specify the internal constituency of
the event, i.e. whether the event of "going to London" is completed and whether
its consequences hold at that time or not. While tense and aspect are among the
most important factors for determining natural language inference, there has
been very little work to show whether modern NLP models capture these semantic
concepts. In this paper we propose a novel entailment dataset and analyse the
ability of a range of recently proposed NLP models to perform inference on
temporal predications. We show that the models encode a substantial amount of
morphosyntactic information relating to tense and aspect, but fail to model
inferences that require reasoning with these semantic properties.
| 2019 | Computation and Language |
Pragmatically Informative Text Generation | We improve the informativeness of models for conditional text generation
using techniques from computational pragmatics. These techniques formulate
language production as a game between speakers and listeners, in which a
speaker should generate output text that a listener can use to correctly
identify the original input that the text describes. While such approaches are
widely used in cognitive science and grounded language learning, they have
received less attention for more standard language generation tasks. We
consider two pragmatic modeling methods for text generation: one where
pragmatics is imposed by information preservation, and another where pragmatics
is imposed by explicit modeling of distractors. We find that these methods
improve the performance of strong existing systems for abstractive
summarization and generation from structured meaning representations.
| 2019 | Computation and Language |
Short Text Classification Improved by Feature Space Extension | With the explosive development of the mobile Internet, short text is used
extensively. The difference between classifying short texts and long documents
is that short texts are brief and sparse. Thus, short text classification is
challenging owing to its limited semantic information. In this paper, we propose
a novel topic-based convolutional neural network (TB-CNN) based on the Latent
Dirichlet Allocation (LDA) model and a convolutional neural network. Compared to
traditional CNN methods, TB-CNN generates topic words with the LDA model to
reduce sparseness and combines the embedding vectors of topic words and input
words to extend the feature space of short text. The validation results on the
IMDB movie review dataset show the
improvement and effectiveness of TB-CNN.
| 2020 | Computation and Language |
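A hedged sketch of the feature-space-extension step described above: infer a short text's dominant LDA topic and append that topic's top words before embedding, so the downstream CNN sees more than the original few tokens. The gensim calls are standard, but the toy documents are made up and the actual TB-CNN architecture is not reproduced here.

```python
# Hedged sketch: extending a short text with LDA topic words.
# Requires: pip install gensim
from gensim import corpora
from gensim.models import LdaModel

docs = [["great", "acting", "and", "plot"],
        ["boring", "plot", "weak", "acting"],
        ["loved", "the", "soundtrack"],
        ["terrible", "film", "boring"]]

dictionary = corpora.Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bows, num_topics=2, id2word=dictionary, random_state=0, passes=10)

def extend_with_topic_words(tokens, topn=3):
    """Append the top words of the text's dominant topic to the token sequence."""
    bow = dictionary.doc2bow(tokens)
    topic_id, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
    topic_words = [w for w, _ in lda.show_topic(topic_id, topn=topn)]
    return tokens + topic_words  # extended sequence fed to the CNN embedding layer

print(extend_with_topic_words(["boring", "film"]))
```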
Using Multi-Sense Vector Embeddings for Reverse Dictionaries | Popular word embedding methods such as word2vec and GloVe assign a single
vector representation to each word, even if a word has multiple distinct
meanings. Multi-sense embeddings instead provide different vectors for each
sense of a word. However, they typically cannot serve as a drop-in replacement
for conventional single-sense embeddings, because the correct sense vector
needs to be selected for each word. In this work, we study the effect of
multi-sense embeddings on the task of reverse dictionaries. We propose a
technique to easily integrate them into an existing neural network architecture
using an attention mechanism. Our experiments demonstrate that large
improvements can be obtained when employing multi-sense embeddings both in the
input sequence as well as for the target representation. An analysis of the
sense distributions and of the learned attention is provided as well.
| 2019 | Computation and Language |
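A minimal sketch of the attention idea the abstract above refers to: given an encoding of the dictionary definition, weight each candidate sense vector by softmax similarity and use the weighted sum in place of a single-sense embedding. The vectors here are random stand-ins, not trained multi-sense embeddings.

```python
# Hedged sketch: dot-product attention over sense vectors.
import numpy as np

rng = np.random.default_rng(0)
definition_encoding = rng.normal(size=16)   # encoder output for the query definition
sense_vectors = rng.normal(size=(3, 16))    # multi-sense embeddings of one word

def attend(query, senses):
    scores = senses @ query                 # dot-product attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over senses
    return weights, weights @ senses        # attended word representation

weights, word_repr = attend(definition_encoding, sense_vectors)
print("sense weights:", np.round(weights, 3))
print("attended vector shape:", word_repr.shape)
```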
Training Data Augmentation for Context-Sensitive Neural Lemmatization
Using Inflection Tables and Raw Text | Lemmatization aims to reduce the sparse data problem by relating the
inflected forms of a word to its dictionary form. Using context can help, both
for unseen and ambiguous words. Yet most context-sensitive approaches require
full lemma-annotated sentences for training, which may be scarce or unavailable
in low-resource languages. In addition (as shown here), in a low-resource
setting, a lemmatizer can learn more from $n$ labeled examples of distinct
words (types) than from $n$ (contiguous) labeled tokens, since the latter
contain far fewer distinct types. To combine the efficiency of type-based
learning with the benefits of context, we propose a way to train a
context-sensitive lemmatizer with little or no labeled corpus data, using
inflection tables from the UniMorph project and raw text examples from
Wikipedia that provide sentence contexts for the unambiguous UniMorph examples.
Despite these being unambiguous examples, the model successfully generalizes
from them, leading to improved results (both overall, and especially on unseen
words) in comparison to a baseline that does not use context.
| 2019 | Computation and Language |
Neural Vector Conceptualization for Word Vector Space Interpretation | Distributed word vector spaces are considered hard to interpret which hinders
the understanding of natural language processing (NLP) models. In this work, we
introduce a new method to interpret arbitrary samples from a word vector space.
To this end, we train a neural model to conceptualize word vectors, which means
that it activates higher order concepts it recognizes in a given vector.
Contrary to prior approaches, our model operates in the original vector space
and is capable of learning non-linear relations between word vectors and
concepts. Furthermore, we show that it produces considerably less entropic
concept activation profiles than the popular cosine similarity.
| 2019 | Computation and Language |
Understanding language-elicited EEG data by predicting it from a
fine-tuned language model | Electroencephalography (EEG) recordings of brain activity taken while
participants read or listen to language are widely used within the cognitive
neuroscience and psycholinguistics communities as a tool to study language
comprehension. Several time-locked stereotyped EEG responses to
word-presentations -- known collectively as event-related potentials (ERPs) --
are thought to be markers for semantic or syntactic processes that take place
during comprehension. However, the characterization of each individual ERP in
terms of what features of a stream of language trigger the response remains
controversial. Improving this characterization would make ERPs a more useful
tool for studying language comprehension. We take a step towards better
understanding the ERPs by fine-tuning a language model to predict them. This
new approach to analysis shows for the first time that all of the ERPs are
predictable from embeddings of a stream of language. Prior work has only found
two of the ERPs to be predictable. In addition to this analysis, we examine
which ERPs benefit from sharing parameters during joint training. We find that
two pairs of ERPs previously identified in the literature as being related to
each other benefit from joint training, while several other pairs of ERPs that
benefit from joint training are suggestive of potential relationships.
Extensions of this analysis that further examine what kinds of information in
the model embeddings relate to each ERP have the potential to elucidate the
processes involved in human language comprehension.
| 2019 | Computation and Language |
Contrastive Predictive Coding Based Feature for Automatic Speaker
Verification | This thesis describes our ongoing work on Contrastive Predictive Coding (CPC)
features for speaker verification. CPC is a recently proposed representation
learning framework based on predictive coding and noise contrastive estimation.
We focus on incorporating CPC features into the standard automatic speaker
verification systems, and we present our methods, experiments, and analysis.
This thesis also details necessary background knowledge in past and recent work
on automatic speaker verification systems, conventional speech features, and
the motivation and techniques behind CPC.
| 2019 | Computation and Language |
Asking the Right Question: Inferring Advice-Seeking Intentions from
Personal Narratives | People often share personal narratives in order to seek advice from others.
To properly infer the narrator's intention, one needs to apply a certain degree
of common sense and social intuition. To test the capabilities of NLP systems
to recover such intuition, we introduce the new task of inferring the
advice-seeking goal behind a personal narrative. We formulate this as a cloze
test, where the goal is to identify which of two advice-seeking questions was
removed from a given narrative.
The main challenge in constructing this task is finding pairs of semantically
plausible advice-seeking questions for given narratives. To address this
challenge, we devise a method that exploits commonalities in experiences people
share online to automatically extract pairs of questions that are appropriate
candidates for the cloze task. This results in a dataset of over 20,000
personal narratives, each matched with a pair of related advice-seeking
questions: one actually intended by the narrator, and the other one not. The
dataset covers a very broad array of human experiences, from dating, to career
options, to stolen iPads. We use human annotation to determine the degree to
which the task relies on common sense and social intuition in addition to a
semantic understanding of the narrative. By introducing several baselines for
this new task we demonstrate its feasibility and identify avenues for better
modeling the intention of the narrator.
| 2019 | Computation and Language |
Analyzing Polarization in Social Media: Method and Application to Tweets
on 21 Mass Shootings | We provide an NLP framework to uncover four linguistic dimensions of
political polarization in social media: topic choice, framing, affect and
illocutionary force. We quantify these aspects with existing lexical methods,
and propose clustering of tweet embeddings as a means to identify salient
topics for analysis across events; human evaluations show that our approach
generates more cohesive topics than traditional LDA-based models. We apply our
methods to study 4.4M tweets on 21 mass shootings. We provide evidence that the
discussion of these events is highly polarized politically and that this
polarization is primarily driven by partisan differences in framing rather than
topic choice. We identify framing devices, such as grounding and the
contrasting use of the terms "terrorist" and "crazy", that contribute to
polarization. Results pertaining to topic choice, affect and illocutionary
force suggest that Republicans focus more on the shooter and event-specific
facts (news) while Democrats focus more on the victims and call for policy
changes. Our work contributes to a deeper understanding of the way group
divisions manifest in language and to computational methods for studying them.
| 2019 | Computation and Language |
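A hedged sketch of the topic-discovery step: cluster tweet embeddings with k-means and inspect the tweets assigned to each cluster. The embeddings below are random stand-ins; the paper's actual embedding model and its lexical measures of framing and affect are not reproduced.

```python
# Hedged sketch: k-means over tweet embeddings to surface discussion topics.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.cluster import KMeans

tweets = ["thoughts and prayers for the victims",
          "we need gun control now",
          "second amendment rights matter",
          "praying for the families tonight"]
embeddings = np.random.default_rng(0).normal(size=(len(tweets), 32))  # stand-in embeddings

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
for k in range(kmeans.n_clusters):
    members = [t for t, label in zip(tweets, kmeans.labels_) if label == k]
    print(f"topic {k}: {members}")
```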
Inferring Which Medical Treatments Work from Reports of Clinical Trials | How do we know if a particular medical treatment actually works? Ideally one
would consult all available evidence from relevant clinical trials.
Unfortunately, such results are primarily disseminated in natural language
scientific articles, imposing substantial burden on those trying to make sense
of them. In this paper, we present a new task and corpus for making this
unstructured evidence actionable. The task entails inferring reported findings
from a full-text article describing a randomized controlled trial (RCT) with
respect to a given intervention, comparator, and outcome of interest, e.g.,
inferring if an article provides evidence supporting the use of aspirin to
reduce risk of stroke, as compared to placebo.
We present a new corpus for this task comprising 10,000+ prompts coupled with
full-text articles describing RCTs. Results using a suite of models --- ranging
from heuristic (rule-based) approaches to attentive neural architectures ---
demonstrate the difficulty of the task, which we believe largely owes to the
lengthy, technical input texts. To facilitate further work on this important,
challenging problem we make the corpus, documentation, a website and
leaderboard, and code for baselines and evaluation available at
http://evidence-inference.ebm-nlp.com/.
| 2019 | Computation and Language |
Structural Scaffolds for Citation Intent Classification in Scientific
Publications | Identifying the intent of a citation in scientific papers (e.g., background
information, use of methods, comparing results) is critical for machine reading
of individual publications and automated analysis of the scientific literature.
We propose structural scaffolds, a multitask model to incorporate structural
information of scientific papers into citations for effective classification of
citation intents. Our model achieves a new state-of-the-art on an existing ACL
anthology dataset (ACL-ARC) with a 13.3% absolute increase in F1 score, without
relying on external linguistic resources or hand-engineered features as done in
existing methods. In addition, we introduce a new dataset of citation intents
(SciCite) which is more than five times larger and covers multiple scientific
domains compared with existing datasets. Our code and data are available at:
https://github.com/allenai/scicite.
| 2019 | Computation and Language |
Attentive Mimicking: Better Word Embeddings by Attending to Informative
Contexts | Learning high-quality embeddings for rare words is a hard problem because of
sparse context information. Mimicking (Pinter et al., 2017) has been proposed
as a solution: given embeddings learned by a standard algorithm, a model is
first trained to reproduce embeddings of frequent words from their surface form
and then used to compute embeddings for rare words. In this paper, we introduce
attentive mimicking: the mimicking model is given access not only to a word's
surface form, but also to all available contexts and learns to attend to the
most informative and reliable contexts for computing an embedding. In an
evaluation on four tasks, we show that attentive mimicking outperforms previous
work for both rare and medium-frequency words. Thus, compared to previous work,
attentive mimicking improves embeddings for a much larger part of the
vocabulary, including the medium-frequency range.
| 2019 | Computation and Language |
Identification, Interpretability, and Bayesian Word Embeddings | Social scientists have recently turned to analyzing text using tools from
natural language processing like word embeddings to measure concepts like
ideology, bias, and affinity. However, word embeddings are difficult to use in
the regression framework familiar to social scientists: embeddings are
neither identified nor directly interpretable. I offer two advances on
standard embedding models to remedy these problems. First, I develop Bayesian
Word Embeddings with Automatic Relevance Determination priors, relaxing the
assumption that all embedding dimensions have equal weight. Second, I apply
work identifying latent variable models to anchor the dimensions of the
resulting embeddings, identifying them, and making them interpretable and
usable in a regression. I then apply this model and anchoring approach to two
cases, the shift in internationalist rhetoric in the American presidents'
inaugural addresses, and the relationship between bellicosity in American
foreign policy decision-makers' deliberations and hostile actions by the United
States. I find that inaugural addresses
became less internationalist after 1945, which goes against the conventional
wisdom, and that an increase in bellicosity is associated with an increase in
hostile actions by the United States, showing that elite deliberations are not
cheap talk, and helping confirm the validity of the model.
| 2019 | Computation and Language |
Impact of ASR on Alzheimer's Disease Detection: All Errors are Equal,
but Deletions are More Equal than Others | Automatic Speech Recognition (ASR) is a critical component of any
fully-automated speech-based dementia detection model. However, despite years
of speech recognition research, little is known about the impact of ASR
accuracy on dementia detection. In this paper, we experiment with controlled
amounts of artificially generated ASR errors and investigate their influence on
dementia detection. We find that deletion errors affect detection performance
the most, due to their impact on the features of syntactic complexity and
discourse representation in speech. We show the trend to be generalisable
across two different datasets for cognitive impairment detection. As a
conclusion, we propose optimising the ASR to reflect a higher penalty for
deletion errors in order to improve dementia detection performance.
| 2020 | Computation and Language |
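A hedged sketch of the experimental manipulation described above: inject deletion, substitution, and insertion errors into clean transcripts at controlled rates so a downstream detector can be probed per error type. The filler vocabulary and rates are illustrative only.

```python
# Hedged sketch: controlled injection of artificial ASR errors into a transcript.
import random

random.seed(0)
FILLER_VOCAB = ["the", "um", "thing", "it"]

def corrupt(tokens, deletion=0.0, substitution=0.0, insertion=0.0):
    out = []
    for tok in tokens:
        if random.random() < deletion:
            continue                                   # simulated ASR deletion
        if random.random() < substitution:
            tok = random.choice(FILLER_VOCAB)          # simulated substitution
        out.append(tok)
        if random.random() < insertion:
            out.append(random.choice(FILLER_VOCAB))    # simulated insertion
    return out

clean = "she went to the kitchen to boil water for tea".split()
print(corrupt(clean, deletion=0.3))       # deletion-only condition
print(corrupt(clean, substitution=0.3))   # substitution-only condition
```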
The Tower of Babel Meets Web 2.0: User-Generated Content and its
Applications in a Multilingual Context | This study explores language's fragmenting effect on user-generated content
by examining the diversity of knowledge representations across 25 different
Wikipedia language editions. This diversity is measured at two levels: the
concepts that are included in each edition and the ways in which these concepts
are described. We demonstrate that the diversity present is greater than has
been presumed in the literature and has a significant influence on applications
that use Wikipedia as a source of world knowledge. We close by explicating how
knowledge diversity can be beneficially leveraged to create "culturally-aware
applications" and "hyperlingual applications".
| 2019 | Computation and Language |
Multi-Modal Generative Adversarial Network for Short Product Title
Generation in Mobile E-Commerce | Nowadays, more and more customers browse and purchase products using mobile
E-Commerce apps such as Taobao and Amazon. Since merchants are usually inclined
to write redundant and over-informative product titles to attract customers'
attention, it is important to display concise short product titles on the
limited screens of mobile phones. To address this discrepancy, previous studies
mainly consider the textual information of long product titles and lack a
human-like view during the training and evaluation process. In this paper,
we propose a Multi-Modal Generative Adversarial Network (MM-GAN) for short
product title generation in E-Commerce, which innovatively incorporates image
information and attribute tags from product, as well as textual information
from original long titles. MM-GAN poses short title generation as a
reinforcement learning process, where the generated titles are evaluated by the
discriminator in a human-like view. Extensive experiments on a large-scale
E-Commerce dataset demonstrate that our algorithm outperforms other
state-of-the-art methods. Moreover, we deploy our model in a real-world
online E-Commerce environment and effectively boost the click-through rate and
click conversion rate by 1.66% and 1.87%, respectively.
| 2019 | Computation and Language |
Multi-task Learning for Chinese Word Usage Errors Detection | Chinese word usage errors often occur in non-native Chinese learners'
writing. It is very helpful for non-native Chinese learners to have such errors
detected automatically while learning to write. In this paper, we propose a novel
approach which takes advantage of different auxiliary tasks, such as
POS-tagging prediction and word log-frequency prediction, to help the task of
Chinese word usage error detection. With the help of these auxiliary tasks, we
achieve state-of-the-art results on the HSK corpus data, without any extra
data.
| 2018 | Computation and Language |
Cross-lingual transfer learning for spoken language understanding | Typically, spoken language understanding (SLU) models are trained on
annotated data which are costly to gather. Aiming to reduce data needs for
bootstrapping a SLU system for a new language, we present a simple but
effective weight transfer approach using data from another language. The
approach is evaluated with our promising multi-task SLU framework, developed
for different languages. We evaluate our approach on the ATIS and a
real-world SLU dataset, showing that i) our monolingual models outperform the
state-of-the-art, ii) we can reduce data amounts needed for bootstrapping a SLU
system for a new language greatly, and iii) while multitask training improves
over separate training, different weight transfer settings may work best for
different SLU modules.
| 2019 | Computation and Language |
Modeling Vocabulary for Big Code Machine Learning | When building machine learning models that operate on source code, several
decisions have to be made to model source-code vocabulary. These decisions can
have a large impact: some can lead to not being able to train models at all,
others significantly affect performance, particularly for Neural Language
Models. Yet, these decisions are not often fully described. This paper lists
important modeling choices for source code vocabulary, and explores their
impact on the resulting vocabulary on a large-scale corpus of 14,436 projects.
We show that a subset of decisions has decisive characteristics, allowing us to
train accurate Neural Language Models quickly on a large corpus of 10,106
projects.
| 2019 | Computation and Language |
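A hedged illustration of why vocabulary modeling choices matter for source code: the same identifiers yield different vocabulary sizes depending on whether they are kept raw, lowercased, or split on camelCase and underscores. The splitting heuristic below is illustrative, not the paper's exact pipeline.

```python
# Hedged sketch: how identifier handling changes the modeled vocabulary size.
import re

corpus = ["parseHttpResponse", "parse_http_request", "HTTPResponse",
          "responseBuffer", "buffer", "Buffer", "Parse"]

def split_identifier(token):
    """Split on underscores and camelCase boundaries, then lowercase the parts."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", token)
    return [p.lower() for p in parts if p]

raw_vocab = set(corpus)
lower_vocab = {t.lower() for t in corpus}
subword_vocab = {p for t in corpus for p in split_identifier(t)}

print("raw:", len(raw_vocab),
      "lowercased:", len(lower_vocab),
      "split:", len(subword_vocab))
```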
Unsupervised Deep Structured Semantic Models for Commonsense Reasoning | Commonsense reasoning is fundamental to natural language understanding. While
traditional methods rely heavily on human-crafted features and knowledge bases,
we explore learning commonsense knowledge from a large amount of raw text via
unsupervised learning. We propose two neural network models based on the Deep
Structured Semantic Models (DSSM) framework to tackle two classic commonsense
reasoning tasks, Winograd Schema challenges (WSC) and Pronoun Disambiguation
(PDP). Evaluation shows that the proposed models effectively capture contextual
information in the sentence and co-reference information between pronouns and
nouns, and achieve significant improvement over previous state-of-the-art
approaches.
| 2019 | Computation and Language |
Subword-Level Language Identification for Intra-Word Code-Switching | Language identification for code-switching (CS), the phenomenon of
alternating between two or more languages in conversations, has traditionally
been approached under the assumption of a single language per token. However,
if at least one language is morphologically rich, a large number of words can
be composed of morphemes from more than one language (intra-word CS). In this
paper, we extend the language identification task to the subword-level, such
that it includes splitting mixed words while tagging each part with a language
ID. We further propose a model for this task, which is based on a segmental
recurrent neural network. In experiments on a new Spanish--Wixarika dataset and
on an adapted German--Turkish dataset, our proposed model performs slightly
better than or roughly on par with our best baseline, respectively. Considering
only mixed words, however, it strongly outperforms all baselines.
| 2019 | Computation and Language |
A Large-Scale Comparison of Historical Text Normalization Systems | There is no consensus on the state-of-the-art approach to historical text
normalization. Many techniques have been proposed, including rule-based
methods, distance metrics, character-based statistical machine translation, and
neural encoder--decoder models, but studies have used different datasets,
different evaluation methods, and have come to different conclusions. This
paper presents the largest study of historical text normalization done so far.
We critically survey the existing literature and report experiments on eight
languages, comparing systems spanning all categories of proposed normalization
techniques, analysing the effect of training data quantity, and using different
evaluation methods. The datasets and scripts are made publicly available.
| 2019 | Computation and Language |
Automated Fact Checking in the News Room | Fact checking is an essential task in journalism; its importance has been
highlighted due to recently increased concerns and efforts in combating
misinformation. In this paper, we present an automated fact-checking platform
which, given a claim, retrieves relevant textual evidence from a document
collection, predicts whether each piece of evidence supports or refutes the
claim, and returns a final verdict. We describe the architecture of the system
and the user interface, focusing on the choices made to improve its
user-friendliness and transparency. We conduct a user study of the
fact-checking platform in a journalistic setting: we integrated it with a
collection of news articles and provide an evaluation of the platform using
feedback from journalists in their workflow. We found that the predictions of
our platform were correct 58\% of the time, and 59\% of the returned evidence
was relevant.
| 2019 | Computation and Language |
75 Languages, 1 Model: Parsing Universal Dependencies Universally | We present UDify, a multilingual multi-task model capable of accurately
predicting universal part-of-speech, morphological features, lemmas, and
dependency trees simultaneously for all 124 Universal Dependencies treebanks
across 75 languages. By leveraging a multilingual BERT self-attention model
pretrained on 104 languages, we found that fine-tuning it on all datasets
concatenated together with simple softmax classifiers for each UD task can
result in state-of-the-art UPOS, UFeats, Lemmas, UAS, and LAS scores, without
requiring any recurrent or language-specific components. We evaluate UDify for
multilingual learning, showing that low-resource languages benefit the most
from cross-linguistic annotations. We also evaluate for zero-shot learning,
with results suggesting that multilingual training provides strong UD
predictions even for languages that neither UDify nor BERT have ever been
trained on. Code for UDify is available at
https://github.com/hyperparticle/udify.
| 2019 | Computation and Language |
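A hedged sketch of the architecture described above: one shared multilingual BERT encoder with a simple linear softmax head per UD task on top of each token representation. Label-set sizes are placeholders and this is not the released UDify code (see the linked repository for that).

```python
# Hedged sketch: shared multilingual encoder with per-task classification heads.
# Requires: pip install torch transformers
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MiniUDModel(nn.Module):
    def __init__(self, n_upos=17, n_feats=200, encoder="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder)
        hidden = self.encoder.config.hidden_size
        self.upos_head = nn.Linear(hidden, n_upos)    # one softmax classifier per task
        self.feats_head = nn.Linear(hidden, n_feats)

    def forward(self, **enc):
        states = self.encoder(**enc).last_hidden_state
        return self.upos_head(states), self.feats_head(states)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = MiniUDModel()
enc = tokenizer("Dogs bark loudly", return_tensors="pt")
upos_logits, feats_logits = model(**enc)
print(upos_logits.shape, feats_logits.shape)
```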
CAN-NER: Convolutional Attention Network for Chinese Named Entity
Recognition | Named entity recognition (NER) in Chinese is essential but difficult because
of the lack of natural delimiters. Therefore, Chinese Word Segmentation (CWS)
is usually considered as the first step for Chinese NER. However, models based
on word-level embeddings and lexicon features often suffer from segmentation
errors and out-of-vocabulary (OOV) words. In this paper, we investigate a
Convolutional Attention Network called CAN for Chinese NER, which consists of a
character-based convolutional neural network (CNN) with local-attention layer
and a gated recurrent unit (GRU) with global self-attention layer to capture
the information from adjacent characters and sentence contexts. Also, compared
to other models, ours is more practical because it does not depend on any
external resources such as lexicons and uses small character embeddings.
Extensive experimental results show that our approach outperforms
state-of-the-art methods without word embedding and external lexicon resources
on different domain datasets including Weibo, MSRA and Chinese Resume NER
dataset.
| 2,020 | Computation and Language |
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive
Autoencoders | We introduce deep inside-outside recursive autoencoders (DIORA), a
fully-unsupervised method for discovering syntax that simultaneously learns
representations for constituents within the induced tree. Our approach predicts
each word in an input sentence conditioned on the rest of the sentence and uses
inside-outside dynamic programming to consider all possible binary trees over
the sentence. At test time the CKY algorithm extracts the highest scoring
parse. DIORA achieves a new state-of-the-art F1 in unsupervised binary
constituency parsing (unlabeled) in two benchmark datasets, WSJ and MultiNLI.
| 2,019 | Computation and Language |
Probing Biomedical Embeddings from Language Models | Contextualized word embeddings derived from pre-trained language models (LMs)
show significant improvements on downstream NLP tasks. Pre-training on
domain-specific corpora, such as biomedical articles, further improves their
performance. In this paper, we conduct probing experiments to determine what
additional information is carried intrinsically by the in-domain trained
contextualized embeddings. For this we use the pre-trained LMs as fixed feature
extractors and restrict the downstream task models to not have additional
sequence modeling layers. We compare BERT, ELMo, BioBERT and BioELMo, a
biomedical version of ELMo trained on 10M PubMed abstracts. Surprisingly, while
fine-tuned BioBERT is better than BioELMo in biomedical NER and NLI tasks, as a
fixed feature extractor BioELMo outperforms BioBERT in our probing tasks. We
use visualization and nearest neighbor analysis to show that better encoding of
entity-type and relational information leads to this superiority.
| 2,019 | Computation and Language |
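The probing setup above uses the pretrained language models purely as fixed feature extractors, with no additional sequence modeling layers in the downstream model. Below is a minimal sketch of that idea with scikit-learn, training a linear probe over pre-computed token vectors; the random feature arrays, dimensions, and label set are placeholders, and no particular LM (BERT, ELMo, BioBERT, or BioELMo) is implied.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Pretend these came from a frozen language model run over a probing dataset:
# one contextual vector per token, plus an entity-type label per token.
rng = np.random.default_rng(0)
train_feats, test_feats = rng.normal(size=(500, 768)), rng.normal(size=(100, 768))
train_labels = rng.integers(0, 4, size=500)   # e.g. 4 entity types
test_labels = rng.integers(0, 4, size=100)

# Probing classifier: no extra sequence modelling, just a linear layer.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_feats, train_labels)
print("probe macro-F1:", f1_score(test_labels, probe.predict(test_feats), average="macro"))
```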
Massively Multilingual Adversarial Speech Recognition | We report on adaptation of multilingual end-to-end speech recognition models
trained on as many as 100 languages. Our findings shed light on the relative
importance of similarity between the target and pretraining languages along the
dimensions of phonetics, phonology, language family, geographical location, and
orthography. In this context, experiments demonstrate the effectiveness of two
additional pretraining objectives in encouraging language-independent encoder
representations: a context-independent phoneme objective paired with a
language-adversarial classification objective.
| 2,019 | Computation and Language |
The Effect of Downstream Classification Tasks for Evaluating Sentence
Embeddings | One popular method for quantitatively evaluating the utility of sentence
embeddings involves using them in downstream language processing tasks that
require sentence representations as input. One simple such task is
classification, where the sentence representations are used to train and test
models on several classification datasets. We argue that by evaluating sentence
representations in such a manner, the goal of the representations becomes
learning a low-dimensional factorization of a sentence-task label matrix. We
show how characteristics of this matrix can affect the ability for a
low-dimensional factorization to perform as sentence representations in a suite
of classification tasks. Primarily, sentences that have more labels across all
possible classification tasks have a higher reconstruction loss; however, the
general nature of this effect ultimately depends on the overall
distribution of labels across all possible sentences.
| 2,019 | Computation and Language |
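The argument above views classification-based evaluation as learning a low-dimensional factorization of a sentence-by-label matrix, whose reconstruction loss depends on how labels are distributed. A toy numpy sketch of that framing, assuming an invented binary matrix: truncate the SVD at rank k and inspect the Frobenius reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows = sentences, columns = labels pooled across several classification tasks.
M = (rng.random((200, 30)) < 0.2).astype(float)

def rank_k_reconstruction_loss(M, k):
    """Frobenius reconstruction error of the best rank-k approximation (via SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return np.linalg.norm(M - M_k) ** 2

for k in (2, 5, 10, 20):
    print(k, round(rank_k_reconstruction_loss(M, k), 2))
```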
BERT Post-Training for Review Reading Comprehension and Aspect-based
Sentiment Analysis | Question-answering plays an important role in e-commerce as it allows
potential customers to actively seek crucial information about products or
services to help their purchase decision making. Inspired by the recent success
of machine reading comprehension (MRC) on formal documents, this paper explores
the potential of turning customer reviews into a large source of knowledge that
can be exploited to answer user questions. We call this problem Review Reading
Comprehension (RRC). To the best of our knowledge, no existing work has been
done on RRC. In this work, we first build an RRC dataset called ReviewRC based
on a popular benchmark for aspect-based sentiment analysis. Since ReviewRC has
limited training examples for RRC (and also for aspect-based sentiment
analysis), we then explore a novel post-training approach on the popular
language model BERT to enhance the performance of fine-tuning of BERT for RRC.
To show the generality of the approach, the proposed post-training is also
applied to some other review-based tasks such as aspect extraction and aspect
sentiment classification in aspect-based sentiment analysis. Experimental
results demonstrate that the proposed post-training is highly effective. The
datasets and code are available at https://www.cs.uic.edu/~hxu/.
| 2,019 | Computation and Language |
Multi-task Learning for Japanese Predicate Argument Structure Analysis | An event-noun is a noun that has an argument structure similar to a
predicate. Recent works, including those considered state-of-the-art, ignore
event-nouns or build a single model for solving both Japanese predicate
argument structure analysis (PASA) and event-noun argument structure analysis
(ENASA). However, because there are interactions between predicates and
event-nouns, it is not sufficient to target only predicates. To address this
problem, we present a multi-task learning method for PASA and ENASA. Our
multi-task models improved the performance of both tasks compared to a
single-task model by sharing knowledge from each task. Moreover, in PASA, our
models achieved state-of-the-art results in overall F1 scores on the NAIST Text
Corpus. In addition, this is the first work to employ neural networks in ENASA.
| 2,019 | Computation and Language |
Learning Outside the Box: Discourse-level Features Improve Metaphor
Identification | Most current approaches to metaphor identification use restricted linguistic
contexts, e.g. by considering only a verb's arguments or the sentence
containing a phrase. Inspired by pragmatic accounts of metaphor, we argue that
broader discourse features are crucial for better metaphor identification. We
train simple gradient boosting classifiers on representations of an utterance
and its surrounding discourse learned with a variety of document embedding
methods, obtaining near state-of-the-art results on the 2018 VU Amsterdam
metaphor identification task without the complex metaphor-specific features or
deep neural architectures employed by other systems. A qualitative analysis
further confirms the need for broader context in metaphor processing.
| 2,019 | Computation and Language |
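The approach above trains simple gradient boosting classifiers on representations of an utterance and its surrounding discourse. The sketch below shows the general pipeline shape with scikit-learn; concatenating an utterance vector with a discourse vector, the feature dimensions, and the random data are assumptions rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400
utterance_vecs = rng.normal(size=(n, 100))   # embedding of the sentence with the target word
discourse_vecs = rng.normal(size=(n, 100))   # embedding of the surrounding discourse
labels = rng.integers(0, 2, size=n)          # 1 = metaphorical, 0 = literal

# Broader context is injected simply by concatenating the two representations.
X = np.concatenate([utterance_vecs, discourse_vecs], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```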
Answer-based Adversarial Training for Generating Clarification Questions | We present an approach for generating clarification questions with the goal
of eliciting new information that would make the given textual context more
complete. We propose that modeling hypothetical answers (to clarification
questions) as latent variables can guide our approach into generating more
useful clarification questions. We develop a Generative Adversarial Network
(GAN) where the generator is a sequence-to-sequence model and the discriminator
is a utility function that models the value of updating the context with the
answer to the clarification question. We evaluate on two datasets, using both
automatic metrics and human judgments of usefulness, specificity and relevance,
showing that our approach outperforms both a retrieval-based model and
ablations that exclude the utility model and the adversarial training.
| 2,019 | Computation and Language |
Generative Adversarial Networks for text using word2vec intermediaries | Generative adversarial networks (GANs) have shown considerable success,
especially in the realistic generation of images. In this work, we apply
similar techniques for the generation of text. We propose a novel approach to
handle the discrete nature of text, during training, using word embeddings. Our
method is agnostic to vocabulary size and achieves competitive results relative
to methods with various discrete gradient estimators.
| 2,019 | Computation and Language |
Evaluating Style Transfer for Text | Research in the area of style transfer for text is currently bottlenecked by
a lack of standard evaluation practices. This paper aims to alleviate this
issue by experimentally identifying best practices with a Yelp sentiment
dataset. We specify three aspects of interest (style transfer intensity,
content preservation, and naturalness) and show how to obtain more reliable
measures of them from human evaluation than in previous work. We propose a set
of metrics for automated evaluation and demonstrate that they are more strongly
correlated and in agreement with human judgment: direction-corrected Earth
Mover's Distance, Word Mover's Distance on style-masked texts, and adversarial
classification for the respective aspects. We also show that the three examined
models exhibit tradeoffs between aspects of interest, demonstrating the
importance of evaluating style transfer models at specific points of their
tradeoff plots. We release software with our evaluation metrics to facilitate
research.
| 2,019 | Computation and Language |
A Simple Joint Model for Improved Contextual Neural Lemmatization | English verbs have multiple forms. For instance, talk may also appear as
talks, talked or talking, depending on the context. The NLP task of
lemmatization seeks to map these diverse forms back to a canonical one, known
as the lemma. We present a simple joint neural model for lemmatization and
morphological tagging that achieves state-of-the-art results on 20 languages
from the Universal Dependencies corpora. Our paper describes the model in
addition to training and decoding procedures. Error analysis indicates that
joint morphological tagging and lemmatization is especially helpful in
low-resource lemmatization and languages that display a larger degree of
morphological complexity. Code and pre-trained models are available at
https://sigmorphon.github.io/sharedtasks/2019/task2/.
| 2,020 | Computation and Language |
Guiding Extractive Summarization with Question-Answering Rewards | Highlighting while reading is a natural behavior for people to track salient
content of a document. It would be desirable to teach an extractive summarizer
to do the same. However, a major obstacle to the development of a supervised
summarizer is the lack of ground-truth. Manual annotation of extraction units
is cost-prohibitive, whereas acquiring labels by automatically aligning human
abstracts and source documents can yield inferior results. In this paper we
describe a novel framework to guide a supervised, extractive summarization
system with question-answering rewards. We argue that quality summaries should
serve as a document surrogate to answer important questions, and such
question-answer pairs can be conveniently obtained from human abstracts. The
system learns to promote summaries that are informative, fluent, and perform
competitively on question-answering. Our results compare favorably with those
reported by strong summarization baselines as evaluated by automatic metrics
and human assessors.
| 2,019 | Computation and Language |
Extract and Edit: An Alternative to Back-Translation for Unsupervised
Neural Machine Translation | The overreliance on large parallel corpora significantly limits the
applicability of machine translation systems to the majority of language pairs.
Back-translation has been dominantly used in previous approaches for
unsupervised neural machine translation, where pseudo sentence pairs are
generated to train the models with a reconstruction loss. However, the pseudo
sentences are usually of low quality as translation errors accumulate during
training. To avoid this fundamental issue, we propose an alternative but more
effective approach, extract-edit, to extract and then edit real sentences from
the target monolingual corpora. Furthermore, we introduce a comparative
translation loss to evaluate the translated target sentences and thus train the
unsupervised translation systems. Experiments show that the proposed approach
consistently outperforms the previous state-of-the-art unsupervised machine
translation systems across two benchmarks (English-French and English-German)
and two low-resource language pairs (English-Romanian and English-Russian) by
more than 2 (up to 3.63) BLEU points.
| 2,019 | Computation and Language |
Text Generation from Knowledge Graphs with Graph Transformers | Generating texts which express complex ideas spanning multiple sentences
requires a structured representation of their content (document plan), but
these representations are prohibitively expensive to manually produce. In this
work, we address the problem of generating coherent multi-sentence texts from
the output of an information extraction system, and in particular a knowledge
graph. Graphical knowledge representations are ubiquitous in computing, but
pose a significant challenge for text generation techniques due to their
non-hierarchical nature, collapsing of long-distance dependencies, and
structural variety. We introduce a novel graph transforming encoder which can
leverage the relational structure of such knowledge graphs without imposing
linearization or hierarchical constraints. Incorporated into an encoder-decoder
setup, we provide an end-to-end trainable system for graph-to-text generation
that we apply to the domain of scientific text. Automatic and human evaluations
show that our technique produces more informative texts which exhibit better
document structure than competitive encoder-decoder methods.
| 2,022 | Computation and Language |
Density Matching for Bilingual Word Embedding | Recent approaches to cross-lingual word embedding have generally been based
on linear transformations between the sets of embedding vectors in the two
languages. In this paper, we propose an approach that instead expresses the two
monolingual embedding spaces as probability densities defined by a Gaussian
mixture model, and matches the two densities using a method called normalizing
flow. The method requires no explicit supervision, and can be learned with only
a seed dictionary of words that have identical strings. We argue that this
formulation has several intuitively attractive properties, particularly with
respect to improving robustness and generalization to mappings between
difficult language pairs or word pairs. On a benchmark data set of bilingual
lexicon induction and cross-lingual word similarity, our approach can achieve
competitive or superior performance compared to state-of-the-art published
results, with particularly strong results being found on etymologically distant
and/or morphologically rich languages.
| 2,019 | Computation and Language |
Document-Level $N$-ary Relation Extraction with Multiscale
Representation Learning | Most information extraction methods focus on binary relations expressed
within single sentences. In high-value domains, however, $n$-ary relations are
of great demand (e.g., drug-gene-mutation interactions in precision oncology).
Such relations often involve entity mentions that are far apart in the
document, yet existing work on cross-sentence relation extraction is generally
confined to small text spans (e.g., three consecutive sentences), which
severely limits recall. In this paper, we propose a novel multiscale neural
architecture for document-level $n$-ary relation extraction. Our system
combines representations learned over various text spans throughout the
document and across the subrelation hierarchy. Widening the system's purview to
the entire document maximizes potential recall. Moreover, by integrating weak
signals across the document, multiscale modeling increases precision, even in
the presence of noisy labels from distant supervision. Experiments on
biomedical machine reading show that our approach substantially outperforms
previous $n$-ary relation extraction methods.
| 2,019 | Computation and Language |
Plan, Write, and Revise: an Interactive System for Open-Domain Story
Generation | Story composition is a challenging problem for machines and even for humans.
We present a neural narrative generation system that interacts with humans to
generate stories. Our system has different levels of human interaction, which
enables us to understand at what stage of story-writing human collaboration is
most productive, both to improving story quality and human engagement in the
writing process. We compare different varieties of interaction in
story-writing, story-planning, and diversity controls under time constraints,
and show that increased types of human collaboration at both planning and
writing stages result in a 10-50% improvement in story quality as compared to
less interactive baselines. We also show an accompanying increase in user
engagement and satisfaction with stories as compared to our own less
interactive systems and to previous turn-taking approaches to interaction.
Finally, we find that humans tasked with collaboratively improving a particular
characteristic of a story are in fact able to do so, which has implications for
future uses of human-in-the-loop systems.
| 2,019 | Computation and Language |
Multi-reference Tacotron by Intercross Training for Style
Disentangling, Transfer and Control in Speech Synthesis | Speech style control and transfer techniques aim to enrich the diversity and
expressiveness of synthesized speech. Existing approaches model all speech
styles into one representation, lacking the ability to control a specific
speech feature independently. To address this issue, we introduce a novel
multi-reference structure to Tacotron and propose an intercross training approach,
which together ensure that each sub-encoder of the multi-reference encoder
independently disentangles and controls a specific style. Experimental results
show that our model is able to control and transfer desired speech styles
individually.
| 2,019 | Computation and Language |
Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for
Text Modeling | Recurrent Variational Autoencoder has been widely used for language modeling
and text generation tasks. These models often face a difficult optimization
problem, also known as the Kullback-Leibler (KL) term vanishing issue, where
the posterior easily collapses to the prior, and the model will ignore latent
codes in generative tasks. To address this problem, we introduce an improved
Wasserstein Variational Autoencoder (WAE) with Riemannian Normalizing Flow
(RNF) for text modeling. The RNF transforms a latent variable into a space that
respects the geometric characteristics of the input space, which prevents the
posterior from collapsing to the non-informative prior. The Wasserstein objective
minimizes the distance between the marginal distribution and the prior directly
and therefore does not force the posterior to match the prior. Empirical
experiments show that our model avoids KL vanishing over a range of datasets
and achieves better performance on tasks such as language modeling, likelihood
approximation, and text generation. Through a series of experiments and
analysis over latent space, we show that our model learns latent distributions
that respect latent space geometry and is able to generate sentences that are
more diverse.
| 2,019 | Computation and Language |
Learning to Decipher Hate Symbols | Existing computational models to understand hate speech typically frame the
problem as a simple classification task, bypassing the understanding of hate
symbols (e.g., 14 words, kigy) and their secret connotations. In this paper, we
propose a novel task of deciphering hate symbols. To do this, we leverage the
Urban Dictionary and collect a new, symbol-rich Twitter corpus of hate
speech. We investigate neural network latent context models for deciphering
hate symbols. More specifically, we study Sequence-to-Sequence models and show
how they are able to crack the ciphers based on context. Furthermore, we
propose a novel Variational Decipher and show how it can generalize better to
unseen hate symbols in a more challenging testing setting.
| 2,019 | Computation and Language |
ReWE: Regressing Word Embeddings for Regularization of Neural Machine
Translation Systems | Regularization of neural machine translation is still a significant problem,
especially in low-resource settings. To mitigate this problem, we propose
regressing word embeddings (ReWE) as a new regularization technique in a system
that is jointly trained to predict the next word in the translation
(categorical value) and its word embedding (continuous value). Such a joint
training allows the proposed system to learn the distributional properties
represented by the word embeddings, empirically improving the generalization to
unseen sentences. Experiments over three translation datasets have shown a
consistent improvement over a strong baseline, ranging between 0.91 and 2.54
BLEU points, and also a marked improvement over a state-of-the-art system.
| 2,019 | Computation and Language |
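ReWE, as summarized above, jointly trains the system to predict the next word (a categorical target) and its word embedding (a continuous target). A minimal sketch of one way such a joint loss could look in PyTorch; the cosine-based embedding term, the weighting factor lam, and the projected decoder state are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rewe_style_loss(logits, hidden, targets, target_embeddings, lam=0.5):
    """
    Joint loss: cross-entropy on next-word prediction plus a regression term
    pulling a projected decoder state towards the gold word's embedding.
    logits:  (batch, vocab)   hidden: (batch, emb_dim) -- projected decoder state
    targets: (batch,)         target_embeddings: (batch, emb_dim)
    """
    ce = F.cross_entropy(logits, targets)
    emb_term = 1.0 - F.cosine_similarity(hidden, target_embeddings, dim=-1).mean()
    return ce + lam * emb_term

# Toy usage with random tensors.
batch, vocab, dim = 8, 1000, 64
logits = torch.randn(batch, vocab, requires_grad=True)
hidden = torch.randn(batch, dim, requires_grad=True)
targets = torch.randint(0, vocab, (batch,))
emb_table = torch.randn(vocab, dim)
loss = rewe_style_loss(logits, hidden, targets, emb_table[targets])
loss.backward()
```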
Composition of Sentence Embeddings: Lessons from Statistical Relational
Learning | Various NLP problems -- such as the prediction of sentence similarity,
entailment, and discourse relations -- are all instances of the same general
task: the modeling of semantic relations between a pair of textual elements. A
popular model for such problems is to embed sentences into fixed size vectors,
and use composition functions (e.g. concatenation or sum) of those vectors as
features for the prediction. At the same time, composition of embeddings has
been a main focus within the field of Statistical Relational Learning (SRL)
whose goal is to predict relations between entities (typically from knowledge
base triples). In this article, we show that previous work on relation
prediction between texts implicitly uses compositions from baseline SRL models.
We show that such compositions are not expressive enough for several tasks
(e.g. natural language inference). We build on recent SRL models to address
textual relational problems, showing that they are more expressive, and can
alleviate issues from simpler compositions. The resulting models significantly
improve the state of the art in both transferable sentence representation
learning and relation prediction.
| 2,019 | Computation and Language |
Multi-Context Term Embeddings: the Use Case of Corpus-based Term Set
Expansion | In this paper, we present a novel algorithm that combines multi-context term
embeddings using a neural classifier and we test this approach on the use case
of corpus-based term set expansion. In addition, we present a novel and unique
dataset for intrinsic evaluation of corpus-based term set expansion algorithms.
We show that, over this dataset, our algorithm provides up to 5 mean average
precision points over the best baseline.
| 2,019 | Computation and Language |
Robust Evaluation of Language-Brain Encoding Experiments | Language-brain encoding experiments evaluate the ability of language models
to predict brain responses elicited by language stimuli. The evaluation
scenarios for this task have not yet been standardized, which makes it difficult
to compare and interpret results. We perform a series of evaluation experiments
with a consistent encoding setup and compute the results for multiple fMRI
datasets. In addition, we test the sensitivity of the evaluation measures to
randomized data and analyze the effect of voxel selection methods. Our
experimental framework is publicly available to make modelling decisions more
transparent and support reproducibility for future comparisons.
| 2,019 | Computation and Language |
Dialogue Act Classification with Context-Aware Self-Attention | Recent work in Dialogue Act classification has treated the task as a sequence
labeling problem using hierarchical deep neural networks. We build on this
prior work by leveraging the effectiveness of a context-aware self-attention
mechanism coupled with a hierarchical recurrent neural network. We conduct
extensive evaluations on standard Dialogue Act classification datasets and show
significant improvement over state-of-the-art results on the Switchboard
Dialogue Act (SwDA) Corpus. We also investigate the impact of different
utterance-level representation learning methods and show that our method is
effective at capturing utterance-level semantic text representations while
maintaining high accuracy.
| 2,019 | Computation and Language |
Sequence-to-Sequence Speech Recognition with Time-Depth Separable
Convolutions | We propose a fully convolutional sequence-to-sequence encoder architecture
with a simple and efficient decoder. Our model improves WER on LibriSpeech
while being an order of magnitude more efficient than a strong RNN baseline.
Key to our approach is a time-depth separable convolution block which
dramatically reduces the number of parameters in the model while keeping the
receptive field large. We also give a stable and efficient beam search
inference procedure which allows us to effectively integrate a language model.
Coupled with a convolutional language model, our time-depth separable
convolution architecture improves by more than 22% relative WER over the best
previously reported sequence-to-sequence results on the noisy LibriSpeech test
set.
| 2,019 | Computation and Language |
ElimiNet: A Model for Eliminating Options for Reading Comprehension with
Multiple Choice Questions | The task of Reading Comprehension with Multiple Choice Questions requires a
human (or machine) to read a given passage-question pair and select one of the
n given options. The current state of the art model for this task first
computes a question-aware representation for the passage and then selects the
option which has the maximum similarity with this representation. However, when
humans perform this task they do not just focus on option selection but use a
combination of elimination and selection. Specifically, a human would first try
to eliminate the most irrelevant option and then read the passage again in the
light of this new information (and perhaps ignore portions corresponding to the
eliminated option). This process could be repeated multiple times till the
reader is finally ready to select the correct option. We propose ElimiNet, a
neural network-based model which tries to mimic this process. Specifically, it
has gates which decide whether an option can be eliminated given the
passage-question pair, and if so it tries to make the passage representation orthogonal
to this eliminated option (akin to ignoring portions of the passage
corresponding to the eliminated option). The model makes multiple rounds of
partial elimination to refine the passage representation and finally uses a
selection module to pick the best option. We evaluate our model on the recently
released large scale RACE dataset and show that it outperforms the current
state of the art model on 7 out of the 13 question types in this dataset.
Further, we show that taking an ensemble of our elimination-selection based
method with a selection based method gives us an improvement of 3.1% over the
best-reported performance on this dataset.
| 2,018 | Computation and Language |
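ElimiNet's elimination step, described above, decides whether to eliminate an option and then makes the passage representation orthogonal to that option. The snippet below sketches only that core idea: vector rejection plus a soft sigmoid gate. The gating layer is a simplified assumption rather than the paper's exact module.

```python
import torch
import torch.nn as nn

def reject(x, o, eps=1e-8):
    """Remove from x its component along o: x - proj_o(x)."""
    coef = (x * o).sum(-1, keepdim=True) / (o.norm(dim=-1, keepdim=True) ** 2 + eps)
    return x - coef * o

class EliminationGate(nn.Module):
    """Soft gate deciding how strongly to eliminate an option (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())

    def forward(self, passage_vec, option_vec):
        g = self.gate(torch.cat([passage_vec, option_vec], dim=-1))   # (batch, 1)
        orthogonal = reject(passage_vec, option_vec)
        # Interpolate between keeping the passage as-is and fully ignoring the option.
        return g * orthogonal + (1 - g) * passage_vec

# Toy usage.
layer = EliminationGate(32)
p, o = torch.randn(4, 32), torch.randn(4, 32)
print(layer(p, o).shape)   # torch.Size([4, 32])
```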
Recommendations for Datasets for Source Code Summarization | Source Code Summarization is the task of writing short, natural language
descriptions of source code. The main use for these descriptions is in software
documentation e.g. the one-sentence Java method descriptions in JavaDocs. Code
summarization is rapidly becoming a popular research problem, but progress is
restrained due to a lack of suitable datasets. In addition, a lack of community
standards for creating datasets leads to confusing and unreproducible research
results -- we observe swings in performance of more than 33% due only to
changes in dataset design. In this paper, we make recommendations for these
standards from experimental results. We release a dataset based on prior work
of over 2.1m pairs of Java methods and one sentence method descriptions from
over 28k Java projects. We describe the dataset and point out key differences
from natural language data, to guide and support future researchers.
| 2,019 | Computation and Language |
Frustratingly Poor Performance of Reading Comprehension Models on
Non-adversarial Examples | When humans learn to perform a difficult task (say, reading comprehension
(RC) over longer passages), it is typically the case that their performance
improves significantly on an easier version of this task (say, RC over shorter
passages). Ideally, we would want an intelligent agent to also exhibit such a
behavior. However, on experimenting with state of the art RC models using the
standard RACE dataset, we observe that this is not true. Specifically, we see
counter-intuitive results wherein even when we show frustratingly easy examples
to the model at test time, there is hardly any improvement in its performance.
We refer to this as non-adversarial evaluation as opposed to adversarial
evaluation. Such non-adversarial examples allow us to assess the utility of
specialized neural components. For example, we show that even for easy examples
where the answer is clearly embedded in the passage, the neural components
designed for paying attention to relevant portions of the passage fail to serve
their intended purpose. We believe that the non-adversarial dataset created as
a part of this work would complement the research on adversarial evaluation and
give a more realistic assessment of the ability of RC models. All the datasets
and codes developed as a part of this work will be made publicly available.
| 2,019 | Computation and Language |
Inoculation by Fine-Tuning: A Method for Analyzing Challenge Datasets | Several datasets have recently been constructed to expose brittleness in
models trained on existing benchmarks. While model performance on these
challenge datasets is significantly lower compared to the original benchmark,
it is unclear what particular weaknesses they reveal. For example, a challenge
dataset may be difficult because it targets phenomena that current models
cannot capture, or because it simply exploits blind spots in a model's specific
training set. We introduce inoculation by fine-tuning, a new analysis method
for studying challenge datasets by exposing models (the metaphorical patient)
to a small amount of data from the challenge dataset (a metaphorical pathogen)
and assessing how well they can adapt. We apply our method to analyze the NLI
"stress tests" (Naik et al., 2018) and the Adversarial SQuAD dataset (Jia and
Liang, 2017). We show that after slight exposure, some of these datasets are no
longer challenging, while others remain difficult. Our results indicate that
failures on challenge datasets may lead to very different conclusions about
models, training datasets, and the challenge datasets themselves.
| 2,019 | Computation and Language |
Studying Cultural Differences in Emoji Usage across the East and the
West | Global acceptance of Emojis suggests a cross-cultural, normative use of
Emojis. Meanwhile, nuances in Emoji use across cultures may also exist due to
linguistic differences in expressing emotions and diversity in conceptualizing
topics. Indeed, literature in cross-cultural psychology has found both
normative and culture-specific ways in which emotions are expressed. In this
paper, using social media, we compare the Emoji usage based on frequency,
context, and topic associations across countries in the East (China and Japan)
and the West (United States, United Kingdom, and Canada). Across the East and
the West, our study examines a) similarities and differences on the usage of
different categories of Emojis such as People, Food & Drink, Travel & Places
etc., b) potential mapping of Emoji use differences with previously identified
cultural differences in users' expression about diverse concepts such as death,
money, emotions, and family, and c) relative correspondence of validated
psycho-linguistic categories with Ekman's emotions. The analysis of Emoji use
in the East and the West reveals recognizable normative and culture-specific
patterns. This research reveals the ways in which Emojis can be used for
cross-cultural communication.
| 2,019 | Computation and Language |
Advancing NLP with Cognitive Language Processing Signals | When we read, our brain processes language and generates cognitive processing
data such as gaze patterns and brain activity. These signals can be recorded
while reading. Cognitive language processing data, such as eye-tracking features,
have been shown to yield improvements on individual NLP tasks. We analyze whether using such
human features can show consistent improvement across tasks and data sources.
We present an extensive investigation of the benefits and limitations of using
cognitive processing data for NLP. Specifically, we use gaze and EEG features
to augment models of named entity recognition, relation classification, and
sentiment analysis. These methods significantly outperform the baselines and
show the potential and current limitations of employing human language
processing data for NLP.
| 2,019 | Computation and Language |
Neural Models of the Psychosemantics of `Most' | How are the meanings of linguistic expressions related to their use in
concrete cognitive tasks? Visual identification tasks show human speakers can
exhibit considerable variation in their understanding, representation and
verification of certain quantifiers. This paper initiates an investigation into
neural models of these psycho-semantic tasks. We trained two types of network
-- a convolutional neural network (CNN) model and a recurrent model of visual
attention (RAM) -- on the "most" verification task from Pietroski et al. (2009),
manipulating the visual scene and novel notions of task duration. Our results
qualitatively mirror certain features of human performance (such as sensitivity
to the ratio of set sizes, indicating a reliance on approximate number) while
differing in interesting ways (such as exhibiting a subtly different pattern
for the effect of image type). We conclude by discussing the prospects for
using neural models as cognitive models of this and other psychosemantic tasks.
| 2,019 | Computation and Language |
ExCL: Extractive Clip Localization Using Natural Language Descriptions | The task of retrieving clips within videos based on a given natural language
query requires cross-modal reasoning over multiple frames. Prior approaches
such as sliding window classifiers are inefficient, while text-clip similarity
driven ranking-based approaches such as segment proposal networks are far more
complicated. In order to select the most relevant video clip corresponding to
the given text description, we propose a novel extractive approach that
predicts the start and end frames by leveraging cross-modal interactions
between the text and video - this removes the need to retrieve and re-rank
multiple proposal segments. Using recurrent networks we encode the two
modalities into a joint representation which is then used in different variants
of start-end frame predictor networks. Through extensive experimentation and
ablative analysis, we demonstrate that our simple and elegant approach
significantly outperforms state of the art on two datasets and has comparable
performance on a third.
| 2,019 | Computation and Language |
Complexity-Weighted Loss and Diverse Reranking for Sentence
Simplification | Sentence simplification is the task of rewriting texts so they are easier to
understand. Recent research has applied sequence-to-sequence (Seq2Seq) models
to this task, focusing largely on training-time improvements via reinforcement
learning and memory augmentation. One of the main problems with applying
generic Seq2Seq models for simplification is that these models tend to copy
directly from the original sentence, resulting in outputs that are relatively
long and complex. We aim to alleviate this issue through the use of two main
techniques. First, we incorporate content word complexities, as predicted with
a leveled word complexity model, into our loss function during training.
Second, we generate a large set of diverse candidate simplifications at test
time, and rerank these to promote fluency, adequacy, and simplicity. Here, we
measure simplicity through a novel sentence complexity model. These extensions
allow our models to perform competitively with state-of-the-art systems while
generating simpler sentences. We report standard automatic and human evaluation
metrics.
| 2,019 | Computation and Language |
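The first technique above folds predicted content-word complexities into the Seq2Seq training loss. Below is a hedged sketch of a complexity-weighted cross-entropy in PyTorch; treating the complexity scores as a per-token multiplicative weight (and the weight range used here) is an assumption for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def complexity_weighted_ce(logits, targets, token_weights, pad_id=0):
    """
    logits: (batch, seq, vocab), targets: (batch, seq),
    token_weights: (batch, seq) -- e.g. higher weight for simpler target words.
    """
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         targets.reshape(-1), reduction="none")
    ce = ce.reshape_as(targets.float())
    mask = (targets != pad_id).float()
    weighted = ce * token_weights * mask
    return weighted.sum() / mask.sum().clamp(min=1.0)

# Toy usage: weight tokens by an (invented) inverse-complexity score in [0.5, 1.5].
batch, seq, vocab = 2, 6, 50
logits = torch.randn(batch, seq, vocab, requires_grad=True)
targets = torch.randint(1, vocab, (batch, seq))
weights = 0.5 + torch.rand(batch, seq)
loss = complexity_weighted_ce(logits, targets, weights)
loss.backward()
```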
In Other News: A Bi-style Text-to-speech Model for Synthesizing
Newscaster Voice with Limited Data | Neural text-to-speech synthesis (NTTS) models have shown significant progress
in generating high-quality speech; however, they require a large quantity of
training data. This makes creating models for multiple styles expensive and
time-consuming. In this paper, different styles of speech are analysed based on
prosodic variations; from this, a model is proposed to synthesise speech in the
style of a newscaster, with just a few hours of supplementary data. We pose the
problem of synthesising in a target style using limited data as that of
creating a bi-style model that can synthesise both neutral-style and
newscaster-style speech via a one-hot vector which factorises the two styles.
We also propose conditioning the model on contextual word embeddings, and
extensively evaluate it against neutral NTTS, and neutral concatenative-based
synthesis. This model closes the gap in perceived style-appropriateness between
natural recordings of newscaster-style speech and neutral speech synthesis
by approximately two-thirds.
| 2,019 | Computation and Language |
Unifying Human and Statistical Evaluation for Natural Language
Generation | How can we measure whether a natural language generation system produces both
high quality and diverse outputs? Human evaluation captures quality but not
diversity, as it does not catch models that simply plagiarize from the training
set. On the other hand, statistical evaluation (i.e., perplexity) captures
diversity but not quality, as models that occasionally emit low quality samples
would be insufficiently penalized. In this paper, we propose a unified
framework which evaluates both diversity and quality, based on the optimal
error rate of predicting whether a sentence is human- or machine-generated. We
demonstrate that this error rate can be efficiently estimated by combining
human and statistical evaluation, using an evaluation metric which we call
HUSE. On summarization and chit-chat dialogue, we show that (i) HUSE detects
diversity defects which fool pure human evaluation and that (ii) techniques
such as annealing for improving quality actually decrease HUSE due to decreased
diversity.
| 2,019 | Computation and Language |
Affect-Driven Dialog Generation | The majority of current systems for end-to-end dialog generation focus on
response quality without an explicit control over the affective content of the
responses. In this paper, we present an affect-driven dialog system, which
generates emotional responses in a controlled manner using a continuous
representation of emotions. The system achieves this by modeling emotions at a
word and sequence level using: (1) a vector representation of the desired
emotion, (2) an affect regularizer, which penalizes neutral words, and (3) an
affect sampling method, which forces the neural network to generate diverse
words that are emotionally relevant. During inference, we use a reranking
procedure that aims to extract the most emotionally relevant responses using a
human-in-the-loop optimization process. We study the performance of our system
in terms of both quantitative (BLEU score and response diversity), and
qualitative (emotional appropriateness) measures.
| 2,019 | Computation and Language |
Improving Dialogue State Tracking by Discerning the Relevant Context | A typical conversation comprises multiple turns between participants where
they go back-and-forth between different topics. At each user turn, dialogue
state tracking (DST) aims to estimate the user's goal by processing the current
utterance. However, in many turns, users implicitly refer to the previous goal,
necessitating the use of relevant dialogue history. Nonetheless, distinguishing
relevant history is challenging, and the popular method of using dialogue recency
for this purpose is inefficient. We therefore propose a novel framework for DST that
identifies relevant historical context by referring to the past utterances
where a particular slot-value changes and uses that together with weighted
system utterance to identify the relevant context. Specifically, we use the
current user utterance and the most recent system utterance to determine the
relevance of a system utterance. Empirical analyses show that our method
improves joint goal accuracy by 2.75% and 2.36% on WoZ 2.0 and MultiWoZ 2.0
restaurant domain datasets respectively over the previous state-of-the-art GLAD
model.
| 2,019 | Computation and Language |
Topic Spotting using Hierarchical Networks with Self Attention | The success of deep learning techniques has renewed interest in the development
of dialogue systems. However, current systems struggle to have consistent long
term conversations with the users and fail to build rapport. Topic spotting,
the task of automatically inferring the topic of a conversation, has been shown
to be helpful in making a dialog system more engaging and efficient. We propose
a hierarchical model with self attention for topic spotting. Experiments on the
Switchboard corpus show the superior performance of our model over previously
proposed techniques for topic spotting and deep models for text classification.
Additionally, in contrast to offline processing of dialog, we also analyze the
performance of our model in a more realistic setting, i.e., an online setting
where the topic is identified in real time as the dialog progresses. Results
show that our model is able to generalize even with limited information in the
online setting.
| 2,019 | Computation and Language |
Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence
Labeling | Contextualized word embeddings such as ELMo and BERT provide a foundation for
strong performance across a wide range of natural language processing tasks by
pretraining on large corpora of unlabeled text. However, the applicability of
this approach is unknown when the target domain varies substantially from the
pretraining corpus. We are specifically interested in the scenario in which
labeled data is available in only a canonical source domain such as news text,
and the target domain is distinct from both the labeled and pretraining texts.
To address this scenario, we propose domain-adaptive fine-tuning, in which the
contextualized embeddings are adapted by masked language modeling on text from
the target domain. We test this approach on sequence labeling in two
challenging domains: Early Modern English and Twitter. Both domains differ
substantially from existing pretraining corpora, and domain-adaptive
fine-tuning yields substantial improvements over strong BERT baselines, with
particularly impressive results on out-of-vocabulary words. We conclude that
domain-adaptive fine-tuning offers a simple and effective approach for the
unsupervised adaptation of sequence labeling to difficult new domains.
| 2,019 | Computation and Language |
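Domain-adaptive fine-tuning, as described above, continues masked language modeling on unlabeled target-domain text before fine-tuning on the labeled source-domain task. A minimal sketch of that first step using the Hugging Face transformers and datasets libraries; the model name, file name, and hyperparameters are placeholders, and this modern API is only one way to realize the procedure, not the authors' original code.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled target-domain text, one document per line (file name is hypothetical).
raw = load_dataset("text", data_files={"train": "target_domain.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Masked language modeling objective on the target-domain text.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="domain_adapted_bert",
                         per_device_train_batch_size=16,
                         num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()

# The adapted encoder can then be fine-tuned on labeled source-domain sequence labels.
model.save_pretrained("domain_adapted_bert")
```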
Combining Sentiment Lexica with a Multi-View Variational Autoencoder | When assigning quantitative labels to a dataset, different methodologies may
rely on different scales. In particular, when assigning polarities to words in
a sentiment lexicon, annotators may use binary, categorical, or continuous
labels. Naturally, it is of interest to unify these labels from disparate
scales to both achieve maximal coverage over words and to create a single, more
robust sentiment lexicon while retaining scale coherence. We introduce a
generative model of sentiment lexica to combine disparate scales into a common
latent representation. We realize this model with a novel multi-view
variational autoencoder (VAE), called SentiVAE. We evaluate our approach via a
downstream text classification task involving nine English-Language sentiment
analysis datasets; our representation outperforms six individual sentiment
lexica, as well as a straightforward combination thereof.
| 2,019 | Computation and Language |
Cross-Corpora Evaluation and Analysis of Grammatical Error Correction
Models --- Is Single-Corpus Evaluation Enough? | This study explores the necessity of performing cross-corpora evaluation for
grammatical error correction (GEC) models. GEC models have been previously
evaluated based on a single commonly applied corpus: the CoNLL-2014 benchmark.
However, the evaluation remains incomplete because the task difficulty varies
depending on the test corpus and conditions such as the proficiency levels of
the writers and essay topics. To overcome this limitation, we evaluate the
performance of several GEC models, including NMT-based (LSTM, CNN, and
transformer) and an SMT-based model, against various learner corpora
(CoNLL-2013, CoNLL-2014, FCE, JFLEG, ICNALE, and KJ). Evaluation results reveal
that the models' rankings considerably vary depending on the corpus, indicating
that single-corpus evaluation is insufficient for GEC models.
| 2,019 | Computation and Language |
Alternative Weighting Schemes for ELMo Embeddings | ELMo embeddings (Peters et al., 2018) had a huge impact on the NLP community,
and many recent publications use these embeddings to boost performance on
downstream NLP tasks. However, integrating ELMo embeddings into existing NLP
architectures is not straightforward. In contrast to traditional word
embeddings, like GloVe or word2vec embeddings, the bi-directional language
model of ELMo produces three 1024 dimensional vectors per token in a sentence.
Peters et al. proposed to learn a task-specific weighting of these three
vectors for downstream tasks. However, this proposed weighting scheme is not
feasible for certain tasks, and, as we will show, it does not necessarily yield
optimal performance. We evaluate different methods that combine the three
vectors from the language model in order to achieve the best possible
performance in downstream NLP tasks. We notice that the third layer of the
published language model often decreases the performance. By learning a
weighted average of only the first two layers, we are able to improve the
performance for many datasets. Due to the reduced complexity of the language
model, we have a training speed-up of 19-44% for the downstream task.
| 2,019 | Computation and Language |
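The study above learns how to combine the three ELMo layer outputs, finding that averaging only the first two layers often works better. A small PyTorch sketch of a learned layer mix; the softmax-normalized weights and the trainable scale follow the commonly used scalar-mix formulation and are assumptions here, not necessarily the exact scheme evaluated in the paper.

```python
import torch
import torch.nn as nn

class LearnedLayerMix(nn.Module):
    """Weighted average of per-layer representations with trainable weights."""
    def __init__(self, num_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_outputs):
        # layer_outputs: (num_layers, batch, seq, dim)
        w = torch.softmax(self.weights, dim=0).view(-1, 1, 1, 1)
        return self.gamma * (w * layer_outputs).sum(dim=0)

# Mixing only the first two of three ELMo-style layers, as the abstract suggests can help.
three_layers = torch.randn(3, 4, 10, 1024)
mix_first_two = LearnedLayerMix(num_layers=2)
combined = mix_first_two(three_layers[:2])
print(combined.shape)   # torch.Size([4, 10, 1024])
```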
NL-FIIT at SemEval-2019 Task 9: Neural Model Ensemble for Suggestion
Mining | In this paper, we present the neural model architecture submitted to the
SemEval-2019 Task 9 competition: "Suggestion Mining from Online Reviews and
Forums". We participated in both subtasks for domain specific and also
cross-domain suggestion mining. We proposed a recurrent neural network
architecture that employs Bi-LSTM layers and a self-attention mechanism. Our
architecture tries to encode words via word representations using ELMo and
ensembles multiple models to achieve better results. We performed experiments
with different setups of our proposed model involving weighting of prediction
classes in the loss function. Our best model achieved an official test evaluation
score of 0.6816 for subtask A and 0.6850 for subtask B. In official results, we
achieved 12th and 10th place in subtasks A and B, respectively.
| 2,019 | Computation and Language |
Generating Knowledge Graph Paths from Textual Definitions using
Sequence-to-Sequence Models | We present a novel method for mapping unrestricted text to knowledge graph
entities by framing the task as a sequence-to-sequence problem. Specifically,
given the encoded state of an input text, our decoder directly predicts paths
in the knowledge graph, starting from the root and ending at the target node
following hypernym-hyponym relationships. In this way, and in contrast to other
text-to-entity mapping systems, our model outputs hierarchically structured
predictions that are fully interpretable in the context of the underlying
ontology, in an end-to-end manner. We present a proof-of-concept experiment
with encouraging results, comparable to those of state-of-the-art systems.
| 2,019 | Computation and Language |
Identifying and Reducing Gender Bias in Word-Level Language Models | Many text corpora exhibit socially problematic biases, which can be
propagated or amplified in the models trained on such data. For example, doctor
co-occurs more frequently with male pronouns than female pronouns. In this study
we (i) propose a metric to measure gender bias; (ii) measure bias in a text
corpus and the text generated from a recurrent neural network language model
trained on the text corpus; (iii) propose a regularization loss term for the
language model that minimizes the projection of encoder-trained embeddings onto
an embedding subspace that encodes gender; (iv) finally, evaluate efficacy of
our proposed method on reducing gender bias. We find this regularization method
to be effective in reducing gender bias up to an optimal weight assigned to the
loss term, beyond which the model becomes unstable as the perplexity increases.
We replicate this study on three training corpora (Penn Treebank, WikiText-2,
and CNN/Daily Mail), resulting in similar conclusions.
| 2,019 | Computation and Language |
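Point (iii) above introduces a regularization loss that minimizes the projection of trained embeddings onto a gender subspace. The sketch below shows one simple way to realize such a penalty: estimate a gender direction from a few seed word pairs, then penalize the squared projection of batch embeddings onto it. The seed indices, the single-direction subspace, and the way the term would be added to the task loss are all assumptions.

```python
import torch

def gender_direction(emb, pairs):
    """Average normalized difference vector over (female, male) seed index pairs."""
    diffs = torch.stack([emb[f] - emb[m] for f, m in pairs])
    d = diffs.mean(dim=0)
    return d / d.norm()

def bias_regularizer(emb, direction, word_ids):
    """Mean squared projection of the batch's embeddings onto the gender direction."""
    proj = emb[word_ids] @ direction            # (batch,)
    return (proj ** 2).mean()

# Toy usage: total_loss = task_loss + lambda * bias_regularizer(...)
vocab, dim = 100, 32
embedding = torch.nn.Embedding(vocab, dim)
seed_pairs = [(1, 2), (3, 4)]                   # hypothetical (she, he), (woman, man) ids
d = gender_direction(embedding.weight.detach(), seed_pairs)
batch_ids = torch.randint(0, vocab, (16,))
reg = bias_regularizer(embedding.weight, d, batch_ids)
print(float(reg))
```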