Titles | Abstracts | Years | Categories
---|---|---|---|
Are Automatic Methods for Cognate Detection Good Enough for Phylogenetic
Reconstruction in Historical Linguistics? | We evaluate the performance of state-of-the-art algorithms for automatic
cognate detection by comparing how useful automatically inferred cognates are
for the task of phylogenetic inference compared to classical manually annotated
cognate sets. Our findings suggest that phylogenies inferred from automated
cognate sets come close to phylogenies inferred from expert-annotated ones,
although on average, the latter are still superior. We conclude that future
work on phylogenetic reconstruction can profit greatly from automatic cognate
detection. Especially where scholars are merely interested in exploring the
bigger picture of a language family's phylogeny, algorithms for automatic
cognate detection are a useful complement to current research on language
phylogenies.
| 2018 | Computation and Language |
Pragmatically Informative Image Captioning with Character-Level
Inference | We combine a neural image captioner with a Rational Speech Acts (RSA) model
to make a system that is pragmatically informative: its objective is to produce
captions that are not merely true but also distinguish their inputs from
similar images. Previous attempts to combine RSA with neural image captioning
require an inference which normalizes over the entire set of possible
utterances. This poses a serious problem of efficiency, previously solved by
sampling a small subset of possible utterances. We instead solve this problem
by implementing a version of RSA which operates at the level of characters
("a","b","c"...) during the unrolling of the caption. We find that the
utterance-level effect of referential captions can be obtained with only
character-level decisions. Finally, we introduce an automatic method for
testing the performance of pragmatic speaker models, and show that our model
outperforms a non-pragmatic baseline as well as a word-level RSA captioner.
| 2018 | Computation and Language |
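The character-level RSA idea described above can be pictured with a small sketch. This is not the authors' implementation: the `s0` literal speaker, the image ids `img_0`/`img_1`, and the toy character vocabulary are stand-ins for a neural captioner and real images; the sketch only shows how a pragmatic speaker reweights next-character probabilities toward characters that identify the target image against a distractor.

```python
"""Minimal character-level RSA sketch (illustrative only, not the authors' code)."""
import numpy as np

CHARS = list("abc ")  # toy character vocabulary

def s0(prefix, image_id):
    """Hypothetical literal speaker: P(next char | prefix, image)."""
    rng = np.random.default_rng(hash((prefix, image_id)) % (2**32))
    logits = rng.normal(size=len(CHARS))
    p = np.exp(logits)
    return p / p.sum()

def pragmatic_next_char(prefix, target, distractors, alpha=2.0):
    images = [target] + list(distractors)
    # Literal speaker distributions for every image in the context.
    s0_probs = np.stack([s0(prefix, img) for img in images])   # (n_images, n_chars)
    # Literal listener: which image does each candidate character point to?
    l0 = s0_probs / s0_probs.sum(axis=0, keepdims=True)        # column-normalise
    # Pragmatic speaker: trade off informativity (l0) against fluency (s0).
    scores = (l0[0] ** alpha) * s0_probs[0]
    return CHARS[int(np.argmax(scores))]

caption = ""
for _ in range(10):  # unroll a short caption one character at a time
    caption += pragmatic_next_char(caption, target="img_0", distractors=["img_1"])
print(caption)
```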
What Happened? Leveraging VerbNet to Predict the Effects of Actions in
Procedural Text | Our goal is to answer questions about paragraphs describing processes (e.g.,
photosynthesis). Texts of this genre are challenging because the effects of
actions are often implicit (unstated), requiring background knowledge and
inference to reason about the changing world states. To supply this knowledge,
we leverage VerbNet to build a rulebase (called the Semantic Lexicon) of the
preconditions and effects of actions, and use it along with commonsense
knowledge of persistence to answer questions about change. Our evaluation shows
that our system, ProComp, significantly outperforms two strong reading
comprehension (RC) baselines. Our contributions are two-fold: the Semantic
Lexicon rulebase itself, and a demonstration of how a simulation-based approach
to machine reading can outperform RC methods that rely on surface cues alone.
Since this work was performed, we have developed neural systems that
outperform ProComp, described elsewhere (Dalvi et al., NAACL'18). However, the
Semantic Lexicon remains a novel and potentially useful resource, and its
integration with neural systems remains a currently unexplored opportunity for
further improvements in machine reading about processes.
| 2018 | Computation and Language |
Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal
Attentions for Video Captioning | A major challenge for video captioning is to combine audio and visual cues.
Existing multi-modal fusion methods have shown encouraging results in video
understanding. However, the temporal structures of multiple modalities at
different granularities are rarely explored, and how to selectively fuse the
multi-modal representations at different levels of detail remains uncharted.
In this paper, we propose a novel hierarchically aligned cross-modal attention
(HACA) framework to learn and selectively fuse both global and local temporal
dynamics of different modalities. Furthermore, for the first time, we validate
the superior performance of the deep audio features on the video captioning
task. Finally, our HACA model significantly outperforms the previous best
systems and achieves new state-of-the-art results on the widely used MSR-VTT
dataset.
| 2018 | Computation and Language |
Community Member Retrieval on Social Media using Textual Information | This paper addresses the problem of community membership detection using only
text features in a scenario where a small number of positive labeled examples
defines the community. The solution introduces an unsupervised proxy task for
learning user embeddings: user re-identification. Experiments with 16 different
communities show that the resulting embeddings are more effective for community
membership identification than common unsupervised representations.
| 2018 | Computation and Language |
Arabic Named Entity Recognition using Word Representations | Recent work has shown the effectiveness of the word representations features
in significantly improving supervised NER for the English language. In this
study we investigate whether word representations can also boost supervised NER
in Arabic. We use word representations as additional features in a Conditional
Random Field (CRF) model and we systematically compare three popular neural
word embedding algorithms (Skip-gram, CBOW, and GloVe) and six different
approaches for integrating word representations into the NER system. Experimental
results show that Brown Clustering achieves the best performance among the six
approaches. Concerning the word embedding features, the clustering embedding
features outperform other embedding features and the distributional prototypes
produce the second best result. Moreover, the combination of Brown clusters and
word embedding features provides additional improvement of nearly 10% in
F1-score over the baseline.
| 2,016 | Computation and Language |
A Discourse-Aware Attention Model for Abstractive Summarization of Long
Documents | Neural abstractive summarization models have led to promising results in
summarizing relatively short documents. We propose the first model for
abstractive summarization of single, longer-form documents (e.g., research
papers). Our approach consists of a new hierarchical encoder that models the
discourse structure of a document, and an attentive discourse-aware decoder to
generate the summary. Empirical results on two large-scale datasets of
scientific papers show that our model significantly outperforms
state-of-the-art models.
| 2018 | Computation and Language |
Organization and Independence or Interdependence? Study of the
Neurophysiological Dynamics of Syntactic and Semantic Processing | In this article we present a multivariate model for determining the different
syntactic, semantic, and form (surface-structure) processes underlying the
comprehension of simple phrases. This model is applied to EEG signals recorded
during a reading task. The results show a hierarchical precedence of the
neurolinguistic processes: form, then syntactic, and lastly semantic processes.
We also found (a) that verbs are at the heart of phrase syntax processing, (b)
an interaction between syntactic movement within the phrase, and semantic
processes derived from a person-centered reference frame. Eigenvectors of the
multivariate model provide electrode-times profiles that separate the
distinctive linguistic processes and/or highlight their interaction. The
accordance of these findings with different linguistic theories is discussed.
| 2018 | Computation and Language |
The Relevance of Text and Speech Features in Automatic Non-native
English Accent Identification | This paper describes our experiments with automatically identifying native
accents from speech samples of non-native English speakers using low level
audio features, and n-gram features from manual transcriptions. Using a
publicly available non-native speech corpus and simple audio feature
representations that do not perform word/phoneme recognition, we show that it
is possible to achieve close to 90% classification accuracy for this task.
While character n-grams perform similarly to speech features, we show that speech
features are not affected by prompt variation, whereas n-grams are. Since the
approach followed can be easily adapted to any language provided we have enough
training data, we believe these results will provide useful insights for the
development of accent recognition systems and for the study of accents in the
context of language learning.
| 2018 | Computation and Language |
Learning How to Self-Learn: Enhancing Self-Training Using Neural
Reinforcement Learning | Self-training is a useful strategy for semi-supervised learning, leveraging
raw texts for enhancing model performance. Traditional self-training methods
depend on heuristics such as model confidence for instance selection, the
manual adjustment of which can be expensive. To address these challenges, we
propose a deep reinforcement learning method to learn the self-training
strategy automatically. Based on neural network representation of sentences,
our model automatically learns an optimal policy for instance selection.
Experimental results show that our approach outperforms the baseline solutions
in terms of both tagging performance and stability.
| 2018 | Computation and Language |
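The sketch below shows the skeleton of a self-training loop in which instance selection is a pluggable policy. The confidence-threshold heuristic shown is the traditional baseline the paper argues against; a learned RL policy would be substituted at the marked line. The data, classifier, and threshold are all toy stand-ins.

```python
"""Skeleton of a self-training loop with a pluggable instance-selection policy
(illustrative sketch, not the paper's reinforcement-learning implementation)."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 5)), rng.integers(0, 2, size=40)
X_unlab = rng.normal(size=(200, 5))

def confidence_policy(probs, threshold=0.6):
    """Baseline heuristic: keep instances whose max class probability is high."""
    return probs.max(axis=1) >= threshold

model = LogisticRegression().fit(X_lab, y_lab)
for _ in range(3):                                   # a few self-training rounds
    probs = model.predict_proba(X_unlab)
    keep = confidence_policy(probs)                  # swap in a learned policy here
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, probs[keep].argmax(axis=1)])
    model = LogisticRegression().fit(X_aug, y_aug)

print("pseudo-labelled instances used in the last round:", int(keep.sum()))
```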
ClaiRE at SemEval-2018 Task 7 - Extended Version | In this paper we describe our post-evaluation results for SemEval-2018 Task 7
on classification of semantic relations in scientific literature for clean
(subtask 1.1) and noisy data (subtask 1.2). This is an extended version of
our workshop paper (Hettinger et al., 2018) including further technical details
(Sections 3.2 and 4.3) and changes made to the preprocessing step in the
post-evaluation phase (Section 2.1). Due to these changes, Classification of
Relations using Embeddings (ClaiRE) achieved an improved F1 score of 75.11% for
the first subtask and 81.44% for the second.
| 2018 | Computation and Language |
Neologisms on Facebook | In this paper, we present a study of neologisms and loan words frequently
occurring in Facebook user posts. We have analyzed a dataset of several million
publicly available posts written during 2006-2013 by Russian-speaking
Facebook users. From these, we have built a vocabulary of the most frequent
lemmatized words missing from the OpenCorpora dictionary, the assumption being
that many such words have entered common use only recently. This assumption is
certainly not true for all the words extracted in this way; for that reason, we
manually filtered the automatically obtained list in order to exclude
non-Russian or incorrectly lemmatized words, as well as words recorded by other
dictionaries or those occurring in texts from the Russian National Corpus. The
result is a list of 168 words that can potentially be considered neologisms. We
present an attempt at an etymological classification of these neologisms
(unsurprisingly, most of them have recently been borrowed from English, but
there are also quite a few new words composed of previously borrowed stems) and
identify various derivational patterns. We also classify words into several
large thematic areas, "internet", "marketing", and "multimedia" being among
those with the largest number of words. We believe that, together with the word
base collected in the process, they can serve as a starting point in further
studies of neologisms and lexical processes that lead to their acceptance into
the mainstream language.
| 2018 | Computation and Language |
Universal Dependency Parsing for Hindi-English Code-switching | Code-switching is a phenomenon of mixing grammatical structures of two or
more languages under varied social constraints. The code-switching data differ
so radically from the benchmark corpora used in the NLP community that the
application of standard technologies to these data degrades their performance
sharply. Unlike standard corpora, these data often need to go through
additional processes such as language identification, normalization and/or
back-transliteration for their efficient processing. In this paper, we
investigate these indispensable processes and other problems associated with
syntactic parsing of code-switching data and propose methods to mitigate their
effects. In particular, we study dependency parsing of code-switching data of
Hindi and English multilingual speakers from Twitter. We present a treebank of
Hindi-English code-switching tweets under the Universal Dependencies scheme and
propose a neural stacking model for parsing that efficiently leverages
part-of-speech tag and syntactic tree annotations in the code-switching
treebank and the preexisting Hindi and English treebanks. We also present
normalization and back-transliteration models with a decoding process tailored
for code-switching data. Results show that our neural stacking parser is 1.5%
LAS points better than the augmented parsing model and our decoding process
improves results by 3.8% LAS points over the first-best normalization and/or
back-transliteration.
| 2018 | Computation and Language |
Improving Implicit Discourse Relation Classification by Modeling
Inter-dependencies of Discourse Units in a Paragraph | We argue that semantic meanings of a sentence or clause can not be
interpreted independently from the rest of a paragraph, or independently from
all discourse relations and the overall paragraph-level discourse structure.
With the goal of improving implicit discourse relation classification, we
introduce a paragraph-level neural network that models inter-dependencies
between discourse units as well as discourse relation continuity and patterns,
and predicts a sequence of discourse relations in a paragraph. Experimental
results show that our model outperforms the previous state-of-the-art systems
on the benchmark corpus of PDTB.
| 2018 | Computation and Language |
Neural Models for Reasoning over Multiple Mentions using Coreference | Many problems in NLP require aggregating information from multiple mentions
of the same entity which may be far apart in the text. Existing Recurrent
Neural Network (RNN) layers are biased towards short-term dependencies and
hence not suited to such tasks. We present a recurrent layer which is instead
biased towards coreferent dependencies. The layer uses coreference annotations
extracted from an external system to connect entity mentions belonging to the
same cluster. Incorporating this layer into a state-of-the-art reading
comprehension model improves performance on three datasets -- Wikihop, LAMBADA
and the bAbi AI tasks -- with large gains when training data is scarce.
| 2018 | Computation and Language |
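A simplified way to picture a coreference-biased recurrent layer is sketched below: whenever a token has an antecedent according to an external coreference system, its recurrent input mixes the sequential predecessor state with the antecedent's stored state. This is an illustrative tanh-RNN approximation rather than the exact gated layer from the paper; `antecedent` and `mix` are assumptions.

```python
"""Simplified coreference-biased recurrent layer: shortcut edges connect a
mention to its antecedent, bridging long distances in the text."""
import numpy as np

def coref_rnn(embeddings, antecedent, hidden_size=8, mix=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(embeddings.shape[1], hidden_size))
    U = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    states, h = [], np.zeros(hidden_size)
    for t, x in enumerate(embeddings):
        prev = h
        if antecedent[t] >= 0:
            # Mix in the state stored at the coreferent mention.
            prev = mix * h + (1.0 - mix) * states[antecedent[t]]
        h = np.tanh(x @ W + prev @ U)                 # simple tanh-RNN update
        states.append(h)
    return np.stack(states)

tokens = ["Mary", "went", "home", "because", "she", "was", "tired"]
emb = np.random.default_rng(1).normal(size=(len(tokens), 16))
antecedent = [-1, -1, -1, -1, 0, -1, -1]              # "she" corefers with "Mary"
print(coref_rnn(emb, antecedent).shape)
```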
Approaching Neural Grammatical Error Correction as a Low-Resource
Machine Translation Task | Previously, neural methods in grammatical error correction (GEC) did not
reach state-of-the-art results compared to phrase-based statistical machine
translation (SMT) baselines. We demonstrate parallels between neural GEC and
low-resource neural MT and successfully adapt several methods from low-resource
MT to neural GEC. We further establish guidelines for trustable results in
neural GEC and propose a set of model-independent methods for neural GEC that
can be easily applied in most GEC settings. Proposed methods include adding
source-side noise, domain-adaptation techniques, a GEC-specific
training-objective, transfer learning with monolingual data, and ensembling of
independently trained GEC models and language models. The combined effects of
these methods result in better than state-of-the-art neural GEC models that
outperform previously best neural GEC systems by more than 10% M$^2$ on the
CoNLL-2014 benchmark and 5.9% on the JFLEG test set. Non-neural
state-of-the-art systems are outperformed by more than 2% on the CoNLL-2014
benchmark and by 4% on JFLEG.
| 2018 | Computation and Language |
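One of the listed methods, adding source-side noise, can be pictured with the toy noising function below, which corrupts clean sentences into synthetic errorful sources for GEC training. The noise operations and probabilities here are illustrative assumptions, not the paper's exact noise model.

```python
"""Toy illustration of adding source-side noise to clean sentences to create
synthetic GEC training pairs (one simple noising scheme among many)."""
import random

def add_noise(tokens, p_drop=0.1, p_swap=0.1, rng=random.Random(0)):
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < p_drop:                          # delete a token
            i += 1
            continue
        if i + 1 < len(tokens) and rng.random() < p_swap:  # swap adjacent tokens
            out.extend([tokens[i + 1], tokens[i]])
            i += 2
            continue
        out.append(tokens[i])
        i += 1
    return out

clean = "the cat sat on the mat".split()
print("target:       ", " ".join(clean))
print("noised source:", " ".join(add_noise(clean)))
```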
Near Human-Level Performance in Grammatical Error Correction with Hybrid
Machine Translation | We combine two of the most popular approaches to automated Grammatical Error
Correction (GEC): GEC based on Statistical Machine Translation (SMT) and GEC
based on Neural Machine Translation (NMT). The hybrid system achieves new
state-of-the-art results on the CoNLL-2014 and JFLEG benchmarks. This GEC
system preserves the accuracy of SMT output and, at the same time, generates
more fluent sentences, as is typical for NMT. Our analysis shows that the
created systems are closer to reaching human-level performance than any other
GEC system reported so far.
| 2018 | Computation and Language |
Can Neural Machine Translation be Improved with User Feedback? | We present the first real-world application of methods for improving neural
machine translation (NMT) with human reinforcement, based on explicit and
implicit user feedback collected on the eBay e-commerce platform. Previous work
has been confined to simulation experiments, whereas in this paper we work with
real logged feedback for offline bandit learning of NMT parameters. We conduct
a thorough analysis of the available explicit user judgments---five-star
ratings of translation quality---and show that they are not reliable enough to
yield significant improvements in bandit learning. In contrast, we successfully
utilize implicit task-based feedback collected in a cross-lingual search task
to improve task-specific and machine translation quality metrics.
| 2018 | Computation and Language |
A Deeper Look into Dependency-Based Word Embeddings | We investigate the effect of various dependency-based word embeddings on
distinguishing between functional and domain similarity, word similarity
rankings, and two downstream tasks in English. Variations include word
embeddings trained using context windows from Stanford and Universal
dependencies at several levels of enhancement (ranging from unlabeled, to
Enhanced++ dependencies). Results are compared to basic linear contexts and
evaluated on several datasets. We found that embeddings trained with Universal
and Stanford dependency contexts excel at different tasks, and that enhanced
dependencies often improve performance.
| 2018 | Computation and Language |
Learning Joint Semantic Parsers from Disjoint Data | We present a new approach to learning semantic parsers from multiple
datasets, even when the target semantic formalisms are drastically different,
and the underlying corpora do not overlap. We handle such "disjoint" data by
treating annotations for unobserved formalisms as latent structured variables.
Building on state-of-the-art baselines, we show improvements both in
frame-semantic parsing and semantic dependency parsing by modeling them
jointly.
| 2018 | Computation and Language |
Monte Carlo Syntax Marginals for Exploring and Using Dependency Parses | Dependency parsing research, which has made significant gains in recent
years, typically focuses on improving the accuracy of single-tree predictions.
However, ambiguity is inherent to natural language syntax, and communicating
such ambiguity is important for error analysis and better-informed downstream
applications. In this work, we propose a transition sampling algorithm to
sample from the full joint distribution of parse trees defined by a
transition-based parsing model, and demonstrate the use of the samples in
probabilistic dependency analysis. First, we define the new task of dependency
path prediction, inferring syntactic substructures over part of a sentence, and
provide the first analysis of performance on this task. Second, we demonstrate
the usefulness of our Monte Carlo syntax marginal method for parser error
analysis and calibration. Finally, we use this method to propagate parse
uncertainty to two downstream information extraction applications: identifying
persons killed by police and semantic role assignment.
| 2018 | Computation and Language |
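The Monte Carlo marginal idea can be sketched as drawing many parse samples and counting how often each head-dependent edge appears. The `sample_heads` toy sampler below is a placeholder for sampling full transition sequences from a trained transition-based parser, which is what the paper actually does.

```python
"""Sketch of estimating dependency-edge marginals by Monte Carlo sampling."""
from collections import Counter
import random

def sample_heads(sentence, rng):
    """Hypothetical sampler: draw a head (0 = root) for each token."""
    return [rng.choice(range(0, i)) if i > 1 else 0 for i in range(1, len(sentence) + 1)]

def edge_marginals(sentence, n_samples=1000, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        for dep, head in enumerate(sample_heads(sentence, rng), start=1):
            counts[(head, dep)] += 1
    return {edge: c / n_samples for edge, c in counts.items()}

marginals = edge_marginals(["dogs", "chase", "cats"])
for (head, dep), p in sorted(marginals.items()):
    print(f"head {head} -> token {dep}: {p:.2f}")
```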
Fortification of Neural Morphological Segmentation Models for
Polysynthetic Minimal-Resource Languages | Morphological segmentation for polysynthetic languages is challenging,
because a word may consist of many individual morphemes and training data can
be extremely scarce. Since neural sequence-to-sequence (seq2seq) models define
the state of the art for morphological segmentation in high-resource settings
and for (mostly) European languages, we first show that they also obtain
competitive performance for Mexican polysynthetic languages in minimal-resource
settings. We then propose two novel multi-task training approaches (one with and
one without the need for external unlabeled resources), and two corresponding data
augmentation methods, improving over the neural baseline for all languages.
Finally, we explore cross-lingual transfer as a third way to fortify our neural
model and show that we can train one single multi-lingual model for related
languages while maintaining comparable or even improved performance, thus
reducing the number of parameters by close to 75%. We provide our morphological
segmentation datasets for Mexicanero, Nahuatl, Wixarika and Yorem Nokki for
future research.
| 2018 | Computation and Language |
ListOps: A Diagnostic Dataset for Latent Tree Learning | Latent tree learning models learn to parse a sentence without syntactic
supervision, and use that parse to build the sentence representation. Existing
work on such models has shown that, while they perform well on tasks like
sentence classification, they do not learn grammars that conform to any
plausible semantic or syntactic formalism (Williams et al., 2018a). Studying
the parsing ability of such models in natural language can be challenging due
to the inherent complexities of natural language, like having several valid
parses for a single sentence. In this paper we introduce ListOps, a toy dataset
created to study the parsing ability of latent tree models. ListOps sequences
are in the style of prefix arithmetic. The dataset is designed to have a single
correct parsing strategy that a system needs to learn to succeed at the task.
We show that the current leading latent tree models are unable to learn to
parse and succeed at ListOps. These models achieve accuracies worse than purely
sequential RNNs.
| 2018 | Computation and Language |
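A toy generator in the spirit of ListOps is sketched below. The operator set (MAX, MIN, MED, SM for sum modulo 10) follows the paper's description, but the depth, argument counts, and sampling probabilities here are arbitrary illustrative choices, not the released dataset's parameters.

```python
"""Generator for small ListOps-style prefix-arithmetic examples (toy re-implementation)."""
import random
import statistics

OPS = {
    "MAX": max,
    "MIN": min,
    "MED": lambda xs: int(statistics.median(xs)),
    "SM": lambda xs: sum(xs) % 10,
}

def make_example(rng, depth=2, max_args=4):
    if depth == 0 or rng.random() < 0.4:      # leaf: a single digit
        v = rng.randint(0, 9)
        return str(v), v
    op = rng.choice(list(OPS))
    parts, vals = [], []
    for _ in range(rng.randint(2, max_args)):
        s, v = make_example(rng, depth - 1, max_args)
        parts.append(s)
        vals.append(v)
    return "[" + op + " " + " ".join(parts) + " ]", OPS[op](vals)

rng = random.Random(0)
expr, answer = make_example(rng, depth=3)
print(expr, "->", answer)
```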
Reinforced Co-Training | Co-training is a popular semi-supervised learning framework to utilize a
large amount of unlabeled data in addition to a small labeled set. Co-training
methods exploit predicted labels on the unlabeled data and select samples based
on prediction confidence to augment the training. However, the selection of
samples in existing co-training methods is based on a predetermined policy,
which ignores the sampling bias between the unlabeled and the labeled subsets,
and fails to explore the data space. In this paper, we propose a novel method,
Reinforced Co-Training, to select high-quality unlabeled samples to better
co-train on. More specifically, our approach uses Q-learning to learn a data
selection policy with a small labeled dataset, and then exploits this policy to
train the co-training classifiers automatically. Experimental results on
clickbait detection and generic text classification tasks demonstrate that our
proposed method can obtain more accurate text classification results.
| 2018 | Computation and Language |
Adversarial Example Generation with Syntactically Controlled Paraphrase
Networks | We propose syntactically controlled paraphrase networks (SCPNs) and use them
to generate adversarial examples. Given a sentence and a target syntactic form
(e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the
sentence with the desired syntax. We show it is possible to create training
data for this task by first doing backtranslation at a very large scale, and
then using a parser to label the syntactic transformations that naturally occur
during this process. Such data allows us to train a neural encoder-decoder
model with extra inputs to specify the target syntax. A combination of
automated and human evaluations shows that SCPNs generate paraphrases that
follow their target specifications without decreasing paraphrase quality when
compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are
more capable of generating syntactically adversarial examples that both (1)
"fool" pretrained models and (2) improve the robustness of these models to
syntactic variation when used to augment their training data.
| 2018 | Computation and Language |
SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets | The paper describes the best performing system for the SemEval-2018 Affect in
Tweets (English) sub-tasks. The system focuses on the ordinal classification
and regression sub-tasks for valence and emotion. For ordinal classification,
valence is classified into 7 different classes ranging from -3 to 3, whereas
emotion is classified into 4 different classes (0 to 3) separately for each
emotion, namely anger, fear, joy, and sadness. The regression sub-tasks estimate
the intensity of valence and each emotion. The system performs domain
adaptation of 4 different models and creates an ensemble to give the final
prediction. The proposed system achieved 1st position out of 75 teams which
participated in the aforementioned sub-tasks. We outperform the baseline model
by margins ranging from 49.2% to 76.4%, thus pushing the state of the art
significantly.
| 2018 | Computation and Language |
Investigating Backtranslation in Neural Machine Translation | A prerequisite for training corpus-based machine translation (MT) systems --
either Statistical MT (SMT) or Neural MT (NMT) -- is the availability of
high-quality parallel data. This is arguably more important today than ever
before, as NMT has been shown in many studies to outperform SMT, but mostly
when large parallel corpora are available; in cases where data is limited, SMT
can still outperform NMT.
Recently researchers have shown that back-translating monolingual data can be
used to create synthetic parallel corpora, which in turn can be used in
combination with authentic parallel data to train a high-quality NMT system.
Given that large collections of new parallel text become available only quite
rarely, backtranslation has become the norm when building state-of-the-art NMT
systems, especially in resource-poor scenarios.
However, we assert that there are many unknown factors regarding the actual
effects of back-translated data on the translation capabilities of an NMT
model. Accordingly, in this work we investigate how using back-translated data
as a training corpus -- both as a separate standalone dataset as well as
combined with human-generated parallel data -- affects the performance of an
NMT model. We use incrementally larger amounts of back-translated data to train
a range of NMT systems for German-to-English, and analyse the resulting
translation performance.
| 2018 | Computation and Language |
When and Why are Pre-trained Word Embeddings Useful for Neural Machine
Translation? | The performance of Neural Machine Translation (NMT) systems often suffers in
low-resource scenarios where sufficiently large-scale parallel corpora cannot
be obtained. Pre-trained word embeddings have proven to be invaluable for
improving performance in natural language analysis tasks, which often suffer
from paucity of data. However, their utility for NMT has not been extensively
explored. In this work, we perform five sets of experiments that analyze when
we can expect pre-trained word embeddings to help in NMT tasks. We show that
such embeddings can be surprisingly effective in some cases -- providing gains
of up to 20 BLEU points in the most favorable setting.
| 2018 | Computation and Language |
Similarity between Learning Outcomes from Course Objectives using
Semantic Analysis, Bloom's taxonomy and Corpus statistics | The course description provided by instructors is an essential piece of
information as it defines what is expected from the instructor and what he/she
is going to deliver during a particular course. One of the key components of a
course description is the Learning Objectives section. The contents of this
section are used by program managers who are tasked to compare and match two
different courses during the development of Transfer Agreements between various
institutions. This research introduces the development of semantic similarity
algorithms to calculate the similarity between two learning objectives of the
same domain. We present a novel methodology which deals with the semantic
similarity by using a previously established algorithm and integrating it with
the domain corpus utilizing domain statistics. The disambiguated domain serves
as supervised learning data for the algorithm. We also introduce the Bloom Index
to calculate the similarity between action verbs in the Learning Objectives,
referring to Bloom's taxonomy.
| 2018 | Computation and Language |
Bootstrapping Generators from Noisy Data | A core step in statistical data-to-text generation concerns learning
correspondences between structured data representations (e.g., facts in a
database) and associated texts. In this paper we aim to bootstrap generators
from large scale datasets where the data (e.g., DBPedia facts) and related
texts (e.g., Wikipedia abstracts) are loosely aligned. We tackle this
challenging task by introducing a special-purpose content selection mechanism.
We use multi-instance learning to automatically discover correspondences
between data and text pairs and show how these can be used to enhance the
content signal while training an encoder-decoder architecture. Experimental
results demonstrate that models trained with content-specific objectives
improve upon a vanilla encoder-decoder which solely relies on soft attention.
| 2019 | Computation and Language |
Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style
Transfer | We consider the task of text attribute transfer: transforming a sentence to
alter a specific attribute (e.g., sentiment) while preserving its
attribute-independent content (e.g., changing "screen is just the right size"
to "screen is too small"). Our training data includes only sentences labeled
with their attribute (e.g., positive or negative), but not pairs of sentences
that differ only in their attributes, so we must learn to disentangle
attributes from attribute-independent content in an unsupervised way. Previous
work using adversarial methods has struggled to produce high-quality outputs.
In this paper, we propose simpler methods motivated by the observation that
text attributes are often marked by distinctive phrases (e.g., "too small").
Our strongest method extracts content words by deleting phrases associated with
the sentence's original attribute value, retrieves new phrases associated with
the target attribute, and uses a neural model to fluently combine these into a
final output. On human evaluation, our best method generates grammatical and
appropriate responses on 22% more inputs than the best previous system,
averaged over three attribute transfer datasets: altering sentiment of reviews
on Yelp, altering sentiment of reviews on Amazon, and altering image captions
to be more romantic or humorous.
| 2018 | Computation and Language |
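The "delete" step can be illustrated with a salience heuristic: phrases that are much more frequent under one attribute than the other are treated as attribute markers and removed. The tiny corpora, threshold, and bigram-only scope below are toy assumptions; the full system additionally retrieves target-attribute phrases and recombines them with a neural generator.

```python
"""Sketch of salience-based deletion of attribute markers (toy corpora)."""
from collections import Counter

positive = ["the screen is just the right size", "battery life is great"]
negative = ["the screen is too small", "battery life is terrible"]

def ngram_counts(sentences, n=2):
    c = Counter()
    for s in sentences:
        toks = s.split()
        for i in range(len(toks) - n + 1):
            c[tuple(toks[i:i + n])] += 1
    return c

def salience(ngram, attr_counts, other_counts, smooth=1.0):
    return (attr_counts[ngram] + smooth) / (other_counts[ngram] + smooth)

pos_c, neg_c = ngram_counts(positive), ngram_counts(negative)

def delete_negative_markers(sentence, threshold=1.5):
    toks = sentence.split()
    keep = [True] * len(toks)
    for i in range(len(toks) - 1):
        if salience(tuple(toks[i:i + 2]), neg_c, pos_c) >= threshold:
            keep[i] = keep[i + 1] = False       # drop negative-attribute bigram
    return " ".join(t for t, k in zip(toks, keep) if k)

print(delete_negative_markers("the screen is too small"))  # -> attribute-neutral content
```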
Personalized neural language models for real-world query auto completion | Query auto completion (QAC) systems are a standard part of search engines in
industry, helping users formulate their query. Such systems update their
suggestions after the user types each character, predicting the user's intent
using various signals - one of the most common being popularity. Recently, deep
learning approaches have been proposed for the QAC task, to specifically
address the main limitation of previous popularity-based methods: the inability
to predict unseen queries. In this work we improve previous methods based on
neural language modeling, with the goal of building an end-to-end system. We
particularly focus on using real-world data by integrating user information for
personalized suggestions when possible. We also make use of time information
and study how to increase diversity in the suggestions while studying the
impact on scalability. Our empirical results demonstrate a marked improvement
on two separate datasets over previous best methods in both accuracy and
scalability, making a step towards neural query auto-completion in production
search engines.
| 2018 | Computation and Language |
Detecting Linguistic Characteristics of Alzheimer's Dementia by
Interpreting Neural Models | Alzheimer's disease (AD) is an irreversible and progressive brain disease
that can be stopped or slowed down with medical treatment. Language changes
serve as a sign that a patient's cognitive functions have been impacted,
potentially leading to early diagnosis. In this work, we use NLP techniques to
classify and analyze the linguistic characteristics of AD patients using the
DementiaBank dataset. We apply three neural models based on CNNs, LSTM-RNNs,
and their combination, to distinguish between language samples from AD and
control patients. We achieve a new independent benchmark accuracy for the AD
classification task. More importantly, we next interpret what these neural
models have learned about the linguistic characteristics of AD patients, via
analysis based on activation clustering and first-derivative saliency
techniques. We then perform novel automatic pattern discovery inside activation
clusters, and consolidate AD patients' distinctive grammar patterns.
Additionally, we show that first derivative saliency can not only rediscover
previous language patterns of AD patients, but also shed light on the
limitations of neural models. Lastly, we also include analysis of
gender-separated AD data.
| 2018 | Computation and Language |
Multi-Reward Reinforced Summarization with Saliency and Entailment | Abstractive text summarization is the task of compressing and rewriting a
long document into a short summary while maintaining saliency, directed logical
entailment, and non-redundancy. In this work, we address these three important
aspects of a good summary via a reinforcement learning approach with two novel
reward functions: ROUGESal and Entail, on top of a coverage-based baseline. The
ROUGESal reward modifies the ROUGE metric by up-weighting the salient
phrases/words detected via a keyphrase classifier. The Entail reward gives high
(length-normalized) scores to logically-entailed summaries using an entailment
classifier. Further, we show superior performance improvement when these
rewards are combined with traditional metric (ROUGE) based rewards, via our
novel and effective multi-reward approach of optimizing multiple rewards
simultaneously in alternate mini-batches. Our method achieves the new
state-of-the-art results (including human evaluation) on the CNN/Daily Mail
dataset as well as strong improvements in a test-only transfer setup on
DUC-2002.
| 2018 | Computation and Language |
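The multi-reward optimization can be pictured structurally as below: different reward functions are rotated across alternate mini-batches. The reward functions and the policy-gradient step are stubs standing in for ROUGE, ROUGESal, Entail, and the actual summarization model; only the loop structure reflects the described approach.

```python
"""Structural sketch of alternating rewards across mini-batches (all stubs)."""
import random

def rouge_reward(summary, target):      # placeholder for a ROUGE-based reward
    return random.random()

def rougesal_reward(summary, target):   # placeholder for the saliency-weighted reward
    return random.random()

def entail_reward(summary, target):     # placeholder for the entailment reward
    return random.random()

def policy_gradient_step(model, batch, reward_fn):
    # A real implementation would scale log-likelihood gradients by these rewards.
    rewards = [reward_fn(model["decode"](src), tgt) for src, tgt in batch]
    return sum(rewards) / len(rewards)

model = {"decode": lambda src: src.upper()}          # stub "summariser"
data = [("some document text", "reference summary")] * 8
reward_rotation = [rouge_reward, rougesal_reward, entail_reward]

for step in range(6):                                # alternate rewards per mini-batch
    reward_fn = reward_rotation[step % len(reward_rotation)]
    batch = random.sample(data, 4)
    print(step, reward_fn.__name__, round(policy_gradient_step(model, batch, reward_fn), 3))
```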
Robust Machine Comprehension Models via Adversarial Training | It is shown that many published models for the Stanford Question Answering
Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50%
decrease in F1 score during adversarial evaluation based on the AddSent (Jia
and Liang, 2017) algorithm. It has also been shown that retraining models on
data generated by AddSent has limited effect on their robustness. We propose a
novel alternative adversary-generation algorithm, AddSentDiverse, that
significantly increases the variance within the adversarial training data by
providing effective examples that punish the model for making certain
superficial assumptions. Further, in order to improve robustness to AddSent's
semantic perturbations (e.g., antonyms), we jointly improve the model's
semantic-relationship learning capabilities in addition to our
AddSentDiverse-based adversarial training data augmentation. With these
additions, we show that we can make a state-of-the-art model significantly more
robust, achieving a 36.5% increase in F1 score under many different types of
adversarial evaluation while maintaining performance on the regular SQuAD task.
| 2018 | Computation and Language |
Improving Character-based Decoding Using Target-Side Morphological
Information for Neural Machine Translation | Recently, neural machine translation (NMT) has emerged as a powerful
alternative to conventional statistical approaches. However, its performance
drops considerably in the presence of morphologically rich languages (MRLs).
Neural engines usually fail to tackle the large vocabulary and high
out-of-vocabulary (OOV) word rate of MRLs. Therefore, it is not suitable to
exploit existing word-based models to translate this set of languages. In this
paper, we propose an extension to the state-of-the-art model of Chung et al.
(2016), which works at the character level and boosts the decoder with
target-side morphological information. In our architecture, an additional
morphology table is plugged into the model. Each time the decoder samples from
a target vocabulary, the table sends auxiliary signals from the most relevant
affixes in order to enrich the decoder's current state and constrain it to
provide better predictions. We evaluated our model to translate English into
German, Russian, and Turkish as three MRLs and observed significant
improvements.
| 2018 | Computation and Language |
Dialogue Learning with Human Teaching and Feedback in End-to-End
Trainable Task-Oriented Dialogue Systems | In this work, we present a hybrid learning method for training task-oriented
dialogue systems through online user interactions. Popular methods for learning
task-oriented dialogues include applying reinforcement learning with user
feedback on supervised pre-training models. The efficiency of such a learning method
may suffer from the mismatch of dialogue state distribution between offline
training and online interactive learning stages. To address this challenge, we
propose a hybrid imitation and reinforcement learning method, with which a
dialogue agent can effectively learn from its interaction with users by
learning from human teaching and feedback. We design a neural network based
task-oriented dialogue agent that can be optimized end-to-end with the proposed
learning method. Experimental results show that our end-to-end dialogue agent
can learn effectively from the mistakes it makes via imitation learning from
user teaching. Applying reinforcement learning with user feedback after the
imitation learning stage further improves the agent's capability in
successfully completing a task.
| 2018 | Computation and Language |
Diachronic Usage Relatedness (DURel): A Framework for the Annotation of
Lexical Semantic Change | We propose a framework that extends synchronic polysemy annotation to
diachronic changes in lexical meaning, to counteract the lack of resources for
evaluating computational models of lexical semantic change. Our framework
exploits an intuitive notion of semantic relatedness, and distinguishes between
innovative and reductive meaning changes with high inter-annotator agreement.
The resulting test set for German comprises ratings from five annotators for
the relatedness of 1,320 use pairs across 22 target words.
| 2018 | Computation and Language |
Aspect Level Sentiment Classification with Attention-over-Attention
Neural Networks | Aspect-level sentiment classification aims to identify the sentiment
expressed towards some aspects given context sentences. In this paper, we
introduce an attention-over-attention (AOA) neural network for aspect level
sentiment classification. Our approach models aspects and sentences in a joint
way and explicitly captures the interaction between aspects and context
sentences. With the AOA module, our model jointly learns the representations
for aspects and sentences, and automatically focuses on the important parts in
sentences. Our experiments on laptop and restaurant datasets demonstrate that our
approach outperforms previous LSTM-based architectures.
| 2018 | Computation and Language |
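One common formulation of an attention-over-attention block is sketched below on random vectors. In the model, these matrices would be Bi-LSTM hidden states of the sentence and of the aspect phrase; the exact pooling may differ from the paper's formulation.

```python
"""Numpy sketch of an attention-over-attention (AOA) block."""
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
sentence = rng.normal(size=(7, 16))   # hidden states for 7 sentence tokens
aspect = rng.normal(size=(2, 16))     # hidden states for a 2-token aspect phrase

interaction = sentence @ aspect.T                 # (7, 2) pairwise scores
alpha = softmax(interaction, axis=0)              # sentence attention per aspect token
beta = softmax(interaction, axis=1)               # aspect attention per sentence token
beta_avg = beta.mean(axis=0)                      # (2,) averaged aspect importance
gamma = alpha @ beta_avg                          # (7,) final sentence attention
sentence_repr = gamma @ sentence                  # weighted sentence representation

print(np.round(gamma, 3), sentence_repr.shape)
```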
Fast Lexically Constrained Decoding with Dynamic Beam Allocation for
Neural Machine Translation | The end-to-end nature of neural machine translation (NMT) removes many ways
of manually guiding the translation process that were available in older
paradigms. Recent work, however, has introduced a new capability: lexically
constrained or guided decoding, a modification to beam search that forces the
inclusion of pre-specified words and phrases in the output. However, while
theoretically sound, existing approaches have computational complexities that
are either linear (Hokamp and Liu, 2017) or exponential (Anderson et al., 2017)
in the number of constraints. We present an algorithm for lexically constrained
decoding with a complexity of O(1) in the number of constraints. We demonstrate
the algorithm's remarkable ability to properly place these constraints, and use
it to explore the shaky relationship between model and BLEU scores. Our
implementation is available as part of Sockeye.
| 2018 | Computation and Language |
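The core of dynamic beam allocation can be sketched as grouping candidate hypotheses into banks by the number of constraints already satisfied and splitting the fixed beam across banks, so that hypotheses making progress on the constraints are never crowded out. The candidate tuples, scores, and tie-breaking below are toy values; the real decoder in Sockeye scores with the NMT model and handles many further details.

```python
"""Simplified sketch of bank-based beam allocation for constrained decoding."""
from collections import defaultdict

def allocate_beam(candidates, beam_size, num_constraints):
    """candidates: list of (score, hypothesis, constraints_met)."""
    banks = defaultdict(list)
    for cand in candidates:
        banks[cand[2]].append(cand)
    per_bank = max(1, beam_size // (num_constraints + 1))   # slots per bank
    kept = []
    for met in range(num_constraints + 1):
        ranked = sorted(banks.get(met, []), key=lambda c: c[0], reverse=True)
        kept.extend(ranked[:per_bank])
    # Fill any unused slots with the best remaining candidates overall.
    remaining = sorted((c for c in candidates if c not in kept),
                       key=lambda c: c[0], reverse=True)
    kept.extend(remaining[: beam_size - len(kept)])
    return kept[:beam_size]

cands = [(-1.2, "die katze", 0), (-1.5, "the katze", 1),
         (-2.0, "the cat", 2), (-1.1, "eine katze", 0), (-1.8, "the", 1)]
print(allocate_beam(cands, beam_size=4, num_constraints=2))
```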
End-to-end Graph-based TAG Parsing with Neural Networks | We present a graph-based Tree Adjoining Grammar (TAG) parser that uses
BiLSTMs, highway connections, and character-level CNNs. Our best end-to-end
parser, which jointly performs supertagging, POS tagging, and parsing,
outperforms the previously reported best results by more than 2.2 LAS and UAS
points. The graph-based parsing architecture allows for global inference and
rich feature representations for TAG parsing, alleviating the fundamental
trade-off between transition-based and graph-based parsing systems. We also
demonstrate that the proposed parser achieves state-of-the-art performance in
the downstream tasks of Parsing Evaluation using Textual Entailments (PETE) and
Unbounded Dependency Recovery. This provides further support for the claim that
TAG is a viable formalism for problems that require rich structural analysis of
sentences.
| 2018 | Computation and Language |
Experiments with Universal CEFR Classification | The Common European Framework of Reference (CEFR) guidelines describe
language proficiency of learners on a scale of 6 levels. While the description
of CEFR guidelines is generic across languages, the development of automated
proficiency classification systems for different languages follow different
approaches. In this paper, we explore universal CEFR classification using
domain-specific and domain-agnostic, theory-guided as well as data-driven
features. We report the results of our preliminary experiments in monolingual,
cross-lingual, and multilingual classification with three languages: German,
Czech, and Italian. Our results show that both monolingual and multilingual
models achieve similar performance, and cross-lingual classification yields
lower, but comparable results to monolingual classification.
| 2018 | Computation and Language |
NTUA-SLP at SemEval-2018 Task 2: Predicting Emojis using RNNs with
Context-aware Attention | In this paper we present a deep-learning model that competed at SemEval-2018
Task 2 "Multilingual Emoji Prediction". We participated in subtask A, in which
we are called to predict the most likely associated emoji in English tweets.
The proposed architecture relies on a Long Short-Term Memory network, augmented
with an attention mechanism that conditions the weight of each word on a
"context vector", which is taken as the aggregation of a tweet's meaning.
Moreover, we initialize the embedding layer of our model, with word2vec word
embeddings, pretrained on a dataset of 550 million English tweets. Finally, our
model does not rely on hand-crafted features or lexicons and is trained
end-to-end with back-propagation. We ranked 2nd out of 48 teams.
| 2018 | Computation and Language |
NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets
with Deep Attentive RNNs and Transfer Learning | In this paper we present the deep-learning models that we submitted to the
SemEval-2018 Task 1 competition: "Affect in Tweets". We participated in all
subtasks for English tweets. We propose a Bi-LSTM architecture equipped with a
multi-layer self attention mechanism. The attention mechanism improves the
model performance and allows us to identify salient words in tweets, as well as
gain insight into the models making them more interpretable. Our model utilizes
a set of word2vec word embeddings trained on a large collection of 550 million
Twitter messages, augmented by a set of word affective features. Due to the
limited amount of task-specific training data, we opted for a transfer learning
approach by pretraining the Bi-LSTMs on the dataset of SemEval-2017 Task 4A.
The proposed approach ranked 1st in Subtask E "Multi-Label Emotion
Classification", 2nd in Subtask A "Emotion Intensity Regression" and achieved
competitive results in other subtasks.
| 2018 | Computation and Language |
NTUA-SLP at SemEval-2018 Task 3: Tracking Ironic Tweets using Ensembles
of Word and Character Level Attentive RNNs | In this paper we present two deep-learning systems that competed at
SemEval-2018 Task 3 "Irony detection in English tweets". We design and ensemble
two independent models, based on recurrent neural networks (Bi-LSTM), which
operate at the word and character level, in order to capture both the semantic
and syntactic information in tweets. Our models are augmented with a
self-attention mechanism, in order to identify the most informative words. The
embedding layer of our word-level model is initialized with word2vec word
embeddings, pretrained on a collection of 550 million English tweets. We did
not utilize any handcrafted features, lexicons or external datasets as prior
information and our models are trained end-to-end using back propagation on
constrained data. Furthermore, we provide visualizations of tweets with
annotations for the salient tokens of the attention layer that can help to
interpret the inner workings of the proposed models. We ranked 2nd out of 42
teams in Subtask A and 2nd out of 31 teams in Subtask B. However,
post-task-completion enhancements of our models achieve state-of-the-art
results, ranking 1st in both subtasks.
| 2018 | Computation and Language |
Alquist: The Alexa Prize Socialbot | This paper describes a new open-domain dialogue system, Alquist, developed as
part of the Alexa Prize competition for the Amazon Echo line of products. The
Alquist dialogue system is designed to conduct a coherent and engaging
conversation on popular topics. We are presenting a hybrid system combining
several machine learning and rule based approaches. We discuss and describe the
Alquist pipeline, data acquisition and processing, dialogue manager, NLG,
knowledge aggregation and hierarchy of sub-dialogs. We present some of the
experimental results.
| 2018 | Computation and Language |
Demo of Sanskrit-Hindi SMT System | The demo proposal presents a Phrase-based Sanskrit-Hindi (SaHiT) Statistical
Machine Translation system. The system has been developed on Moses. 43k
sentences of a Sanskrit-Hindi parallel corpus and 56k sentences of a monolingual
corpus in the target language (Hindi) have been used. This system achieves a
BLEU score of 57.
| 2018 | Computation and Language |
Distribution-based Prediction of the Degree of Grammaticalization for
German Prepositions | We test the hypothesis that the degree of grammaticalization of German
prepositions correlates with their corpus-based contextual dispersion measured
by word entropy. We find that there is indeed a moderate correlation for
entropy, but a stronger correlation for frequency and number of context types.
| 2018 | Computation and Language |
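The entropy measure can be illustrated as below: a word's contextual dispersion is the Shannon entropy of its neighbouring-word distribution, which can then be rank-correlated with grammaticalization ratings. The mini corpus, window size, prepositions, and ratings here are made-up stand-ins for the paper's actual data.

```python
"""Toy sketch of word entropy as a measure of contextual dispersion."""
import math
from collections import Counter
from scipy.stats import spearmanr

corpus = ("wegen des regens wegen der arbeit mit dem auto mit dem zug "
          "mit der bahn mit viel geduld").split()

def context_entropy(word, tokens, window=1):
    contexts = Counter()
    for i, t in enumerate(tokens):
        if t == word:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    contexts[tokens[j]] += 1
    total = sum(contexts.values())
    return -sum((c / total) * math.log2(c / total) for c in contexts.values())

prepositions = ["wegen", "mit"]
entropies = [context_entropy(p, corpus) for p in prepositions]
grammaticalization = [1.0, 2.0]          # hypothetical ratings for illustration
rho, _ = spearmanr(entropies, grammaticalization)
print(dict(zip(prepositions, [round(e, 3) for e in entropies])), "spearman rho:", rho)
```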
Forecasting the presence and intensity of hostility on Instagram using
linguistic and social features | Online antisocial behavior, such as cyberbullying, harassment, and trolling,
is a widespread problem that threatens free discussion and has negative
physical and mental health consequences for victims and communities. While
prior work has proposed automated methods to identify hostile comments in
online discussions, these methods work retrospectively on comments that have
already been posted, making it difficult to intervene before an interaction
escalates. In this paper we instead consider the problem of forecasting future
hostilities in online discussions, which we decompose into two tasks: (1) given
an initial sequence of non-hostile comments in a discussion, predict whether
some future comment will contain hostility; and (2) given the first hostile
comment in a discussion, predict whether this will lead to an escalation of
hostility in subsequent comments. Thus, we aim to forecast both the presence
and intensity of hostile comments based on linguistic and social features from
earlier comments. To evaluate our approach, we introduce a corpus of over 30K
annotated Instagram comments from over 1,100 posts. Our approach is able to
predict the appearance of a hostile comment on an Instagram post ten or more
hours in the future with an AUC of .82 (task 1), and can furthermore
distinguish between high and low levels of future hostility with an AUC of .91
(task 2).
| 2018 | Computation and Language |
Quantifying the visual concreteness of words and topics in multimodal
datasets | Multimodal machine learning algorithms aim to learn visual-textual
correspondences. Previous work suggests that concepts with concrete visual
manifestations may be easier to learn than concepts with abstract ones. We give
an algorithm for automatically computing the visual concreteness of words and
topics within multimodal datasets. We apply the approach in four settings,
ranging from image captions to images/text scraped from historical books. In
addition to enabling explorations of concepts in multimodal datasets, our
concreteness scores predict the capacity of machine learning algorithms to
learn textual/visual relationships. We find that 1) concrete concepts are
indeed easier to learn; 2) the large number of algorithms we consider have
similar failure cases; 3) the precise positive relationship between
concreteness and performance varies between datasets. We conclude with
recommendations for using concreteness scores to facilitate future multimodal
research.
| 2018 | Computation and Language |
Learning to Map Context-Dependent Sentences to Executable Formal Queries | We propose a context-dependent model to map utterances within an interaction
to executable formal queries. To incorporate interaction history, the model
maintains an interaction-level encoder that updates after each turn, and can
copy sub-sequences of previously predicted queries during generation. Our
approach combines implicit and explicit modeling of references between
utterances. We evaluate our model on the ATIS flight planning interactions, and
demonstrate the benefits of modeling context and explicit references.
| 2018 | Computation and Language |
Object Ordering with Bidirectional Matchings for Visual Reasoning | Visual reasoning with compositional natural language instructions, e.g.,
based on the newly-released Cornell Natural Language Visual Reasoning (NLVR)
dataset, is a challenging task, where the model needs to have the ability to
create an accurate mapping between the diverse phrases and the several objects
placed in complex arrangements in the image. Further, this mapping needs to be
processed to answer the question in the statement given the ordering and
relationship of the objects across three similar images. In this paper, we
propose a novel end-to-end neural model for the NLVR task, where we first use
joint bidirectional attention to build a two-way conditioning between the
visual information and the language phrases. Next, we use an RL-based pointer
network to sort and process the varying number of unordered objects (so as to
match the order of the statement phrases) in each of the three images and then
pool over the three decisions. Our model achieves strong improvements (of 4-6%
absolute) over the state-of-the-art on both the structured representation and
raw image versions of the dataset.
| 2018 | Computation and Language |
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods | We introduce a new benchmark, WinoBias, for coreference resolution focused on
gender bias. Our corpus contains Winograd-schema style sentences with entities
corresponding to people referred to by their occupation (e.g. the nurse, the
doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a
neural coreference system all link gendered pronouns to pro-stereotypical
entities with higher accuracy than anti-stereotypical entities, by an average
difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation
approach that, in combination with existing word-embedding debiasing
techniques, removes the bias demonstrated by these systems in WinoBias without
significantly affecting their performance on existing coreference benchmark
datasets. Our dataset and code are available at http://winobias.org.
| 2018 | Computation and Language |
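The data-augmentation side of the approach can be pictured with a simple gender-swapping pass over training sentences, as sketched below. The swap lexicon is a tiny illustrative subset and ignores ambiguous cases (e.g. "her" as possessive versus object pronoun), which a real pipeline would handle more carefully.

```python
"""Minimal sketch of gender-swap data augmentation for coreference training data."""
SWAP = {"he": "she", "she": "he", "him": "her", "her": "him",
        "his": "her", "himself": "herself", "herself": "himself"}

def gender_swap(sentence):
    out = []
    for tok in sentence.split():
        swapped = SWAP.get(tok.lower(), tok.lower())
        out.append(swapped.capitalize() if tok[0].isupper() else swapped)
    return " ".join(out)

original = "The physician hired the secretary because he was overwhelmed"
print(gender_swap(original))   # swapped copy added alongside the original sentence
```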
Neural Automated Essay Scoring and Coherence Modeling for Adversarially
Crafted Input | We demonstrate that current state-of-the-art approaches to Automated Essay
Scoring (AES) are not well-suited to capturing adversarially crafted input of
grammatical but incoherent sequences of sentences. We develop a neural model of
local coherence that can effectively learn connectedness features between
sentences, and propose a framework for integrating and jointly training the
local coherence model with a state-of-the-art AES model. We evaluate our
approach against a number of baselines and experimentally demonstrate its
effectiveness on both the AES task and the task of flagging adversarial input,
further contributing to the development of an approach that strengthens the
validity of neural essay scoring models.
| 2018 | Computation and Language |
Sentences with Gapping: Parsing and Reconstructing Elided Predicates | Sentences with gapping, such as Paul likes coffee and Mary tea, lack an overt
predicate to indicate the relation between two or more arguments. Surface
syntax representations of such sentences are often produced poorly by parsers,
and even if correct, not well suited to downstream natural language
understanding tasks such as relation extraction that are typically designed to
extract information from sentences with canonical clause structure. In this
paper, we present two methods for parsing to a Universal Dependencies graph
representation that explicitly encodes the elided material with additional
nodes and edges. We find that both methods can reconstruct elided material from
dependency trees with high accuracy when the parser correctly predicts the
existence of a gap. We further demonstrate that one of our methods can be
applied to other languages based on a case study on Swedish.
| 2018 | Computation and Language |
Improving Distantly Supervised Relation Extraction using Word and Entity
Based Attention | Relation extraction is the problem of classifying the relationship between
two entities in a given sentence. Distant Supervision (DS) is a popular
technique for developing relation extractors starting with limited supervision.
We note that most of the sentences in the distant supervision relation
extraction setting are very long and may benefit from word attention for better
sentence representation. Our contributions in this paper are threefold.
Firstly, we propose two novel word attention models for distantly-supervised
relation extraction: (1) a Bi-directional Gated Recurrent Unit (Bi-GRU) based
word attention model (BGWA), (2) an entity-centric attention model (EA), and
(3) a combination model which combines multiple complementary models using a
weighted voting method for improved relation extraction. Secondly, we introduce
GDS, a new distant supervision dataset for relation extraction. GDS removes
test data noise present in all previous distant-supervision benchmark
datasets, making credible automatic evaluation possible. Thirdly, through
extensive experiments on multiple real-world datasets, we demonstrate the
effectiveness of the proposed methods.
| 2018 | Computation and Language |
Utilizing Neural Networks and Linguistic Metadata for Early Detection of
Depression Indications in Text Sequences | Depression is ranked as the largest contributor to global disability and is
also a major reason for suicide. Still, many individuals suffering from forms
of depression are not treated for various reasons. Previous studies have shown
that depression also has an effect on language usage and that many depressed
individuals use social media platforms or the internet in general to get
information or discuss their problems. This paper addresses the early detection
of depression using machine learning models based on messages on a social
platform. In particular, a convolutional neural network based on different word
embeddings is evaluated and compared to a classification based on user-level
linguistic metadata. An ensemble of both approaches is shown to achieve
state-of-the-art results in a current early detection task. Furthermore, the
currently popular ERDE score as a metric for early detection systems is examined
in detail and its drawbacks in the context of shared tasks are illustrated. A
slightly modified metric is proposed and compared to the original score.
Finally, a new word embedding was trained on a large corpus of the same domain
as the described task and is evaluated as well.
| 2,018 | Computation and Language |
QuaSE: Accurate Text Style Transfer under Quantifiable Guidance | We propose the task of Quantifiable Sequence Editing (QuaSE): editing an
input sequence to generate an output sequence that satisfies a given numerical
outcome value measuring a certain property of the sequence, with the
requirement of keeping the main content of the input sequence. For example, an
input sequence could be a word sequence, such as a review sentence or
advertisement text. For a review sentence, the outcome could be the review
rating; for an advertisement, the outcome could be the click-through rate. The
major challenge in performing QuaSE is how to perceive the outcome-related
wordings, and only edit them to change the outcome. In this paper, the proposed
framework contains two latent factors, namely, outcome factor and content
factor, disentangled from the input sentence to allow convenient editing to
change the outcome and keep the content. Our framework explores the
pseudo-parallel sentences by modeling their content similarity and outcome
differences to enable a better disentanglement of the latent factors, which
allows generating an output to better satisfy the desired outcome and keep the
content. The dual reconstruction structure further enhances the capability of
generating expected output by exploiting the couplings of latent factors of
pseudo-parallel sentences. For evaluation, we prepared a dataset of Yelp review
sentences with the ratings as outcome. Extensive experimental results are
reported and discussed to elaborate the peculiarities of our framework.
| 2,019 | Computation and Language |
Learning to Extract Coherent Summary via Deep Reinforcement Learning | Coherence plays a critical role in producing a high-quality summary from a
document. In recent years, neural extractive summarization is becoming
increasingly attractive. However, most existing approaches ignore the coherence of
summaries when extracting sentences. As an effort towards extracting coherent
summaries, we propose a neural coherence model to capture the cross-sentence
semantic and syntactic coherence patterns. The proposed neural coherence model
obviates the need for feature engineering and can be trained in an end-to-end
fashion using unlabeled data. Empirical results show that the proposed neural
coherence model can efficiently capture the cross-sentence coherence patterns.
Using the combined output of the neural coherence model and ROUGE package as
the reward, we design a reinforcement learning method to train a proposed
neural extractive summarizer which is named Reinforced Neural Extractive
Summarization (RNES) model. The RNES model learns to optimize coherence and
informative importance of the summary simultaneously. Experimental results show
that the proposed RNES outperforms existing baselines and achieves
state-of-the-art performance in terms of ROUGE on the CNN/Daily Mail dataset. The
qualitative evaluation indicates that summaries produced by RNES are more
coherent and readable.
| 2,018 | Computation and Language |
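The abstract above combines a learned coherence score with ROUGE into a single reinforcement-learning reward. Below is a minimal sketch of that idea, not the RNES implementation: `coherence_score` and `rouge_score` are hypothetical stand-ins for the neural coherence model and the ROUGE package, and the weighting `lam` is an assumed hyperparameter.

```python
# Illustrative sketch: combine a coherence score and a ROUGE-like score into one
# reward and use it in a REINFORCE-style loss for sentence-extraction decisions.
import torch

def coherence_score(summary_sents):          # hypothetical stand-in for the coherence model
    return 0.5

def rouge_score(summary_sents, reference):   # hypothetical stand-in for ROUGE
    hyp = set(" ".join(summary_sents).split())
    ref = set(reference.split())
    return len(hyp & ref) / max(len(ref), 1)

def reinforce_loss(sent_logits, actions, summary_sents, reference, lam=0.5):
    """REINFORCE loss with reward = ROUGE + lam * coherence (assumed weighting)."""
    reward = rouge_score(summary_sents, reference) + lam * coherence_score(summary_sents)
    log_probs = torch.distributions.Bernoulli(logits=sent_logits).log_prob(actions)
    return -(reward * log_probs.sum())

logits = torch.randn(5, requires_grad=True)                 # one extraction logit per sentence
actions = torch.bernoulli(torch.sigmoid(logits)).detach()   # sampled extraction decisions
sents = ["the model is trained end to end", "results improve"]
loss = reinforce_loss(logits, actions, sents, "the model improves results")
loss.backward()
```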
Consistent CCG Parsing over Multiple Sentences for Improved Logical
Reasoning | In formal logic-based approaches to Recognizing Textual Entailment (RTE), a
Combinatory Categorial Grammar (CCG) parser is used to parse input premises and
hypotheses to obtain their logical formulas. Here, it is important that the
parser processes the sentences consistently; failing to recognize a similar
syntactic structure results in inconsistent predicate argument structures among
them, in which case the succeeding theorem proving is doomed to failure. In
this work, we present a simple method to extend an existing CCG parser to parse
a set of sentences consistently, which is achieved with inter-sentence
modeling with Markov Random Fields (MRF). When combined with existing
logic-based systems, our method always shows improvement in the RTE experiments
on English and Japanese languages.
| 2,018 | Computation and Language |
Putting Question-Answering Systems into Practice: Transfer Learning for
Efficient Domain Customization | Traditional information retrieval (such as that offered by web search
engines) burdens users with information overload from extensive result pages
and the need to manually locate the desired information therein. Conversely,
question-answering systems change how humans interact with information systems:
users can now ask specific questions and obtain a tailored answer - both
conveniently in natural language. Despite obvious benefits, their use is often
limited to an academic context, largely because of expensive domain
customizations, which means that the performance in domain-specific
applications often fails to meet expectations. This paper proposes
cost-efficient remedies: (i) we leverage metadata through a filtering
mechanism, which increases the precision of document retrieval, and (ii) we
develop a novel fuse-and-oversample approach for transfer learning in order to
improve the performance of answer extraction. Here knowledge is inductively
transferred from a related, yet different, task to the domain-specific
application, while accounting for potential differences in the sample sizes
across both tasks. The resulting performance is demonstrated with actual use
cases from a finance company and the film industry, where fewer than 400
question-answer pairs had to be annotated in order to yield significant
performance gains. As a direct implication to management, this presents a
promising path to better leveraging of knowledge stored in information systems.
| 2,019 | Computation and Language |
Learning Disentangled Representations of Texts with Application to
Biomedical Abstracts | We propose a method for learning disentangled representations of texts that
code for distinct and complementary aspects, with the aim of affording
efficient model transfer and interpretability. To induce disentangled
embeddings, we propose an adversarial objective based on the (dis)similarity
between triplets of documents with respect to specific aspects. Our motivating
application is embedding biomedical abstracts describing clinical trials in a
manner that disentangles the populations, interventions, and outcomes in a
given trial. We show that our method learns representations that encode these
clinically salient aspects, and that these can be effectively used to perform
aspect-specific retrieval. We demonstrate that the approach generalizes beyond
our motivating application in experiments on two multi-aspect review corpora.
| 2,018 | Computation and Language |
Helping or Hurting? Predicting Changes in Users' Risk of Self-Harm
Through Online Community Interactions | In recent years, online communities have formed around suicide and self-harm
prevention. While these communities offer support in moments of crisis, they can
also normalize harmful behavior, discourage professional treatment, and
instigate suicidal ideation. In this work, we focus on how interaction with
others in such a community affects the mental state of users who are seeking
support. We first build a dataset of conversation threads between users in a
distressed state and community members offering support. We then show how to
construct a classifier to predict whether distressed users are helped or harmed
by the interactions in the thread, and we achieve a macro-F1 score of up to
0.69.
| 2,018 | Computation and Language |
Assessing Language Proficiency from Eye Movements in Reading | We present a novel approach for determining learners' second language
proficiency which utilizes behavioral traces of eye movements during reading.
Our approach provides stand-alone eyetracking based English proficiency scores
which reflect the extent to which the learner's gaze patterns in reading are
similar to those of native English speakers. We show that our scores correlate
strongly with standardized English proficiency tests. We also demonstrate that
gaze information can be used to accurately predict the outcomes of such tests.
Our approach yields the strongest performance when the test taker is presented
with a suite of sentences for which we have eyetracking data from other
readers. However, it remains effective even using eyetracking with sentences
for which eye movement data have not been previously collected. By deriving
proficiency as an automatic byproduct of eye movements during ordinary reading,
our approach offers a potentially valuable new tool for second language
proficiency assessment. More broadly, our results open the door to future
methods for inferring reader characteristics from the behavioral traces of
reading.
| 2,018 | Computation and Language |
Stylistic Variation in Social Media Part-of-Speech Tagging | Social media features substantial stylistic variation, raising new challenges
for syntactic analysis of online writing. However, this variation is often
aligned with author attributes such as age, gender, and geography, as well as
more readily-available social network metadata. In this paper, we report new
evidence on the link between language and social networks in the task of
part-of-speech tagging. We find that tagger error rates are correlated with
network structure, with high accuracy in some parts of the network, and lower
accuracy elsewhere. As a result, tagger accuracy depends on training from a
balanced sample of the network, rather than training on texts from a narrow
subcommunity. We also describe our attempts to add robustness to stylistic
variation, by building a mixture-of-experts model in which each expert is
associated with a region of the social network. While prior work found that
similar approaches yield performance improvements in sentiment analysis and
entity linking, we were unable to obtain performance improvements in
part-of-speech tagging, despite strong evidence for the link between
part-of-speech error rates and social network structure.
| 2,018 | Computation and Language |
A Predictive Model for Notional Anaphora in English | Notional anaphors are pronouns which disagree with their antecedents'
grammatical categories for notional reasons, such as plural to singular
agreement in: 'the government ... they'. Since such cases are rare and conflict
with evidence from strictly agreeing cases ('the government ... it'), they
present a substantial challenge to both coreference resolution and referring
expression generation. Using the OntoNotes corpus, this paper takes an ensemble
approach to predicting English notional anaphora in context on the basis of the
largest empirical data to date. In addition to state of the art prediction
accuracy, the results suggest that theoretical approaches positing a plural
construal at the antecedent's utterance are insufficient, and that
circumstances at the anaphor's utterance location, as well as global factors
such as genre, have a strong effect on the choice of referring expression.
| 2,018 | Computation and Language |
Video based Contextual Question Answering | The primary aim of this project is to build a contextual Question-Answering
model for videos. The current methodologies provide a robust model for image
based Question-Answering, but we aim to generalize this approach to videos. We
propose a graphical representation of video which is able to handle several
types of queries across the whole video. For example, if a frame shows a man
and a cat sitting, the system should be able to handle queries such as "Where
is the cat sitting with respect to the man?" or "What is the man holding in his
hand?". It should also be able to answer queries about temporal relationships.
| 2,018 | Computation and Language |
Sentence Simplification with Memory-Augmented Neural Networks | Sentence simplification aims to simplify the content and structure of complex
sentences, and thus make them easier to interpret for human readers, and easier
to process for downstream NLP applications. Recent advances in neural machine
translation have paved the way for novel approaches to the task. In this paper,
we adapt an architecture with augmented memory capacities called Neural
Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our
experiments demonstrate the effectiveness of our approach on different
simplification datasets, both in terms of automatic evaluation measures and
human judgments.
| 2,018 | Computation and Language |
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language
Understanding | For natural language understanding (NLU) technology to be maximally useful,
both practically and as a scientific object of study, it must be general: it
must be able to process language in a way that is not exclusively tailored to
any one specific task or dataset. In pursuit of this objective, we introduce
the General Language Understanding Evaluation benchmark (GLUE), a tool for
evaluating and analyzing the performance of models across a diverse range of
existing NLU tasks. GLUE is model-agnostic, but it incentivizes sharing
knowledge across tasks because certain tasks have very limited training data.
We further provide a hand-crafted diagnostic test suite that enables detailed
linguistic analysis of NLU models. We evaluate baselines based on current
methods for multi-task and transfer learning and find that they do not
immediately give substantial improvements over the aggregate performance of
training a separate model per task, indicating room for improvement in
developing general and robust NLU systems.
| 2,019 | Computation and Language |
Automatic Stance Detection Using End-to-End Memory Networks | We present a novel end-to-end memory network for stance detection, which
jointly (i) predicts whether a document agrees, disagrees, discusses or is
unrelated with respect to a given target claim, and also (ii) extracts snippets
of evidence for that prediction. The network operates at the paragraph level
and integrates convolutional and recurrent neural networks, as well as a
similarity matrix as part of the overall architecture. The experimental
evaluation on the Fake News Challenge dataset shows state-of-the-art
performance.
| 2,018 | Computation and Language |
Approaches for Enriching and Improving Textual Knowledge Bases | Verifiability is one of the core editing principles in Wikipedia, where
editors are encouraged to provide citations for the added statements.
Statements can be any arbitrary piece of text, ranging from a sentence up to a
paragraph. However, in many cases, citations are either outdated, missing, or
link to non-existing references (e.g. a dead URL or moved content). In 20% of
cases, such citations refer to news articles, which represent the second most
cited source. Even in cases where citations are provided, there are
no explicit indicators for the span of a citation for a given piece of text. In
addition to issues related with the verifiability principle, many Wikipedia
entity pages are incomplete, with relevant information that is already
available in online news sources missing. Even for the already existing
citations, there is often a delay between the news publication time and the
reference time.
In this thesis, we address the aforementioned issues and propose automated
approaches that enforce the verifiability principle in Wikipedia, and suggest
relevant and missing news references for further enriching Wikipedia entity
pages.
| 2,018 | Computation and Language |
ClaimRank: Detecting Check-Worthy Claims in Arabic and English | We present ClaimRank, an online system for detecting check-worthy claims.
While originally trained on political debates, the system can work for any kind
of text, e.g., interviews or regular news articles. Its aim is to facilitate
manual fact-checking efforts by prioritizing the claims that fact-checkers
should consider first. ClaimRank supports both Arabic and English, it is
trained on actual annotations from nine reputable fact-checking organizations
(PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and
Washington Post), and thus it can mimic the claim selection strategies for each
and any of them, as well as for the union of them all.
| 2,018 | Computation and Language |
Acquisition of Phrase Correspondences using Natural Deduction Proofs | How to identify, extract, and use phrasal knowledge is a crucial problem for
the task of Recognizing Textual Entailment (RTE). To solve this problem, we
propose a method for detecting paraphrases via natural deduction proofs of
semantic relations between sentence pairs. Our solution relies on a graph
reformulation of partial variable unifications and an algorithm that induces
subgraph alignments between meaning representations. Experiments show that our
method can automatically detect various paraphrases that are absent from
existing paraphrase databases. In addition, the detection of paraphrases using
proof information improves the accuracy of RTE tasks.
| 2,018 | Computation and Language |
Cross-domain Dialogue Policy Transfer via Simultaneous Speech-act and
Slot Alignment | Dialogue policy transfer enables us to build dialogue policies in a target
domain with little data by leveraging knowledge from a source domain with
plenty of data. Dialogue sentences are usually represented by speech-acts and
domain slots, and the dialogue policy transfer is usually achieved by assigning
a slot mapping matrix based on human heuristics. However, existing dialogue
policy transfer methods cannot transfer across dialogue domains with different
speech-acts, for example, between systems built by different companies. Also,
they depend on either common slots or slot entropy, which are not available
when the source and target slots are totally disjoint and no database is
available to calculate the slot entropy. To solve this problem, we propose a
Policy tRansfer across dOMaIns and SpEech-acts (PROMISE) model, which is able
to transfer dialogue policies across domains with different speech-acts and
disjoint slots. The PROMISE model can learn to align different speech-acts and
slots simultaneously, and it does not require common slots or the calculation
of the slot entropy. Experiments on both real-world dialogue data and
simulations demonstrate that the PROMISE model can effectively transfer dialogue
policies across domains with different speech-acts and disjoint slots.
| 2,018 | Computation and Language |
Lightweight Adaptive Mixture of Neural and N-gram Language Models | It is often the case that the best performing language model is an ensemble
of a neural language model with n-grams. In this work, we propose a method to
improve how these two models are combined. By using a small network which
predicts the mixture weight between the two models, we adapt their relative
importance at each time step. Because the gating network is small, it trains
quickly on small amounts of held out data, and does not add overhead at scoring
time. Our experiments carried out on the One Billion Word benchmark show a
significant improvement over the state of the art ensemble without retraining
of the basic modules.
| 2,018 | Computation and Language |
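The abstract above describes a small gating network that predicts, at each time step, the mixture weight between a neural LM and an n-gram LM. Here is a minimal sketch under assumed shapes and feature choices; it is an illustration of the gating idea, not the paper's architecture.

```python
# Minimal sketch (feature choice and sizes are assumptions) of a per-timestep
# gating network that interpolates a neural LM and an n-gram LM.
import torch
import torch.nn as nn

class GatedMixtureLM(nn.Module):
    def __init__(self, feature_dim=32):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feature_dim, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, features, p_neural, p_ngram):
        # features: (batch, feature_dim) summary of the current context
        # p_neural / p_ngram: (batch, vocab) next-word distributions
        lam = self.gate(features)                      # (batch, 1) mixture weight
        return lam * p_neural + (1.0 - lam) * p_ngram  # interpolated distribution

mix = GatedMixtureLM()
feats = torch.randn(2, 32)
p_nn = torch.softmax(torch.randn(2, 100), dim=-1)
p_ng = torch.softmax(torch.randn(2, 100), dim=-1)
p = mix(feats, p_nn, p_ng)   # rows still sum to 1
```

Because the gate is tiny, it can be fit on a small held-out set and adds negligible cost at scoring time, which is the practical appeal the abstract points to.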
Factorising AMR generation through syntax | Generating from Abstract Meaning Representation (AMR) is an underspecified
problem, as many syntactic decisions are not constrained by the semantic graph.
To explicitly account for this underspecification, we break down generating
from AMR into two steps: first generate a syntactic structure, and then
generate the surface form. We show that decomposing the generation process this
way leads to state-of-the-art single model performance generating from AMR
without additional unlabelled data. We also demonstrate that we can generate
meaning-preserving syntactic paraphrases of the same AMR graph, as judged by
humans.
| 2,019 | Computation and Language |
Phrase-Indexed Question Answering: A New Challenge for Scalable Document
Comprehension | We formalize a new modular variant of current question answering tasks by
enforcing complete independence of the document encoder from the question
encoder. This formulation addresses a key challenge in machine comprehension by
requiring a standalone representation of the document discourse. It
additionally leads to a significant scalability advantage since the encoding of
the answer candidate phrases in the document can be pre-computed and indexed
offline for efficient retrieval. We experiment with baseline models for the new
task, which achieve a reasonable accuracy but significantly underperform
unconstrained QA models. We invite the QA research community to engage in
Phrase-Indexed Question Answering (PIQA, pika) for closing the gap. The
leaderboard is at: nlp.cs.washington.edu/piqa
| 2,018 | Computation and Language |
Loss in Translation: Learning Bilingual Word Mapping with a Retrieval
Criterion | Continuous word representations learned separately on distinct languages can
be aligned so that their words become comparable in a common space. Existing
works typically solve a least-square regression problem to learn a rotation
aligning a small bilingual lexicon, and use a retrieval criterion for
inference. In this paper, we propose a unified formulation that directly
optimizes a retrieval criterion in an end-to-end fashion. Our experiments on
standard benchmarks show that our approach outperforms the state of the art on
word translation, with the biggest improvements observed for distant language
pairs such as English-Chinese.
| 2,018 | Computation and Language |
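For contrast with the end-to-end retrieval objective proposed above, the following sketch shows the standard baseline the abstract refers to: learn an orthogonal map from a small seed lexicon by least squares (Procrustes), then translate by nearest-neighbor retrieval. The toy data and vocabulary are illustrative assumptions.

```python
# Sketch of the least-squares/Procrustes baseline plus nearest-neighbor retrieval.
import numpy as np

def procrustes(X_src, Y_tgt):
    """Orthogonal W minimizing ||X W - Y||_F over a seed lexicon."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

def translate(w_vec, W, tgt_vocab, tgt_emb):
    scores = tgt_emb @ (w_vec @ W)       # cosine similarity if rows are unit-normalized
    return tgt_vocab[int(np.argmax(scores))]

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 8)); tgt = rng.normal(size=(50, 8))
src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
W = procrustes(src[:20], tgt[:20])       # 20-pair seed lexicon
print(translate(src[3], W, [f"w{i}" for i in range(50)], tgt))
```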
Learning Semantic Textual Similarity from Conversations | We present a novel approach to learn representations for sentence-level
semantic similarity using conversational data. Our method trains an
unsupervised model to predict conversational input-response pairs. The
resulting sentence embeddings perform well on the semantic textual similarity
(STS) benchmark and SemEval 2017's Community Question Answering (CQA) question
similarity subtask. Performance is further improved by introducing multitask
training combining the conversational input-response prediction task and a
natural language inference task. Extensive experiments show the proposed model
achieves the best performance among all neural models on the STS benchmark and
is competitive with the state-of-the-art feature engineered and mixed systems
in both tasks.
| 2,018 | Computation and Language |
Phrase-Based & Neural Unsupervised Machine Translation | Machine translation systems achieve near human-level performance on some
languages, yet their effectiveness strongly relies on the availability of large
amounts of parallel sentences, which hinders their applicability to the
majority of language pairs. This work investigates how to learn to translate
when having access to only large monolingual corpora in each language. We
propose two model variants, a neural and a phrase-based model. Both versions
leverage a careful initialization of the parameters, the denoising effect of
language models and automatic generation of parallel data by iterative
back-translation. These models are significantly better than methods from the
literature, while being simpler and having fewer hyper-parameters. On the
widely used WMT'14 English-French and WMT'16 German-English benchmarks, our
models respectively obtain 28.1 and 25.2 BLEU points without using a single
parallel sentence, outperforming the state of the art by more than 11 BLEU
points. On low-resource languages like English-Urdu and English-Romanian, our
methods achieve even better results than semi-supervised and supervised
approaches that leverage the paucity of available bitexts. Our code for NMT and
PBSMT is publicly available.
| 2,018 | Computation and Language |
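The core loop the abstract describes is iterative back-translation: each direction is repeatedly retrained on synthetic pairs produced by the other direction. The toy below illustrates only that loop; the "models" are word-substitution tables, the seed tables stand in for the careful initialization step, and the denoising language-model component is omitted. It is not the released NMT/PBSMT code.

```python
# Toy illustration of iterative back-translation with dictionary "models".
def translate(model, sent):
    return " ".join(model.get(w, w) for w in sent.split())

def train_supervised(model, pairs):
    # position-wise word alignment as a toy stand-in for training on synthetic pairs
    for src, tgt in pairs:
        for a, b in zip(src.split(), tgt.split()):
            model[a] = b

def unsupervised_mt(mono_src, mono_tgt, seed_st, seed_ts, n_iterations=3):
    model_st, model_ts = dict(seed_st), dict(seed_ts)   # "careful initialization"
    for _ in range(n_iterations):
        pseudo = [(translate(model_ts, t), t) for t in mono_tgt]   # synthetic src, real tgt
        train_supervised(model_st, pseudo)
        pseudo = [(translate(model_st, s), s) for s in mono_src]   # synthetic tgt, real src
        train_supervised(model_ts, pseudo)
    return model_st, model_ts

m_st, m_ts = unsupervised_mt(["the cat sleeps"], ["le chat dort"],
                             seed_st={"cat": "chat"}, seed_ts={"chat": "cat"})
print(m_st, m_ts)
```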
Pathologies of Neural Models Make Interpretations Difficult | One way to interpret neural model predictions is to highlight the most
important input features---for example, a heatmap visualization over the words
in an input sentence. In existing interpretation methods for NLP, a word's
importance is determined by either input perturbation---measuring the decrease
in model confidence when that word is removed---or by the gradient with respect
to that word. To understand the limitations of these methods, we use input
reduction, which iteratively removes the least important word from the input.
This exposes pathological behaviors of neural models: the remaining words
appear nonsensical to humans and are not the ones determined as important by
interpretation methods. As we confirm with human experiments, the reduced
examples lack information to support the prediction of any label, but models
still make the same predictions with high confidence. To explain these
counterintuitive results, we draw connections to adversarial examples and
confidence calibration: pathological behaviors reveal difficulties in
interpreting neural models trained with maximum likelihood. To mitigate their
deficiencies, we fine-tune the models by encouraging high entropy outputs on
reduced examples. Fine-tuned models become more interpretable under input
reduction without accuracy loss on regular examples.
| 2,022 | Computation and Language |
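The input-reduction procedure used above can be stated compactly: repeatedly remove the word whose removal least decreases the model's confidence in the predicted label, stopping just before the prediction changes. The sketch below assumes an arbitrary `predict_proba` classifier and uses a toy one for the demo.

```python
# Sketch of input reduction: drop the least important word while the label holds.
def input_reduction(words, predict_proba):
    probs = predict_proba(words)
    label = max(probs, key=probs.get)
    while len(words) > 1:
        # importance of word i = confidence drop when it is removed
        candidates = [(probs[label] - predict_proba(words[:i] + words[i+1:])[label], i)
                      for i in range(len(words))]
        drop, i = min(candidates)                  # least important word
        reduced = words[:i] + words[i+1:]
        new_probs = predict_proba(reduced)
        if max(new_probs, key=new_probs.get) != label:
            break                                  # stop before the prediction flips
        words, probs = reduced, new_probs
    return words

# toy classifier: "positive" score grows with the fraction of happy words
def predict_proba(words):
    pos = sum(w in {"great", "good", "love"} for w in words) / max(len(words), 1)
    return {"pos": pos, "neg": 1 - pos}

print(input_reduction("love this great good movie".split(), predict_proba))
```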
Generating Descriptions from Structured Data Using a Bifocal Attention
Mechanism and Gated Orthogonalization | In this work, we focus on the task of generating natural language
descriptions from a structured table of facts containing fields (such as
nationality, occupation, etc) and values (such as Indian, actor, director,
etc). One simple choice is to treat the table as a sequence of fields and
values and then use a standard seq2seq model for this task. However, such a
model is too generic and does not exploit task-specific characteristics. For
example, while generating descriptions from a table, a human would attend to
information at two levels: (i) the fields (macro level) and (ii) the values
within the field (micro level). Further, a human would continue attending to a
field for a few timesteps till all the information from that field has been
rendered and then never return to this field (because there is nothing
left to say about it). To capture this behavior we use (i) a fused bifocal
attention mechanism which exploits and combines this micro and macro level
information and (ii) a gated orthogonalization mechanism which tries to ensure
that a field is remembered for a few time steps and then forgotten. We
experiment with a recently released dataset which contains fact tables about
people and their corresponding one line biographical descriptions in English.
In addition, we also introduce two similar datasets for French and German. Our
experiments show that the proposed model gives 21% relative improvement over a
recently proposed state of the art method and 10% relative improvement over
basic seq2seq models. The code and the datasets developed as a part of this
work are publicly available.
| 2,019 | Computation and Language |
A Mixed Hierarchical Attention based Encoder-Decoder Approach for
Standard Table Summarization | Structured data summarization involves generation of natural language
summaries from structured input data. In this work, we consider summarizing
structured data occurring in the form of tables as they are prevalent across a
wide variety of domains. We formulate the standard table summarization problem,
which deals with tables conforming to a single predefined schema. To this end,
we propose a mixed hierarchical attention based encoder-decoder model which is
able to leverage the structure in addition to the content of the tables. Our
experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (~
30%) improvement over the current state-of-the-art.
| 2,019 | Computation and Language |
Efficient Contextualized Representation: Language Model Pruning for
Sequence Labeling | Many efforts have been made to facilitate natural language processing tasks
with pre-trained language models (LMs), and brought significant improvements to
various applications. To fully leverage the nearly unlimited corpora and
capture linguistic information of multifarious levels, large-size LMs are
required; but for a specific task, only part of this information is useful.
Such large-sized LMs, even in the inference stage, may cause heavy computation
workloads, making them too time-consuming for large-scale applications. Here we
propose to compress bulky LMs while preserving useful information with regard
to a specific task. As different layers of the model keep different
information, we develop a layer selection method for model pruning using
sparsity-inducing regularization. By introducing the dense connectivity, we can
detach any layer without affecting others, and stretch shallow and wide LMs to
be deep and narrow. In model training, LMs are learned with layer-wise dropouts
for better robustness. Experiments on two benchmark datasets demonstrate the
effectiveness of our method.
| 2,018 | Computation and Language |
A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low
inter-annotator agreements (IAA) even between experts, suggesting that the
current annotation task needs a better definition. This paper proposes a new
multi-axis modeling to better capture the temporal structure of events. In
addition, we identify that event end-points are a major source of confusion in
annotation, so we also propose to annotate TempRels based on start-points only.
A pilot expert annotation using the proposed scheme shows significant
improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This
better-defined annotation scheme further enables the use of crowdsourcing to
alleviate the labor intensity for each annotator. We hope that this work can
foster more interesting studies towards event understanding.
| 2,018 | Computation and Language |
Direct Network Transfer: Transfer Learning of Sentence Embeddings for
Semantic Similarity | Sentence encoders, which produce sentence embeddings using neural networks,
are typically evaluated by how well they transfer to downstream tasks. This
includes semantic similarity, an important task in natural language
understanding. Although there has been much work dedicated to building sentence
encoders, the accompanying transfer learning techniques have received
relatively little attention. In this paper, we propose a transfer learning
setting specialized for semantic similarity, which we refer to as direct
network transfer. Through experiments on several standard text similarity
datasets, we show that applying direct network transfer to existing encoders
can lead to state-of-the-art performance. Additionally, we compare several
approaches to transfer sentence encoders to semantic similarity tasks, showing
that the choice of transfer learning setting greatly affects the performance in
many cases, and differs by encoder and dataset.
| 2,018 | Computation and Language |
Joint entity recognition and relation extraction as a multi-head
selection problem | State-of-the-art models for joint entity recognition and relation extraction
strongly rely on external natural language processing (NLP) tools such as POS
(part-of-speech) taggers and dependency parsers. Thus, the performance of such
joint models depends on the quality of the features obtained from these NLP
tools. However, these features are not always accurate for various languages
and contexts. In this paper, we propose a joint neural model which performs
entity recognition and relation extraction simultaneously, without the need of
any manually extracted features or the use of any external tool. Specifically,
we model the entity recognition task using a CRF (Conditional Random Fields)
layer and the relation extraction task as a multi-head selection problem (i.e.,
potentially identify multiple relations for each entity). We present an
extensive experimental setup, to demonstrate the effectiveness of our method
using datasets from various contexts (i.e., news, biomedical, real estate) and
languages (i.e., English, Dutch). Our model outperforms the previous neural
models that use automatically extracted features, while it performs within a
reasonable margin of feature-based neural models, or even beats them.
| 2,018 | Computation and Language |
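The multi-head selection formulation above scores, for every token pair (i, j) and relation r, whether token i has head j under relation r, with independent sigmoids so a token can participate in several relations. The sketch below shows one common way to compute such scores; the sizes and the additive scoring function are assumptions, not necessarily the paper's exact layer.

```python
# Minimal sketch of multi-head selection scoring over token representations.
import torch
import torch.nn as nn

class MultiHeadSelection(nn.Module):
    def __init__(self, hidden=64, n_relations=5):
        super().__init__()
        self.U = nn.Linear(hidden, hidden)
        self.W = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, n_relations)

    def forward(self, h):
        # h: (seq_len, hidden) token representations, e.g. from a BiLSTM
        pair = torch.tanh(self.U(h).unsqueeze(1) + self.W(h).unsqueeze(0))  # (n, n, hidden)
        return torch.sigmoid(self.v(pair))   # (n, n, n_relations): P(head=j, rel=r | token i)

scores = MultiHeadSelection()(torch.randn(7, 64))
print(scores.shape)   # torch.Size([7, 7, 5])
```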
Mutual Information Maximization for Simple and Accurate Part-Of-Speech
Induction | We address part-of-speech (POS) induction by maximizing the mutual
information between the induced label and its context. We focus on two training
objectives that are amenable to stochastic gradient descent (SGD): a novel
generalization of the classical Brown clustering objective and a recently
proposed variational lower bound. While both objectives are subject to noise in
gradient updates, we show through analysis and experiments that the variational
lower bound is robust whereas the generalized Brown objective is vulnerable. We
obtain competitive performance on a multitude of datasets and languages with a
simple architecture that encodes morphology and context.
| 2,019 | Computation and Language |
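For reference, the objective the abstract maximizes is the standard mutual information between the induced tag Y and its context C, stated here in textbook form together with the generic variational lower bound obtained by replacing the intractable posterior with a model q; the paper's specific parameterizations are not reproduced here.

```latex
\begin{align}
I(Y;C) &= H(Y) - H(Y \mid C)
        = \sum_{y,c} p(y,c)\,\log\frac{p(y,c)}{p(y)\,p(c)}, \\
I(Y;C) &\ge H(Y) + \mathbb{E}_{p(y,c)}\!\left[\log q(y \mid c)\right].
\end{align}
```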
What's Going On in Neural Constituency Parsers? An Analysis | A number of differences have emerged between modern and classic approaches to
constituency parsing in recent years, with structural components like grammars
and feature-rich lexicons becoming less central while recurrent neural network
representations rise in popularity. The goal of this work is to analyze the
extent to which information provided directly by the model structure in
classical systems is still being captured by neural methods. To this end, we
propose a high-performance neural model (92.08 F1 on PTB) that is
representative of recent work and perform a series of investigative
experiments. We find that our model implicitly learns to encode much of the
same information that was explicitly provided by grammars and lexicons in the
past, indicating that this scaffolding can largely be subsumed by powerful
general-purpose neural machinery.
| 2,018 | Computation and Language |
Subgoal Discovery for Hierarchical Dialogue Policy Learning | Developing agents to engage in complex goal-oriented dialogues is challenging
partly because the main learning signals are very sparse in long conversations.
In this paper, we propose a divide-and-conquer approach that discovers and
exploits the hidden structure of the task to enable efficient policy learning.
First, given successful example dialogues, we propose the Subgoal Discovery
Network (SDN) to divide a complex goal-oriented task into a set of simpler
subgoals in an unsupervised fashion. We then use these subgoals to learn a
multi-level policy by hierarchical reinforcement learning. We demonstrate our
method by building a dialogue agent for the composite task of travel planning.
Experiments with simulated and real users show that our approach performs
competitively against a state-of-the-art method that requires human-defined
subgoals. Moreover, we show that the learned subgoals are often human
comprehensible.
| 2,018 | Computation and Language |
Multi-lingual Common Semantic Space Construction via Cluster-consistent
Word Embedding | We construct a multilingual common semantic space based on distributional
semantics, where words from multiple languages are projected into a shared
space to enable knowledge and resource transfer across languages. Beyond word
alignment, we introduce multiple cluster-level alignments and enforce the word
clusters to be consistently distributed across multiple languages. We exploit
three signals for clustering: (1) neighbor words in the monolingual word
embedding space; (2) character-level information; and (3) linguistic properties
(e.g., apposition, locative suffix) derived from linguistic structure knowledge
bases available for thousands of languages. We introduce a new
cluster-consistent correlational neural network to construct the common
semantic space by aligning words as well as clusters. Intrinsic evaluation on
monolingual and multilingual QVEC tasks shows our approach achieves
significantly higher correlation with linguistic features than state-of-the-art
multi-lingual embedding learning methods do. Using low-resource language name
tagging as a case study for extrinsic evaluation, our approach achieves up to
24.5% absolute F-score gain over the state of the art.
| 2,018 | Computation and Language |
Massively Parallel Cross-Lingual Learning in Low-Resource Target
Language Translation | We work on translation from rich-resource languages to low-resource
languages. The main challenges we identify are the lack of low-resource
language data, effective methods for cross-lingual transfer, and the
variable-binding problem that is common in neural systems. We build a
translation system that addresses these challenges using eight European
language families as our test ground. Firstly, we add the source and the target
family labels and study intra-family and inter-family influences for effective
cross-lingual transfer. We achieve an improvement of +9.9 in BLEU score for
English-Swedish translation using eight families compared to the single-family
multi-source multi-target baseline. Moreover, we find that training on two
neighboring families closest to the low-resource language is often enough.
Secondly, we construct an ablation study and find that reasonably good results
can be achieved even with considerably less target data. Thirdly, we address
the variable-binding problem by building an order-preserving named entity
translation model. We obtain 60.6% accuracy in qualitative evaluation where our
translations are akin to human translations in a preliminary study.
| 2,018 | Computation and Language |
Event Extraction with Generative Adversarial Imitation Learning | We propose a new method for event extraction (EE) task based on an imitation
learning framework, specifically, inverse reinforcement learning (IRL) via
generative adversarial network (GAN). The GAN estimates proper rewards
according to the difference between the actions committed by the expert (or
ground truth) and the agent among complicated states in the environment. EE
task benefits from these dynamic rewards because instances and labels vary in
difficulty and the gains are expected to be diverse -- e.g.,
an ambiguous but correctly detected trigger or argument should receive high
gains -- while the traditional RL models usually neglect such differences and
pay equal attention to all instances. Moreover, our experiments also
demonstrate that the proposed framework outperforms state-of-the-art methods,
without explicit feature engineering.
| 2,018 | Computation and Language |
Stochastic Answer Networks for Natural Language Inference | We propose a stochastic answer network (SAN) to explore multi-step inference
strategies in Natural Language Inference. Rather than directly predicting the
results given the inputs, the model maintains a state and iteratively refines
its predictions. Our experiments show that SAN achieves the state-of-the-art
results on three benchmarks: Stanford Natural Language Inference (SNLI)
dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora
Question Pairs dataset.
| 2,019 | Computation and Language |
Entity-aware Image Caption Generation | Current image captioning approaches generate descriptions which lack specific
information, such as named entities that are involved in the images. In this
paper we propose a new task which aims to generate informative image captions,
given images and hashtags as input. We propose a simple but effective approach
to tackle this problem. We first train a convolutional neural network - long
short-term memory network (CNN-LSTM) model to generate a template caption
based on the input image. Then we use a knowledge graph based collective
inference algorithm to fill in the template with specific named entities
retrieved via the hashtags. Experiments on a new benchmark dataset collected
from Flickr show that our model generates news-style image descriptions with
much richer information. Our model outperforms unimodal baselines significantly
with various evaluation metrics.
| 2,018 | Computation and Language |
Taylor's law for Human Linguistic Sequences | Taylor's law describes the fluctuation characteristics underlying a system in
which the variance of an event within a time span grows by a power law with
respect to the mean. Although Taylor's law has been applied in many natural and
social systems, its application for language has been scarce. This article
describes a new quantification of Taylor's law in natural language and reports
an analysis of over 1100 texts across 14 languages. The Taylor exponents of
written natural language texts were found to exhibit almost the same value. The
exponent was also compared for other language-related data, such as
child-directed speech, music, and programming language code. The results show
how the Taylor exponent serves to quantify the fundamental structural
complexity underlying linguistic time series. The article also shows the
applicability of these findings in evaluating language models.
| 2,018 | Computation and Language |
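Taylor's law posits that, across segments of a text, the variance of a word's count grows as a power of its mean, variance ≈ c · mean^α, so the Taylor exponent α is the slope of a log-log regression. The sketch below estimates α under an assumed segmentation scheme (fixed-size windows); the windowing and toy corpus are illustrative assumptions, not the article's exact protocol.

```python
# Sketch of estimating a Taylor exponent from per-window word counts.
import numpy as np

def taylor_exponent(tokens, window=1000):
    windows = [tokens[i:i + window] for i in range(0, len(tokens) - window + 1, window)]
    means, variances = [], []
    for w in set(tokens):
        counts = np.array([seg.count(w) for seg in windows], dtype=float)
        if counts.mean() > 0 and counts.var() > 0:
            means.append(counts.mean())
            variances.append(counts.var())
    # slope of log(variance) vs. log(mean) = Taylor exponent alpha
    alpha, _ = np.polyfit(np.log(means), np.log(variances), 1)
    return alpha

rng = np.random.default_rng(0)
toy_text = rng.choice(["the", "of", "model", "language", "rare"],
                      p=[.4, .3, .15, .1, .05], size=20000).tolist()
print(round(taylor_exponent(toy_text), 2))   # near 1 for this i.i.d. toy corpus
```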
Unsupervised Natural Language Generation with Denoising Autoencoders | Generating text from structured data is important for various tasks such as
question answering and dialog systems. We show that in at least one domain,
without any supervision and only based on unlabeled text, we are able to build
a Natural Language Generation (NLG) system with higher performance than
supervised approaches. In our approach, we interpret the structured data as a
corrupt representation of the desired output and use a denoising auto-encoder
to reconstruct the sentence. We show how to introduce noise into training
examples that do not contain structured data, and that the resulting denoising
auto-encoder generalizes to generate correct sentences when given structured
data.
| 2,018 | Computation and Language |
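The training-data construction described above treats structured data as a corrupted sentence and learns to reconstruct the sentence from it. The sketch below shows one plausible corruption function for building such (noisy input, target) pairs from plain text; the specific noise (dropping function words, light local shuffling) is an assumption for illustration, not the paper's exact recipe.

```python
# Illustrative corruption of a plain sentence into a pseudo-"structured" input
# for training a denoising autoencoder to reconstruct the original sentence.
import random

FUNCTION_WORDS = {"the", "a", "an", "is", "are", "was", "of", "in", "to", "and"}

def corrupt(sentence, shuffle_window=2, seed=0):
    rng = random.Random(seed)
    content = [w for w in sentence.lower().split() if w not in FUNCTION_WORDS]
    # light local shuffling so the model cannot rely on exact word order
    for i in range(0, len(content) - 1, shuffle_window):
        if rng.random() < 0.5:
            content[i], content[i + 1] = content[i + 1], content[i]
    return content

sentence = "The restaurant is located in the city centre and serves Italian food"
training_pair = (corrupt(sentence), sentence)   # (noisy input, reconstruction target)
print(training_pair)
```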
Multi-task Learning for Universal Sentence Embeddings: A Thorough
Evaluation using Transfer and Auxiliary Tasks | Learning distributed sentence representations is one of the key challenges in
natural language processing. Previous work demonstrated that a recurrent neural
network (RNN) based sentence encoder trained on a large collection of annotated
natural language inference data is effective for transfer learning to
facilitate other related tasks. In this paper, we show that joint
learning of multiple tasks results in better generalizable sentence
representations by conducting extensive experiments and analysis comparing the
multi-task and single-task learned sentence encoders. The quantitative analysis
using auxiliary tasks shows that multi-task learning helps to embed better
semantic information in the sentence representations compared to single-task
learning. In addition, we compare multi-task sentence encoders with
contextualized word representations and show that combining both of them can
further boost the performance of transfer learning.
| 2,018 | Computation and Language |
A Stable and Effective Learning Strategy for Trainable Greedy Decoding | Beam search is a widely used approximate search strategy for neural network
decoders, and it generally outperforms simple greedy decoding on tasks like
machine translation. However, this improvement comes at substantial
computational cost. In this paper, we propose a flexible new method that allows
us to reap nearly the full benefits of beam search with nearly no additional
computational cost. The method revolves around a small neural network actor
that is trained to observe and manipulate the hidden state of a
previously-trained decoder. To train this actor network, we introduce the use
of a pseudo-parallel corpus built using the output of beam search on a base
model, ranked by a target quality metric like BLEU. Our method is inspired by
earlier work on this problem, but requires no reinforcement learning, and can
be trained reliably on a range of models. Experiments on three parallel corpora
and three architectures show that the method yields substantial improvements in
translation quality and speed over each base system.
| 2,018 | Computation and Language |
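The actor described above is a small network that reads the frozen decoder's hidden state and nudges it before the next-token prediction. A minimal sketch of that interface is below; the shapes, the bottleneck size, and the additive adjustment are assumptions, and the supervision from metric-ranked beam-search outputs is not shown.

```python
# Minimal sketch of an actor that adjusts a frozen decoder's hidden state.
import torch
import torch.nn as nn

class GreedyDecodingActor(nn.Module):
    def __init__(self, hidden=256, bottleneck=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, bottleneck), nn.Tanh(),
                                 nn.Linear(bottleneck, hidden))

    def forward(self, decoder_state):
        # returns a manipulated hidden state; the base decoder stays frozen
        return decoder_state + self.net(decoder_state)

actor = GreedyDecodingActor()
h = torch.randn(1, 256)     # hidden state of a previously-trained decoder
h_adjusted = actor(h)       # fed back into the decoder's output layer
```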
Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing | Building a semantic parser quickly in a new domain is a fundamental challenge
for conversational interfaces, as current semantic parsers require expensive
supervision and lack the ability to generalize to new domains. In this paper,
we introduce a zero-shot approach to semantic parsing that can parse utterances
in unseen domains while only being trained on examples in other source domains.
First, we map an utterance to an abstract, domain-independent, logical form
that represents the structure of the logical form, but contains slots instead
of KB constants. Then, we replace slots with KB constants via lexical alignment
scores and global inference. Our model reaches an average accuracy of 53.4% on
7 domains in the Overnight dataset, substantially better than other zero-shot
baselines, and performs as well as a parser trained on over 30% of the target
domain examples.
| 2,018 | Computation and Language |
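The two-stage pipeline above first produces a domain-independent logical form with slots and then fills the slots with KB constants via lexical alignment. The toy sketch below illustrates only the slot-filling step; the character-overlap alignment score, the abstract form, and the tiny "KB" are illustrative assumptions, not the paper's scoring model or global inference.

```python
# Toy slot filling by lexical alignment against a target domain's KB constants.
def char_overlap(a, b):
    return len(set(a) & set(b)) / len(set(a) | set(b))

def fill_slots(abstract_form, slots, kb_constants):
    filled = abstract_form
    for slot, utterance_span in slots.items():
        # pick the KB constant best aligned with the utterance span
        best = max(kb_constants, key=lambda c: char_overlap(utterance_span, c))
        filled = filled.replace(slot, best)
    return filled

abstract_form = "argmax(REL_1(TYPE_1), REL_2)"   # structure only, no KB constants
slots = {"TYPE_1": "restaurants", "REL_1": "located in paris", "REL_2": "rating"}
kb = ["restaurant", "locatedIn", "avgRating", "cuisine"]
print(fill_slots(abstract_form, slots, kb))
```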