Titles | Abstracts | Years | Categories
---|---|---|---|
MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source
Transformer for Automatic Post-Editing | This paper describes the Microsoft and University of Edinburgh submission to
the Automatic Post-editing shared task at WMT2018. Based on training data and
systems from the WMT2017 shared task, we re-implement our own models from the
last shared task and introduce improvements based on extensive parameter
sharing. Next we experiment with our implementation of dual-source transformer
models and data selection for the IT domain. Our submission decisively wins
the SMT post-editing sub-task, establishing the new state of the art, and is a
very close second (or equal, 16.46 vs 16.50 TER) in the NMT sub-task. Based on
the rather weak results in the NMT sub-task, we hypothesize that
neural-on-neural APE might not actually be useful.
| 2018 | Computation and Language |
Microsoft's Submission to the WMT2018 News Translation Task: How I
Learned to Stop Worrying and Love the Data | This paper describes the Microsoft submission to the WMT2018 news translation
shared task. We participated in one language direction -- English-German. Our
system follows current best-practice and combines state-of-the-art models with
new data filtering (dual conditional cross-entropy filtering) and sentence
weighting methods. We trained fairly standard Transformer-big models with an
updated version of Edinburgh's training scheme for WMT2017 and experimented
with different filtering schemes for Paracrawl. According to automatic metrics
(BLEU) we reached the highest score for this subtask with a nearly 2 BLEU point
margin over the next strongest system. Based on human evaluation we ranked
first among constrained systems. We believe this is mostly caused by our data
filtering/weighting regime.
| 2018 | Computation and Language |
Dual Conditional Cross-Entropy Filtering of Noisy Parallel Corpora | In this work we introduce dual conditional cross-entropy filtering for noisy
parallel data. For each sentence pair of the noisy parallel corpus we compute
cross-entropy scores according to two inverse translation models trained on
clean data. We penalize divergent cross-entropies and weigh the penalty by the
cross-entropy average of both models. Sorting or thresholding according to
these scores results in better subsets of parallel data. We achieve higher BLEU
scores with models trained on parallel data filtered only from Paracrawl than
with models trained on clean WMT data. We further evaluate our method in the
context of the WMT2018 shared task on parallel corpus filtering and achieve the
overall highest ranking scores of the shared task, scoring top in three out of
four subtasks.
| 2019 | Computation and Language |
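Below is a minimal sketch of the scoring rule described in the dual conditional cross-entropy abstract above, assuming word-normalized cross-entropies from the two inverse translation models are already available. The exact combination (absolute divergence plus the average) and the `filter_corpus` helper are illustrative reconstructions from the abstract, not the authors' released implementation.

```python
def dual_xent_score(h_fwd: float, h_bwd: float) -> float:
    """Score a sentence pair from two word-normalized cross-entropies.

    h_fwd: cross-entropy of the target given the source under the clean
    source->target model; h_bwd: the same under the inverse model.
    Lower is better: divergent cross-entropies are penalized, and the
    penalty is weighted by the average cross-entropy of both models.
    """
    return abs(h_fwd - h_bwd) + 0.5 * (h_fwd + h_bwd)


def filter_corpus(scored_pairs, keep_ratio=0.5):
    """Sort noisy pairs (src, tgt, h_fwd, h_bwd) by score; keep the best fraction."""
    ranked = sorted(scored_pairs, key=lambda p: dual_xent_score(p[2], p[3]))
    return ranked[: int(len(ranked) * keep_ratio)]


if __name__ == "__main__":
    noisy = [
        ("guten tag", "good day", 1.2, 1.3),        # models agree, both confident
        ("hallo welt", "buy pills now", 6.0, 1.1),  # divergent scores -> penalized
    ]
    print(filter_corpus(noisy, keep_ratio=0.5))
```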
Improving Visual Relationship Detection using Semantic Modeling of Scene
Descriptions | Structured scene descriptions of images are useful for the automatic
processing and querying of large image databases. We show how the combination
of a semantic and a visual statistical model can improve on the task of mapping
images to their associated scene description. In this paper we consider scene
descriptions which are represented as a set of triples (subject, predicate,
object), where each triple consists of a pair of visual objects, which appear
in the image, and the relationship between them (e.g. man-riding-elephant,
man-wearing-hat). We combine a standard visual model for object detection,
based on convolutional neural networks, with a latent variable model for link
prediction. We apply multiple state-of-the-art link prediction methods and
compare their capability for visual relationship detection. One of the main
advantages of link prediction methods is that they can also generalize to
triples that have never been observed in the training data. Our experimental
results on the recently published Stanford Visual Relationship dataset, a
challenging real world dataset, show that the integration of a semantic model
using link prediction methods can significantly improve the results for visual
relationship detection. Our combined approach achieves superior performance
compared to the state-of-the-art method from the Stanford computer vision
group.
| 2018 | Computation and Language |
A Multilingual Information Extraction Pipeline for Investigative
Journalism | We introduce an advanced information extraction pipeline to automatically
process very large collections of unstructured textual data for the purpose of
investigative journalism. The pipeline serves as a new input processor for the
upcoming major release of our New/s/leak 2.0 software, which we develop in
cooperation with a large German news organization. The use case is that
journalists receive a large collection of files, up to several gigabytes in
size, with unknown contents. Collections may originate either from official
disclosures of documents, e.g. Freedom of Information Act requests, or
unofficial data leaks. Our software prepares a visually-aided exploration of
the collection to quickly learn about potential stories contained in the data.
It is based on the automatic extraction of entities and their co-occurrence in
documents. In contrast to comparable projects, we focus on the following three
major requirements particularly serving the use case of investigative
journalism in cross-border collaborations: 1) composition of multiple
state-of-the-art NLP tools for entity extraction, 2) support of multi-lingual
document sets in up to 40 languages, 3) fast and easy-to-use extraction of
full-text, metadata and entities from various file formats.
| 2018 | Computation and Language |
Parameter Sharing Methods for Multilingual Self-Attentional Translation
Models | In multilingual neural machine translation, it has been shown that sharing a
single translation model between multiple languages can achieve competitive
performance, sometimes even leading to performance gains over bilingually
trained models. However, these improvements are not uniform; often multilingual
parameter sharing results in a decrease in accuracy due to translation models
not being able to accommodate different languages in their limited parameter
space. In this work, we examine parameter sharing techniques that strike a
happy medium between full sharing and individual training, specifically
focusing on the self-attentional Transformer model. We find that the full
parameter sharing approach leads to increases in BLEU scores mainly when the
target languages are from a similar language family. However, even in the case
where target languages are from different families where full parameter sharing
leads to a noticeable drop in BLEU scores, our proposed methods for partial
sharing of parameters can lead to substantial improvements in translation
accuracy.
| 2018 | Computation and Language |
Towards Automated Customer Support | Recent years have seen growing interest in conversational agents, such as
chatbots, which are a very good fit for automated customer support because the
domain in which they need to operate is narrow. This interest was in part
inspired by recent advances in neural machine translation, esp. the rise of
sequence-to-sequence (seq2seq) and attention-based models such as the
Transformer, which have been applied to various other tasks and have opened new
research directions in question answering, chatbots, and conversational
systems. Still, in many cases, it might be feasible and even preferable to use
simple information retrieval techniques. Thus, here we compare three different
models: (i) a retrieval model, (ii) a sequence-to-sequence model with attention,
and (iii) Transformer. Our experiments with the Twitter Customer Support
Dataset, which contains over two million posts from customer support services
of twenty major brands, show that the seq2seq model outperforms the other two
in terms of semantics and word overlap.
| 2018 | Computation and Language |
Exploring Gap Filling as a Cheaper Alternative to Reading Comprehension
Questionnaires when Evaluating Machine Translation for Gisting | A popular application of machine translation (MT) is gisting: MT is consumed
as is to make sense of text in a foreign language. Evaluation of the usefulness
of MT for gisting is surprisingly uncommon. The classical method uses reading
comprehension questionnaires (RCQ), in which informants are asked to answer
professionally-written questions in their language about a foreign text that
has been machine-translated into their language. Recently, gap-filling (GF), a
form of cloze testing, has been proposed as a cheaper alternative to RCQ. In
GF, certain words are removed from reference translations and readers are asked
to fill the gaps left using the machine-translated text as a hint. This paper
reports, for the first time, a comparative evaluation, using both RCQ and GF, of
translations from multiple MT systems for the same foreign texts, and a
systematic study on the effect of variables such as gap density, gap-selection
strategies, and document context in GF. The main findings of the study are: (a)
both RCQ and GF clearly identify MT to be useful, (b) global RCQ and GF
rankings for the MT systems are mostly in agreement, (c) GF scores vary very
widely across informants, making comparisons among MT systems hard, and (d)
unlike RCQ, which is framed around documents, GF evaluation can be framed at
the sentence level. These findings support the use of GF as a cheaper
alternative to RCQ.
| 2018 | Computation and Language |
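As a concrete illustration of the gap-filling (GF) protocol described above, here is a minimal sketch of generating and scoring GF items from a reference translation. The random gap selection, the default gap density, and exact-match scoring are placeholder assumptions; the paper studies several gap-selection strategies and densities.

```python
import random


def make_gap_filling_item(reference: str, gap_density: float = 0.2, seed: int = 0):
    """Remove a fraction of tokens from a reference translation.

    Returns the gapped text shown to informants and the answer key; informants
    fill the gaps using the machine-translated text as a hint.
    """
    rng = random.Random(seed)
    tokens = reference.split()
    n_gaps = max(1, int(len(tokens) * gap_density))
    gap_positions = set(rng.sample(range(len(tokens)), n_gaps))
    answers = {i: tokens[i] for i in gap_positions}
    gapped = ["____" if i in answers else tok for i, tok in enumerate(tokens)]
    return " ".join(gapped), answers


def score_informant(filled: dict, answers: dict) -> float:
    """Fraction of gaps filled with the exact reference token."""
    hits = sum(filled.get(i, "").lower() == w.lower() for i, w in answers.items())
    return hits / len(answers)


if __name__ == "__main__":
    gapped, key = make_gap_filling_item("the cat sat on the mat near the front door")
    print(gapped)
    print(score_informant(dict(key), key))  # a perfect informant scores 1.0
```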
Chinese Pinyin Aided IME, Input What You Have Not Keystroked Yet | A Chinese pinyin input method engine (IME) converts pinyin into characters so
that Chinese characters can be conveniently entered into a computer through a
common keyboard. IMEs rely on their core component, pinyin-to-character
conversion (P2C). Usually, Chinese IMEs simply predict a list of character
sequences for the user to choose from, based only on the pinyin input at each
turn. However, Chinese input is a multi-turn online procedure, which can be
exploited to further improve the user experience. This paper thus introduces,
for the first time, a sequence-to-sequence model with a gated-attention
mechanism for the core task in IMEs. The proposed neural P2C model is learned
by encoding the previous input utterance as extra context, enabling our IME to
predict a character sequence from incomplete pinyin input. Our model is
evaluated on different benchmark datasets and shows large user experience
improvements compared to traditional models, demonstrating the first
engineering practice of building a Chinese aided IME.
| 2018 | Computation and Language |
Future-Prediction-Based Model for Neural Machine Translation | We propose a novel model for Neural Machine Translation (NMT). Unlike the
conventional method, our model predicts the future text length and words
at each decoding time step, so that generation can be guided by the
information from this future prediction. With such information, the model does
not stop generating before it has translated enough content. Experimental
results demonstrate that our model can significantly outperform the baseline
models. In addition, our analysis shows that our model is effective at
predicting the length and words of the untranslated content.
| 2018 | Computation and Language |
Chittron: An Automatic Bangla Image Captioning System | Automatic image caption generation aims to produce an accurate description of
an image in natural language automatically. However, Bangla, the fifth most
widely spoken language in the world, lags considerably behind in research and
development in this domain. Besides, while there are many established datasets
related to image annotation in English, no such resource exists for
Bangla yet. Hence, this paper outlines the development of "Chittron", an
automatic image captioning system in Bangla. Moreover, to address the data set
availability issue, a collection of 16,000 Bangladeshi contextual images has
been accumulated and manually annotated in Bangla. This data set is then used
to train a model which integrates a pre-trained VGG16 image embedding model
with stacked LSTM layers. The model is trained to predict the caption when the
input is an image, one word at a time. The results show that the model has
successfully been able to learn a working language model and to generate
captions of images quite accurately in many cases. The results are evaluated
mainly qualitatively. However, BLEU scores are also reported. It is expected
that a better result can be obtained with a bigger and more varied data set.
| 2018 | Computation and Language |
Contextual Neural Model for Translating Bilingual Multi-Speaker
Conversations | Recent works in neural machine translation have begun to explore document
translation. However, translating online multi-speaker conversations is still
an open problem. In this work, we propose the task of translating Bilingual
Multi-Speaker Conversations, and explore neural architectures which exploit
both source and target-side conversation histories for this task. To initiate
an evaluation for this task, we introduce datasets extracted from Europarl v7
and OpenSubtitles2016. Our experiments on four language-pairs confirm the
significance of leveraging conversation history, both in terms of BLEU and
manual evaluation.
| 2018 | Computation and Language |
Trivial Transfer Learning for Low-Resource Neural Machine Translation | Transfer learning has been proven as an effective technique for neural
machine translation under low-resource conditions. Existing methods require a
common target language, language relatedness, or specific training tricks and
regimes. We present a simple transfer learning method, where we first train a
"parent" model for a high-resource language pair and then continue the training
on a low-resource pair only by replacing the training corpus. This "child" model
performs significantly better than the baseline trained for the low-resource pair
only. We are the first to show this for targeting different languages, and we
observe the improvements even for unrelated languages with different alphabets.
| 2018 | Computation and Language |
Neural Ranking Models for Temporal Dependency Structure Parsing | We design and build the first neural temporal dependency parser. It utilizes
a neural ranking model with minimal feature engineering, and parses time
expressions and events in a text into a temporal dependency tree structure. We
evaluate our parser on two domains: news reports and narrative stories. In a
parsing-only evaluation setup where gold time expressions and events are
provided, our parser reaches 0.81 and 0.70 f-score on unlabeled and labeled
parsing respectively, a result that is very competitive against alternative
approaches. In an end-to-end evaluation setup where time expressions and events
are automatically recognized, our parser beats two strong baselines on both
data domains. Our experimental results and discussions shed light on the nature
of temporal dependency structures in different domains and provide insights
that we believe will be valuable to future research in this area.
| 2018 | Computation and Language |
Neural Character-based Composition Models for Abuse Detection | The advent of social media in recent years has fed into some highly
undesirable phenomena such as proliferation of offensive language, hate speech,
sexist remarks, etc. on the Internet. In light of this, there have been several
efforts to automate the detection and moderation of such abusive content.
However, deliberate obfuscation of words by users to evade detection poses a
serious challenge to the effectiveness of these efforts. The current state of
the art approaches to abusive language detection, based on recurrent neural
networks, do not explicitly address this problem and resort to a generic OOV
(out of vocabulary) embedding for unseen words. However, in using a single
embedding for all unseen words we lose the ability to distinguish between
obfuscated and non-obfuscated or rare words. In this paper, we address this
problem by designing a model that can compose embeddings for unseen words. We
experimentally demonstrate that our approach significantly advances the current
state of the art in abuse detection on datasets from two different domains,
namely Twitter and Wikipedia talk page.
| 2018 | Computation and Language |
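The abstract above argues for composing embeddings of unseen, often obfuscated, words instead of mapping them all to one OOV vector. The sketch below conveys the idea with a simple average over randomly initialized character n-gram vectors; the paper's actual models are neural character-based composition functions trained end to end, so the dimensions, n-gram range, and random vectors here are stand-ins.

```python
import numpy as np


def char_ngrams(word: str, n_min: int = 2, n_max: int = 4):
    """All character n-grams of a word, with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]


class ComposedEmbeddings:
    """Back off from word vectors to a composition over character n-grams.

    Composition keeps obfuscated spellings (e.g. 'id.iot') close to the
    original word instead of collapsing every unseen token onto one OOV vector.
    """

    def __init__(self, dim: int = 50, seed: int = 0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.word_vecs = {}    # known-word lookup table
        self.ngram_vecs = {}   # lazily initialized n-gram vectors

    def _ngram_vec(self, gram):
        if gram not in self.ngram_vecs:
            self.ngram_vecs[gram] = self.rng.normal(size=self.dim)
        return self.ngram_vecs[gram]

    def __call__(self, word: str) -> np.ndarray:
        if word in self.word_vecs:
            return self.word_vecs[word]
        return np.mean([self._ngram_vec(g) for g in char_ngrams(word)], axis=0)


if __name__ == "__main__":
    emb = ComposedEmbeddings()
    a, b = emb("idiot"), emb("id.iot")
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"cosine(obfuscated, original) = {cos:.2f}")
```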
Zero-shot User Intent Detection via Capsule Neural Networks | User intent detection plays a critical role in question-answering and dialog
systems. Most previous works treat intent detection as a classification problem
where utterances are labeled with predefined intents. However, it is
labor-intensive and time-consuming to label users' utterances as intents are
diversely expressed and novel intents continually emerge. Instead, we
study the zero-shot intent detection problem, which aims to detect emerging
user intents where no labeled utterances are currently available. We propose
two capsule-based architectures: INTENTCAPSNET that extracts semantic features
from utterances and aggregates them to discriminate existing intents, and
INTENTCAPSNET-ZSL which gives INTENTCAPSNET the zero-shot learning ability to
discriminate emerging intents via knowledge transfer from existing intents.
Experiments on two real-world datasets show that our model not only can better
discriminate diversely expressed existing intents, but is also able to
discriminate emerging intents when no labeled utterances are available.
| 2018 | Computation and Language |
MTNT: A Testbed for Machine Translation of Noisy Text | Noisy or non-standard input text can cause disastrous mistranslations in most
modern Machine Translation (MT) systems, and there has been growing research
interest in creating noise-robust MT systems. However, as of yet there are no
publicly available parallel corpora with naturally occurring noisy inputs
and translations, and thus previous work has resorted to evaluating on
synthetically created datasets. In this paper, we propose a benchmark dataset
for Machine Translation of Noisy Text (MTNT), consisting of noisy comments on
Reddit (www.reddit.com) and professionally sourced translations. We
commissioned translations of English comments into French and Japanese, as well
as French and Japanese comments into English, on the order of 7k-37k sentences
per language pair. We qualitatively and quantitatively examine the types of
noise included in this dataset, then demonstrate that existing MT models fail
badly on a number of noise-related phenomena, even after performing adaptation
on a small training set of in-domain data. This indicates that this dataset can
provide an attractive testbed for methods tailored to handling noisy text in
MT. The data is publicly available at www.cs.cmu.edu/~pmichel1/mtnt/.
| 2018 | Computation and Language |
Modeling Topical Coherence in Discourse without Supervision | Coherence of text is an important attribute to be measured for both manually
and automatically generated discourse; but well-defined quantitative metrics
for it are still elusive. In this paper, we present a metric for scoring
topical coherence of an input paragraph on a real-valued scale by analyzing its
underlying topical structure. We first extract all possible topics that the
sentences of a paragraph of text are related to. Coherence of this text is then
measured by computing: (a) the degree of uncertainty of the topics with respect
to the paragraph, and (b) the relatedness between these topics. All components
of our modular framework rely only on unlabeled data and WordNet, thus making
it completely unsupervised, which is an important feature for general-purpose
usage of any metric. Experiments are conducted on two datasets - a publicly
available dataset for essay grading (representing human discourse), and a
synthetic dataset constructed by mixing content from multiple paragraphs
covering diverse topics. Our evaluation shows that the measured coherence
scores are positively correlated with the ground truth for both the datasets.
Further validation to our coherence scores is provided by conducting human
evaluation on the synthetic data, showing a significant agreement of 79.3%.
| 2018 | Computation and Language |
Data Augmentation for Neural Online Chat Response Selection | Data augmentation seeks to manipulate the available data for training to
improve the generalization ability of models. We investigate two data
augmentation proxies, permutation and flipping, for the neural dialog response
selection task on various models over multiple datasets, including both Chinese
and English languages. Different from standard data augmentation techniques,
our method combines the original and synthesized data for prediction. Empirical
results show that our approach can gain 1 to 3 recall-at-1 points over baseline
models in both full-scale and small-scale settings.
| 2018 | Computation and Language |
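A small sketch of the two augmentation proxies named above, permutation and flipping, applied to (context, response) training pairs. The cap on the number of permutations and the way the flipped pair is formed are assumptions made for illustration.

```python
import itertools
import random


def augment_dialogue(context_turns, response, max_permutations=3, seed=0):
    """Generate extra training pairs by permuting the context turns and by
    flipping context and response, keeping the original pair as well."""
    rng = random.Random(seed)
    pairs = [(list(context_turns), response)]            # original pair
    perms = list(itertools.permutations(context_turns))
    rng.shuffle(perms)
    for perm in perms[:max_permutations]:
        if list(perm) != list(context_turns):
            pairs.append((list(perm), response))          # permutation proxy
    pairs.append(([response], " ".join(context_turns)))   # flipping proxy
    return pairs


if __name__ == "__main__":
    turns = ["my order never arrived", "can you check the tracking number"]
    for ctx, resp in augment_dialogue(turns, "sure, could you share your order id"):
        print(ctx, "->", resp)
```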
Adaptive Semi-supervised Learning for Cross-domain Sentiment
Classification | We consider the cross-domain sentiment classification problem, where a
sentiment classifier is to be learned from a source domain and to be
generalized to a target domain. Our approach explicitly minimizes the distance
between the source and the target instances in an embedded feature space. With
the difference between source and target minimized, we then exploit additional
information from the target domain by consolidating the idea of semi-supervised
learning, for which we jointly employ two regularizations -- entropy
minimization and self-ensemble bootstrapping -- to incorporate the unlabeled
target data for classifier refinement. Our experimental results demonstrate
that the proposed approach can better leverage unlabeled data from the target
domain and achieve substantial improvements over baseline methods in various
experimental settings.
| 2018 | Computation and Language |
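A sketch of the two regularizations mentioned above: entropy minimization on unlabeled target-domain data and a self-ensembling consistency term against an exponential-moving-average teacher. The loss weights, the KL-based consistency, and the toy linear model are assumptions rather than the paper's exact objective.

```python
import copy

import torch
import torch.nn.functional as F


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean entropy of the predictive distribution on unlabeled target data."""
    p = F.softmax(logits, dim=-1)
    return -(p * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()


@torch.no_grad()
def ema_update(teacher, student, decay: float = 0.99):
    """Self-ensembling: the teacher is an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)


def training_step(student, teacher, src_x, src_y, tgt_x, lam_ent=0.1, lam_cons=1.0):
    """Supervised source loss + entropy minimization + consistency with the teacher."""
    sup = F.cross_entropy(student(src_x), src_y)
    tgt_logits = student(tgt_x)
    ent = entropy_loss(tgt_logits)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(tgt_x), dim=-1)
    cons = F.kl_div(F.log_softmax(tgt_logits, dim=-1), teacher_probs,
                    reduction="batchmean")
    return sup + lam_ent * ent + lam_cons * cons


if __name__ == "__main__":
    student = torch.nn.Linear(8, 3)           # stand-in for the sentiment classifier
    teacher = copy.deepcopy(student)
    loss = training_step(student, teacher,
                         torch.randn(4, 8), torch.tensor([0, 1, 2, 0]),
                         torch.randn(4, 8))
    loss.backward()
    ema_update(teacher, student)
    print(float(loss))
```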
Crowdsourcing Semantic Label Propagation in Relation Classification | Distant supervision is a popular method for performing relation extraction
from text that is known to produce noisy labels. Most progress in relation
extraction and classification has been made with crowdsourced corrections to
distant-supervised labels, and there is evidence that indicates still more
would be better. In this paper, we explore the problem of propagating human
annotation signals gathered for open-domain relation classification through the
CrowdTruth methodology for crowdsourcing, which captures ambiguity in
annotations by measuring inter-annotator disagreement. Our approach propagates
annotations to sentences that are similar in a low dimensional embedding space,
expanding the number of labels by two orders of magnitude. Our experiments show
significant improvement in a sentence-level multi-class relation classifier.
| 2022 | Computation and Language |
Multilingual Clustering of Streaming News | Clustering news across languages enables efficient media monitoring by
aggregating articles from multilingual sources into coherent stories. Doing so
in an online setting allows scalable processing of massive news streams. To
this end, we describe a novel method for clustering an incoming stream of
multilingual documents into monolingual and crosslingual story clusters. Unlike
typical clustering approaches that consider a small and known number of labels,
we tackle the problem of discovering an ever growing number of cluster labels
in an online fashion, using real news datasets in multiple languages. Our
method is simple to implement, computationally efficient and produces
state-of-the-art results on datasets in German, English and Spanish.
| 2018 | Computation and Language |
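A minimal sketch of the online clustering idea described above: each incoming document vector joins its most similar cluster or opens a new one, so the number of cluster labels grows with the stream. It assumes documents are already embedded in a shared cross-lingual vector space and uses a plain cosine threshold, which simplifies the paper's actual ranking model.

```python
import numpy as np


class OnlineClusterer:
    """Assign each incoming document vector to its nearest cluster centroid,
    or open a new cluster when no centroid is similar enough. The number of
    clusters is not fixed in advance and grows with the stream."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.centroids = []   # running mean vector per cluster
        self.sizes = []

    def add(self, vec: np.ndarray) -> int:
        vec = vec / (np.linalg.norm(vec) + 1e-12)
        if self.centroids:
            sims = [vec @ (c / (np.linalg.norm(c) + 1e-12)) for c in self.centroids]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                n = self.sizes[best]
                self.centroids[best] = (self.centroids[best] * n + vec) / (n + 1)
                self.sizes[best] += 1
                return best
        self.centroids.append(vec.copy())
        self.sizes.append(1)
        return len(self.centroids) - 1


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clusterer = OnlineClusterer(threshold=0.7)
    story_a, story_b = rng.normal(size=16), rng.normal(size=16)
    stream = [story_a + 0.1 * rng.normal(size=16) for _ in range(3)] + \
             [story_b + 0.1 * rng.normal(size=16) for _ in range(3)]
    print([clusterer.add(doc) for doc in stream])  # e.g. [0, 0, 0, 1, 1, 1]
```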
Emergence of Communication in an Interactive World with Consistent
Speakers | Training agents to communicate with one another given task-based supervision
only has attracted considerable attention recently, due to the growing interest
in developing models for human-agent interaction. Prior work on the topic
focused on simple environments, where training using policy gradient was
feasible despite the non-stationarity of the agents during training. In this
paper, we present a more challenging environment for testing the emergence of
communication from raw pixels, where training using policy gradient fails. We
propose a new model and training algorithm, that utilizes the structure of a
learned representation space to produce more consistent speakers at the initial
phases of training, which stabilizes learning. We empirically show that our
algorithm substantially improves performance compared to policy gradient. We
also propose a new alignment-based metric for measuring context-independence in
emerged communication and find our method increases context-independence
compared to policy gradient and other competitive baselines.
| 2019 | Computation and Language |
End-to-End Argument Mining for Discussion Threads Based on Parallel
Constrained Pointer Architecture | Argument Mining (AM) is a relatively recent discipline, which concentrates on
extracting claims or premises from discourses, and inferring their structures.
However, many existing works do not consider micro-level AM studies on
discussion threads sufficiently. In this paper, we tackle AM for discussion
threads. Our main contributions are as follows: (1) A novel combination scheme
focusing on micro-level inner- and inter-post schemes for a discussion thread.
(2) Annotation of large-scale civic discussion threads with the scheme. (3)
Parallel constrained pointer architecture (PCPA), a novel end-to-end technique
to discriminate sentence types, inner-post relations, and inter-post
interactions simultaneously. The experimental results demonstrate that our
proposed model shows better accuracy in terms of relation extraction, in
comparison to existing state-of-the-art models.
| 2018 | Computation and Language |
Data-to-Text Generation with Content Selection and Planning | Recent advances in data-to-text generation have led to the use of large-scale
datasets and neural network models which are trained end-to-end, without
explicitly modeling what to say and in what order. In this work, we present a
neural network architecture which incorporates content selection and planning
without sacrificing end-to-end training. We decompose the generation task into
two stages. Given a corpus of data records (paired with descriptive documents),
we first generate a content plan highlighting which information should be
mentioned and in which order and then generate the document while taking the
content plan into account. Automatic and human-based evaluation experiments
show that our model outperforms strong baselines improving the state-of-the-art
on the recently released RotoWire dataset.
| 2019 | Computation and Language |
Affordance Extraction and Inference based on Semantic Role Labeling | Common-sense reasoning is becoming increasingly important for the advancement
of Natural Language Processing. While word embeddings have been very
successful, they cannot explain which aspects of 'coffee' and 'tea' make them
similar, or how they could be related to 'shop'. In this paper, we propose an
explicit word representation that builds upon the Distributional Hypothesis to
represent meaning from semantic roles, and allows inference of relations from
their meshing, as supported by the affordance-based Indexical Hypothesis. We
find that our model improves the state-of-the-art on unsupervised word
similarity tasks while allowing for direct inference of new relations from the
same vector space.
| 2018 | Computation and Language |
Deep learning for language understanding of mental health concepts
derived from Cognitive Behavioural Therapy | In recent years, we have seen deep learning and distributed representations
of words and sentences make impact on a number of natural language processing
tasks, such as similarity, entailment and sentiment analysis. Here we introduce
a new task: understanding of mental health concepts derived from Cognitive
Behavioural Therapy (CBT). We define a mental health ontology based on the CBT
principles, annotate a large corpus where this phenomenon is exhibited and
perform understanding using deep learning and distributed representations. Our
results show that deep learning models combined with word
embeddings or sentence embeddings significantly outperform non-deep-learning
models in this difficult task. This understanding module will be an essential
component of a statistical dialogue system delivering therapy.
| 2018 | Computation and Language |
Automatic Event Salience Identification | Identifying the salience (i.e. importance) of discourse units is an important
task in language understanding. While events play important roles in text
documents, little research exists on analyzing their saliency status. This
paper empirically studies the Event Salience task and proposes two salience
detection models based on content similarities and discourse relations. The
first is a feature based salience model that incorporates similarities among
discourse units. The second is a neural model that captures more complex
relations between discourse units. Tested on our new large-scale event salience
corpus, both methods significantly outperform the strong frequency baseline,
while our neural model further improves the feature based one by a large
margin. Our analyses demonstrate that our neural model captures interesting
connections between salience and discourse unit relations (e.g., scripts and
frame structures).
| 2018 | Computation and Language |
Towards Dynamic Computation Graphs via Sparse Latent Structure | Deep NLP models benefit from underlying structures in the data---e.g., parse
trees---typically extracted using off-the-shelf parsers. Recent attempts to
jointly learn the latent structure encounter a tradeoff: either make
factorization assumptions that limit expressiveness, or sacrifice end-to-end
differentiability. Using the recently proposed SparseMAP inference, which
retrieves a sparse distribution over latent structures, we propose a novel
approach for end-to-end learning of latent structure predictors jointly with a
downstream predictor. To the best of our knowledge, our method is the first to
enable unrestricted dynamic computation graph construction from the global
latent structure, while maintaining differentiability.
| 2018 | Computation and Language |
A3Net: Adversarial-and-Attention Network for Machine Reading
Comprehension | In this paper, we introduce Adversarial-and-attention Network (A3Net) for
Machine Reading Comprehension. This model extends existing approaches from two
perspectives. First, adversarial training is applied to several target
variables within the model, rather than only to the inputs or embeddings. We
control the norm of adversarial perturbations according to the norm of original
target variables, so that we can jointly add perturbations to several target
variables during training. As an effective regularization method, adversarial
training improves robustness and generalization of our model. Second, we
propose a multi-layer attention network utilizing three kinds of
high-efficiency attention mechanisms. Multi-layer attention conducts
interaction between question and passage within each layer, which contributes
to reasonable representation and understanding of the model. Combining these
two contributions, we enhance the diversity of the dataset and the
information-extracting ability of the model at the same time. Meanwhile, we construct A3Net
for the WebQA dataset. Results show that our model outperforms the
state-of-the-art models (improving Fuzzy Score from 73.50% to 77.0%).
| 2018 | Computation and Language |
Multi-Level Structured Self-Attentions for Distantly Supervised Relation
Extraction | Attention mechanisms are often used in deep neural networks for distantly
supervised relation extraction (DS-RE) to distinguish valid from noisy
instances. However, traditional 1-D vector attention models are insufficient
for the learning of different contexts in the selection of valid instances to
predict the relationship for an entity pair. To alleviate this issue, we
propose a novel multi-level structured (2-D matrix) self-attention mechanism
for DS-RE in a multi-instance learning (MIL) framework using bidirectional
recurrent neural networks. In the proposed method, a structured word-level
self-attention mechanism learns a 2-D matrix where each row vector represents a
weight distribution for different aspects of an instance regarding two
entities. Targeting the MIL issue, the structured sentence-level attention
learns a 2-D matrix where each row vector represents a weight distribution on
selection of different valid instances. Experiments conducted on two publicly
available DS-RE datasets show that the proposed framework with a multi-level
structured self-attention mechanism significantly outperforms state-of-the-art
baselines in terms of PR curves, P@N and F1 measures.
| 2018 | Computation and Language |
NTUA-SLP at IEST 2018: Ensemble of Neural Transfer Methods for Implicit
Emotion Classification | In this paper we present our approach to tackle the Implicit Emotion Shared
Task (IEST) organized as part of WASSA 2018 at EMNLP 2018. Given a tweet, from
which a certain word has been removed, we are asked to predict the emotion of
the missing word. In this work, we experiment with neural Transfer Learning
(TL) methods. Our models are based on LSTM networks, augmented with a
self-attention mechanism. We use the weights of various pretrained models, for
initializing specific layers of our networks. We leverage a big collection of
unlabeled Twitter messages, for pretraining word2vec word embeddings and a set
of diverse language models. Moreover, we utilize a sentiment analysis dataset
for pretraining a model, which encodes emotion related information. The
submitted model consists of an ensemble of the aforementioned TL models. Our
team ranked 3rd out of 30 participants, achieving an F1 score of 0.703.
| 2018 | Computation and Language |
emrQA: A Large Corpus for Question Answering on Electronic Medical
Records | We propose a novel methodology to generate domain-specific large-scale
question answering (QA) datasets by re-purposing existing annotations for other
NLP tasks. We demonstrate an instance of this methodology in generating a
large-scale QA dataset for electronic medical records by leveraging existing
expert annotations on clinical notes for various NLP tasks from the community
shared i2b2 datasets. The resulting corpus (emrQA) has 1 million
question-logical form pairs and 400,000+ question-answer evidence pairs. We
characterize the dataset and explore its learning potential by training
baseline models for question to logical form and question to answer mapping.
| 2018 | Computation and Language |
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic
Parsing | This paper proposes a neural semantic parsing approach -- Sequence-to-Action,
which models semantic parsing as an end-to-end semantic graph generation
process. Our method simultaneously leverages the advantages from two recent
promising directions of semantic parsing. Firstly, our model uses a semantic
graph to represent the meaning of a sentence, which is tightly coupled with
knowledge bases. Secondly, by leveraging the powerful representation learning
and prediction ability of neural network models, we propose an RNN model which
can effectively map sentences to action sequences for semantic graph
generation. Experiments show that our method achieves state-of-the-art
performance on the OVERNIGHT dataset and achieves competitive performance on GEO and
ATIS datasets.
| 2018 | Computation and Language |
Open Domain Question Answering Using Early Fusion of Knowledge Bases and
Text | Open Domain Question Answering (QA) is evolving from complex pipelined
systems to end-to-end deep neural networks. Specialized neural models have been
developed for extracting answers from either text alone or Knowledge Bases
(KBs) alone. In this paper we look at a more practical setting, namely QA over
the combination of a KB and entity-linked text, which is appropriate when an
incomplete KB is available with a large text corpus. Building on recent
advances in graph representation learning we propose a novel model, GRAFT-Net,
for extracting answers from a question-specific subgraph containing text and KB
entities and relations. We construct a suite of benchmark tasks for this
problem, varying the difficulty of questions, the amount of training data, and
KB completeness. We show that GRAFT-Net is competitive with the
state-of-the-art when tested using either KBs or text alone, and vastly
outperforms existing methods in the combined setting. Source code is available
at https://github.com/OceanskySun/GraftNet .
| 2018 | Computation and Language |
Mapping Instructions to Actions in 3D Environments with Visual Goal
Prediction | We propose to decompose instruction execution to goal prediction and action
generation. We design a model that maps raw visual observations to goals using
LINGUNET, a language-conditioned image generation network, and then generates
the actions required to complete them. Our model is trained from demonstration
only without external resources. To evaluate our approach, we introduce two
benchmarks for instruction following: LANI, a navigation task; and CHAI, where
an agent executes household instructions. Our evaluation demonstrates the
advantages of our model decomposition, and illustrates the challenges posed by
our new benchmarks.
| 2019 | Computation and Language |
Texar: A Modularized, Versatile, and Extensible Toolkit for Text
Generation | We introduce Texar, an open-source toolkit aiming to support the broad set of
text generation tasks that transform any inputs into natural language, such as
machine translation, summarization, dialog, content manipulation, and so forth.
With the design goals of modularity, versatility, and extensibility in mind,
Texar extracts common patterns underlying the diverse tasks and methodologies,
creates a library of highly reusable modules, and allows arbitrary model
architectures and algorithmic paradigms. In Texar, model architecture,
inference, and learning processes are properly decomposed. Modules at a high
concept level can be freely assembled and plugged in/swapped out. The toolkit
also supports a rich set of large-scale pretrained models. Texar is thus
particularly suitable for researchers and practitioners to do fast prototyping
and experimentation. The versatile toolkit also fosters technique sharing
across different text generation tasks. Texar supports both TensorFlow and
PyTorch, and is released under Apache License 2.0 at https://www.texar.io.
| 2019 | Computation and Language |
Pointwise HSIC: A Linear-Time Kernelized Co-occurrence Norm for Sparse
Linguistic Expressions | In this paper, we propose a new kernel-based co-occurrence measure that can
be applied to sparse linguistic expressions (e.g., sentences) with a very short
learning time, as an alternative to pointwise mutual information (PMI). Just as
PMI is derived from mutual information, we derive this new measure from the
Hilbert--Schmidt independence criterion (HSIC); thus, we call the new measure
the pointwise HSIC (PHSIC). PHSIC can be interpreted as a smoothed variant of
PMI that allows various similarity metrics (e.g., sentence embeddings) to be
plugged in as kernels. Moreover, PHSIC can be estimated by simple and fast
(linear in the size of the data) matrix calculations regardless of whether we
use linear or nonlinear kernels. Empirically, in a dialogue response selection
task, PHSIC is learned thousands of times faster than an RNN-based PMI while
outperforming PMI in accuracy. In addition, we also demonstrate that PHSIC is
beneficial as a criterion of a data selection task for machine translation
owing to its ability to give high (low) scores to a consistent (inconsistent)
pair with other pairs.
| 2018 | Computation and Language |
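A sketch of the linear-kernel case, under the assumption that PHSIC of a pair then reduces to a bilinear form between mean-centered embeddings and the empirical cross-covariance of the observed pairs; normalization details and the nonlinear-kernel estimators from the paper are omitted.

```python
import numpy as np


def fit_phsic_linear(X: np.ndarray, Y: np.ndarray):
    """Estimate a linear-kernel PHSIC scorer from paired embeddings.

    X: (n, d_x) embeddings of the first element of each observed pair.
    Y: (n, d_y) embeddings of the second element.
    Returns a scorer phsic(x, y) = (x - mean_X)^T C (y - mean_Y), where C is
    the empirical cross-covariance; fitting is linear in n, unlike an
    RNN-based PMI estimator.
    """
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    C = (X - mx).T @ (Y - my) / len(X)
    return lambda x, y: float((x - mx) @ C @ (y - my))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    # Correlated pairs: Y is a linear map of X plus noise.
    Y = X @ rng.normal(size=(8, 8)) + 0.1 * rng.normal(size=(1000, 8))
    phsic = fit_phsic_linear(X, Y)
    # Consistent pairs tend to receive higher scores than mismatched ones.
    print("consistent:", round(phsic(X[0], Y[0]), 3),
          "mismatched:", round(phsic(X[0], Y[500]), 3))
```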
RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking
Recipes | Understanding and reasoning about cooking recipes is a fruitful research
direction towards enabling machines to interpret procedural text. In this work,
we introduce RecipeQA, a dataset for multimodal comprehension of cooking
recipes. It comprises approximately 20K instructional recipes with multiple
modalities such as titles, descriptions, and aligned sets of images. With over
36K automatically generated question-answer pairs, we design a set of
comprehension and reasoning tasks that require joint understanding of images
and text, capturing the temporal flow of events and making sense of procedural
knowledge. Our preliminary results indicate that RecipeQA will serve as a
challenging test bed and an ideal benchmark for evaluating machine
comprehension systems. The data and leaderboard are available at
http://hucvl.github.io/recipeqa.
| 2018 | Computation and Language |
Segmentation-free Compositional $n$-gram Embedding | We propose a new type of representation learning method that models words,
phrases and sentences seamlessly. Our method does not depend on word
segmentation or any human-annotated resources (e.g., word dictionaries), yet
it is very effective for noisy corpora written in unsegmented languages such as
Chinese and Japanese. The main idea of our method is to ignore word boundaries
completely (i.e., segmentation-free), and construct representations for all
character $n$-grams in a raw corpus with embeddings of compositional
sub-$n$-grams. Although the idea is simple, our experiments on various
benchmarks and real-world datasets show the efficacy of our proposal.
| 2019 | Computation and Language |
Improving generalization of vocal tract feature reconstruction: from
augmented acoustic inversion to articulatory feature reconstruction without
articulatory data | We address the problem of reconstructing articulatory movements, given audio
and/or phonetic labels. The scarce availability of multi-speaker articulatory
data makes it difficult to learn a reconstruction that generalizes to new
speakers and across datasets. We first consider the XRMB dataset where audio,
articulatory measurements and phonetic transcriptions are available. We show
that phonetic labels, used as input to deep recurrent neural networks that
reconstruct articulatory features, are in general more helpful than acoustic
features in both matched and mismatched training-testing conditions. In a
second experiment, we test a novel approach that attempts to build articulatory
features from prior articulatory information extracted from phonetic labels.
Such an approach recovers vocal tract movements directly from an acoustic-only
dataset without using any articulatory measurement. Results show that
articulatory features generated by this approach can correlate up to 0.59
Pearson product-moment correlation with measured articulatory features.
| 2018 | Computation and Language |
Étude de l'informativité des transcriptions : une approche basée
sur le résumé automatique | In this paper we propose a new approach to evaluate the informativeness of
transcriptions coming from Automatic Speech Recognition systems. This approach,
based on the notion of informativeness, is focused on the framework of
Automatic Text Summarization performed over these transcriptions. We first
estimate the informative content of the various automatic
transcriptions, then we explore the capacity of Automatic Text Summarization to
overcome the informative loss. To do this we use an automatic summary
evaluation protocol without reference (based on the informative content), which
computes the divergence between probability distributions of different textual
representations: manual and automatic transcriptions and their summaries. After
a set of evaluations this analysis allowed us to judge both the quality of the
transcriptions in terms of informativeness and to assess the ability of
automatic text summarization to compensate for the problems raised during the
transcription phase.
| 2018 | Computation and Language |
The Effect of Context on Metaphor Paraphrase Aptness Judgments | We conduct two experiments to study the effect of context on metaphor
paraphrase aptness judgments. The first is an AMT crowdsourcing task in which
speakers rank metaphor paraphrase candidate sentence pairs in short document
contexts for paraphrase aptness. In the second we train a composite DNN to
predict these human judgments, first in binary classifier mode, and then as
gradient ratings. We found that for both mean human judgments and our DNN's
predictions, adding document context compresses the aptness scores towards the
center of the scale, raising low out of context ratings and decreasing high out
of context scores. We offer a provisional explanation for this compression
effect.
| 2018 | Computation and Language |
A Novel Neural Sequence Model with Multiple Attentions for Word Sense
Disambiguation | Word sense disambiguation (WSD) is a well researched problem in computational
linguistics. Different research works have approached this problem in different
ways. Some of the state-of-the-art results achieved for this problem, in terms
of accuracy, come from supervised models, but they often fall behind
flexible knowledge-based solutions which use engineered features as well as
human annotators to disambiguate every target word. This work focuses on
bridging this gap using neural sequence models incorporating the well-known
attention mechanism. The main gist of our work is to combine multiple
attentions on different linguistic features through weights and to provide a
unified framework for doing this. This weighted attention allows the model to
easily disambiguate the sense of an ambiguous word by attending over a suitable
portion of a sentence. Our extensive experiments show that multiple attention
enables a more versatile encoder-decoder model leading to state of the art
results.
| 2018 | Computation and Language |
IEST: WASSA-2018 Implicit Emotions Shared Task | Past shared tasks on emotions use data with both overt expressions of
emotions (I am so happy to see you!) and subtle expressions where the
emotions have to be inferred, for instance from event descriptions. Further,
most datasets do not focus on the cause or the stimulus of the emotion. Here,
for the first time, we propose a shared task where systems have to predict the
emotions in a large automatically labeled dataset of tweets without access to
words denoting emotions. Based on this intention, we call this the Implicit
Emotion Shared Task (IEST) because the systems have to infer the emotion mostly
from the context. Every tweet has an occurrence of an explicit emotion word
that is masked. The tweets are collected in a manner such that they are likely
to include a description of the cause of the emotion - the stimulus.
Altogether, 30 teams submitted results which range from macro F1 scores of 21 %
to 71 %. The baseline (MaxEnt bag of words and bigrams) obtains an F1 score of
60 % which was available to the participants during the development phase. A
study with human annotators suggests that automatic methods outperform human
predictions, possibly by honing in on subtle textual clues not used by humans.
Corpora, resources, and results are available at the shared task website at
http://implicitemotions.wassa2018.com.
| 2018 | Computation and Language |
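A sketch of the data construction described above: a tweet containing an explicit emotion word has that word masked, and the masked word's emotion becomes the label to be inferred from context. The tiny trigger lexicon and regex matching below are placeholders for the organizers' actual collection procedure.

```python
import re

# Illustrative trigger words only; the shared task uses its own emotion lexicon.
EMOTION_WORDS = {
    "happy": "joy", "sad": "sadness", "angry": "anger",
    "afraid": "fear", "surprised": "surprise", "disgusted": "disgust",
}


def make_implicit_emotion_instance(tweet: str):
    """Mask the first explicit emotion word and keep its emotion as the label.

    Systems then have to predict the label from the remaining context only.
    """
    for word, label in EMOTION_WORDS.items():
        pattern = rf"\b{word}\b"
        if re.search(pattern, tweet, flags=re.IGNORECASE):
            masked = re.sub(pattern, "[#TRIGGERWORD#]", tweet, count=1,
                            flags=re.IGNORECASE)
            return masked, label
    return None  # no explicit emotion word: not usable for the task


if __name__ == "__main__":
    print(make_implicit_emotion_instance(
        "I was so happy when the package finally arrived"))
```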
Causal Explanation Analysis on Social Media | Understanding causal explanations - reasons given for happenings in one's
life - has been found to be an important psychological factor linked to
physical and mental health. Causal explanations are often studied through
manual identification of phrases over limited samples of personal writing.
Automatic identification of causal explanations in social media, while
challenging in relying on contextual and sequential cues, offers a larger-scale
alternative to expensive manual ratings and opens the door for new applications
(e.g. studying prevailing beliefs about causes, such as climate change). Here,
we explore automating causal explanation analysis, building on discourse
parsing, and presenting two novel subtasks: causality detection (determining
whether a causal explanation exists at all) and causal explanation
identification (identifying the specific phrase that is the explanation). We
achieve strong accuracies for both tasks but find different approaches best: an
SVM for causality prediction (F1 = 0.791) and a hierarchy of Bidirectional
LSTMs for causal explanation identification (F1 = 0.853). Finally, we explore
applications of our complete pipeline (F1 = 0.868), showing demographic
differences in mentions of causal explanation and that the association between
a word and sentiment can change when it is used within a causal explanation.
| 2018 | Computation and Language |
Generating More Interesting Responses in Neural Conversation Models with
Distributional Constraints | Neural conversation models tend to generate safe, generic responses for most
inputs. This is due to the limitations of likelihood-based decoding objectives
in generation tasks with diverse outputs, such as conversation. To address this
challenge, we propose a simple yet effective approach for incorporating side
information in the form of distributional constraints over the generated
responses. We propose two constraints that help generate more content-rich
responses that are based on a model of syntax and topics (Griffiths et al.,
2005) and semantic similarity (Arora et al., 2016). We evaluate our approach
against a variety of competitive baselines, using both automatic metrics and
human judgments, showing that our proposed approach generates responses that
are much less generic without sacrificing plausibility. A working demo of our
code can be found at https://github.com/abaheti95/DC-NeuralConversation.
| 2018 | Computation and Language |
Graph-based Deep-Tree Recursive Neural Network (DTRNN) for Text
Classification | A novel graph-to-tree conversion mechanism called the deep-tree generation
(DTG) algorithm is first proposed to predict text data represented by graphs.
The DTG method can generate a richer and more accurate representation for nodes
(or vertices) in graphs. It adds flexibility in exploring the vertex
neighborhood information to better reflect the second order proximity and
homophily equivalence in a graph. Then, a Deep-Tree Recursive Neural Network
(DTRNN) method is presented and used to classify vertices that contain text
data in graphs. To demonstrate the effectiveness of the DTRNN method, we apply
it to three real-world graph datasets and show that the DTRNN method
outperforms several state-of-the-art benchmarking methods.
| 2018 | Computation and Language |
Unsupervised Statistical Machine Translation | While modern machine translation has relied on large parallel corpora, a
recent line of work has managed to train Neural Machine Translation (NMT)
systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al.,
2018). Despite the potential of this approach for low-resource settings,
existing systems are far behind their supervised counterparts, limiting their
practical interest. In this paper, we propose an alternative approach based on
phrase-based Statistical Machine Translation (SMT) that significantly closes
the gap with supervised systems. Our method profits from the modular
architecture of SMT: we first induce a phrase table from monolingual corpora
through cross-lingual embedding mappings, combine it with an n-gram language
model, and fine-tune hyperparameters through an unsupervised MERT variant. In
addition, iterative backtranslation improves results further, yielding, for
instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and
English-French, respectively, an improvement of more than 7-10 BLEU points over
previous unsupervised systems, and closing the gap with supervised SMT (Moses
trained on Europarl) down to 2-5 BLEU points. Our implementation is available
at https://github.com/artetxem/monoses
| 2021 | Computation and Language |
Learning Concept Abstractness Using Weak Supervision | We introduce a weakly supervised approach for inferring the property of
abstractness of words and expressions in the complete absence of labeled data.
Exploiting only minimal linguistic clues and the contextual usage of a concept
as manifested in textual data, we train sufficiently powerful classifiers,
obtaining high correlation with human labels. The results imply the
applicability of this approach to additional properties of concepts, additional
languages, and resource-scarce scenarios.
| 2018 | Computation and Language |
Policy Shaping and Generalized Update Equations for Semantic Parsing
from Denotations | Semantic parsing from denotations faces two key challenges in model training:
(1) given only the denotations (e.g., answers), search for good candidate
semantic parses, and (2) choose the best model update algorithm. We propose
effective and general solutions to each of them. Using policy shaping, we bias
the search procedure towards semantic parses that are more compatible with the
text, which provide better supervision signals for training. In addition, we
propose an update equation that generalizes three different families of
learning algorithms, which enables fast model exploration. In experiments on
a recently proposed sequential question answering dataset, our framework leads
to a new state-of-the-art model that outperforms previous work by 5.0% absolute
on exact match accuracy.
| 2018 | Computation and Language |
BPE and CharCNNs for Translation of Morphology: A Cross-Lingual
Comparison and Analysis | Neural Machine Translation (NMT) in low-resource settings and of
morphologically rich languages is made difficult in part by data sparsity of
vocabulary words. Several methods have been used to help reduce this sparsity,
notably Byte-Pair Encoding (BPE) and a character-based CNN layer (charCNN).
However, the charCNN has largely been neglected, possibly because it has only
been compared to BPE rather than combined with it. We argue for a
reconsideration of the charCNN, based on cross-lingual improvements on
low-resource data. We translate from 8 languages into English, using a
multi-way parallel collection of TED transcripts. We find that in most cases,
using both BPE and a charCNN performs best, while in Hebrew, using a charCNN
over words is best.
| 2018 | Computation and Language |
RNNs as psycholinguistic subjects: Syntactic state and grammatical
dependency | Recurrent neural networks (RNNs) are the state of the art in sequence
modeling for natural language. However, it remains poorly understood what
grammatical characteristics of natural language they implicitly learn and
represent as a consequence of optimizing the language modeling objective. Here
we deploy the methods of controlled psycholinguistic experimentation to shed
light on the extent to which RNN behavior reflects incremental syntactic state and
grammatical dependency representations known to characterize human linguistic
behavior. We broadly test two publicly available long short-term memory (LSTM)
English sequence models, and learn and test a new Japanese LSTM. We demonstrate
that these models represent and maintain incremental syntactic state, but that
they do not always generalize in the same way as humans. Furthermore, none of
our models learn the appropriate grammatical dependency configurations
licensing reflexive pronouns or negative polarity items.
| 2018 | Computation and Language |
Neural MultiVoice Models for Expressing Novel Personalities in Dialog | Natural language generators for task-oriented dialog should be able to vary
the style of the output utterance while still effectively realizing the system
dialog actions and their associated semantics. While the use of neural
generation for training the response generation component of conversational
agents promises to simplify the process of producing high quality responses in
new domains, to our knowledge, there has been very little investigation of
neural generators for task-oriented dialog that can vary their response style,
and we know of no experiments on models that can generate responses that are
different in style from those seen during training, while still maintaining
semantic fidelity to the input meaning representation. Here, we show that a
model that is trained to achieve a single stylistic personality target can
produce outputs that combine stylistic targets. We carefully evaluate the
multivoice outputs for both semantic fidelity and for similarities to and
differences from the linguistic features that characterize the original
training style. We show that contrary to our predictions, the learned models do
not always simply interpolate model parameters, but rather produce styles that
are distinct from, and novel with respect to, the personalities they were trained on.
| 2,018 | Computation and Language |
Firearms and Tigers are Dangerous, Kitchen Knives and Zebras are Not:
Testing whether Word Embeddings Can Tell | This paper presents an approach for investigating the nature of semantic
information captured by word embeddings. We propose a method that extends an
existing human-elicited semantic property dataset with gold negative examples
using crowd judgments. Our experimental approach tests the ability of
supervised classifiers to identify semantic features in word embedding vectors
and compares this to a feature-identification method based on full vector
cosine similarity. The idea behind this method is that properties identified by
classifiers, but not through full vector comparison, are captured by embeddings.
Properties that cannot be identified by either method are not. Our results
provide an initial indication that semantic properties relevant for the way
entities interact (e.g. dangerous) are captured, while perceptual information
(e.g. colors) is not represented. We conclude that, though preliminary, these
results show that our method is suitable for identifying which properties are
captured by embeddings.
| 2,018 | Computation and Language |
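A minimal sketch of the probing setup described above: a supervised classifier is trained to detect a semantic property (e.g. "dangerous") from embedding vectors and compared against a full-vector cosine-similarity baseline. The random vectors below stand in for real pretrained embeddings and crowd-annotated positive/negative word lists, which are assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder embeddings: in the paper's setting these would be pretrained vectors
# for words judged (via crowdsourcing) to have or lack a given property.
X_pos = rng.normal(loc=0.3, scale=1.0, size=(200, 50))   # e.g. "dangerous" entities
X_neg = rng.normal(loc=-0.3, scale=1.0, size=(200, 50))  # gold negative examples
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Classifier-based feature identification.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print('classifier accuracy:', clf.score(X_te, y_te))

# Full-vector baseline: label each test word by cosine similarity to the mean positive vector.
centroid = X_tr[y_tr == 1].mean(axis=0)
cos = X_te @ centroid / (np.linalg.norm(X_te, axis=1) * np.linalg.norm(centroid))
print('cosine baseline accuracy:', ((cos > np.median(cos)).astype(int) == y_te).mean())
```

Properties decodable by the classifier but not by the cosine baseline would, under the paper's reasoning, be encoded in the embedding but not dominant in the full vector.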
Pre-training on high-resource speech recognition improves low-resource
speech-to-text translation | We present a simple approach to improve direct speech-to-text translation
(ST) when the source language is low-resource: we pre-train the model on a
high-resource automatic speech recognition (ASR) task, and then fine-tune its
parameters for ST. We demonstrate that our approach is effective by
pre-training on 300 hours of English ASR data to improve Spanish-English ST
from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data
are available. Through an ablation study, we find that the pre-trained encoder
(acoustic model) accounts for most of the improvement, despite the fact that
the shared language in these tasks is the target language text, not the source
language audio. Applying this insight, we show that pre-training on ASR helps
ST even when the ASR language differs from both source and target ST languages:
pre-training on French ASR also improves Spanish-English ST. Finally, we show
that the approach improves performance on a true low-resource task:
pre-training on a combination of English ASR and French ASR improves
Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1
BLEU.
| 2,019 | Computation and Language |
Free as in Free Word Order: An Energy Based Model for Word Segmentation
and Morphological Tagging in Sanskrit | The configurational information in sentences of a free word order language
such as Sanskrit is of limited use. Thus, the context of the entire sentence
will be desirable even for basic processing tasks such as word segmentation. We
propose a structured prediction framework that jointly solves the word
segmentation and morphological tagging tasks in Sanskrit. We build an energy
based model where we adopt approaches generally employed in graph-based parsing
techniques (McDonald et al., 2005a; Carreras, 2007). Our model outperforms the
state of the art with an F-Score of 96.92 (percentage improvement of 7.06%)
while using less than one-tenth of the task-specific training data. We find
that the use of a graph-based approach instead of a traditional lattice-based
sequential labelling approach leads to a percentage gain of 12.6% in F-Score
for the segmentation task.
| 2,018 | Computation and Language |
Appendix - Recommended Statistical Significance Tests for NLP Tasks | Statistical significance testing plays an important role when drawing
conclusions from experimental results in NLP papers. Particularly, it is a
valuable tool when one would like to establish the superiority of one algorithm
over another. This appendix complements the guide for testing statistical
significance in NLP presented in \cite{dror2018hitchhiker} by proposing valid
statistical tests for the common tasks and evaluation measures in the field.
| 2,018 | Computation and Language |
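One of the standard tests such a guide recommends for comparing two systems evaluated on the same test set is paired bootstrap resampling; a minimal sketch is shown below, with made-up per-sentence scores standing in for real metric values.

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10000, seed=0):
    """Estimate how often system B matches or beats system A under test-set resampling.

    scores_a, scores_b: per-example evaluation scores of the two systems
    on the same test instances (e.g. sentence-level accuracy or F1).
    """
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)       # resample test instances with replacement
        if scores_b[idx].mean() >= scores_a[idx].mean():
            wins += 1
    return wins / n_resamples

# Toy per-sentence scores; real usage would plug in metric values per test instance.
a = np.random.default_rng(1).binomial(1, 0.70, size=500)
b = np.random.default_rng(2).binomial(1, 0.74, size=500)
print('fraction of resamples where B >= A:', paired_bootstrap(a, b))
```

Which test is appropriate depends on the metric and its distributional assumptions, which is precisely the question the appendix addresses.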
Sentylic at IEST 2018: Gated Recurrent Neural Network and Capsule
Network Based Approach for Implicit Emotion Detection | In this paper, we present the system we used for the WASSA 2018
Implicit Emotion Shared Task. The task is to predict the emotion of a tweet of
which the explicit mentions of emotion terms have been removed. The idea is to
come up with a model which has the ability to implicitly identify the emotion
expressed given the context words. We have used a Gated Recurrent Neural
Network (GRU) and a Capsule Network based model for the task. Pre-trained word
embeddings have been utilized to incorporate contextual knowledge about words
into the model. GRU layer learns latent representations using the input word
embeddings. Subsequent Capsule Network layer learns high-level features from
that hidden representation. The proposed model managed to achieve a macro-F1
score of 0.692.
| 2,018 | Computation and Language |
Interpretation of Natural Language Rules in Conversational Machine
Reading | Most work in machine reading focuses on question answering problems where the
answer is directly expressed in the text to read. However, many real-world
question answering problems require the reading of text not because it contains
the literal answer, but because it contains a recipe to derive an answer
together with the reader's background knowledge. One example is the task of
interpreting regulations to answer "Can I...?" or "Do I have to...?" questions
such as "I am working in Canada. Do I have to carry on paying UK National
Insurance?" after reading a UK government website about this topic. This task
requires both the interpretation of rules and the application of background
knowledge. It is further complicated due to the fact that, in practice, most
questions are underspecified, and a human assistant will regularly have to ask
clarification questions such as "How long have you been working abroad?" when
the answer cannot be directly derived from the question and text. In this
paper, we formalise this task and develop a crowd-sourcing strategy to collect
32k task instances based on real-world rules and crowd-generated questions and
scenarios. We analyse the challenges of this task and assess its difficulty by
evaluating the performance of rule-based and machine-learning baselines. We
observe promising results when no background knowledge is necessary, and
substantial room for improvement whenever background knowledge is needed.
| 2,018 | Computation and Language |
A Reinforcement Learning-driven Translation Model for Search-Oriented
Conversational Systems | Search-oriented conversational systems rely on information needs expressed in
natural language (NL). We focus here on the understanding of NL expressions for
building keyword-based queries. We propose a reinforcement-learning-driven
translation model framework able to 1) learn the translation from NL
expressions to queries in a supervised way, and, 2) to overcome the lack of
large-scale dataset by framing the translation model as a word selection
approach and injecting relevance feedback in the learning process. Experiments
are carried out on two TREC datasets and outline the effectiveness of our
approach.
| 2,018 | Computation and Language |
Learning Gender-Neutral Word Embeddings | Word embedding models have become a fundamental component in a wide range of
Natural Language Processing (NLP) applications. However, embeddings trained on
human-generated corpora have been demonstrated to inherit strong gender
stereotypes that reflect social constructs. To address this concern, in this
paper, we propose a novel training procedure for learning gender-neutral word
embeddings. Our approach aims to preserve gender information in certain
dimensions of word vectors while compelling other dimensions to be free of
gender influence. Based on the proposed method, we generate a Gender-Neutral
variant of GloVe (GN-GloVe). Quantitative and qualitative experiments
demonstrate that GN-GloVe successfully isolates gender information without
sacrificing the functionality of the embedding model.
| 2,018 | Computation and Language |
Chinese Discourse Segmentation Using Bilingual Discourse Commonality | Discourse segmentation aims to segment Elementary Discourse Units (EDUs) and
is a fundamental task in discourse analysis. For Chinese, previous research
identifies EDUs simply by discriminating the functions of punctuation marks. In
this paper, we argue that Chinese EDUs may not end at the punctuation positions
and should follow the definition of EDU in RST-DT. With this definition, we
conduct Chinese discourse segmentation with the help of English labeled
data. Using the discourse commonality between English and Chinese, we design an
adversarial neural network framework to extract common language-independent
features and language-specific features which are useful for discourse
segmentation, when there is no or only a small scale of Chinese labeled data
available. Experiments on discourse segmentation demonstrate that our models
can leverage common features from bilingual data, and learn efficient
Chinese-specific features from a small amount of Chinese labeled data,
outperforming the baseline models.
| 2,018 | Computation and Language |
Skip-gram word embeddings in hyperbolic space | Recent work has demonstrated that embeddings of tree-like graphs in
hyperbolic space surpass their Euclidean counterparts in performance by a large
margin. Inspired by these results and scale-free structure in the word
co-occurrence graph, we present an algorithm for learning word embeddings in
hyperbolic space from free text. An objective function based on the hyperbolic
distance is derived and included in the skip-gram negative-sampling
architecture of word2vec. The hyperbolic word embeddings are then evaluated on
word similarity and analogy benchmarks. The results demonstrate the potential
of hyperbolic word embeddings, particularly in low dimensions, though without
clear superiority over their Euclidean counterparts. We further discuss
subtleties in the formulation of the analogy task in curved spaces.
| 2,019 | Computation and Language |
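For concreteness, if the Poincaré ball model of hyperbolic space is used (one common choice; the paper's exact parametrization may differ), the distance that replaces the Euclidean dot product in the skip-gram objective is

```latex
d(\mathbf{u}, \mathbf{v}) \;=\; \operatorname{arcosh}\!\left( 1 + \frac{2\,\lVert \mathbf{u} - \mathbf{v} \rVert^{2}}{\bigl(1 - \lVert \mathbf{u} \rVert^{2}\bigr)\bigl(1 - \lVert \mathbf{v} \rVert^{2}\bigr)} \right),
\qquad \lVert \mathbf{u} \rVert, \lVert \mathbf{v} \rVert < 1,
```

with the negative-sampling objective then defined on a monotone function of $-d(\mathbf{u},\mathbf{v})$ in place of $\mathbf{u}^{\top}\mathbf{v}$.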
Extractive Adversarial Networks: High-Recall Explanations for
Identifying Personal Attacks in Social Media Posts | We introduce an adversarial method for producing high-recall explanations of
neural text classifier decisions. Building on an existing architecture for
extractive explanations via hard attention, we add an adversarial layer which
scans the residual of the attention for remaining predictive signal. Motivated
by the important domain of detecting personal attacks in social media comments,
we additionally demonstrate the importance of manually setting a semantically
appropriate `default' behavior for the model by explicitly manipulating its
bias term. We develop a validation set of human-annotated personal attacks to
evaluate the impact of these changes.
| 2,018 | Computation and Language |
Neural DrugNet | In this paper, we describe the system submitted for the shared task on Social
Media Mining for Health Applications by the team Light. Previous works
demonstrate that LSTMs have achieved remarkable performance in natural language
processing tasks. We deploy an ensemble of two LSTM models. The first one is a
pretrained language model appended with a classifier and takes words as input,
while the second one is a LSTM model with an attention unit over it which takes
character tri-gram as input. We call the ensemble of these two models:
Neural-DrugNet. Our system ranks 2nd in the second shared task: Automatic
classification of posts describing medication intake.
| 2,018 | Computation and Language |
Utilizing Character and Word Embeddings for Text Normalization with
Sequence-to-Sequence Models | Text normalization is an important enabling technology for several NLP tasks.
Recently, neural-network-based approaches have outperformed well-established
models in this task. However, in languages other than English, there has been
little exploration in this direction. Both the scarcity of annotated data and
the complexity of the language increase the difficulty of the problem. To
address these challenges, we use a sequence-to-sequence model with
character-based attention, which in addition to its self-learned character
embeddings, uses word embeddings pre-trained with an approach that also models
subword information. This provides the neural model with access to more
linguistic information especially suitable for text normalization, without
large parallel corpora. We show that providing the model with word-level
features bridges the gap for the neural network approach to achieve a
state-of-the-art F1 score on a standard Arabic language correction shared task
dataset.
| 2,018 | Computation and Language |
Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context
with Explicit Morphosyntactic Decoding | This paper documents the Team Copenhagen system which placed first in the
CoNLL--SIGMORPHON 2018 shared task on universal morphological reinflection,
Task 2 with an overall accuracy of 49.87. Task 2 focuses on morphological
inflection in context: generating an inflected word form, given the lemma of
the word and the context it occurs in. Previous SIGMORPHON shared tasks have
focused on context-agnostic inflection---the "inflection in context" task was
introduced this year. We approach this with an encoder-decoder architecture
over character sequences with three core innovations, all contributing to an
improvement in performance: (1) a wide context window; (2) a multi-task
learning approach with the auxiliary task of MSD prediction; (3) training
models in a multilingual fashion.
| 2,018 | Computation and Language |
Dynamically Context-Sensitive Time-Decay Attention for Dialogue Modeling | Spoken language understanding (SLU) is an essential component in
conversational systems. Considering that contexts provide informative cues for
better understanding, history can be leveraged for contextual SLU. However,
most prior work only paid attention to the related content in history
utterances and ignored the temporal information. In dialogues, it is intuitive
that the most recent utterances are more important than the least recent ones,
and time-aware attention should be in a decaying manner. Therefore, this paper
allows the model to automatically learn a time-decay attention function where
the attentional weights can be dynamically decided based on the content of each
role's contexts, which effectively integrates both content-aware and time-aware
perspectives and demonstrates remarkable flexibility to complex dialogue
contexts. The experiments on the benchmark Dialogue State Tracking Challenge
(DSTC4) dataset show that the proposed dynamically context-sensitive time-decay
attention mechanisms significantly improve the state-of-the-art model for
contextual understanding performance.
| 2,018 | Computation and Language |
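A rough sketch of the kind of content-conditioned time-decay attention the abstract describes: content scores over history utterances are modulated by a decay function of temporal distance whose parameters would, in the model, be predicted from the current dialogue context. The specific decay shape and parametrization below are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def time_decay_attention(content_scores, distances, a, b, c):
    """Combine content relevance with a flexible time-decay term.

    content_scores: relevance of each history utterance to the current one
    distances:      how many turns back each history utterance is (1 = most recent)
    a, b, c:        decay parameters; in the model these would be produced by a
                    small network reading each role's current context.
    """
    decay = a * np.exp(-b * distances) + c          # one possible flexible decay shape
    return softmax(content_scores + np.log(np.clip(decay, 1e-6, None)))

content = np.array([2.0, 0.5, 1.5, 0.1])            # toy content-based scores
dist = np.array([1, 2, 3, 4], dtype=float)
print(time_decay_attention(content, dist, a=1.0, b=0.7, c=0.05))
```

Because the decay parameters depend on the context, the model can sharpen or flatten its preference for recent turns dynamically rather than committing to a fixed decay curve.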
Stance Prediction for Russian: Data and Analysis | Stance detection is a critical component of rumour and fake news
identification. It involves the extraction of the stance a particular author
takes related to a given claim, both expressed in text. This paper investigates
stance classification for Russian. It introduces a new dataset, RuStance, of
Russian tweets and news comments from multiple sources, covering multiple
stories, as well as text classification approaches to stance detection as
benchmarks over this data in this language. In addition to releasing this
openly-available dataset, the first of its kind for Russian, the paper presents
a baseline for stance prediction in the language.
| 2,018 | Computation and Language |
Document-Level Neural Machine Translation with Hierarchical Attention
Networks | Neural Machine Translation (NMT) can be improved by including document-level
contextual information. For this purpose, we propose a hierarchical attention
model to capture the context in a structured and dynamic manner. The model is
integrated in the original NMT architecture as another level of abstraction,
conditioning on the NMT model's own previous hidden states. Experiments show
that hierarchical attention significantly improves the BLEU score over a strong
NMT baseline with the state-of-the-art in context-aware methods, and that both
the encoder and decoder benefit from context in complementary ways.
| 2,018 | Computation and Language |
Accelerated Reinforcement Learning for Sentence Generation by Vocabulary
Prediction | A major obstacle in reinforcement learning-based sentence generation is the
large action space whose size is equal to the vocabulary size of the
target-side language. To improve the efficiency of reinforcement learning, we
present a novel approach for reducing the action space based on dynamic
vocabulary prediction. Our method first predicts a fixed-size small vocabulary
for each input to generate its target sentence. The input-specific vocabularies
are then used at supervised and reinforcement learning steps, and also at test
time. In our experiments on six machine translation and two image captioning
datasets, our method achieves faster reinforcement learning ($\sim$2.7x faster)
with less GPU memory ($\sim$2.3x less) than the full-vocabulary counterpart.
The reinforcement learning with our method consistently leads to significant
improvement of BLEU scores, and the scores are equal to or better than those of
baselines using the full vocabularies, with faster decoding time ($\sim$3x
faster) on CPUs.
| 2,019 | Computation and Language |
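A minimal PyTorch sketch of the central trick: once a small candidate vocabulary has been predicted for an input, the generator's softmax (and hence the reinforcement-learning action space) is restricted to those candidate ids. The decoder state, vocabulary sizes, and candidate predictor below are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

full_vocab, small_vocab, hidden = 50000, 1000, 512

output_layer = nn.Linear(hidden, full_vocab)               # full output projection
decoder_state = torch.randn(4, hidden)                     # toy batch of decoder states

# Candidate ids per example, e.g. from a separate vocabulary-prediction network.
candidates = torch.randint(0, full_vocab, (4, small_vocab))

# Score only the candidate tokens: gather their rows from the output projection,
# so neither the softmax nor the sampling touches the full vocabulary.
cand_weights = output_layer.weight[candidates]             # (batch, small_vocab, hidden)
cand_bias = output_layer.bias[candidates]                  # (batch, small_vocab)
logits_small = torch.einsum('bh,bvh->bv', decoder_state, cand_weights) + cand_bias

probs = F.softmax(logits_small, dim=-1)
sampled_local = torch.multinomial(probs, 1)                # index into the candidate list
sampled_token = torch.gather(candidates, 1, sampled_local) # ids in the full vocabulary
print(sampled_token.shape)                                  # torch.Size([4, 1])
```

The same restricted softmax is reused at supervised training, reinforcement learning, and test time, which is where the reported speed and memory savings come from.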
TVQA: Localized, Compositional Video Question Answering | Recent years have witnessed an increasing interest in image-based
question-answering (QA) tasks. However, due to data limitations, there has been
much less work on video-based QA. In this paper, we present TVQA, a large-scale
video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs
from 21,793 clips, spanning over 460 hours of video. Questions are designed to
be compositional in nature, requiring systems to jointly localize relevant
moments within a clip, comprehend subtitle-based dialogue, and recognize
relevant visual concepts. We provide analyses of this new dataset as well as
several baselines and a multi-stream end-to-end trainable neural network
framework for the TVQA task. The dataset is publicly available at
http://tvqa.cs.unc.edu.
| 2,019 | Computation and Language |
An Analysis of Hierarchical Text Classification Using Word Embeddings | Efficient distributed numerical word representation models (word embeddings)
combined with modern machine learning algorithms have recently yielded
considerable improvement on automatic document classification tasks. However,
the effectiveness of such techniques has not been assessed for the hierarchical
text classification (HTC) yet. This study investigates the application of those
models and algorithms on this specific problem by means of experimentation and
analysis. We trained classification models with prominent machine learning
algorithm implementations---fastText, XGBoost, SVM, and Keras' CNN---and
noticeable word embeddings generation methods---GloVe, word2vec, and
fastText---with publicly available data and evaluated them with measures
specifically appropriate for the hierarchical context. FastText achieved an
${}_{LCA}F_1$ of 0.893 on a single-labeled version of the RCV1 dataset. An
analysis indicates that using word embeddings and their variants is a very
promising approach for HTC.
| 2,018 | Computation and Language |
Describing a Knowledge Base | We aim to automatically generate natural language descriptions about an input
structured knowledge base (KB). We build our generation framework based on a
pointer network which can copy facts from the input KB, and add two attention
mechanisms: (i) slot-aware attention to capture the association between a slot
type and its corresponding slot value; and (ii) a new \emph{table position
self-attention} to capture the inter-dependencies among related slots. For
evaluation, besides standard metrics including BLEU, METEOR, and ROUGE, we
propose a KB reconstruction based metric by extracting a KB from the generation
output and comparing it with the input KB. We also create a new data set which
includes 106,216 pairs of structured KBs and their corresponding natural
language descriptions for two distinct entity types. Experiments show that our
approach significantly outperforms state-of-the-art methods. The reconstructed
KB achieves 68.8% - 72.6% F-score.
| 2,020 | Computation and Language |
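The KB-reconstruction metric can be pictured as a set comparison between the facts extracted from the generated description and the facts of the input KB; a small illustrative sketch follows (the extraction step itself is assumed to exist and is not shown).

```python
def kb_reconstruction_f1(input_kb, extracted_kb):
    """Precision/recall/F1 between the input KB and the KB extracted from generated text.

    Both arguments are sets of (slot_type, slot_value) pairs or (subject, predicate, object) triples.
    """
    input_kb, extracted_kb = set(input_kb), set(extracted_kb)
    overlap = input_kb & extracted_kb
    precision = len(overlap) / len(extracted_kb) if extracted_kb else 0.0
    recall = len(overlap) / len(input_kb) if input_kb else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {('nationality', 'French'), ('occupation', 'painter'), ('birth_year', '1840')}
extracted = {('nationality', 'French'), ('occupation', 'painter'), ('death_year', '1926')}
print(kb_reconstruction_f1(gold, extracted))   # roughly (0.667, 0.667, 0.667)
```

Unlike BLEU or ROUGE, a metric of this shape directly penalizes generated facts that are missing from, or contradict, the input KB.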
Noise Contrastive Estimation and Negative Sampling for Conditional
Models: Consistency and Statistical Efficiency | Noise Contrastive Estimation (NCE) is a powerful parameter estimation method
for log-linear models, which avoids calculation of the partition function or
its derivatives at each training step, a computationally demanding step in many
cases. It is closely related to negative sampling methods, now widely used in
NLP. This paper considers NCE-based estimation of conditional models.
Conditional models are frequently encountered in practice; however there has
not been a rigorous theoretical analysis of NCE in this setting, and we will
argue there are subtle but important questions when generalizing NCE to the
conditional case. In particular, we analyze two variants of NCE for conditional
models: one based on a classification objective, the other based on a ranking
objective. We show that the ranking-based variant of NCE gives consistent
parameter estimates under weaker assumptions than the classification-based
method; we analyze the statistical efficiency of the ranking-based and
classification-based variants of NCE; finally we describe experiments on
synthetic data and language modeling showing the effectiveness and trade-offs
of both methods.
| 2,018 | Computation and Language |
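For intuition, one common form of the ranking-based NCE objective for a conditional model with unnormalized score s(x, y) and K noise samples drawn from a distribution q is sketched below. The exact correction terms analyzed in the paper may differ, so treat this as an illustrative assumption rather than the paper's definition.

```python
import numpy as np
from scipy.special import logsumexp

def ranking_nce_loss(score_true, log_q_true, scores_noise, log_q_noise):
    """Negative log-probability that the true item outranks the K noise items.

    score_*:  unnormalized model scores s(x, y) (log-linear potentials)
    log_q_*:  log-probabilities of the same items under the noise distribution q
    Each score is corrected by subtracting log q, a softmax is taken over the K+1
    candidates, and the loss is the negative log of the true item's share.
    """
    corrected = np.concatenate(([score_true - log_q_true],
                                np.asarray(scores_noise) - np.asarray(log_q_noise)))
    return -(corrected[0] - logsumexp(corrected))

# Toy example with one true target and three noise samples.
print(ranking_nce_loss(score_true=2.0, log_q_true=-3.0,
                       scores_noise=[0.5, -1.0, 0.2], log_q_noise=[-2.0, -4.0, -2.5]))
```

The classification-based variant instead asks a binary question (true vs. noise) for each sample independently, which is the distinction whose consistency properties the paper examines.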
Top-down Tree Structured Decoding with Syntactic Connections for Neural
Machine Translation and Parsing | The addition of syntax-aware decoding in Neural Machine Translation (NMT)
systems requires an effective tree-structured neural network, a syntax-aware
attention model and a language generation model that is sensitive to sentence
structure. We exploit a top-down tree-structured model called DRNN
(Doubly-Recurrent Neural Networks) first proposed by Alvarez-Melis and Jaakkola
(2017) to create an NMT model called Seq2DRNN that combines a sequential
encoder with tree-structured decoding augmented with a syntax-aware attention
model. Unlike previous approaches to syntax-based NMT which use dependency
parsing models our method uses constituency parsing which we argue provides
useful information for translation. In addition, we use the syntactic structure
of the sentence to add new connections to the tree-structured decoder neural
network (Seq2DRNN+SynC). We compare our NMT model with sequential and state of
the art syntax-based NMT models and show that our model produces more fluent
translations with better reordering. Since our model is capable of doing
translation and constituency parsing at the same time we also compare our
parsing accuracy against other neural parsing models.
| 2,018 | Computation and Language |
Why are Sequence-to-Sequence Models So Dull? Understanding the
Low-Diversity Problem of Chatbots | Diversity is a long-studied topic in information retrieval that usually
refers to the requirement that retrieved results should be non-repetitive and
cover different aspects. In a conversational setting, an additional dimension
of diversity matters: an engaging response generation system should be able to
output responses that are diverse and interesting. Sequence-to-sequence
(Seq2Seq) models have been shown to be very effective for response generation.
However, dialogue responses generated by Seq2Seq models tend to have low
diversity. In this paper, we review known sources and existing approaches to
this low-diversity problem. We also identify a source of low diversity that has
been little studied so far, namely model over-confidence. We sketch several
directions for tackling model over-confidence and, hence, the low-diversity
problem, including confidence penalties and label smoothing.
| 2,018 | Computation and Language |
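Label smoothing, one of the remedies for over-confidence mentioned above, is easy to state concretely; a minimal PyTorch sketch follows, with the vocabulary size and smoothing value as placeholders.

```python
import torch
import torch.nn.functional as F

def label_smoothed_nll(logits, targets, epsilon=0.1):
    """Cross-entropy against a softened target distribution.

    Each gold token keeps probability (1 - epsilon); the remaining epsilon is
    spread uniformly over the whole vocabulary, discouraging over-confident peaks.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)        # cross-entropy against the uniform distribution
    return ((1.0 - epsilon) * nll + epsilon * uniform).mean()

logits = torch.randn(8, 10000)               # toy decoder outputs for 8 positions
targets = torch.randint(0, 10000, (8,))
print(label_smoothed_nll(logits, targets))
```

A confidence penalty works in the same spirit but adds an explicit entropy bonus to the loss instead of softening the targets.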
Code-switched Language Models Using Dual RNNs and Same-Source
Pretraining | This work focuses on building language models (LMs) for code-switched text.
We propose two techniques that significantly improve these LMs: 1) A novel
recurrent neural network unit with dual components that focus on each language
in the code-switched text separately 2) Pretraining the LM using synthetic text
from a generative model estimated using the training data. We demonstrate the
effectiveness of our proposed techniques by reporting perplexities on a
Mandarin-English task and derive significant reductions in perplexity.
| 2,018 | Computation and Language |
Training Millions of Personalized Dialogue Agents | Current dialogue systems are not very engaging for users, especially when
trained end-to-end without relying on proactive reengaging scripted strategies.
Zhang et al. (2018) showed that the engagement level of end-to-end dialogue
models increases when conditioning them on text personas providing some
personalized back-story to the model. However, the dataset used in Zhang et al.
(2018) is synthetic and of limited size as it contains around 1k different
personas. In this paper we introduce a new dataset providing 5 million personas
and 700 million persona-based dialogues. Our experiments show that, at this
scale, training using personas still improves the performance of end-to-end
systems. In addition, we show that other tasks benefit from the wide coverage
of our dataset by fine-tuning our model on the data from Zhang et al. (2018)
and achieving state-of-the-art results.
| 2,018 | Computation and Language |
Evaluating Syntactic Properties of Seq2seq Output with a Broad Coverage
HPSG: A Case Study on Machine Translation | Sequence to sequence (seq2seq) models are often employed in settings where
the target output is natural language. However, the syntactic properties of the
language generated from these models are not well understood. We explore
whether such output belongs to a formal and realistic grammar, by employing the
English Resource Grammar (ERG), a broad coverage, linguistically precise
HPSG-based grammar of English. From a French to English parallel corpus, we
analyze the parseability and grammatical constructions occurring in output from
a seq2seq translation model. Over 93\% of the model translations are parseable,
suggesting that it learns to generate output that conforms to a grammar. The model has
trouble learning the distribution of rarer syntactic rules, and we pinpoint
several constructions that differentiate translations between the references
and our model.
| 2,018 | Computation and Language |
Exploring Graph-structured Passage Representation for Multi-hop Reading
Comprehension with Graph Neural Networks | Multi-hop reading comprehension focuses on one type of factoid question,
where a system needs to properly integrate multiple pieces of evidence to
correctly answer a question. Previous work approximates global evidence with
local coreference information, encoding coreference chains with DAG-styled GRU
layers within a gated-attention reader. However, coreference is limited in
providing information for rich inference. We introduce a new method for better
connecting global evidence, which forms more complex graphs compared to DAGs.
To perform evidence integration on our graphs, we investigate two recent graph
neural networks, namely graph convolutional network (GCN) and graph recurrent
network (GRN). Experiments on two standard datasets show that richer global
information leads to better answers. Our method performs better than all
published results on these datasets.
| 2,018 | Computation and Language |
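The GCN side of the comparison reduces to a very small update rule: each node's representation is replaced by a nonlinearity applied to the normalized sum of its own and its neighbours' representations. A self-contained numpy sketch of one such layer over a toy evidence graph (graph construction and the GRN variant are not shown):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: 4 mention nodes, edges connecting mentions that should share evidence.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 8))   # initial node representations
W = np.random.default_rng(1).normal(size=(8, 8))
print(gcn_layer(H, A, W).shape)                    # (4, 8)
```

Stacking several such layers lets evidence propagate across edges that coreference chains alone would not connect, which is the point of the richer graphs.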
Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue
Models | We present two categories of model-agnostic adversarial strategies that
reveal the weaknesses of several generative, task-oriented dialogue models:
Should-Not-Change strategies that evaluate over-sensitivity to small and
semantics-preserving edits, as well as Should-Change strategies that test if a
model is over-stable against subtle yet semantics-changing modifications. We
next perform adversarial training with each strategy, employing a max-margin
approach for negative generative examples. This not only makes the target
dialogue model more robust to the adversarial inputs, but also helps it perform
significantly better on the original inputs. Moreover, training on all
strategies combined achieves further improvements, achieving a new
state-of-the-art performance on the original task (also verified via human
evaluation). In addition to adversarial training, we also address the
robustness task at the model-level, by feeding it subword units as both inputs
and outputs, and show that the resulting model is equally competitive, requires
only 1/4 of the original vocabulary size, and is robust to one of the
adversarial strategies (to which the original model is vulnerable) even without
adversarial training.
| 2,018 | Computation and Language |
Uncovering divergent linguistic information in word embeddings with
lessons for intrinsic and extrinsic evaluation | Following the recent success of word embeddings, it has been argued that
there is no such thing as an ideal representation for words, as different
models tend to capture divergent and often mutually incompatible aspects like
semantics/syntax and similarity/relatedness. In this paper, we show that each
embedding model captures more information than directly apparent. A linear
transformation that adjusts the similarity order of the model without any
external resource can tailor it to achieve better results in those aspects,
providing a new perspective on how embeddings encode divergent linguistic
information. In addition, we explore the relation between intrinsic and
extrinsic evaluation, as the effect of our transformations in downstream tasks
is higher for unsupervised systems than for supervised ones.
| 2,021 | Computation and Language |
Upcycle Your OCR: Reusing OCRs for Post-OCR Text Correction in Romanised
Sanskrit | We propose a post-OCR text correction approach for digitising texts in
Romanised Sanskrit. Owing to the lack of resources our approach uses OCR models
trained for other languages written in Roman. Currently, there exists no
dataset available for Romanised Sanskrit OCR. So, we bootstrap a dataset of 430
images, scanned in two different settings, along with their corresponding ground truth.
For training, we synthetically generate training images for both the settings.
We find that the use of a copying mechanism (Gu et al., 2016) yields a
percentage increase of 7.69 in Character Recognition Rate (CRR) over the
current state-of-the-art model for monotone sequence-to-sequence tasks (Schnober et al.,
2016). We find that our system is robust in combating OCR-prone errors, as it
obtains a CRR of 87.01% from an OCR output with CRR of 35.76% for one of the
dataset settings. A human judgment survey performed on the models shows that
our proposed model results in predictions which are faster to comprehend and
faster to improve for a human than the other systems.
| 2,018 | Computation and Language |
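Character Recognition Rate as used above can be read as one minus the character-level edit distance normalized by the reference length; a small self-contained sketch is below (the paper's exact normalization may differ slightly).

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_recognition_rate(reference, hypothesis):
    """CRR = 1 - edit_distance / reference_length, reported here as a percentage."""
    return 100.0 * (1.0 - edit_distance(reference, hypothesis) / max(len(reference), 1))

print(character_recognition_rate('dharmakṣetre kurukṣetre', 'dharmaksetre kuruksetre'))
```

Post-OCR correction then amounts to a monotone sequence-to-sequence mapping from the low-CRR OCR output to a higher-CRR corrected string.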
Object Hallucination in Image Captioning | Despite continuously improving performance, contemporary image captioning
models are prone to "hallucinating" objects that are not actually in a scene.
One problem is that standard metrics only measure similarity to ground truth
captions and may not fully capture image relevance. In this work, we propose a
new image relevance metric to evaluate current models with veridical visual
labels and assess their rate of object hallucination. We analyze how captioning
model architectures and learning objectives contribute to object hallucination,
explore when hallucination is likely due to image misclassification or language
priors, and assess how well current sentence metrics capture object
hallucination. We investigate these questions on the standard image captioning
benchmark, MSCOCO, using a diverse set of models. Our analysis yields several
interesting findings, including that models which score best on standard
sentence metrics do not always have lower hallucination and that models which
hallucinate more tend to make errors driven by language priors.
| 2,019 | Computation and Language |
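At its core, an object-hallucination check of this kind compares the objects mentioned in a caption with a veridical object list for the image. The toy sketch below abstracts away the real metric's use of MSCOCO object categories and synonym lists, which are assumptions not shown here.

```python
def hallucinated_objects(caption, image_objects, object_vocabulary):
    """Objects mentioned in the caption that are absent from the image's ground-truth objects."""
    mentioned = {w for w in caption.lower().split() if w in object_vocabulary}
    return mentioned - set(image_objects)

def hallucination_rate(captions, images_objects, object_vocabulary):
    """Fraction of captions that mention at least one object not present in the image."""
    flagged = sum(bool(hallucinated_objects(c, objs, object_vocabulary))
                  for c, objs in zip(captions, images_objects))
    return flagged / len(captions)

vocab = {'dog', 'cat', 'frisbee', 'bench', 'car'}
captions = ['a dog catching a frisbee', 'a cat sitting on a bench', 'a dog next to a car']
gold_objects = [['dog', 'frisbee'], ['cat'], ['dog']]
print(hallucination_rate(captions, gold_objects, vocab))   # 2/3: 'bench' and 'car' are hallucinated
```

A caption can therefore score well on n-gram overlap metrics while still being flagged here, which is the gap the proposed image-relevance metric targets.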
Character-Aware Decoder for Translation into Morphologically Rich
Languages | Neural machine translation (NMT) systems operate primarily on words (or
sub-words), ignoring lower-level patterns of morphology. We present a
character-aware decoder designed to capture such patterns when translating into
morphologically rich languages. We achieve character-awareness by augmenting
both the softmax and embedding layers of an attention-based encoder-decoder
model with convolutional neural networks that operate on the spelling of a
word. To investigate performance on a wide variety of morphological phenomena,
we translate English into 14 typologically diverse target languages using the
TED multi-target dataset. In this low-resource setting, the character-aware
decoder provides consistent improvements with BLEU score gains of up to
$+3.05$. In addition, we analyze the relationship between the gains obtained
and properties of the target language and find evidence that our model does
indeed exploit morphological patterns.
| 2,019 | Computation and Language |
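The character-awareness described above rests on a standard building block: a convolution over character embeddings followed by max-pooling yields a spelling-sensitive word representation, which can then be plugged into the decoder's embedding and softmax layers. A minimal PyTorch sketch of such a charCNN word encoder, with placeholder dimensions and alphabet size:

```python
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """Embed a word from its character sequence: char embeddings -> 1D conv -> max-pool."""
    def __init__(self, n_chars=100, char_dim=32, word_dim=256, kernel_size=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size, padding=kernel_size // 2)

    def forward(self, char_ids):                 # (batch, max_word_len)
        x = self.char_emb(char_ids)              # (batch, len, char_dim)
        x = self.conv(x.transpose(1, 2))         # (batch, word_dim, len)
        return torch.relu(x).max(dim=2).values   # (batch, word_dim)

encoder = CharCNNWordEncoder()
char_ids = torch.randint(1, 100, (8, 12))        # a toy batch of 8 words, 12 characters each
print(encoder(char_ids).shape)                   # torch.Size([8, 256])
```

Sharing such an encoder between the target-side embedding and softmax layers is what lets the decoder generalize over inflectional variants it has rarely seen.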
82 Treebanks, 34 Models: Universal Dependency Parsing with
Multi-Treebank Models | We present the Uppsala system for the CoNLL 2018 Shared Task on universal
dependency parsing. Our system is a pipeline consisting of three components:
the first performs joint word and sentence segmentation; the second predicts
part-of-speech tags and morphological features; the third predicts dependency
trees from words and tags. Instead of training a single parsing model for each
treebank, we trained models with multiple treebanks for one language or closely
related languages, greatly reducing the number of models. On the official test
run, we ranked 7th of 27 teams for the LAS and MLAS metrics. Our system
obtained the best scores overall for word segmentation, universal POS tagging,
and morphological features.
| 2,018 | Computation and Language |
Adversarial Domain Adaptation for Duplicate Question Detection | We address the problem of detecting duplicate questions in forums, which is
an important step towards automating the process of answering new questions. As
finding and annotating such potential duplicates manually is very tedious and
costly, automatic methods based on machine learning are a viable alternative.
However, many forums do not have annotated data, i.e., questions labeled by
experts as duplicates, and thus a promising solution is to use domain
adaptation from another forum that has such annotations. Here we focus on
adversarial domain adaptation, deriving important findings about when it
performs well and what properties of the domains are important in this regard.
Our experiments with StackExchange data show an average improvement of 5.6%
over the best baseline across multiple pairs of domains.
| 2,018 | Computation and Language |
Multi-Source Domain Adaptation with Mixture of Experts | We propose a mixture-of-experts approach for unsupervised domain adaptation
from multiple sources. The key idea is to explicitly capture the relationship
between a target example and different source domains. This relationship,
expressed by a point-to-set metric, determines how to combine predictors
trained on various domains. The metric is learned in an unsupervised fashion
using meta-training. Experimental results on sentiment analysis and
part-of-speech tagging demonstrate that our approach consistently outperforms
multiple baselines and can robustly handle negative transfer.
| 2,018 | Computation and Language |
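A rough sketch of the combination rule: each source domain's predictor is weighted by a softmax over (negative) point-to-set distances between the target example and the source domains. The centroid distance and the toy predictors below are stand-ins for the metric and experts that the paper learns via meta-training.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_of_experts_predict(x, source_sets, source_predictors, temperature=1.0):
    """Weight each source-domain predictor by how close x is to that source's examples.

    source_sets:       list of arrays of source-domain feature vectors
    source_predictors: list of callables mapping a feature vector to class probabilities
    """
    dists = np.array([np.linalg.norm(x - S.mean(axis=0)) for S in source_sets])
    weights = softmax(-dists / temperature)
    preds = np.stack([p(x) for p in source_predictors])        # (n_sources, n_classes)
    return weights @ preds

rng = np.random.default_rng(0)
sources = [rng.normal(loc=m, size=(50, 5)) for m in (-1.0, 0.0, 1.0)]
predictors = [lambda x, m=m: softmax(np.array([m, -m]) + 0.1 * x.sum()) for m in (-1.0, 0.0, 1.0)]
x = rng.normal(loc=0.8, size=5)
print(mixture_of_experts_predict(x, sources, predictors))
```

Down-weighting distant sources is also what gives the approach its robustness to negative transfer: an unrelated source domain simply receives a small mixture weight.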
Cell-aware Stacked LSTMs for Modeling Sentences | We propose a method of stacking multiple long short-term memory (LSTM) layers
for modeling sentences. In contrast to the conventional stacked LSTMs where
only hidden states are fed as input to the next layer, the suggested
architecture accepts both hidden and memory cell states of the preceding layer
and fuses information from the left and the lower context using the soft gating
mechanism of LSTMs. Thus the architecture modulates the amount of information
to be delivered not only in horizontal recurrence but also in vertical
connections, from which useful features extracted from lower layers are
effectively conveyed to upper layers. We dub this architecture Cell-aware
Stacked LSTM (CAS-LSTM) and show from experiments that our models bring
significant performance gain over the standard LSTMs on benchmark datasets for
natural language inference, paraphrase detection, sentiment classification, and
machine translation. We also conduct extensive qualitative analysis to
understand the internal behavior of the suggested approach.
| 2,019 | Computation and Language |
Dynamic Compositionality in Recursive Neural Networks with
Structure-aware Tag Representations | Most existing recursive neural network (RvNN) architectures utilize only the
structure of parse trees, ignoring syntactic tags which are provided as
by-products of parsing. We present a novel RvNN architecture that can provide
dynamic compositionality by considering comprehensive syntactic information
derived from both the structure and linguistic tags. Specifically, we introduce
a structure-aware tag representation constructed by a separate tag-level
tree-LSTM. With this, we can control the composition function of the existing
word-level tree-LSTM by augmenting the representation as a supplementary input
to the gate functions of the tree-LSTM. In extensive experiments, we show that
models built upon the proposed architecture obtain superior or competitive
performance on several sentence-level tasks such as sentiment analysis and
natural language inference when compared against previous tree-structured
models and other sophisticated neural models.
| 2,018 | Computation and Language |
Data Augmentation for Spoken Language Understanding via Joint
Variational Generation | Data scarcity is one of the main obstacles of domain adaptation in spoken
language understanding (SLU) due to the high cost of creating manually tagged
SLU datasets. Recent works in neural text generative models, particularly
latent variable models such as variational autoencoder (VAE), have shown
promising results in regards to generating plausible and natural sentences. In
this paper, we propose a novel generative architecture which leverages the
generative power of latent variable models to jointly synthesize fully
annotated utterances. Our experiments show that existing SLU models trained on
the additional synthetic examples achieve performance gains. Our approach not
only helps alleviate the data scarcity issue in the SLU task for many datasets
but also indiscriminately improves language understanding performances for
various SLU models, supported by extensive experiments and rigorous statistical
testing.
| 2,018 | Computation and Language |
Unsupervised Cross-lingual Word Embedding by Multilingual Neural
Language Models | We propose an unsupervised method to obtain cross-lingual embeddings without
any parallel data or pre-trained word embeddings. The proposed model, which we
call multilingual neural language models, takes sentences of multiple languages
as an input. The proposed model contains bidirectional LSTMs that perform as
forward and backward language models, and these networks are shared among all
the languages. The other parameters, i.e. word embeddings and linear
transformation between hidden states and outputs, are specific to each
language. The shared LSTMs can capture the common sentence structure among all
languages. Accordingly, word embeddings of each language are mapped into a
common latent space, making it possible to measure the similarity of words
across multiple languages. We evaluate the quality of the cross-lingual word
embeddings on a word alignment task. Our experiments demonstrate that our model
can obtain cross-lingual embeddings of much higher quality than existing
unsupervised models when only a small amount of monolingual data (i.e. 50k
sentences) is available, or the domains of monolingual data are different
across languages.
| 2,018 | Computation and Language |
Improving Neural Question Generation using Answer Separation | Neural question generation (NQG) is the task of generating a question from a
given passage with deep neural networks. Previous NQG models suffer from a
problem that a significant proportion of the generated questions include words
in the question target, resulting in the generation of unintended questions. In
this paper, we propose answer-separated seq2seq, which better utilizes the
information from both the passage and the target answer. By replacing the
target answer in the original passage with a special token, our model learns to
identify which interrogative word should be used. We also propose a new module
termed keyword-net, which helps the model better capture the key information in
the target answer and generate an appropriate question. Experimental results
demonstrate that our answer separation method significantly reduces the number
of improper questions which include answers. Consequently, our model
significantly outperforms previous state-of-the-art NQG models.
| 2,018 | Computation and Language |
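The answer-separation step itself is simple preprocessing: the answer span in the passage is replaced with a special token before encoding, so the model sees the answer's position but not its surface form, and is pushed to ask about it rather than copy it. A toy sketch (the mask token name is an assumption):

```python
def separate_answer(passage_tokens, answer_start, answer_len, mask_token='<a>'):
    """Replace the target answer span in the passage with a single placeholder token."""
    return (passage_tokens[:answer_start]
            + [mask_token]
            + passage_tokens[answer_start + answer_len:])

passage = 'the eiffel tower was completed in 1889 in paris'.split()
# Target answer: "1889" (token index 6, length 1)
print(' '.join(separate_answer(passage, answer_start=6, answer_len=1)))
# -> "the eiffel tower was completed in <a> in paris"
```

The keyword-net module then re-injects key information from the original answer so the generated question remains specific to it.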
Multitask and Multilingual Modelling for Lexical Analysis | In Natural Language Processing (NLP), one traditionally considers a single
task (e.g. part-of-speech tagging) for a single language (e.g. English) at a
time. However, recent work has shown that it can be beneficial to take
advantage of relatedness between tasks, as well as between languages. In this
work I examine the concept of relatedness and explore how it can be utilised to
build NLP models that require less manually annotated data. A large selection
of NLP tasks is investigated for a substantial language sample comprising 60
languages. The results show potential for joint multitask and multilingual
modelling, and hints at linguistic insights which can be gained from such
models.
| 2,018 | Computation and Language |
Meteorologists and Students: A resource for language grounding of
geographical descriptors | We present a data resource which can be useful for research purposes on
language grounding tasks in the context of geographical referring expression
generation. The resource is composed of two data sets that encompass 25
different geographical descriptors and a set of associated graphical
representations, drawn as polygons on a map by two groups of human subjects:
teenage students and expert meteorologists.
| 2,018 | Computation and Language |
Using Sparse Semantic Embeddings Learned from Multimodal Text and Image
Data to Model Human Conceptual Knowledge | Distributional models provide a convenient way to model semantics using dense
embedding spaces derived from unsupervised learning algorithms. However, the
dimensions of dense embedding spaces are not designed to resemble human
semantic knowledge. Moreover, embeddings are often built from a single source
of information (typically text data), even though neurocognitive research
suggests that semantics is deeply linked to both language and perception. In
this paper, we combine multimodal information from both text and image-based
representations derived from state-of-the-art distributional models to produce
sparse, interpretable vectors using Joint Non-Negative Sparse Embedding.
Through in-depth analyses comparing these sparse models to human-derived
behavioural and neuroimaging data, we demonstrate their ability to predict
interpretable linguistic descriptions of human ground-truth semantic knowledge.
| 2,018 | Computation and Language |
Logographic Subword Model for Neural Machine Translation | A novel logographic subword model is proposed to reinterpret logograms as
abstract subwords for neural machine translation. Our approach drastically
reduces the size of an artificial neural network, while maintaining comparable
BLEU scores as those attained with the baseline RNN and CNN seq2seq models. The
smaller model size also leads to shorter training and inference time.
Experiments demonstrate that in the tasks of English-Chinese/Chinese-English
translation, the reduction in these aspects ranges from $11\%$ to as high as
$77\%$. Compared to previous subword models, abstract subwords can be applied
to various logographic languages. Considering most of the logographic languages
are ancient and very low resource languages, these advantages are very
desirable for archaeological computational linguistic applications such as a
resource-limited offline hand-held Demotic-English translator.
| 2,018 | Computation and Language |
Neural Generation of Diverse Questions using Answer Focus, Contextual
and Linguistic Features | Question Generation is the task of automatically creating questions from
textual input. In this work we present a new Attentional Encoder--Decoder
Recurrent Neural Network model for automatic question generation. Our model
incorporates linguistic features and an additional sentence embedding to
capture meaning at both sentence and word levels. The linguistic features are
designed to capture information related to named entity recognition, word case,
and entity coreference resolution. In addition our model uses a copying
mechanism and a special answer signal that enables generation of numerous
diverse questions on a given sentence. Our model achieves state of the art
results of 19.98 Bleu_4 on a benchmark Question Generation dataset,
outperforming all previously published results by a significant margin. A human
evaluation also shows that these added features improve the quality of the
generated questions.
| 2,018 | Computation and Language |