Titles | Abstracts | Years | Categories |
---|---|---|---|
Neural Named Entity Recognition from Subword Units | Named entity recognition (NER) is a vital task in spoken language
understanding, which aims to identify mentions of named entities in text, e.g.,
from transcribed speech. Existing neural models for NER rely mostly on
dedicated word-level representations, which suffer from two main shortcomings.
First, the vocabulary size is large, yielding large memory requirements and
training time. Second, these models are not able to learn morphological or
phonological representations. To remedy the above shortcomings, we adopt a
neural solution based on bidirectional LSTMs and conditional random fields,
where we rely on subword units, namely characters, phonemes, and bytes. For
each word in an utterance, our model learns a representation from each of the
subword units. We conducted experiments in a real-world large-scale setting for
the use case of a voice-controlled device covering four languages with up to
5.5M utterances per language. Our experiments show that (1) with increasing
training data, performance of models trained solely on subword units becomes
closer to that of models with dedicated word-level embeddings (91.35 vs 93.92
F1 for English), while using a much smaller vocabulary size (332 vs 74K), (2)
subword units enhance models with dedicated word-level embeddings, and (3)
combining different subword units improves performance.
| 2019 | Computation and Language |
Learning When to Concentrate or Divert Attention: Self-Adaptive
Attention Temperature for Neural Machine Translation | Most of the Neural Machine Translation (NMT) models are based on the
sequence-to-sequence (Seq2Seq) model with an encoder-decoder framework equipped
with the attention mechanism. However, the conventional attention mechanism
treats the decoding at each time step equally with the same matrix, which is
problematic since the softness of the attention for different types of words
(e.g. content words and function words) should differ. Therefore, we propose a
new model with a mechanism called Self-Adaptive Control of Temperature (SACT)
to control the softness of attention by means of an attention temperature.
Experimental results on Chinese-English and English-Vietnamese translation
demonstrate that our model outperforms the baseline models, and the analysis
and case study show that our model can attend to the most relevant elements
in the source-side contexts and generate high-quality translations.
| 2018 | Computation and Language |
SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine
Translation | In this work, we examine methods for data augmentation for text-based tasks
such as neural machine translation (NMT). We formulate the design of a data
augmentation policy with desirable properties as an optimization problem, and
derive a generic analytic solution. This solution not only subsumes some
existing augmentation schemes, but also leads to an extremely simple data
augmentation strategy for NMT: randomly replacing words in both the source
sentence and the target sentence with other random words from their
corresponding vocabularies. We name this method SwitchOut. Experiments on three
translation datasets of different scales show that SwitchOut yields consistent
improvements of about 0.5 BLEU, achieving better or comparable performance to
strong alternatives such as word dropout (Sennrich et al., 2016a). Code to
implement this method is included in the appendix.
| 2018 | Computation and Language |
Sarcasm Analysis using Conversation Context | Computational models for sarcasm detection have often relied on the content
of utterances in isolation. However, the speaker's sarcastic intent is not
always apparent without additional context. Focusing on social media
discussions, we investigate three issues: (1) does modeling conversation
context help in sarcasm detection; (2) can we identify what part of
conversation context triggered the sarcastic reply; and (3) given a sarcastic
post that contains multiple sentences, can we identify the specific sentence
that is sarcastic. To address the first issue, we investigate several types of
Long Short-Term Memory (LSTM) networks that can model both the conversation
context and the current turn. We show that LSTM networks with sentence-level
attention on context and current turn, as well as the conditional LSTM network
(Rocktaschel et al. 2016), outperform the LSTM model that reads only the
current turn. As conversation context, we consider the prior turn, the
succeeding turn or both. Our computational models are tested on two types of
social media platforms: Twitter and discussion forums. We discuss several
differences between these datasets ranging from their size to the nature of the
gold-label annotations. To address the last two issues, we present a
qualitative analysis of attention weights produced by the LSTM models (with
attention) and discuss the results compared with human performance on the two
tasks.
| 2018 | Computation and Language |
Training Deeper Neural Machine Translation Models with Transparent
Attention | While current state-of-the-art NMT models, such as RNN seq2seq and
Transformers, possess a large number of parameters, they are still shallow in
comparison to convolutional models used for both text and vision applications.
In this work we attempt to train significantly (2-3x) deeper Transformer and
Bi-RNN encoders for machine translation. We propose a simple modification to
the attention mechanism that eases the optimization of deeper models, and
results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT'14
English-German and WMT'15 Czech-English tasks for both architectures.
| 2018 | Computation and Language |
TreeGAN: Syntax-Aware Sequence Generation with Generative Adversarial
Networks | Generative Adversarial Networks (GANs) have shown great capacity on image
generation, in which a discriminative model guides the training of a generative
model to construct images that resemble real images. Recently, GANs have been
extended from generating images to generating sequences (e.g., poems, music and
codes). Existing GANs on sequence generation mainly focus on general sequences,
which are grammar-free. In many real-world applications, however, we need to
generate sequences in a formal language with the constraint of its
corresponding grammar. For example, to test the performance of a database, one
may want to generate a collection of SQL queries, which are not only similar to
the queries of real users, but also follow the SQL syntax of the target
database. Generating such sequences is highly challenging because both the
generator and discriminator of GANs need to consider the structure of the
sequences and the given grammar in the formal language. To address these
issues, we study the problem of syntax-aware sequence generation with GANs, in
which a collection of real sequences and a set of pre-defined grammatical rules
are given to both discriminator and generator. We propose a novel GAN
framework, namely TreeGAN, to incorporate a given Context-Free Grammar (CFG)
into the sequence generation process. In TreeGAN, the generator employs a
recurrent neural network (RNN) to construct a parse tree. Each generated parse
tree can then be translated to a valid sequence of the given grammar. The
discriminator uses a tree-structured RNN to distinguish the generated trees
from real trees. We show that TreeGAN can generate sequences for any CFG and
that its generation fully conforms to the given syntax. Experiments on synthetic
and real data sets demonstrate that TreeGAN significantly improves the quality
of sequence generation in context-free languages.
| 2018 | Computation and Language |
Structured Interpretation of Temporal Relations | Temporal relations between events and time expressions in a document are
often modeled in an unstructured manner where relations between individual
pairs of time expressions and events are considered in isolation. This often
results in inconsistent and incomplete annotation and computational modeling.
We propose a novel annotation approach where events and time expressions in a
document form a dependency tree in which each dependency relation corresponds
to an instance of temporal anaphora where the antecedent is the parent and the
anaphor is the child. We annotate a corpus of 235 documents using this approach
in the two genres of news and narratives, with 48 documents doubly annotated.
We report a stable and high inter-annotator agreement on the doubly annotated
subset, validating our approach, and perform a quantitative comparison between
the two genres of the entire corpus. We make this corpus publicly available.
| 2018 | Computation and Language |
Review-Driven Multi-Label Music Style Classification by Exploiting Style
Correlations | This paper explores a new natural language processing task, review-driven
multi-label music style classification. This task requires the system to
identify multiple styles of music based on its reviews on websites. The biggest
challenge lies in the complicated relations among music styles, which have caused
many multi-label classification methods to fail. To tackle this problem, we
propose a novel deep learning approach to automatically learn and exploit style
correlations. The proposed method consists of two parts: a label-graph based
neural network, and a soft training mechanism with correlation-based continuous
label representation. Experimental results show that our approach achieves
large improvements over the baselines on the proposed dataset. In particular, the
micro F1 is improved from 53.9 to 64.5, and the one-error is reduced from 30.5
to 22.6. Furthermore, the visualized analysis shows that our approach performs
well in capturing style correlations.
| 2018 | Computation and Language |
Exploiting Rich Syntactic Information for Semantic Parsing with
Graph-to-Sequence Model | Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a
sequential LSTM, to extract word order features while neglecting other valuable
syntactic information such as dependency graphs or constituency trees. In this
paper, we first propose to use the \textit{syntactic graph} to represent three
types of syntactic information, i.e., word order, dependency and constituency
features. We further employ a graph-to-sequence model to encode the syntactic
graph and decode a logical form. Experimental results on benchmark datasets
show that our model is comparable to the state-of-the-art on Jobs640, ATIS and
Geo880. Experimental results on adversarial examples demonstrate that the robustness
of the model is also improved by encoding more syntactic information.
| 2018 | Computation and Language |
Weakly-supervised Neural Semantic Parsing with a Generative Ranker | Weakly-supervised semantic parsers are trained on utterance-denotation pairs,
treating logical forms as latent. The task is challenging due to the large
search space and spuriousness of logical forms. In this paper we introduce a
neural parser-ranker system for weakly-supervised semantic parsing. The parser
generates candidate tree-structured logical forms from utterances using clues
of denotations. These candidates are then ranked based on two criteria: their
likelihood of executing to the correct denotation, and their agreement with the
utterance semantics. We present a scheduled training procedure to balance the
contribution of the two objectives. Furthermore, we propose to use a neurally
encoded lexicon to inject prior domain knowledge into the model. Experiments on
three Freebase datasets demonstrate the effectiveness of our semantic parser,
achieving results within the state-of-the-art range.
| 2018 | Computation and Language |
Attention-Guided Answer Distillation for Machine Reading Comprehension | Although current reading comprehension systems have achieved significant
advances, their promising performance is often obtained at the cost of
making an ensemble of numerous models. Moreover, existing approaches are also
vulnerable to adversarial attacks. This paper tackles these problems by
leveraging knowledge distillation, which aims to transfer knowledge from an
ensemble model to a single model. We first demonstrate that vanilla knowledge
distillation applied to answer span prediction is effective for reading
comprehension systems. We then propose two novel approaches that not only
penalize the prediction on confusing answers but also guide the training with
alignment information distilled from the ensemble. Experiments show that our
best student model has only a slight drop of 0.4% F1 on the SQuAD test set
compared to the ensemble teacher, while running 12x faster during inference. It
even outperforms the teacher on adversarial SQuAD datasets and NarrativeQA
benchmark.
| 2018 | Computation and Language |
Arap-Tweet: A Large Multi-Dialect Twitter Corpus for Gender, Age and
Language Variety Identification | In this paper, we present Arap-Tweet, which is a large-scale and
multi-dialectal corpus of Tweets from 11 regions and 16 countries in the Arab
world representing the major Arabic dialectal varieties. To build this corpus,
we collected data from Twitter and we provided a team of experienced annotators
with annotation guidelines that they used to annotate the corpus for age
categories, gender, and dialectal variety. During the data collection effort,
we based our search on distinctive keywords that are specific to the different
Arabic dialects, and we also validated the location using the Twitter API. In this
paper, we report on the corpus data collection and annotation efforts. We also
present some issues that we encountered during these phases. Then, we present
the results of the evaluation performed to ensure the consistency of the
annotation. The provided corpus will enrich the limited set of available
language resources for Arabic and will be an invaluable enabler for developing
author profiling tools and NLP tools for Arabic.
| 2018 | Computation and Language |
Guidelines and Annotation Framework for Arabic Author Profiling | In this paper, we present the annotation pipeline and the guidelines we wrote
as part of an effort to create a large manually annotated Arabic author
profiling dataset from various social media sources covering 16 Arabic
countries and 11 dialectal regions. The target size of the annotated ARAP-Tweet
corpus is more than 2.4 million words. We illustrate and summarize our general
and dialect-specific guidelines for each of the dialectal regions selected. We
also present the annotation framework and logistics. We control the annotation
quality frequently by computing the inter-annotator agreement during the
annotation process. Finally, we describe the issues encountered during the
annotation phase, especially those related to the peculiarities of Arabic
dialectal varieties as used in social media.
| 2018 | Computation and Language |
Role of Intonation in Scoring Spoken English | In this paper, we introduce and evaluate an intonation-based feature for
scoring the English speech of non-native English speakers in the Indian context. For
this, we created an automated spoken English scoring engine to learn from the
manual evaluation of spoken English. This involved using an existing Automatic
Speech Recognition (ASR) engine to convert the speech to text. Thereafter,
macro features like accuracy, fluency and prosodic features were used to build
a scoring model. In the process, we introduced SimIntonation, short for
similarity between spoken intonation pattern and "ideal" i.e. training
intonation pattern. Our results show that it is a highly predictive feature
under controlled environment. We also categorized interword pauses into 4
distinct types for a granular evaluation of pauses and their impact on speech
evaluation. Moreover, we took steps to moderate test difficulty through its
evaluation across parameters like difficult word count, average sentence
readability and lexical density. Our results show that macro features like
accuracy and intonation, and micro features like pause-topography are strongly
predictive. The scoring of spoken English is not within the purview of this
paper.
| 2019 | Computation and Language |
End-to-End Neural Entity Linking | Entity Linking (EL) is an essential task for semantic text understanding and
information extraction. Popular methods separately address the Mention
Detection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging
their mutual dependency. We here propose the first neural end-to-end EL system
that jointly discovers and links entities in a text document. The main idea is
to consider all possible spans as potential mentions and learn contextual
similarity scores over their entity candidates that are useful for both MD and
ED decisions. Key components are context-aware mention embeddings, entity
embeddings and a probabilistic mention-entity map, without demanding other
engineered features. Empirically, we show that our end-to-end method
significantly outperforms popular systems on the Gerbil platform when enough
training data is available. Conversely, if testing datasets follow different
annotation conventions compared to the training set (e.g., queries/tweets vs.
news documents), our ED model coupled with a traditional NER system offers the
best or second best EL accuracy.
| 2018 | Computation and Language |
Mapping Text to Knowledge Graph Entities using Multi-Sense LSTMs | This paper addresses the problem of mapping natural language text to
knowledge base entities. The mapping process is approached as a composition of
a phrase or a sentence into a point in a multi-dimensional entity space
obtained from a knowledge graph. The compositional model is an LSTM equipped
with a dynamic disambiguation mechanism on the input word embeddings (a
Multi-Sense LSTM), addressing polysemy issues. Further, the knowledge base
space is prepared by collecting random walks from a graph enhanced with textual
features, which act as a set of semantic bridges between text and knowledge
base entities. The ideas of this work are demonstrated on large-scale
text-to-entity mapping and entity classification tasks, with state-of-the-art
results.
| 2018 | Computation and Language |
Revisiting the Importance of Encoding Logic Rules in Sentiment
Classification | We analyze the performance of different sentiment classification models on
syntactically complex inputs like A-but-B sentences. The first contribution of
this analysis addresses reproducible research: to meaningfully compare
different models, their accuracies must be averaged over far more random seeds
than what has traditionally been reported. With proper averaging in place, we
notice that the distillation model described in arXiv:1603.06318v4 [cs.LG],
which incorporates explicit logic rules for sentiment classification, is
ineffective. In contrast, using contextualized ELMo embeddings
(arXiv:1802.05365v2 [cs.CL]) instead of logic rules yields significantly better
performance. Additionally, we provide analysis and visualizations that
demonstrate ELMo's ability to implicitly learn logic rules. Finally, a
crowdsourced analysis reveals how ELMo outperforms baseline models even on
sentences with ambiguous sentiment labels.
| 2018 | Computation and Language |
Sentiment Index of the Russian Speaking Facebook | A sentiment index measures the average emotional level in a corpus. We
introduce four such indexes and use them to gauge average "positiveness" of a
population during some period based on posts in a social network. This article
for the first time presents a text-, rather than word-based sentiment index.
Furthermore, this study presents the first large-scale study of the sentiment
index of the Russian-speaking Facebook. Our results are consistent with the
prior experiments for the English language.
| 2018 | Computation and Language |
Style Transfer as Unsupervised Machine Translation | Language style transferring rephrases text with specific stylistic attributes
while preserving the original attribute-independent content. One main challenge
in learning a style transfer system is a lack of parallel data where the source
sentence is in one style and the target sentence in another style. With this
constraint, in this paper, we adapt unsupervised machine translation methods
for the task of automatic style transfer. We first take advantage of
style-preference information and word embedding similarity to produce
pseudo-parallel data with a statistical machine translation (SMT) framework.
Then the iterative back-translation approach is employed to jointly train two
neural machine translation (NMT) based transfer systems. To control the noise
generated during joint training, a style classifier is introduced to guarantee
the accuracy of style transfer and penalize bad candidates in the generated
pseudo data. Experiments on benchmark datasets show that our proposed method
outperforms previous state-of-the-art models in terms of both accuracy of style
transfer and quality of input-output correspondence.
| 2018 | Computation and Language |
Improving Abstraction in Text Summarization | Abstractive text summarization aims to shorten long text documents into a
human readable form that contains the most important facts from the original
document. However, the level of actual abstraction as measured by novel phrases
that do not appear in the source document remains low in existing approaches.
We propose two techniques to improve the level of abstraction of generated
summaries. First, we decompose the decoder into a contextual network that
retrieves relevant parts of the source document, and a pretrained language
model that incorporates prior knowledge about language generation. Second, we
propose a novelty metric that is optimized directly through policy learning to
encourage the generation of novel phrases. Our model achieves results
comparable to state-of-the-art models, as determined by ROUGE scores and human
evaluations, while achieving a significantly higher level of abstraction as
measured by n-gram overlap with the source document.
| 2018 | Computation and Language |
Financial Aspect-Based Sentiment Analysis using Deep Representations | The topic of aspect-based sentiment analysis (ABSA) has been explored for a
variety of industries, but it remains largely unexplored in finance. The
recent release of data for an open challenge (FiQA) from the companion
proceedings of WWW '18 has provided valuable finance-specific annotations. FiQA
contains high-quality labels, but it still lacks the data quantity needed to apply
traditional ABSA deep learning architectures. In this paper, we employ
high-level semantic representations and methods of inductive transfer learning
for NLP. We experiment with extensions of recently developed domain adaptation
methods and target task fine-tuning that significantly improve performance on a
small dataset. Our results show an 8.7% improvement in the F1 score for
classification and an 11% improvement in MSE for regression over the current
state-of-the-art results.
| 2018 | Computation and Language |
Proximal Policy Optimization and its Dynamic Version for Sequence
Generation | In sequence generation tasks, many works use policy gradient for model
optimization to tackle the intractable backpropagation issue when maximizing
the non-differentiable evaluation metrics or fooling the discriminator in
adversarial learning. In this paper, we replace policy gradient with proximal
policy optimization (PPO), a reinforcement learning algorithm proven to be more
efficient, and propose a dynamic approach for PPO (PPO-dynamic). We
demonstrate the efficacy of PPO and PPO-dynamic on conditional sequence
generation tasks, including a synthetic experiment and a chit-chat chatbot. The
results show that PPO and PPO-dynamic beat policy gradient in both stability and
performance.
| 2018 | Computation and Language |
Features of word similarity | In this theoretical note we compare different types of computational models
of word similarity and association in their ability to predict a set of about
900 rating data. Using regression and predictive modeling tools (neural net,
decision tree) the performance of a total of 28 models using different
combinations of both surface and semantic word features is evaluated. The
results present evidence for the hypothesis that word similarity ratings are
based on more than only semantic relatedness. The limited cross-validated
performance of the models calls for the development of psychological process
models of the word similarity rating task.
| 2018 | Computation and Language |
Approximate Distribution Matching for Sequence-to-Sequence Learning | Sequence-to-Sequence models were introduced to tackle many real-life problems
like machine translation, summarization, image captioning, etc. The standard
optimization algorithms are mainly based on example-to-example matching like
maximum likelihood estimation, which is known to suffer from the data sparsity
problem. Here we present an alternate view to explain sequence-to-sequence
learning as a distribution matching problem, where each source or target
example is viewed to represent a local latent distribution in the source or
target domain. Then, we interpret sequence-to-sequence learning as learning a
transductive model to transform the source local latent distributions to match
their corresponding target distributions. In our framework, we approximate both
the source and target latent distributions with recurrent neural networks
(augmenter). During training, the parallel augmenters learn to better
approximate the local latent distributions, while the sequence prediction model
learns to minimize the KL-divergence of the transformed source distributions
and the approximated target distributions. This algorithm can alleviate the
data sparsity issues in sequence learning by locally augmenting more unseen
data pairs and increasing the model's robustness. Experiments conducted on
machine translation and image captioning consistently demonstrate the
superiority of our proposed algorithm over the other competing algorithms.
| 2018 | Computation and Language |
Role Semantics for Better Models of Implicit Discourse Relations | Predicting the structure of a discourse is challenging because relations
between discourse segments are often implicit and thus hard to distinguish
computationally. I extend previous work to classify implicit discourse
relations by introducing a novel set of features on the level of semantic
roles. My results demonstrate that such features are helpful, yielding results
competitive with other feature-rich approaches on the PDTB. My main
contribution is an analysis of improvements that can be traced back to
role-based features, providing insights into why and when role semantics is
helpful.
| 2018 | Computation and Language |
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve
how Language Models Track Agreement Information | How do neural language models keep track of number agreement between subject
and verb? We show that `diagnostic classifiers', trained to predict number from
the internal states of a language model, provide a detailed understanding of
how, when, and where this information is represented. Moreover, they give us
insight into when and where number information is corrupted in cases where the
language model ends up making agreement errors. To demonstrate the causal role
played by the representations we find, we then use agreement information to
influence the course of the LSTM during the processing of difficult sentences.
Results from such an intervention reveal a large increase in the language
model's accuracy. Together, these results show that diagnostic classifiers give
us an unrivalled detailed look into the representation of linguistic
information in neural models, and demonstrate that this knowledge can be used
to improve their performance.
| 2021 | Computation and Language |
Measuring LDA Topic Stability from Clusters of Replicated Runs | Background: Unstructured and textual data is increasing rapidly and Latent
Dirichlet Allocation (LDA) topic modeling is a popular data analysis method
for it. Past work suggests that instability of LDA topics may lead to
systematic errors. Aim: We propose a method that relies on replicated LDA runs,
clustering, and providing a stability metric for the topics. Method: We
generate k LDA topics and replicate this process n times resulting in n*k
topics. Then we use K-medoids to cluster the n*k topics into k clusters. The k
clusters now represent the original LDA topics and we present them like normal
LDA topics showing the ten most probable words. For the clusters, we try
multiple stability metrics, out of which we recommend Rank-Biased Overlap,
showing the stability of the topics inside the clusters. Results: We provide an
initial validation where our method is used for 270,000 Mozilla Firefox commit
messages with k=20 and n=20. We show how our topic stability metrics are
related to the contents of the topics. Conclusions: Advances in text mining
enable us to analyze large masses of text in software engineering but
non-deterministic algorithms, such as LDA, may lead to unreplicable
conclusions. Our approach makes LDA stability transparent and is also
complementary rather than alternative to many prior works that focus on LDA
parameter tuning.
| 2018 | Computation and Language |
From Random to Supervised: A Novel Dropout Mechanism Integrated with
Global Information | Dropout is used to avoid overfitting by randomly dropping units from the
neural networks during training. Inspired by dropout, this paper presents
GI-Dropout, a novel dropout method that integrates global information to
improve neural networks for text classification. Unlike the traditional dropout
method in which the units are dropped randomly according to the same
probability, we aim to use explicit instructions based on global information of
the dataset to guide the training process. With GI-Dropout, the model is
supposed to pay more attention to inapparent features or patterns. Experiments
demonstrate the effectiveness of the dropout with global information on seven
text classification tasks, including sentiment analysis and topic
classification.
| 2018 | Computation and Language |
A Visual Attention Grounding Neural Model for Multimodal Machine
Translation | We introduce a novel multimodal machine translation model that utilizes
parallel visual and textual information. Our model jointly optimizes the
learning of a shared visual-language embedding and a translator. The model
leverages a visual attention grounding mechanism that links the visual
semantics with the corresponding textual semantics. Our approach achieves
competitive state-of-the-art results on the Multi30K and the Ambiguous COCO
datasets. We also collected a new multilingual multimodal product description
dataset to simulate a real-world international online shopping scenario. On
this dataset, our visual attention grounding model outperforms other methods by
a large margin.
| 2018 | Computation and Language |
MADARi: A Web Interface for Joint Arabic Morphological Annotation and
Spelling Correction | In this paper, we introduce MADARi, a joint morphological annotation and
spelling correction system for texts in Standard and Dialectal Arabic. The
MADARi framework provides intuitive interfaces for annotating text and managing
the annotation process of a large number of sizable documents. Morphological
annotation includes indicating, for a word, in context, its baseword, clitics,
part-of-speech, lemma, gloss, and dialect identification. MADARi has a suite of
utilities to help with annotator productivity. For example, annotators are
provided with pre-computed analyses to assist them in their task and reduce the
amount of work needed to complete it. MADARi also allows annotators to query a
morphological analyzer for a list of possible analyses in multiple dialects or
look up previously submitted analyses. The MADARi management interface enables
a lead annotator to easily manage and organize the whole annotation process
remotely and concurrently. We describe the motivation, design and
implementation of this interface; and we present details from a user study
working with this system.
| 2018 | Computation and Language |
Improving the results of string kernels in sentiment analysis and Arabic
dialect identification by adapting them to your test set | Recently, string kernels have obtained state-of-the-art results in various
text classification tasks such as Arabic dialect identification or native
language identification. In this paper, we apply two simple yet effective
transductive learning approaches to further improve the results of string
kernels. The first approach is based on interpreting the pairwise string kernel
similarities between samples in the training set and samples in the test set as
features. Our second approach is a simple self-training method based on two
learning iterations. In the first iteration, a classifier is trained on the
training set and tested on the test set, as usual. In the second iteration, a
number of test samples (to which the classifier associated higher confidence
scores) are added to the training set for another round of training. However,
the ground-truth labels of the added test samples are not necessary. Instead,
we use the labels predicted by the classifier in the first training iteration.
By adapting string kernels to the test set, we report significantly better
accuracy rates in English polarity classification and Arabic dialect
identification.
| 2018 | Computation and Language |
Churn Intent Detection in Multilingual Chatbot Conversations and Social
Media | We propose a new method to detect when users express the intent to leave a
service, also known as churn. While previous work focuses solely on social
media, we show that this intent can be detected in chatbot conversations. As
companies increasingly rely on chatbots they need an overview of potentially
churny users. To this end, we crowdsource and publish a dataset of churn intent
expressions in chatbot interactions in German and English. We show that
classifiers trained on social media data can detect the same intent in the
context of chatbots.
We introduce a classification architecture that outperforms existing work on
churn intent detection in social media. Moreover, we show that, using bilingual
word embeddings, a system trained on combined English and German data
outperforms monolingual approaches. As the only existing dataset is in English,
we crowdsource and publish a novel dataset of German tweets. We thus underline
the universal aspect of the problem, as examples of churn intent in English
help us identify churn in German tweets and chatbot conversations.
| 2018 | Computation and Language |
Meta-Learning for Low-Resource Neural Machine Translation | In this paper, we propose to extend the recently introduced model-agnostic
meta-learning algorithm (MAML) for low-resource neural machine translation
(NMT). We frame low-resource translation as a meta-learning problem, and we
learn to adapt to low-resource languages based on multilingual high-resource
language tasks. We use the universal lexical
representation~\citep{gu2018universal} to overcome the input-output mismatch
across different languages. We evaluate the proposed meta-learning strategy
using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt,
Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro,
Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach
significantly outperforms the multilingual, transfer learning based
approach~\citep{zoph2016transfer} and enables us to train a competitive NMT
system with only a fraction of training examples. For instance, the proposed
approach can achieve as high as 22.04 BLEU on Romanian-English WMT'16 by seeing
only 16,000 translated words (~600 parallel sentences).
| 2018 | Computation and Language |
Paraphrases as Foreign Languages in Multilingual Neural Machine
Translation | Paraphrases, the rewordings of the same semantic meaning, are useful for
improving generalization and translation. However, prior works only explore
paraphrases at the word or phrase level, not at the sentence or corpus level.
In contrast, we use different translations of the whole training data that are
consistent in structure as paraphrases at the corpus level. We treat
paraphrases as foreign languages, tag source sentences with paraphrase labels,
and train on parallel paraphrases in multiple languages from various sources,
in the style of multilingual Neural Machine
Translation (NMT). Our multi-paraphrase NMT that trains only on two languages
outperforms the multilingual baselines. Adding paraphrases improves the rare
word translation and increases entropy and diversity in lexical choice. Adding
the source paraphrases boosts performance better than adding the target ones.
Combining both the source and the target paraphrases lifts performance further;
combining paraphrases with multilingual data helps but has mixed performance.
We achieve a BLEU score of 57.2 for French-to-English translation using 24
corpus-level paraphrases of the Bible, which outperforms the multilingual
baselines and is +34.7 above the single-source single-target NMT baseline.
| 2019 | Computation and Language |
Comparing CNN and LSTM character-level embeddings in BiLSTM-CRF models
for chemical and disease named entity recognition | We compare the use of LSTM-based and CNN-based character-level word
embeddings in BiLSTM-CRF models to approach chemical and disease named entity
recognition (NER) tasks. Empirical results over the BioCreative V CDR corpus
show that the use of either type of character-level word embeddings in
conjunction with the BiLSTM-CRF models leads to comparable state-of-the-art
performance. However, the models using CNN-based character-level word
embeddings have a computational performance advantage, increasing training time
over word-based models by 25% while the LSTM-based character-level word
embeddings more than double the required training time.
| 2018 | Computation and Language |
Representing Social Media Users for Sarcasm Detection | We explore two methods for representing authors in the context of textual
sarcasm detection: a Bayesian approach that directly represents authors'
propensities to be sarcastic, and a dense embedding approach that can learn
interactions between the author and the text. Using the SARC dataset of Reddit
comments, we show that augmenting a bidirectional RNN with these
representations improves performance; the Bayesian approach suffices in
homogeneous contexts, whereas the added power of the dense embeddings proves
valuable in more diverse ones.
| 2018 | Computation and Language |
Exploring Recombination for Efficient Decoding of Neural Machine
Translation | In Neural Machine Translation (NMT), the decoder can capture the features of
the entire prediction history with neural connections and representations. This
means that partial hypotheses with different prefixes will be regarded
differently no matter how similar they are. However, this might be inefficient
since some partial hypotheses can contain only local differences that will not
influence future predictions. In this work, we introduce recombination in NMT
decoding based on the concept of the "equivalence" of partial hypotheses.
Heuristically, we use a simple $n$-gram suffix based equivalence function and
adapt it into beam search decoding. Through experiments on large-scale
Chinese-to-English and English-to-German translation tasks, we show that the
proposed method can obtain similar translation quality with a smaller beam
size, making NMT decoding more efficient.
| 2018 | Computation and Language |
Deep Probabilistic Logic: A Unifying Framework for Indirect Supervision | Deep learning has emerged as a versatile tool for a wide range of NLP tasks,
due to its superior capacity in representation learning. But its applicability
is limited by the reliance on annotated examples, which are difficult to
produce at scale. Indirect supervision has emerged as a promising direction to
address this bottleneck, either by introducing labeling functions to
automatically generate noisy examples from unlabeled text, or by imposing
constraints over interdependent label decisions. A plethora of methods have
been proposed, each with respective strengths and limitations. Probabilistic
logic offers a unifying language to represent indirect supervision, but
end-to-end modeling with probabilistic logic is often infeasible due to
intractable inference and learning. In this paper, we propose deep
probabilistic logic (DPL) as a general framework for indirect supervision, by
composing probabilistic logic with deep learning. DPL models label decisions as
latent variables, represents prior knowledge on their relations using weighted
first-order logical formulas, and alternates between learning a deep neural
network for the end task and refining uncertain formula weights for indirect
supervision, using variational EM. This framework subsumes prior indirect
supervision methods as special cases, and enables novel combination via
infusion of rich domain and linguistic knowledge. Experiments on biomedical
machine reading demonstrate the promise of this approach.
| 2018 | Computation and Language |
Contextual Parameter Generation for Universal Neural Machine Translation | We propose a simple modification to existing neural machine translation (NMT)
models that enables using a single universal model to translate between
multiple languages while allowing for language specific parameterization, and
that can also be used for domain adaptation. Our approach requires no changes
to the model architecture of a standard NMT system, but instead introduces a
new component, the contextual parameter generator (CPG), that generates the
parameters of the system (e.g., weights in a neural network). This parameter
generator accepts source and target language embeddings as input, and generates
the parameters for the encoder and the decoder, respectively. The rest of the
model remains unchanged and is shared across all languages. We show how this
simple modification enables the system to use monolingual data for training and
also perform zero-shot translation. We further show it is able to surpass
state-of-the-art performance for both the IWSLT-15 and IWSLT-17 datasets and
that the learned language embeddings are able to uncover interesting
relationships between languages.
| 2018 | Computation and Language |
Event Detection with Neural Networks: A Rigorous Empirical Evaluation | Detecting events and classifying them into predefined types is an important
step in knowledge extraction from natural language texts. While the neural
network models have generally led the state-of-the-art, the differences in
performance between different architectures have not been rigorously studied.
In this paper we present a novel GRU-based model that combines syntactic
information along with temporal structure through an attention mechanism. We
show that it is competitive with other neural network architectures through
empirical evaluations under different random initializations and
training-validation-test splits of the ACE2005 dataset.
| 2018 | Computation and Language |
Word Sense Induction with Neural biLM and Symmetric Patterns | An established method for Word Sense Induction (WSI) uses a language model to
predict probable substitutes for target words, and induces senses by clustering
these resulting substitute vectors.
We replace the n-gram-based language model (LM) with a recurrent one. Beyond
being more accurate, the use of the recurrent LM allows us to effectively query
it in a creative way, using what we call dynamic symmetric patterns.
The combination of the RNN-LM and the dynamic symmetric patterns results in
strong substitute vectors for WSI, allowing us to surpass the current
state-of-the-art on the SemEval 2013 WSI shared task by a large margin.
| 2018 | Computation and Language |
Semantic-Unit-Based Dilated Convolution for Multi-Label Text
Classification | We propose a novel model for multi-label text classification, which is based
on sequence-to-sequence learning. The model generates higher-level semantic
unit representations with multi-level dilated convolution as well as a
corresponding hybrid attention mechanism that extracts both the information at
the word-level and the level of the semantic unit. Our designed dilated
convolution effectively reduces dimension and supports an exponential expansion
of receptive fields without loss of local information, and the
attention-over-attention mechanism is able to capture more summary relevant
information from the source context. Results of our experiments show that the
proposed model has significant advantages over the baseline models on the
dataset RCV1-V2 and Ren-CECps, and our analysis demonstrates that our model is
competitive with the deterministic hierarchical models and more robust in
classifying low-frequency labels.
| 2018 | Computation and Language |
Analyzing Learned Representations of a Deep ASR Performance Prediction
Model | This paper addresses a relatively new task: prediction of ASR performance on
unseen broadcast programs. In a previous paper, we presented an ASR performance
prediction system using CNNs that encode both text (ASR transcript) and speech,
in order to predict word error rate. This work is dedicated to the analysis of
speech signal embeddings and text embeddings learnt by the CNN while training
our prediction model. We try to better understand which information is captured
by the deep model and its relation with different conditioning factors. It is
shown that hidden layers convey a clear signal about speech style, accent and
broadcast type. We then try to leverage these 3 types of information at
training time through multi-task learning. Our experiments show that this
allows us to train slightly more efficient ASR performance prediction systems that
- in addition - simultaneously tag the analyzed utterances according to their
speech style, accent and broadcast program origin.
| 2018 | Computation and Language |
Title-Guided Encoding for Keyphrase Generation | Keyphrase generation (KG) aims to generate a set of keyphrases given a
document, which is a fundamental task in natural language processing (NLP).
Most previous methods solve this problem in an extractive manner, while
recently, several attempts have been made under the generative setting using deep
neural networks. However, the state-of-the-art generative methods simply treat
the document title and the document main body equally, ignoring the leading
role of the title in the overall document. To solve this problem, we introduce
a new model called Title-Guided Network (TG-Net) for the automatic keyphrase
generation task, based on the encoder-decoder architecture with two new
features: (i) the title is additionally employed as a query-like input, and
(ii) a title-guided encoder gathers the relevant information from the title to
each word in the document. Experiments on a range of KG datasets demonstrate
that our model outperforms the state-of-the-art models by a large margin,
especially for documents with either very low or very high title length ratios.
| 2019 | Computation and Language |
Semi-Autoregressive Neural Machine Translation | Existing approaches to neural machine translation are typically
autoregressive models. While these models attain state-of-the-art translation
quality, they suffer from low parallelizability and are thus slow at
decoding long sequences. In this paper, we propose a novel model for fast
sequence generation --- the semi-autoregressive Transformer (SAT). The SAT
keeps the autoregressive property globally but relaxes it locally, and thus is
able to produce multiple successive words in parallel at each time step.
Experiments conducted on English-German and Chinese-English translation tasks
show that the SAT achieves a good balance between translation quality and
decoding speed. On WMT'14 English-German translation, the SAT achieves
5.58$\times$ speedup while maintaining 88\% translation quality, significantly
better than the previous non-autoregressive methods. When producing two words at
each time step, the SAT is almost lossless (only 1\% degradation in BLEU
score).
| 2018 | Computation and Language |
Semi-Supervised Event Extraction with Paraphrase Clusters | Supervised event extraction systems are limited in their accuracy due to the
lack of available training data. We present a method for self-training event
extraction systems by bootstrapping additional training data. This is done by
taking advantage of the occurrence of multiple mentions of the same event
instances across newswire articles from multiple sources. If our system can
make a high-confidence extraction of some mentions in such a cluster, it can
then acquire diverse training examples by adding the other mentions as well.
Our experiments show significant performance improvements on multiple event
extractors over the ACE 2005 and TAC-KBP 2015 datasets.
| 2018 | Computation and Language |
Identifying Domain Adjacent Instances for Semantic Parsers | When the semantics of a sentence are not representable in a semantic parser's
output schema, parsing will inevitably fail. Detection of these instances is
commonly treated as an out-of-domain classification problem. However, there is
also a more subtle scenario in which the test data is drawn from the same
domain. In addition to formalizing this problem of domain-adjacency, we present
a comparison of various baselines that could be used to solve it. We also
propose a new simple sentence representation that emphasizes words which are
unexpected. This approach improves the performance of a downstream semantic
parser run on in-domain and domain-adjacent instances.
| 2018 | Computation and Language |
Predicting Semantic Relations using Global Graph Properties | Semantic graphs, such as WordNet, are resources which curate natural language
on two distinguishable layers. On the local level, individual relations between
synsets (semantic building blocks) such as hypernymy and meronymy enhance our
understanding of the words used to express their meanings. Globally, analysis
of graph-theoretic properties of the entire net sheds light on the structure of
human language as a whole. In this paper, we combine global and local
properties of semantic graphs through the framework of Max-Margin Markov Graph
Models (M3GM), a novel extension of Exponential Random Graph Model (ERGM) that
scales to large multi-relational graphs. We demonstrate how such global
modeling improves performance on the local task of predicting semantic
relations between synsets, yielding new state-of-the-art results on the WN18RR
dataset, a challenging version of WordNet link prediction in which "easy"
reciprocal cases are removed. In addition, the M3GM model identifies
multi-relational motifs that are characteristic of well-formed lexical semantic
ontologies.
| 2018 | Computation and Language |
Fast and Accurate Recognition of Chinese Clinical Named Entities with
Residual Dilated Convolutions | Clinical Named Entity Recognition (CNER) aims to identify and classify
clinical terms such as diseases, symptoms, treatments, exams, and body parts in
electronic health records, which is a fundamental and crucial task for clinical
and translation research. In recent years, deep learning methods have achieved
significant success in CNER tasks. However, these methods depend greatly on
Recurrent Neural Networks (RNNs), which maintain a vector of hidden activations
that are propagated through time, thus making model training time-consuming.
In this paper, we propose a Residual Dilated Convolutional Neural Network with
Conditional Random Field (RD-CNN-CRF) to address this issue. Specifically, Chinese
characters and dictionary features are first projected into dense vector
representations, then they are fed into the residual dilated convolutional
neural network to capture contextual features. Finally, a conditional random
field is employed to capture dependencies between neighboring tags.
Computational results on the CCKS-2017 Task 2 benchmark dataset show that our
proposed RD-CNN-CRF method competes favorably with state-of-the-art RNN-based
methods both in terms of computational performance and training time.
| 2018 | Computation and Language |
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep
Contextualized Word Representations | In this paper we describe our system designed for the WASSA 2018 Implicit
Emotion Shared Task (IEST), which obtained 2$^{\text{nd}}$ place out of 26
teams with a test macro F1 score of $0.710$. The system is composed of a single
pre-trained ELMo layer for encoding words, a Bidirectional Long Short-Term Memory
network (BiLSTM) for enriching word representations with context, a max-pooling
operation for creating sentence representations from said word vectors, and a
Dense Layer for projecting the sentence representations into label space. Our
official submission was obtained by ensembling 6 of these models initialized
with different random seeds. The code for replicating this paper is available
at https://github.com/jabalazs/implicit_emotion.
| 2018 | Computation and Language |
Generating Text through Adversarial Training using Skip-Thought Vectors | GANs have been shown to perform exceedingly well on tasks pertaining to image
generation and style transfer. In the field of language modelling, word
embeddings such as GloVe and word2vec are state-of-the-art methods for applying
neural network models on textual data. Attempts have been made to utilize GANs
with word embeddings for text generation. This study presents an approach to
text generation using Skip-Thought sentence embeddings with GANs based on
gradient penalty functions and f-measures. The proposed architecture aims to
reproduce writing style in the generated text by modelling the way of
expression at a sentence level across all the works of an author. Extensive
experiments were run in different embedding settings on a variety of tasks
including conditional text generation and language generation. The model
outperforms baseline text generation networks across several automated
evaluation metrics like BLEU-n, METEOR and ROUGE. Further, wide applicability
and effectiveness in real life tasks are demonstrated through human judgement
scores.
| 2019 | Computation and Language |
simNet: Stepwise Image-Topic Merging Network for Generating Detailed and
Comprehensive Image Captions | The encoder-decoder framework has shown recent success in image captioning.
Visual attention, which is good at detailedness, and semantic attention, which
is good at comprehensiveness, have been separately proposed to ground the
caption on the image. In this paper, we propose the Stepwise Image-Topic
Merging Network (simNet) that makes use of the two kinds of attention at the
same time. At each time step when generating the caption, the decoder
adaptively merges the attentive information in the extracted topics and the
image according to the generated context, so that the visual information and
the semantic information can be effectively combined. The proposed approach is
evaluated on two benchmark datasets and achieves state-of-the-art
performance. (The code is available at https://github.com/lancopku/simNet.)
| 2018 | Computation and Language |
Comparing Attention-based Convolutional and Recurrent Neural Networks:
Success and Limitations in Machine Reading Comprehension | We propose a machine reading comprehension model based on the
compare-aggregate framework with two-staged attention that achieves
state-of-the-art results on the MovieQA question answering dataset. To
investigate the limitations of our model as well as the behavioral difference
between convolutional and recurrent neural networks, we generate adversarial
examples to confuse the model and compare to human performance. Furthermore, we
assess the generalizability of our model by analyzing its differences from human
inference.
| 2018 | Computation and Language |
Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional
Neural Networks for Extreme Summarization | We introduce extreme summarization, a new single-document summarization task
which does not favor extractive strategies and calls for an abstractive
modeling approach. The idea is to create a short, one-sentence news summary
answering the question "What is the article about?". We collect a real-world,
large-scale dataset for this task by harvesting online articles from the
British Broadcasting Corporation (BBC). We propose a novel abstractive model
which is conditioned on the article's topics and based entirely on
convolutional neural networks. We demonstrate experimentally that this
architecture captures long-range dependencies in a document and recognizes
pertinent content, outperforming an oracle extractive system and
state-of-the-art abstractive approaches when evaluated automatically and by
humans.
| 2,018 | Computation and Language |
Sentence Embeddings in NLI with Iterative Refinement Encoders | Sentence-level representations are necessary for various NLP tasks. Recurrent
neural networks have proven to be very effective in learning distributed
representations and can be trained efficiently on natural language inference
tasks. We build on top of one such model and propose a hierarchy of BiLSTM and
max pooling layers that implements an iterative refinement strategy and yields
state-of-the-art results on the SciTail dataset as well as strong results for
SNLI and MultiNLI. We can show that the sentence embeddings learned in this way
can be utilized in a wide variety of transfer learning tasks, outperforming
InferSent on 7 out of 10 and SkipThought on 8 out of 9 SentEval sentence
embedding evaluation tasks. Furthermore, our model beats the InferSent model in
8 out of 10 recently published SentEval probing tasks designed to evaluate
sentence embeddings' ability to capture some of the important linguistic
properties of sentences.
| 2,019 | Computation and Language |
Improving Cross-Lingual Word Embeddings by Meeting in the Middle | Cross-lingual word embeddings are becoming increasingly important in
multilingual NLP. Recently, it has been shown that these embeddings can be
effectively learned by aligning two disjoint monolingual vector spaces through
linear transformations, using no more than a small bilingual dictionary as
supervision. In this work, we propose to apply an additional transformation
after the initial alignment step, which moves cross-lingual synonyms towards a
middle point between them. By applying this transformation our aim is to obtain
a better cross-lingual integration of the vector spaces. In addition, and
perhaps surprisingly, the monolingual spaces also improve by this
transformation. This is in contrast to the original alignment, which is
typically learned such that the structure of the monolingual spaces is
preserved. Our experiments confirm that the resulting cross-lingual embeddings
outperform state-of-the-art models in both monolingual and cross-lingual
evaluation tasks.
| 2,018 | Computation and Language |
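The "meeting in the middle" step can be illustrated with a small linear-algebra sketch: after the initial alignment, learn one extra linear map per language that pulls each dictionary translation pair toward the pair's average vector. This is a simplified numpy reading of the idea, not the authors' exact procedure; the matrix names and the least-squares fit are assumptions.

```python
import numpy as np

def meet_in_the_middle(X_dict, Y_dict, X_all, Y_all):
    """After an initial cross-lingual alignment, learn a further linear map per
    language that pulls dictionary translation pairs toward their average.

    X_dict, Y_dict: (n_pairs, d) aligned vectors of known translation pairs.
    X_all,  Y_all:  (n_x, d), (n_y, d) full (already aligned) vocabularies.
    """
    midpoints = (X_dict + Y_dict) / 2.0
    # Least-squares fit of a linear map sending each side onto the midpoints.
    W_x, _, _, _ = np.linalg.lstsq(X_dict, midpoints, rcond=None)
    W_y, _, _, _ = np.linalg.lstsq(Y_dict, midpoints, rcond=None)
    return X_all @ W_x, Y_all @ W_y
```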
Amobee at IEST 2018: Transfer Learning from Language Models | This paper describes the system developed at Amobee for the WASSA 2018
implicit emotions shared task (IEST). The goal of this task was to predict the
emotion expressed by missing words in tweets without an explicit mention of
those words. We developed an ensemble system consisting of language models
together with LSTM-based networks containing a CNN attention mechanism. Our
approach represents a novel use of language models (specifically trained on a
large Twitter dataset) to predict and classify emotions. Our system reached 1st
place with a macro $\text{F}_1$ score of 0.7145.
| 2,019 | Computation and Language |
An Auto-Encoder Matching Model for Learning Utterance-Level Semantic
Dependency in Dialogue Generation | Generating semantically coherent responses is still a major challenge in
dialogue generation. Different from conventional text generation tasks, the
mapping between inputs and responses in conversations is more complicated,
which highly demands the understanding of utterance-level semantic dependency,
a relation between the whole meanings of inputs and outputs. To address this
problem, we propose an Auto-Encoder Matching (AEM) model to learn such
dependency. The model contains two auto-encoders and one mapping module. The
auto-encoders learn the semantic representations of inputs and responses, and
the mapping module learns to connect the utterance-level representations.
Experimental results from automatic and human evaluations demonstrate that our
model is capable of generating responses of high coherence and fluency compared
to baseline models. The code is available at https://github.com/lancopku/AMM
| 2,018 | Computation and Language |
A strong baseline for question relevancy ranking | The best systems at the SemEval-16 and SemEval-17 community question
answering shared tasks -- a task that amounts to question relevancy ranking --
involve complex pipelines and manual feature engineering. Despite this, many of
these still fail to beat the IR baseline, i.e., the rankings provided by
Google's search engine. We present a strong baseline for question relevancy
ranking by training a simple multi-task feed forward network on a bag of 14
distance measures for the input question pair. This baseline model, which is
fast to train and uses only language-independent features, outperforms the best
shared task systems on the task of retrieving relevant previously asked
questions.
| 2,018 | Computation and Language |
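Since the baseline is just a feed-forward network over a bag of distance measures for a question pair, it is easy to sketch. The PyTorch snippet below is an illustrative single-task variant (the paper trains a multi-task network on 14 language-independent features); the feature functions shown are examples, not the paper's exact feature set.

```python
import torch
import torch.nn as nn

class RelevancyRanker(nn.Module):
    """Small feed-forward network that maps a vector of pairwise distance
    measures to a relevancy score used for ranking."""
    def __init__(self, num_features=14, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features):    # (batch, num_features)
        return self.net(features).squeeze(-1)

def example_distance_features(q1_vec, q2_vec):
    """Two illustrative language-independent distances (the paper uses 14)."""
    cos = torch.nn.functional.cosine_similarity(q1_vec, q2_vec, dim=-1)
    euc = torch.norm(q1_vec - q2_vec, dim=-1)
    return torch.stack([cos, euc], dim=-1)
```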
WiSeBE: Window-based Sentence Boundary Evaluation | Sentence Boundary Detection (SBD) has been a major research topic since
Automatic Speech Recognition transcripts have been used for further Natural
Language Processing tasks like Part of Speech Tagging, Question Answering or
Automatic Summarization. But what about evaluation? Are standard evaluation
metrics like precision, recall, F-score or classification error, and more
importantly, the evaluation of an automatic system against a single reference,
enough to conclude how well an SBD system performs given the final application of
the transcript? In this paper we propose Window-based Sentence Boundary
Evaluation (WiSeBE), a semi-supervised metric for evaluating Sentence Boundary
Detection systems based on multi-reference (dis)agreement. We evaluate and
compare the performance of different SBD systems over a set of YouTube
transcripts using WiSeBE and standard metrics. This double evaluation gives an
understanding of how WiSeBE is a more reliable metric for the SBD task.
| 2,018 | Computation and Language |
Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and
They Are Both Weakly Supervised | We present a neural framework for opinion summarization from online product
reviews which is knowledge-lean and only requires light supervision (e.g., in
the form of product domain labels and user-provided ratings). Our method
combines two weakly supervised components to identify salient opinions and form
extractive summaries from multiple reviews: an aspect extractor trained under a
multi-task objective, and a sentiment predictor based on multiple instance
learning. We introduce an opinion summarization dataset that includes a
training set of product reviews from six diverse domains and human-annotated
development and test sets with gold standard aspect annotations, salience
labels, and opinion summaries. Automatic evaluation shows significant
improvements over baselines, and a large-scale study indicates that our opinion
summaries are preferred by human judges according to multiple criteria.
| 2,018 | Computation and Language |
Accelerating Asynchronous Stochastic Gradient Descent for Neural Machine
Translation | In order to extract the best possible performance from asynchronous
stochastic gradient descent one must increase the mini-batch size and scale the
learning rate accordingly. To achieve further speedup, we introduce a
technique that delays gradient updates, effectively increasing the mini-batch
size. Unfortunately, increasing the mini-batch size worsens the stale
gradient problem in asynchronous stochastic gradient descent (SGD) which makes
the model convergence poor. We introduce local optimizers which mitigate the
stale gradient problem and, together with fine-tuning of momentum, we are able
to train a shallow machine translation system 27% faster than an optimized
baseline with negligible penalty in BLEU.
| 2,018 | Computation and Language |
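The core "delayed gradient updates" idea amounts to gradient accumulation: sum gradients over several mini-batches before each optimizer step, which effectively multiplies the mini-batch size. The sketch below shows only that generic mechanism in PyTorch; the paper's asynchronous setting, local optimizers, and momentum tuning are not reproduced, and the function and argument names are illustrative.

```python
def train_with_delayed_updates(model, optimizer, loss_fn, batches, accumulate=4):
    """Accumulate gradients over `accumulate` mini-batches before applying the
    optimizer step, which effectively multiplies the mini-batch size."""
    optimizer.zero_grad()
    for i, (inputs, targets) in enumerate(batches, start=1):
        loss = loss_fn(model(inputs), targets) / accumulate  # keep gradient scale comparable
        loss.backward()                                      # gradients add up across batches
        if i % accumulate == 0:
            optimizer.step()
            optimizer.zero_grad()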
Extracting Sentiment Attitudes From Analytical Texts | In this paper we present the RuSentRel corpus including analytical texts in
the sphere of international relations. For each document we annotated
sentiments from the author to mentioned named entities, and sentiments of
relations between mentioned entities. In the current experiments, we considered
the problem of extracting sentiment relations between entities for the whole
documents as a three-class machine learning task. We experimented with
conventional machine-learning methods (Naive Bayes, SVM, Random Forest).
| 2,018 | Computation and Language |
Unsupervised Multilingual Word Embeddings | Multilingual Word Embeddings (MWEs) represent words from multiple languages
in a single distributional vector space. Unsupervised MWE (UMWE) methods
acquire multilingual embeddings without cross-lingual supervision, which is a
significant advantage over traditional supervised approaches and opens many new
possibilities for low-resource languages. Prior art for learning UMWEs,
however, merely relies on a number of independently trained Unsupervised
Bilingual Word Embeddings (UBWEs) to obtain multilingual embeddings. These
methods fail to leverage the interdependencies that exist among many languages.
To address this shortcoming, we propose a fully unsupervised framework for
learning MWEs that directly exploits the relations between all language pairs.
Our model substantially outperforms previous approaches in the experiments on
multilingual word translation and cross-lingual word similarity. In addition,
our model even beats supervised approaches trained with cross-lingual
resources.
| 2,018 | Computation and Language |
Why Self-Attention? A Targeted Evaluation of Neural Machine Translation
Architectures | Recently, non-recurrent architectures (convolutional, self-attentional) have
outperformed RNNs in neural machine translation. CNNs and self-attentional
networks can connect distant words via shorter network paths than RNNs, and it
has been speculated that this improves their ability to model long-range
dependencies. However, this theoretical argument has not been tested
empirically, nor have alternative explanations for their strong performance
been explored in-depth. We hypothesize that the strong performance of CNNs and
self-attentional networks could also be due to their ability to extract
semantic features from the source text, and we evaluate RNNs, CNNs and
self-attention networks on two tasks: subject-verb agreement (where capturing
long-range dependencies is required) and word sense disambiguation (where
semantic feature extraction is required). Our experimental results show that:
1) self-attentional networks and CNNs do not outperform RNNs in modeling
subject-verb agreement over long distances; 2) self-attentional networks
perform distinctly better than RNNs and CNNs on word sense disambiguation.
| 2,018 | Computation and Language |
Dissecting Contextual Word Embeddings: Architecture and Representation | Contextual word representations derived from pre-trained bidirectional
language models (biLMs) have recently been shown to provide significant
improvements to the state of the art for a wide range of NLP tasks. However,
many questions remain as to how and why these models are so effective. In this
paper, we present a detailed empirical study of how the choice of neural
architecture (e.g. LSTM, CNN, or self attention) influences both end task
accuracy and qualitative properties of the representations that are learned. We
show there is a tradeoff between speed and accuracy, but all architectures
learn high quality contextual representations that outperform word embeddings
for four challenging NLP tasks. Additionally, all architectures learn
representations that vary with network depth, from exclusively morphology-based
at the word embedding layer, through local syntax in the lower contextual
layers, to longer-range semantics such as coreference at the upper
layers. Together, these results suggest that unsupervised biLMs, independent of
architecture, are learning much more about the structure of language than
previously appreciated.
| 2,018 | Computation and Language |
Large Margin Neural Language Model | We propose a large margin criterion for training neural language models.
Conventionally, neural language models are trained by minimizing perplexity
(PPL) on grammatical sentences. However, we demonstrate that PPL may not be the
best metric to optimize in some tasks, and further propose a large margin
formulation. The proposed method aims to enlarge the margin between the "good"
and "bad" sentences in a task-specific sense. It is trained end-to-end and can
be widely applied to tasks that involve re-scoring of generated text. Compared
with minimum-PPL training, our method gains up to 1.1 WER reduction for speech
recognition and 1.0 BLEU increase for machine translation.
| 2,018 | Computation and Language |
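A large-margin criterion of this kind can be written as a pairwise hinge loss over model scores for "good" and "bad" sentences. The snippet below is a generic sketch of such a loss, assuming paired score tensors; how sentences are scored and paired is task-specific and not taken from the paper.

```python
import torch

def large_margin_loss(good_scores, bad_scores, margin=1.0):
    """Hinge loss pushing the model score of a 'good' sentence above the score
    of a paired 'bad' sentence by at least `margin`. The scores would come
    from the language model being trained (e.g. negative per-sentence loss)."""
    return torch.clamp(margin - (good_scores - bad_scores), min=0.0).mean()

# Example: good_scores and bad_scores are (batch,) tensors of LM scores for
# reference transcripts and competing hypotheses in a re-scoring task.
```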
Back-Translation Sampling by Targeting Difficult Words in Neural Machine
Translation | Neural Machine Translation has achieved state-of-the-art performance for
several language pairs using a combination of parallel and synthetic data.
Synthetic data is often generated by back-translating sentences randomly
sampled from monolingual data using a reverse translation model. While
back-translation has been shown to be very effective in many cases, it is not
entirely clear why. In this work, we explore different aspects of
back-translation, and show that words with high prediction loss during training
benefit most from the addition of synthetic data. We introduce several
variations of sampling strategies targeting difficult-to-predict words using
prediction losses and frequencies of words. In addition, we also target the
contexts of difficult words and sample sentences that are similar in context.
Experimental results for the WMT news translation task show that our method
improves translation quality by up to 1.7 and 1.2 BLEU points over
back-translation using random sampling for German-English and English-German,
respectively.
| 2,018 | Computation and Language |
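One of the simpler strategies described, sampling monolingual sentences that contain difficult-to-predict words, can be sketched as scoring each sentence by the training loss of its words and keeping the highest-scoring sentences for back-translation. The function below is an illustrative sketch under that reading; the whitespace tokenization, the aggregation by mean loss, and the variable names are assumptions.

```python
def select_for_backtranslation(monolingual_sentences, word_loss, k, default_loss=0.0):
    """Rank target-side monolingual sentences by how difficult (high
    prediction loss) their words are and keep the top k for back-translation.
    `word_loss` maps a token to its average training loss."""
    def difficulty(sentence):
        tokens = sentence.split()
        losses = [word_loss.get(tok, default_loss) for tok in tokens]
        return sum(losses) / max(len(losses), 1)

    ranked = sorted(monolingual_sentences, key=difficulty, reverse=True)
    return ranked[:k]
```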
Natural Language Generation with Neural Variational Models | In this thesis, we explore the use of deep neural networks for generation of
natural language. Specifically, we implement two sequence-to-sequence neural
variational models - variational autoencoders (VAE) and variational
encoder-decoders (VED). VAEs for text generation are difficult to train due to
issues associated with the Kullback-Leibler (KL) divergence term of the loss
function vanishing to zero. We successfully train VAEs by implementing
optimization heuristics such as KL weight annealing and word dropout. We also
demonstrate the effectiveness of this continuous latent space through
experiments such as random sampling, linear interpolation and sampling from the
neighborhood of the input. We argue that if VAEs are not designed
appropriately, they may develop bypassing connections, which result in the latent
space being ignored during training. We show experimentally with the example of
decoder hidden state initialization that such bypassing connections degrade the
VAE into a deterministic model, thereby reducing the diversity of generated
sentences. We discover that the traditional attention mechanism used in
sequence-to-sequence VED models serves as a bypassing connection, thereby
deteriorating the model's latent space. In order to circumvent this issue, we
propose the variational attention mechanism where the attention context vector
is modeled as a random variable that can be sampled from a distribution. We
show empirically using automatic evaluation metrics, namely entropy and
distinct measures, that our variational attention model generates more diverse
output sentences than the deterministic attention model. A qualitative analysis
with human evaluation study proves that our model simultaneously produces
sentences that are of high quality and equally fluent as the ones generated by
the deterministic attention counterpart.
| 2,018 | Computation and Language |
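The two optimization heuristics named for training text VAEs, KL weight annealing and word dropout, are small enough to show directly. The sketch below uses a linear annealing schedule and token-level dropout to an <unk> symbol; the schedule shape, dropout rate, and symbol names are placeholder assumptions rather than the thesis' exact settings.

```python
import random

def kl_weight(step, warmup_steps=10000):
    """Linear KL annealing: the KL term's weight grows from 0 to 1 over the
    first `warmup_steps` updates so the decoder cannot ignore the latent code
    early in training (sigmoid schedules are also common)."""
    return min(1.0, step / warmup_steps)

def word_dropout(tokens, unk_token="<unk>", p=0.3):
    """Randomly replace decoder input words with <unk> so the decoder must
    rely on the latent variable rather than copying the previous word."""
    return [unk_token if random.random() < p else tok for tok in tokens]

# total_loss = reconstruction_loss + kl_weight(step) * kl_divergence
```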
Pyramidal Recurrent Unit for Language Modeling | LSTMs are powerful tools for modeling contextual information, as evidenced by
their success at the task of language modeling. However, modeling contexts in
very high dimensional space can lead to poor generalizability. We introduce the
Pyramidal Recurrent Unit (PRU), which enables learning representations in high
dimensional space with more generalization power and fewer parameters. PRUs
replace the linear transformation in LSTMs with more sophisticated interactions
including pyramidal and grouped linear transformations. This architecture gives
strong results on word-level language modeling while reducing the number of
parameters significantly. In particular, PRU improves the perplexity of a
recent state-of-the-art language model (Merity et al., 2018) by up to 1.3 points
while learning 15-20% fewer parameters. For a similar number of model parameters,
PRU outperforms all previous RNN models that exploit different gating
mechanisms and transformations. We provide a detailed examination of the PRU
and its behavior on the language modeling tasks. Our code is open-source and
available at https://sacmehta.github.io/PRU/
| 2,018 | Computation and Language |
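The grouped linear transformation that the PRU uses in place of the LSTM's full linear map can be sketched in a few lines of PyTorch: split the input into groups and apply an independent smaller linear layer to each. This shows only that one ingredient; the pyramidal transformation and the gating of the full PRU are omitted, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GroupedLinear(nn.Module):
    """Split the input vector into `groups` chunks and apply an independent
    linear map to each chunk, which cuts the parameter count of a full
    d_in x d_out matrix roughly by a factor of `groups`."""
    def __init__(self, d_in, d_out, groups=4):
        super().__init__()
        assert d_in % groups == 0 and d_out % groups == 0
        self.groups = groups
        self.maps = nn.ModuleList(
            nn.Linear(d_in // groups, d_out // groups) for _ in range(groups)
        )

    def forward(self, x):                       # (batch, d_in)
        chunks = x.chunk(self.groups, dim=-1)
        return torch.cat([m(c) for m, c in zip(self.maps, chunks)], dim=-1)
```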
Targeted Syntactic Evaluation of Language Models | We present a dataset for evaluating the grammaticality of the predictions of
a language model. We automatically construct a large number of minimally
different pairs of English sentences, each consisting of a grammatical and an
ungrammatical sentence. The sentence pairs represent different variations of
structure-sensitive phenomena: subject-verb agreement, reflexive anaphora and
negative polarity items. We expect a language model to assign a higher
probability to the grammatical sentence than the ungrammatical one. In an
experiment using this dataset, an LSTM language model performed poorly on many
of the constructions. Multi-task training with a syntactic objective (CCG
supertagging) improved the LSTM's accuracy, but a large gap remained between
its performance and the accuracy of human participants recruited online. This
suggests that there is considerable room for improvement over LSTMs in
capturing syntax in a language model.
| 2,018 | Computation and Language |
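The evaluation protocol itself is compact: for each minimal pair, check whether the language model assigns higher probability to the grammatical sentence. The helper below sketches that check, assuming some `log_prob(sentence)` scoring function for the model under test; that function and the example pair are hypothetical.

```python
def accuracy_on_minimal_pairs(log_prob, pairs):
    """Fraction of minimal pairs where the language model assigns a higher
    probability to the grammatical sentence. `log_prob` maps a sentence
    (string) to its total log-probability under the model; `pairs` is a list
    of (grammatical, ungrammatical) sentence pairs."""
    correct = sum(log_prob(good) > log_prob(bad) for good, bad in pairs)
    return correct / len(pairs)

# Example pair (subject-verb agreement):
# ("The author laughs.", "The author laugh.")
```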
One-Shot Relational Learning for Knowledge Graphs | Knowledge graphs (KGs) are the key components of various natural language
processing applications. To further expand KGs' coverage, previous studies on
knowledge graph completion usually require a large number of training instances
for each relation. However, we observe that long-tail relations are actually
more common in KGs and those newly added relations often do not have many known
triples for training. In this work, we aim at predicting new facts under a
challenging setting where only one training instance is available. We propose a
one-shot relational learning framework, which utilizes the knowledge extracted
by embedding models and learns a matching metric by considering both the
learned embeddings and one-hop graph structures. Empirically, our model yields
considerable performance improvements over existing embedding models, and also
eliminates the need of re-training the embedding models when dealing with newly
added relations.
| 2,018 | Computation and Language |
Adversarial Decomposition of Text Representation | In this paper, we present a method for adversarial decomposition of text
representation. This method can be used to decompose a representation of an
input sentence into several independent vectors, each of them responsible for a
specific aspect of the input sentence. We evaluate the proposed method on two
case studies: the conversion between different social registers and diachronic
language change. We show that the proposed method is capable of fine-grained
controlled change of these aspects of the input sentence. It also learns a
continuous (rather than categorical) representation of the style of the
sentence, which is more linguistically realistic. The model uses
adversarial-motivational training and includes a special motivational loss,
which acts opposite to the discriminator and encourages a better decomposition.
Furthermore, we evaluate the obtained meaning embeddings on a downstream task
of paraphrase detection and show that they significantly outperform the
embeddings of a regular autoencoder.
| 2,019 | Computation and Language |
Parameter sharing between dependency parsers for related languages | Previous work has suggested that parameter sharing between transition-based
neural dependency parsers for related languages can lead to better performance,
but there is no consensus on what parameters to share. We present an evaluation
of 27 different parameter sharing strategies across 10 languages, representing
five pairs of related languages, each pair from a different language family. We
find that sharing transition classifier parameters always helps, whereas the
usefulness of sharing word and/or character LSTM parameters varies. Based on
this result, we propose an architecture where the transition classifier is
shared, and the sharing of word and character parameters is controlled by a
parameter that can be tuned on validation data. This model is linguistically
motivated and obtains significant improvements over a monolingually trained
baseline. We also find that sharing transition classifier parameters helps when
training a parser on unrelated language pairs, although in that case sharing
too many parameters does not help.
| 2,018 | Computation and Language |
An Investigation of the Interactions Between Pre-Trained Word
Embeddings, Character Models and POS Tags in Dependency Parsing | We provide a comprehensive analysis of the interactions between pre-trained
word embeddings, character models and POS tags in a transition-based dependency
parser. While previous studies have shown POS information to be less important
in the presence of character models, we show that in fact there are complex
interactions between all three techniques. In isolation each produces large
improvements over a baseline system using randomly initialised word embeddings
only, but combining them quickly leads to diminishing returns. We categorise
words by frequency, POS tag and language in order to systematically investigate
how each of the techniques affects parsing quality. For many word categories,
applying any two of the three techniques is almost as good as the full combined
system. Character models tend to be more important for low-frequency open-class
words, especially in morphologically rich languages, while POS tags can help
disambiguate high-frequency function words. We also show that large character
embedding sizes help even for languages with small character sets, especially
in morphologically rich languages.
| 2,018 | Computation and Language |
Evaluating the Utility of Hand-crafted Features in Sequence Labelling | Conventional wisdom is that hand-crafted features are redundant for deep
learning models, as they already learn adequate representations of text
automatically from corpora. In this work, we test this claim by proposing a new
method for exploiting handcrafted features as part of a novel hybrid learning
approach, incorporating a feature auto-encoder loss component. We evaluate on
the task of named entity recognition (NER), where we show that including manual
features for part-of-speech, word shapes and gazetteers can improve the
performance of a neural CRF model. We obtain an $F_1$ score of 91.89 for the
CoNLL-2003 English shared task, which significantly outperforms a collection of
highly competitive baseline models. We also present an ablation study showing
the importance of auto-encoding, over using features as either inputs or
outputs alone, and moreover, show that including the autoencoder component reduces
training requirements to 60%, while retaining the same predictive accuracy.
| 2,018 | Computation and Language |
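The auto-encoder component can be pictured as an auxiliary head that reconstructs the hand-crafted feature vector from the tagger's hidden states, with its reconstruction loss added to the tagging loss. The PyTorch sketch below illustrates that idea under the assumption of binary feature indicators; the tagger, the loss weighting, and the names are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAutoencoderHead(nn.Module):
    """Auxiliary head that reconstructs hand-crafted features (e.g. POS,
    word-shape and gazetteer indicators) from the same hidden states used
    for tagging. The tagger itself (e.g. a BiLSTM-CRF) is assumed to exist
    elsewhere."""
    def __init__(self, hidden_dim, feature_dim):
        super().__init__()
        self.decode = nn.Linear(hidden_dim, feature_dim)

    def forward(self, hidden_states, features):  # (batch, seq, hidden), (batch, seq, feat)
        recon = torch.sigmoid(self.decode(hidden_states))
        return F.binary_cross_entropy(recon, features)

# total_loss = crf_loss + lambda_ae * autoencoder_head(hidden_states, handcrafted_features)
```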
Disfluency Detection using a Noisy Channel Model and a Deep Neural
Language Model | This paper presents a model for disfluency detection in spontaneous speech
transcripts called LSTM Noisy Channel Model. The model uses a Noisy Channel
Model (NCM) to generate n-best candidate disfluency analyses and a Long
Short-Term Memory (LSTM) language model to score the underlying fluent
sentences of each analysis. The LSTM language model scores, along with other
features, are used in a MaxEnt reranker to identify the most plausible
analysis. We show that using an LSTM language model in the reranking process of
noisy channel disfluency model improves the state-of-the-art in disfluency
detection.
| 2,018 | Computation and Language |
Disfluency Detection using Auto-Correlational Neural Networks | In recent years, the natural language processing community has moved away
from task-specific feature engineering, i.e., researchers discovering ad-hoc
feature representations for various tasks, in favor of general-purpose methods
that learn the input representation by themselves. However, state-of-the-art
approaches to disfluency detection in spontaneous speech transcripts currently
still depend on an array of hand-crafted features, and other representations
derived from the output of pre-existing systems such as language models or
dependency parsers. As an alternative, this paper proposes a simple yet
effective model for automatic disfluency detection, called an
auto-correlational neural network (ACNN). The model uses a convolutional neural
network (CNN) and augments it with a new auto-correlation operator at the
lowest layer that can capture the kinds of "rough copy" dependencies that are
characteristic of repair disfluencies in speech. In experiments, the ACNN model
outperforms the baseline CNN on a disfluency detection task with a 5% increase
in F-score, which is close to the previous best result on this task.
| 2,020 | Computation and Language |
N-ary Relation Extraction using Graph State LSTM | Cross-sentence $n$-ary relation extraction detects relations among $n$
entities across multiple sentences. Typical methods formulate an input as a
\textit{document graph}, integrating various intra-sentential and
inter-sentential dependencies. The current state-of-the-art method splits the
input graph into two DAGs, adopting a DAG-structured LSTM for each. Though
being able to model rich linguistic knowledge by leveraging graph edges,
important information can be lost in the splitting procedure. We propose a
graph-state LSTM model, which uses a parallel state to model each word,
recurrently enriching state values via message passing. Compared with DAG
LSTMs, our graph LSTM keeps the original graph structure, and speeds up
computation by allowing more parallelization. On a standard benchmark, our
model shows the best result in the literature.
| 2,018 | Computation and Language |
Unsupervised Learning of Syntactic Structure with Invertible Neural
Projections | Unsupervised learning of syntactic structure is typically performed using
generative models with discrete latent variables and multinomial parameters. In
most cases, these models have not leveraged continuous word representations. In
this work, we propose a novel generative model that jointly learns discrete
syntactic structure and continuous word representations in an unsupervised
fashion by cascading an invertible neural network with a structured generative
prior. We show that the invertibility condition allows for efficient exact
inference and marginal likelihood computation in our model so long as the prior
is well-behaved. In experiments we instantiate our approach with both Markov
and tree-structured priors, evaluating on two tasks: part-of-speech (POS)
induction, and unsupervised dependency parsing without gold POS annotation. On
the Penn Treebank, our Markov-structured model surpasses state-of-the-art
results on POS induction. Similarly, we find that our tree-structured model
achieves state-of-the-art performance on unsupervised dependency parsing for
the difficult training condition where neither gold POS annotation nor
punctuation-based constraints are available.
| 2,018 | Computation and Language |
All You Need is "Love": Evading Hate-speech Detection | With the spread of social networks and their unfortunate use for hate speech,
automatic detection of the latter has become a pressing problem. In this paper,
we reproduce seven state-of-the-art hate speech detection models from prior
work, and show that they perform well only when tested on the same type of data
they were trained on. Based on these results, we argue that for successful hate
speech detection, model architecture is less important than the type of data
and labeling criteria. We further show that all proposed detection techniques
are brittle against adversaries who can (automatically) insert typos, change
word boundaries or add innocuous words to the original hate speech. A
combination of these methods is also effective against Google Perspective -- a
cutting-edge solution from industry. Our experiments demonstrate that
adversarial training does not completely mitigate the attacks, and using
character-level features makes the models systematically more attack-resistant
than using word-level features.
| 2,018 | Computation and Language |
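The attack families mentioned (typos, removed word boundaries, appended innocuous words) are simple text perturbations. The functions below sketch such perturbations for illustration only; they are not the authors' attack implementation, and the rates and the appended word are arbitrary choices.

```python
import random

def insert_typo(word):
    """Swap two adjacent characters, one of the simple perturbations shown
    to fool word-level classifiers."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(sentence, typo_rate=0.3, append_word="love"):
    """Apply typos to a fraction of words, occasionally remove a word
    boundary, and append an innocuous word."""
    words = [insert_typo(w) if random.random() < typo_rate else w
             for w in sentence.split()]
    if len(words) >= 2 and random.random() < 0.5:
        j = random.randrange(len(words) - 1)          # merge two adjacent words
        words[j:j + 2] = [words[j] + words[j + 1]]
    return " ".join(words + [append_word])
```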
WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive
Meaning Representations | By design, word embeddings are unable to model the dynamic nature of words'
semantics, i.e., the property of words to correspond to potentially different
meanings. To address this limitation, dozens of specialized meaning
representation techniques such as sense or contextualized embeddings have been
proposed. However, despite the popularity of research on this topic, very few
evaluation benchmarks exist that specifically focus on the dynamic semantics of
words. In this paper we show that existing models have surpassed the
performance ceiling of the standard evaluation dataset for the purpose, i.e.,
Stanford Contextual Word Similarity, and highlight its shortcomings. To address
the lack of a suitable benchmark, we put forward a large-scale Word in Context
dataset, called WiC, based on annotations curated by experts, for generic
evaluation of context-sensitive representations. WiC is released in
https://pilehvar.github.io/wic/.
| 2,019 | Computation and Language |
Mapping Natural Language Commands to Web Elements | The web provides a rich, open-domain environment with textual, structural,
and spatial properties. We propose a new task for grounding language in this
environment: given a natural language command (e.g., "click on the second
article"), choose the correct element on the web page (e.g., a hyperlink or
text box). We collected a dataset of over 50,000 commands that capture various
phenomena such as functional references (e.g. "find who made this site"),
relational reasoning (e.g. "article by john"), and visual reasoning (e.g.
"top-most article"). We also implemented and analyzed three baseline models
that capture different phenomena present in the dataset.
| 2,018 | Computation and Language |
Toward Fast and Accurate Neural Discourse Segmentation | Discourse segmentation, which segments texts into Elementary Discourse Units,
is a fundamental step in discourse analysis. Previous discourse segmenters rely
on complicated hand-crafted features and are not practical in actual use. In
this paper, we propose an end-to-end neural segmenter based on BiLSTM-CRF
framework. To improve its accuracy, we address the problem of data
insufficiency by transferring a word representation model that is trained on a
large corpus. We also propose a restricted self-attention mechanism in order to
capture useful information within a neighborhood. Experiments on the RST-DT
corpus show that our model is significantly faster than previous methods, while
achieving new state-of-the-art performance.
| 2,018 | Computation and Language |
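The restricted self-attention idea, attending only within a fixed neighborhood of each token, can be sketched as ordinary scaled dot-product self-attention with a band mask. The function below is a minimal PyTorch illustration; the window size, the absence of learned projections, and the masking style are simplifying assumptions.

```python
import torch

def restricted_self_attention(H, window=3):
    """Scaled dot-product self-attention in which each position may only
    attend to positions within `window` tokens of itself.
    H: (batch, seq_len, dim)."""
    batch, seq_len, dim = H.shape
    scores = H @ H.transpose(1, 2) / dim ** 0.5              # (batch, seq, seq)
    idx = torch.arange(seq_len, device=H.device)
    outside = (idx[None, :] - idx[:, None]).abs() > window   # (seq, seq) band mask
    scores = scores.masked_fill(outside, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ H                                        # (batch, seq, dim)
```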
Guided Neural Language Generation for Abstractive Summarization using
Abstract Meaning Representation | Recent work on abstractive summarization has made progress with neural
encoder-decoder architectures. However, such models are often challenged due to
their lack of explicit semantic modeling of the source document and its
summary. In this paper, we extend previous work on abstractive summarization
using Abstract Meaning Representation (AMR) with a neural language generation
stage which we guide using the source document. We demonstrate that this
guidance improves summarization results by 7.4 and 10.5 points in ROUGE-2 using
gold standard AMR parses and parses obtained from an off-the-shelf parser
respectively. We also find that the summarization performance using the latter
is 2 ROUGE-2 points higher than that of a well-established neural
encoder-decoder approach trained on a larger dataset. Code is available at
\url{https://github.com/sheffieldnlp/AMR2Text-summ}
| 2,018 | Computation and Language |
Analysing the potential of seq-to-seq models for incremental
interpretation in task-oriented dialogue | We investigate how encoder-decoder models trained on a synthetic dataset of
task-oriented dialogues process disfluencies, such as hesitations and
self-corrections. We find that, contrary to earlier results, disfluencies have
very little impact on the task success of seq-to-seq models with attention.
Using visualisation and diagnostic classifiers, we analyse the representations
that are incrementally built by the model, and discover that models develop
little to no awareness of the structure of disfluencies. However, adding
disfluencies to the data appears to help the model create clearer
representations overall, as evidenced by the attention patterns the different
models exhibit.
| 2,018 | Computation and Language |
What do character-level models learn about morphology? The case of
dependency parsing | When parsing morphologically-rich languages with neural models, it is
beneficial to model input at the character level, and it has been claimed that
this is because character-level models learn morphology. We test these claims
by comparing character-level models to an oracle with access to explicit
morphological analysis on twelve languages with varying morphological
typologies. Our results highlight many strengths of character-level models, but
also show that they are poor at disambiguating some words, particularly in the
face of case syncretism. We then demonstrate that explicitly modeling
morphological case improves our best model, showing that character-level models
can benefit from targeted forms of explicit morphological modeling.
| 2,018 | Computation and Language |
Why Do Neural Response Generation Models Prefer Universal Replies? | Recent advances in sequence-to-sequence learning reveal a purely data-driven
approach to the response generation task. Despite its diverse applications,
existing neural models are prone to producing short and generic replies, making
it infeasible to tackle open-domain challenges. In this research, we analyze
this critical issue in light of the model's optimization goal and the specific
characteristics of the human-to-human dialog corpus. By decomposing the black
box into parts, we conduct a detailed analysis of the probability limit to
reveal the reason behind these universal replies. Based on these analyses, we
propose a max-margin ranking regularization term to keep the models from leaning
toward these replies. Finally, empirical experiments on case studies and benchmarks
with several metrics validate this approach.
| 2,019 | Computation and Language |
Joint Aspect and Polarity Classification for Aspect-based Sentiment
Analysis with End-to-End Neural Networks | In this work, we propose a new model for aspect-based sentiment analysis. In
contrast to previous approaches, we jointly model the detection of aspects and
the classification of their polarity in an end-to-end trainable neural network.
We conduct experiments with different neural architectures and word
representations on the recent GermEval 2017 dataset. We were able to show
considerable performance gains by using the joint modeling approach in all
settings compared to pipeline approaches. The combination of a convolutional
neural network and fastText embeddings outperformed the best submission of the
shared task in 2017, establishing a new state of the art.
| 2,018 | Computation and Language |
Card-660: Cambridge Rare Word Dataset - a Reliable Benchmark for
Infrequent Word Representation Models | Rare word representation has recently enjoyed a surge of interest, owing to
the crucial role that effective handling of infrequent words can play in
accurate semantic understanding. However, there is a paucity of reliable
benchmarks for evaluation and comparison of these techniques. We show in this
paper that the only existing benchmark (the Stanford Rare Word dataset) suffers
from low-confidence annotations and limited vocabulary; hence, it does not
constitute a solid comparison framework. In order to fill this evaluation gap,
we propose CAmbridge Rare word Dataset (Card-660), an expert-annotated word
similarity dataset which provides a highly reliable, yet challenging, benchmark
for rare word representation techniques. Through a set of experiments we show
that even the best mainstream word embeddings, with millions of words in their
vocabularies, are unable to achieve performances higher than 0.43 (Pearson
correlation) on the dataset, compared to a human-level upper bound of 0.90. We
release the dataset and the annotation materials at
https://pilehvar.github.io/card-660/.
| 2,018 | Computation and Language |
Convolutional Neural Networks with Recurrent Neural Filters | We introduce a class of convolutional neural networks (CNNs) that utilize
recurrent neural networks (RNNs) as convolution filters. A convolution filter
is typically implemented as a linear affine transformation followed by a
non-linear function, which fails to account for language compositionality. As a
result, it limits the use of high-order filters that are often warranted for
natural language processing tasks. In this work, we model convolution filters
with RNNs that naturally capture compositionality and long-term dependencies in
language. We show that simple CNN architectures equipped with recurrent neural
filters (RNFs) achieve results that are on par with the best published ones on
the Stanford Sentiment Treebank and two answer sentence selection datasets.
| 2,018 | Computation and Language |
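A recurrent neural filter can be sketched by sliding an n-gram window over the sequence and running a small RNN over each window, taking its final hidden state as the filter response at that position. The PyTorch module below illustrates this with a GRU; the window size, dimensions, and use of a single filter bank are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RecurrentNeuralFilter(nn.Module):
    """Replace a linear convolution filter with a small RNN: each n-gram
    window is fed through a GRU and the final hidden state is the filter's
    response at that position."""
    def __init__(self, emb_dim, filter_dim, window=3):
        super().__init__()
        self.window = window
        self.gru = nn.GRU(emb_dim, filter_dim, batch_first=True)

    def forward(self, x):                       # (batch, seq_len, emb_dim)
        batch, seq_len, emb_dim = x.shape
        # Slide a window over the sequence: (batch, n_windows, window, emb_dim)
        windows = x.unfold(1, self.window, 1).permute(0, 1, 3, 2)
        n_windows = windows.size(1)
        flat = windows.reshape(batch * n_windows, self.window, emb_dim)
        _, h_n = self.gru(flat)                 # h_n: (1, batch*n_windows, filter_dim)
        return h_n.squeeze(0).view(batch, n_windows, -1)
```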
Bridging Knowledge Gaps in Neural Entailment via Symbolic Models | Most textual entailment models focus on lexical gaps between the premise text
and the hypothesis, but rarely on knowledge gaps. We focus on filling these
knowledge gaps in the Science Entailment task, by leveraging an external
structured knowledge base (KB) of science facts. Our new architecture combines
standard neural entailment models with a knowledge lookup module. To facilitate
this lookup, we propose a fact-level decomposition of the hypothesis, and
verify the resulting sub-facts against both the textual premise and the
structured KB. Our model, NSnet, learns to aggregate predictions from these
heterogeneous data formats. On the SciTail dataset, NSnet outperforms a simpler
combination of the two predictions by 3% and the base entailment model by 5%.
| 2,018 | Computation and Language |
A Discriminative Latent-Variable Model for Bilingual Lexicon Induction | We introduce a novel discriminative latent-variable model for the task of
bilingual lexicon induction. Our model combines the bipartite matching
dictionary prior of Haghighi et al. (2008) with a state-of-the-art
embedding-based approach. To train the model, we derive an efficient Viterbi EM
algorithm. We provide empirical improvements on six language pairs under two
metrics and show that the prior theoretically and empirically helps to mitigate
the hubness problem. We also demonstrate how previous work may be viewed as a
similarly fashioned latent-variable model, albeit with a different prior.
| 2,018 | Computation and Language |
Evaluating Theory of Mind in Question Answering | We propose a new dataset for evaluating question answering models with
respect to their capacity to reason about beliefs. Our tasks are inspired by
theory-of-mind experiments that examine whether children are able to reason
about the beliefs of others, in particular when those beliefs differ from
reality. We evaluate a number of recent neural models with memory augmentation.
We find that all fail on our tasks, which require keeping track of inconsistent
states of the world; moreover, the models' accuracy decreases notably when
random sentences are introduced to the tasks at test time.
| 2,018 | Computation and Language |
Xu: An Automated Query Expansion and Optimization Tool | The exponential growth of information on the Internet is a big challenge for
information retrieval systems towards generating relevant results. Novel
approaches are required to reformat or expand user queries to generate a
satisfactory response and increase recall and precision. Query expansion (QE)
is a technique to broaden users' queries by introducing additional tokens or
phrases based on some semantic similarity metrics. The tradeoff is the added
computational complexity to find semantically similar words and a possible
increase in noise in information retrieval. Despite several research efforts on
this topic, QE has not yet been explored enough and more work is needed on
similarity matching and composition of query terms, with the objective of
retrieving a small set of the most appropriate responses. QE should be scalable,
fast, and robust in handling complex queries with a good response time and
noise ceiling. In this paper, we propose Xu, an automated QE technique, using
high dimensional clustering of word vectors and Datamuse API, an open source
query engine to find semantically similar words. We implemented Xu as a command
line tool and evaluated its performance using datasets containing news
articles and human-generated QEs. The evaluation results show that Xu
outperformed Datamuse, achieving about 88% accuracy with reference to the
human-generated QEs.
| 2,019 | Computation and Language |
Universal Dependency Parsing with a General Transition-Based DAG Parser | This paper presents our experiments with applying TUPA to the CoNLL 2018 UD
shared task. TUPA is a general neural transition-based DAG parser, which we use
to present the first experiments on recovering enhanced dependencies as part of
the general parsing task. TUPA was designed for parsing UCCA, a
cross-linguistic semantic annotation scheme, exhibiting reentrancy,
discontinuity and non-terminal nodes. By converting UD trees and graphs to a
UCCA-like DAG format, we train TUPA almost without modification on the UD
parsing task. The generic nature of our approach lends itself naturally to
multitask learning. Our code is available at
https://github.com/CoNLL-UD-2018/HUJI
| 2,018 | Computation and Language |
Rational Recurrences | Despite the tremendous empirical success of neural models in natural language
processing, many of them lack the strong intuitions that accompany classical
machine learning approaches. Recently, connections have been shown between
convolutional neural networks (CNNs) and weighted finite state automata
(WFSAs), leading to new interpretations and insights. In this work, we show
that some recurrent neural networks also share this connection to WFSAs. We
characterize this connection formally, defining rational recurrences to be
recurrent hidden state update functions that can be written as the Forward
calculation of a finite set of WFSAs. We show that several recent neural models
use rational recurrences. Our analysis provides a fresh view of these models
and facilitates devising new neural architectures that draw inspiration from
WFSAs. We present one such model, which performs better than two recent
baselines on language modeling and text classification. Our results demonstrate
that transferring intuitions from classical models like WFSAs can be an
effective approach to designing and understanding neural models.
| 2,018 | Computation and Language |
Deriving Machine Attention from Human Rationales | Attention-based models are successful when trained on large amounts of data.
In this paper, we demonstrate that even in the low-resource scenario, attention
can be learned effectively. To this end, we start with discrete human-annotated
rationales and map them into continuous attention. Our central hypothesis is
that this mapping is general across domains, and thus can be transferred from
resource-rich domains to low-resource ones. Our model jointly learns a
domain-invariant representation and induces the desired mapping between
rationales and attention. Our empirical results validate this hypothesis and
show that our approach delivers significant gains over state-of-the-art
baselines, yielding over 15% average error reduction on benchmark datasets.
| 2,018 | Computation and Language |
A Tree-based Decoder for Neural Machine Translation | Recent advances in Neural Machine Translation (NMT) show that adding
syntactic information to NMT systems can improve the quality of their
translations. Most existing work utilizes some specific types of
linguistically-inspired tree structures, like constituency and dependency parse
trees. This is often done via a standard RNN decoder that operates on a
linearized target tree structure. However, it remains an open question which
specific linguistic formalism, if any, is the best structural representation
for NMT. In this paper, we (1) propose an NMT model that can naturally generate
the topology of an arbitrary tree structure on the target side, and (2)
experiment with various target tree structures. Our experiments show the
surprising result that our model delivers the best improvements with balanced
binary trees constructed without any linguistic knowledge; this model
outperforms standard seq2seq models by up to 2.1 BLEU points, and other methods
for incorporating target-side syntax by up to 0.7 BLEU.
| 2,018 | Computation and Language |
Understanding Back-Translation at Scale | An effective method to improve neural machine translation with monolingual
data is to augment the parallel training corpus with back-translations of
target language sentences. This work broadens the understanding of
back-translation and investigates a number of methods to generate synthetic
source sentences. We find that in all but resource-poor settings,
back-translations obtained via sampling or noised beam outputs are most
effective. Our analysis shows that sampling or noisy synthetic data gives a
much stronger training signal than data generated by beam or greedy search. We
also compare how synthetic data compares to genuine bitext and study various
domain effects. Finally, we scale to hundreds of millions of monolingual
sentences and achieve a new state of the art of 35 BLEU on the WMT'14
English-German test set.
| 2,018 | Computation and Language |