Titles (string, length 6–220) | Abstracts (string, length 37–3.26k) | Years (int64, 1.99k–2.02k) | Categories (string, 1 class) |
---|---|---|---|
What Makes Reading Comprehension Questions Easier? | A challenge in creating a dataset for machine reading comprehension (MRC) is
to collect questions that require a sophisticated understanding of language to
answer beyond using superficial cues. In this work, we investigate what makes
questions easier across 12 recent MRC datasets with three question styles
(answer extraction, description, and multiple choice). We propose to employ
simple heuristics to split each dataset into easy and hard subsets and examine
the performance of two baseline models for each of the subsets. We then
manually annotate questions sampled from each subset with both validity and
requisite reasoning skills to investigate which skills explain the difference
between easy and hard questions. From this study, we observed that (i) the
baseline performance on the hard subsets degrades markedly compared to that
on the entire datasets, (ii) hard questions require knowledge inference and
multiple-sentence reasoning in comparison with easy questions, and (iii)
multiple-choice questions tend to require a broader range of reasoning skills
than answer extraction and description questions. These results suggest that
one might overestimate recent advances in MRC.
| 2,018 | Computation and Language |
Framing and Agenda-setting in Russian News: a Computational Analysis of
Intricate Political Strategies | Amidst growing concern over media manipulation, NLP attention has focused on
overt strategies like censorship and "fake news". Here, we draw on two
concepts from the political science literature to explore subtler strategies
for government media manipulation: agenda-setting (selecting what topics to
cover) and framing (deciding how topics are covered). We analyze 13 years (100K
articles) of the Russian newspaper Izvestia and identify a strategy of
distraction: articles mention the U.S. more frequently in the month directly
following an economic downturn in Russia. We introduce embedding-based methods
for cross-lingually projecting English frames to Russian, and discover that
these articles emphasize U.S. moral failings and threats to the U.S. Our work
offers new ways to identify subtle media manipulation strategies at the
intersection of agenda-setting and framing.
| 2,018 | Computation and Language |
Temporal Information Extraction by Predicting Relative Time-lines | The current leading paradigm for temporal information extraction from text
consists of three phases: (1) recognition of events and temporal expressions,
(2) recognition of temporal relations among them, and (3) time-line
construction from the temporal relations. In contrast to the first two phases,
the last phase, time-line construction, has received little attention and is the
focus of this work. In this paper, we propose a new method to construct a
linear time-line from a set of (extracted) temporal relations. But more
importantly, we propose a novel paradigm in which we directly predict start and
end-points for events from the text, constituting a time-line without going
through the intermediate step of prediction of temporal relations as in earlier
work. Within this paradigm, we propose two models that predict in linear
complexity, and a new training loss using TimeML-style annotations, yielding
promising results.
| 2,023 | Computation and Language |
Privacy-preserving Neural Representations of Text | This article deals with adversarial attacks towards deep learning systems for
Natural Language Processing (NLP), in the context of privacy protection. We
study a specific type of attack: an attacker eavesdrops on the hidden
representations of a neural text classifier and tries to recover information
about the input text. Such a scenario may arise when the
computation of a neural network is shared across multiple devices, e.g. some
hidden representation is computed by a user's device and sent to a cloud-based
model. We measure the privacy of a hidden representation by the ability of an
attacker to accurately predict specific private information from it and
characterize the tradeoff between the privacy and the utility of neural
representations. Finally, we propose several defense methods based on modified
training objectives and show that they improve the privacy of neural
representations.
| 2,018 | Computation and Language |
Semantic Role Labeling for Learner Chinese: the Importance of Syntactic
Parsing and L2-L1 Parallel Data | This paper studies semantic parsing for interlanguage (L2), taking semantic
role labeling (SRL) as a case task and learner Chinese as a case language. We
first manually annotate the semantic roles for a set of learner texts to derive
a gold standard for automatic SRL. Based on the new data, we then evaluate
three off-the-shelf SRL systems, i.e., the PCFGLA-parser-based,
neural-parser-based and neural-syntax-agnostic systems, to gauge how successful
SRL for learner Chinese can be. We find two non-obvious facts: 1) the
L1-sentence-trained systems perform rather badly on the L2 data; 2) the
performance drop from the L1 data to the L2 data of the two parser-based
systems is much smaller, indicating the importance of syntactic parsing in SRL
for interlanguages. Finally, the paper introduces a new agreement-based model
to explore the semantic coherency information in the large-scale L2-L1 parallel
data. We then show that such information is very effective in enhancing SRL for
learner texts. Our model achieves an F-score of 72.06, which is a 2.02 point
improvement over the best baseline.
| 2,018 | Computation and Language |
Identifying Well-formed Natural Language Questions | Understanding search queries is a hard problem as it involves dealing with
"word salad" text ubiquitously issued by users. However, if a query resembles a
well-formed question, a natural language processing pipeline is able to perform
more accurate interpretation, thus reducing downstream compounding errors.
Hence, identifying whether or not a query is well formed can enhance query
understanding. Here, we introduce a new task of identifying a well-formed
natural language question. We construct and release a dataset of 25,100
publicly available questions classified into well-formed and non-wellformed
categories and report an accuracy of 70.7% on the test set. We also show that
our classifier can be used to improve the performance of neural
sequence-to-sequence models for generating questions for reading comprehension.
| 2,018 | Computation and Language |
WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling
Language and Discourse | We release a corpus of 43 million atomic edits across 8 languages. These
edits are mined from Wikipedia edit history and consist of instances in which a
human editor has inserted a single contiguous phrase into, or deleted a single
contiguous phrase from, an existing sentence. We use the collected data to show
that the language generated during editing differs from the language that we
observe in standard corpora, and that models trained on edits encode different
aspects of semantics and discourse than models trained on raw, unstructured
text. We release the full corpus as a resource to aid ongoing research in
semantics, discourse, and representation learning.
| 2,018 | Computation and Language |
Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning | This paper presents a Discriminative Deep Dyna-Q (D3Q) approach to improving
the effectiveness and robustness of Deep Dyna-Q (DDQ), a recently proposed
framework that extends the Dyna-Q algorithm to integrate planning for
task-completion dialogue policy learning. To obviate DDQ's high dependency on
the quality of simulated experiences, we incorporate an RNN-based discriminator
in D3Q to differentiate simulated experience from real user experience in order
to control the quality of training data. Experiments show that D3Q
significantly outperforms DDQ by controlling the quality of simulated
experience used for planning. The effectiveness and robustness of D3Q are
further demonstrated in a domain extension setting, where the agent's
capability of adapting to a changing environment is tested.
| 2,018 | Computation and Language |
Graphene: A Context-Preserving Open Information Extraction System | We introduce Graphene, an Open IE system whose goal is to generate accurate,
meaningful and complete propositions that may facilitate a variety of
downstream semantic applications. For this purpose, we transform syntactically
complex input sentences into clean, compact structures in the form of core
facts and accompanying contexts, while identifying the rhetorical relations
that hold between them in order to maintain their semantic relationship. In
that way, we preserve the context of the relational tuples extracted from a
source sentence, generating a novel lightweight semantic representation for
Open IE that enhances the expressiveness of the extracted propositions.
| 2,018 | Computation and Language |
The University of Cambridge's Machine Translation Systems for WMT18 | The University of Cambridge submission to the WMT18 news translation task
focuses on the combination of diverse models of translation. We compare
recurrent, convolutional, and self-attention-based neural models on
German-English, English-German, and Chinese-English. Our final system combines
all neural models together with a phrase-based SMT system in an MBR-based
scheme. We report small but consistent gains on top of strong Transformer
ensembles.
| 2,018 | Computation and Language |
Learning To Split and Rephrase From Wikipedia Edit History | Split and rephrase is the task of breaking down a sentence into shorter ones
that together convey the same meaning. We extract a rich new dataset for this
task by mining Wikipedia's edit history: WikiSplit contains one million
naturally occurring sentence rewrites, providing sixty times more distinct
split examples and a ninety times larger vocabulary than the WebSplit corpus
introduced by Narayan et al. (2017) as a benchmark for this task. Incorporating
WikiSplit as training data produces a model with qualitatively better
predictions that score 32 BLEU points above the prior best result on the
WebSplit benchmark.
| 2,018 | Computation and Language |
Residualized Factor Adaptation for Community Social Media Prediction
Tasks | Predictive models over social media language have shown promise in capturing
community outcomes, but approaches thus far largely neglect the
socio-demographic context (e.g. age, education rates, race) of the community
from which the language originates. For example, it may be inaccurate to assume
people in Mobile, Alabama, where the population is relatively older, will use
words the same way as those from San Francisco, where the median age is younger
with a higher rate of college education. In this paper, we present residualized
factor adaptation, a novel approach to community prediction tasks which both
(a) effectively integrates community attributes and (b) adapts
linguistic features to community attributes (factors). We use eleven
demographic and socioeconomic attributes, and evaluate our approach over five
different community-level predictive tasks, spanning health (heart disease
mortality, percent fair/poor health), psychology (life satisfaction), and
economics (percent housing price increase, foreclosure rate). Our evaluation
shows that residualized factor adaptation significantly improves 4 out of 5
community-level outcome predictions over prior state-of-the-art for
incorporating socio-demographic contexts.
| 2,018 | Computation and Language |
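The residualize-then-adapt recipe described above can be sketched in a few lines. This is an assumed reading of the abstract rather than the authors' released code; the Ridge regressors, feature counts, and outer-product adaptation are illustrative choices.

```python
# Sketch of residualized factor adaptation (assumed reading of the abstract):
# regress the outcome on socio-demographic factors, then let language features
# (and factor-adapted language features) explain the residual.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_communities, n_lang, n_factors = 200, 50, 11        # e.g. 11 demographic attributes
X_lang = rng.normal(size=(n_communities, n_lang))      # language features per community
X_fact = rng.normal(size=(n_communities, n_factors))   # socio-demographic factors
y = rng.normal(size=n_communities)                     # outcome, e.g. heart disease mortality

# 1) Residualize: fit factors alone, keep what they cannot explain.
factor_model = Ridge(alpha=1.0).fit(X_fact, y)
residual = y - factor_model.predict(X_fact)

# 2) Adapt: interact each language feature with each factor (outer-product features).
X_adapted = np.einsum('ij,ik->ijk', X_lang, X_fact).reshape(n_communities, -1)

# 3) Predict the residual from language + adapted features.
X_full = np.hstack([X_lang, X_adapted])
lang_model = Ridge(alpha=10.0).fit(X_full, residual)
y_hat = factor_model.predict(X_fact) + lang_model.predict(X_full)
```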
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader
Model for Open-domain Question Answering | Open-domain question answering remains a challenging task as it requires
models that are capable of understanding questions and answers, collecting
useful information, and reasoning over evidence. Previous work typically
formulates this task as a reading comprehension or entailment problem given
evidence retrieved from search engines. However, existing techniques struggle
to retrieve indirectly related evidence when no directly related evidence is
provided, especially for complex questions where it is hard to parse precisely
what the question asks. In this paper we propose a retriever-reader model that
learns to attend on essential terms during the question answering process. We
build (1) an essential term selector which first identifies the most important
words in a question, then reformulates the query and searches for related
evidence; and (2) an enhanced reader that distinguishes between essential terms
and distracting words to predict the answer. We evaluate our model on multiple
open-domain multiple-choice QA datasets, notably performing at the level of the
state-of-the-art on the AI2 Reasoning Challenge (ARC) dataset.
| 2,019 | Computation and Language |
Adapting Word Embeddings to New Languages with Morphological and
Phonological Subword Representations | Much work in Natural Language Processing (NLP) has been for resource-rich
languages, making generalization to new, less-resourced languages challenging.
We present two approaches for improving generalization to low-resourced
languages by adapting continuous word representations using linguistically
motivated subword units: phonemes, morphemes and graphemes. Our method requires
neither parallel corpora nor bilingual dictionaries and provides a significant
gain in performance over previous methods relying on these resources. We
demonstrate the effectiveness of our approaches on Named Entity Recognition for
four languages, namely Uyghur, Turkish, Bengali and Hindi, of which Uyghur and
Bengali are low resource languages, and also perform experiments on Machine
Translation. Exploiting subwords with transfer learning gives us a boost of
+15.2 NER F1 for Uyghur and +9.7 F1 for Bengali. We also show improvements in
the monolingual setting where we achieve (avg.) +3 F1 and (avg.) +1.35 BLEU.
| 2,018 | Computation and Language |
Semantic Matching Against a Corpus: New Applications and Methods | We consider the case of a domain expert who wishes to explore the extent to
which a particular idea is expressed in a text collection. We propose the task
of semantically matching the idea, expressed as a natural language proposition,
against a corpus. We create two preliminary tasks derived from existing
datasets, and then introduce a more realistic one on disaster recovery designed
for emergency managers, whom we engaged in a user study. On the latter, we find
that a new model built from natural language entailment data produces
higher-quality matches than simple word-vector averaging, both on
expert-crafted queries and on ones produced by the subjects themselves. This
work provides a proof-of-concept for such applications of semantic matching and
illustrates key challenges.
| 2,018 | Computation and Language |
Layer Trajectory LSTM | It is popular to stack LSTM layers to get better modeling power, especially
when a large amount of training data is available. However, an LSTM-RNN with too
many vanilla LSTM layers is very hard to train, and the vanishing-gradient
issue persists if the network goes too deep. This issue can be
partially solved by adding skip connections between layers, such as residual
LSTM. In this paper, we propose a layer trajectory LSTM (ltLSTM) which builds a
layer-LSTM using all the layer outputs from a standard multi-layer time-LSTM.
This layer-LSTM scans the outputs from time-LSTMs, and uses the summarized
layer trajectory information for final senone classification. The
forward-propagation of time-LSTM and layer-LSTM can be handled in two separate
threads in parallel so that the network computation time is the same as the
standard time-LSTM. With a layer-LSTM running through layers, a gated path is
provided from the output layer to the bottom layer, alleviating the gradient
vanishing issue. Trained with 30 thousand hours of EN-US Microsoft internal
data, the proposed ltLSTM performed significantly better than the standard
multi-layer LSTM and residual LSTM, with up to 9.0% relative word error rate
reduction across different tasks.
| 2,018 | Computation and Language |
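A rough sketch of the layer-trajectory idea follows. It is an illustrative reading of the abstract, not the Microsoft implementation: the dimensions, the senone count, and the use of the last layer-step as the trajectory summary are assumptions.

```python
# Layer-trajectory sketch: time-LSTMs run over time as usual, and a separate
# layer-LSTM scans the per-layer outputs at each frame to summarize the trajectory.
import torch
import torch.nn as nn

class LayerTrajectoryLSTM(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, num_layers=4, num_senones=1000):
        super().__init__()
        self.time_lstms = nn.ModuleList(
            [nn.LSTM(feat_dim if i == 0 else hidden, hidden, batch_first=True)
             for i in range(num_layers)])
        self.layer_lstm = nn.LSTM(hidden, hidden, batch_first=True)  # scans over layers
        self.classifier = nn.Linear(hidden, num_senones)

    def forward(self, x):                       # x: (batch, time, feat_dim)
        layer_outputs = []
        h = x
        for lstm in self.time_lstms:             # ordinary multi-layer time-LSTM
            h, _ = lstm(h)
            layer_outputs.append(h)
        stacked = torch.stack(layer_outputs, dim=2)   # (batch, time, layers, hidden)
        b, t, l, d = stacked.shape
        traj, _ = self.layer_lstm(stacked.reshape(b * t, l, d))
        summary = traj[:, -1, :].reshape(b, t, d)     # last layer-step summarizes the trajectory
        return self.classifier(summary)               # senone logits per frame

logits = LayerTrajectoryLSTM()(torch.randn(2, 100, 80))
```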
Hierarchical Quantized Representations for Script Generation | Scripts define knowledge about how everyday scenarios (such as going to a
restaurant) are expected to unfold. One of the challenges to learning scripts
is the hierarchical nature of the knowledge. For example, a suspect arrested
might plead innocent or guilty, and a very different track of events is then
expected to happen. To capture this type of information, we propose an
autoencoder model with a latent space defined by a hierarchy of categorical
variables. We utilize a recently proposed vector quantization based approach,
which allows continuous embeddings to be associated with each latent variable
value. This permits the decoder to softly decide what portions of the latent
hierarchy to condition on by attending over the value embeddings for a given
setting. Our model effectively encodes and generates scripts, outperforming a
recent language modeling-based method on several standard tasks, and allowing
the autoencoder model to achieve substantially lower perplexity scores compared
to the previous language modeling-based method.
| 2,018 | Computation and Language |
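The vector-quantization step the abstract builds on (a VQ-VAE-style codebook lookup with a straight-through estimator) can be written compactly; the hierarchy of categorical variables and the decoder attention over value embeddings are not shown, and all shapes are assumptions.

```python
# Minimal vector-quantization step: map continuous encoder outputs to the
# nearest codebook entry, which associates a continuous embedding with each
# categorical latent value.
import torch

def vector_quantize(z, codebook):
    # z: (batch, dim) continuous encoder outputs; codebook: (K, dim) value embeddings
    dists = torch.cdist(z, codebook)            # (batch, K) Euclidean distances
    idx = dists.argmin(dim=1)                   # categorical latent value per example
    z_q = codebook[idx]                         # continuous embedding of the chosen value
    # straight-through estimator: gradients flow to z, values come from the codebook
    return z + (z_q - z).detach(), idx

codebook = torch.randn(16, 64, requires_grad=True)
z_q, idx = vector_quantize(torch.randn(8, 64), codebook)
```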
Towards Semi-Supervised Learning for Deep Semantic Role Labeling | Neural models have achieved state-of-the-art performance on Semantic
Role Labeling (SRL). However, these models require an immense amount of
annotated semantic-role data and are thus not well suited for low-resource languages
or domains. The paper proposes a semi-supervised semantic role labeling method
that outperforms the state of the art when SRL training corpora are limited. The
method is based on explicitly enforcing syntactic constraints by augmenting the
training objective with a syntactic-inconsistency loss component and uses
SRL-unlabeled instances to train a joint-objective LSTM. On CoNLL-2012 English
section, the proposed semi-supervised training with 1%, 10% SRL-labeled data
and varying amounts of SRL-unlabeled data achieves +1.58, +0.78 F1,
respectively, over the pre-trained models that were trained on SOTA
architecture with ELMo on the same SRL-labeled data. Additionally, by using the
syntactic-inconsistency loss at inference time, the proposed model achieves
+3.67, +2.1 F1 over the pre-trained model on 1%, 10% SRL-labeled data,
respectively.
| 2,018 | Computation and Language |
Explaining Character-Aware Neural Networks for Word-Level Prediction: Do
They Discover Linguistic Rules? | Character-level features are currently used in different neural network-based
natural language processing algorithms. However, little is known about the
character-level patterns those models learn. Moreover, models are often
compared only quantitatively while a qualitative analysis is missing. In this
paper, we investigate which character-level patterns neural networks learn and
if those patterns coincide with manually-defined word segmentations and
annotations. To that end, we extend the contextual decomposition technique
(Murdoch et al. 2018) to convolutional neural networks, which allows us to
compare convolutional neural networks and bidirectional long short-term memory
networks. We evaluate and compare these models for the task of morphological
tagging on three morphologically different languages and show that these models
implicitly discover understandable linguistic rules. Our implementation can be
found at https://github.com/FredericGodin/ContextualDecomposition-NLP .
| 2,018 | Computation and Language |
Multi-Reference Training with Pseudo-References for Neural Translation
and Text Generation | Neural text generation, including neural machine translation, image
captioning, and summarization, has been quite successful recently. However,
during training time, typically only one reference is considered for each
example, even though there are often multiple references available, e.g., 4
references in NIST MT evaluations, and 5 references in image captioning data.
We first investigate several different ways of utilizing multiple human
references during training. But more importantly, we then propose an algorithm
to generate exponentially many pseudo-references by first compressing existing
human references into lattices and then traversing them to generate new
pseudo-references. These approaches lead to substantial improvements over
strong baselines in both machine translation (+1.5 BLEU) and image captioning
(+3.1 BLEU / +11.7 CIDEr).
| 2,018 | Computation and Language |
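As a toy illustration of the lattice idea (not the paper's algorithm), two human references that share a word can already be crossed at that word to produce new pseudo-references; the full method compresses all references into a lattice and traverses it.

```python
# Toy pseudo-reference generation: cross two references at any shared word,
# the simplest case of traversing a reference lattice.
def cross_references(ref_a, ref_b):
    a, b = ref_a.split(), ref_b.split()
    pseudo = set()
    for i, wa in enumerate(a):
        for j, wb in enumerate(b):
            if wa == wb:                        # shared node in the lattice
                pseudo.add(" ".join(a[:i] + b[j:]))
                pseudo.add(" ".join(b[:j] + a[i:]))
    return pseudo

print(cross_references("the cat sat on the mat", "a cat lay on a rug"))
```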
Breaking the Beam Search Curse: A Study of (Re-)Scoring Methods and
Stopping Criteria for Neural Machine Translation | Beam search is widely used in neural machine translation, and usually
improves translation quality compared to greedy search. However, it has been
widely observed that beam sizes larger than 5 hurt translation quality. We
explain why this happens, and propose several methods to address this problem.
Furthermore, we discuss the optimal stopping criteria for these methods.
Results show that our hyperparameter-free methods outperform the widely-used
hyperparameter-free heuristic of length normalization by +2.0 BLEU, and achieve
the best results among all methods on Chinese-to-English translation.
| 2,018 | Computation and Language |
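For context, the hyperparameter-free length-normalization heuristic the paper's methods are compared against is simply the average per-token log-probability; the sketch below is the standard formulation, not code from the paper.

```python
# Length normalization: rescore each finished hypothesis by its average
# per-token log-probability instead of the raw sum.
def length_normalized_score(logprobs):
    """logprobs: list of per-token log-probabilities of one hypothesis."""
    return sum(logprobs) / len(logprobs)

short = [-1.0, -1.0]                  # 2 tokens, raw score -2.0
long_ = [-0.5] * 6                    # 6 tokens, raw score -3.0
assert sum(short) > sum(long_)                                      # raw scoring prefers the short output
assert length_normalized_score(long_) > length_normalized_score(short)  # normalization flips it
```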
Mapping Language to Code in Programmatic Context | Source code is rarely written in isolation. It depends significantly on the
programmatic context, such as the class that the code would reside in. To study
this phenomenon, we introduce the task of generating class member functions
given English documentation and the programmatic context provided by the rest
of the class. This task is challenging because the desired code can vary
greatly depending on the functionality the class provides (e.g., a sort
function may or may not be available when we are asked to "return the smallest
element" in a particular member variable list). We introduce CONCODE, a new
large dataset with over 100,000 examples consisting of Java classes from online
code repositories, and develop a new encoder-decoder architecture that models
the interaction between the method documentation and the class environment. We
also present a detailed error analysis suggesting that there is significant
room for future work on this task.
| 2,018 | Computation and Language |
Multi-Task Identification of Entities, Relations, and Coreference for
Scientific Knowledge Graph Construction | We introduce a multi-task setup of identifying and classifying entities,
relations, and coreference clusters in scientific articles. We create SciERC, a
dataset that includes annotations for all three tasks and develop a unified
framework called Scientific Information Extractor (SciIE) with shared span
representations. The multi-task setup reduces cascading errors between tasks
and leverages cross-sentence relations through coreference links. Experiments
show that our multi-task model outperforms previous models in scientific
information extraction without using any domain-specific features. We further
show that the framework supports construction of a scientific knowledge graph,
which we use to analyze information in scientific literature.
| 2,018 | Computation and Language |
Improved Semantic-Aware Network Embedding with Fine-Grained Word
Alignment | Network embeddings, which learn low-dimensional representations for each
vertex in a large-scale network, have received considerable attention in recent
years. For a wide range of applications, vertices in a network are typically
accompanied by rich textual information such as user profiles, paper abstracts,
etc. We propose to incorporate semantic features into network embeddings by
matching important words between text sequences for all pairs of vertices. We
introduce a word-by-word alignment framework that measures the compatibility of
embeddings between word pairs, and then adaptively accumulates these alignment
features with a simple yet effective aggregation function. In experiments, we
evaluate the proposed framework on three real-world benchmarks for downstream
tasks, including link prediction and multi-label vertex classification. Results
demonstrate that our model outperforms state-of-the-art network embedding
methods by a large margin.
| 2,018 | Computation and Language |
Decoupling Strategy and Generation in Negotiation Dialogues | We consider negotiation settings in which two agents use natural language to
bargain on goods. Agents need to decide on both high-level strategy (e.g.,
proposing \$50) and the execution of that strategy (e.g., generating "The bike
is brand new. Selling for just \$50."). Recent work on negotiation trains
neural models, but their end-to-end nature makes it hard to control their
strategy, and reinforcement learning tends to lead to degenerate solutions. In
this paper, we propose a modular approach based on coarse dialogue acts
(e.g., propose(price=50)) that decouples strategy and generation. We show that
we can flexibly set the strategy using supervised learning, reinforcement
learning, or domain-specific knowledge without degeneracy, while our
retrieval-based generation can maintain context-awareness and produce diverse
utterances. We test our approach on the recently proposed DEALORNODEAL game,
and we also collect a richer dataset based on real items on Craigslist. Human
evaluation shows that our systems achieve higher task success rate and more
human-like negotiation behavior than previous approaches.
| 2,018 | Computation and Language |
On Tree-Based Neural Sentence Modeling | Neural networks with tree-based sentence encoders have shown better results
on many downstream tasks. Most existing tree-based encoders adopt syntactic
parsing trees as the explicit structure prior. To study the effectiveness of
different tree structures, we replace the parsing trees with trivial trees
(i.e., binary balanced tree, left-branching tree and right-branching tree) in
the encoders. Though trivial trees contain no syntactic information, those
encoders get competitive or even better results on all of the ten downstream
tasks we investigated. This surprising result indicates that explicit syntax
guidance may not be the main contributor to the superior performances of
tree-based neural sentence modeling. Further analysis shows that tree modeling
gives better results when crucial words are closer to the final representation.
Additional experiments give more clues on how to design an effective tree-based
encoder. Our code is open-source and available at
https://github.com/ExplorerFreda/TreeEnc.
| 2,018 | Computation and Language |
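The trivial trees substituted for parse trees are straightforward to construct; a small token-level sketch (an assumed illustration, not the paper's encoder code) is given below.

```python
# Building the three trivial trees: left-branching, right-branching, and
# balanced binary, each represented as nested tuples over tokens.
def left_branching(tokens):
    tree = tokens[0]
    for tok in tokens[1:]:
        tree = (tree, tok)
    return tree

def right_branching(tokens):
    return tokens[0] if len(tokens) == 1 else (tokens[0], right_branching(tokens[1:]))

def balanced(tokens):
    if len(tokens) == 1:
        return tokens[0]
    mid = len(tokens) // 2
    return (balanced(tokens[:mid]), balanced(tokens[mid:]))

words = "the cat sat on the mat".split()
print(left_branching(words))
print(right_branching(words))
print(balanced(words))
```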
Adapting Visual Question Answering Models for Enhancing Multimodal
Community Q&A Platforms | Question categorization and expert retrieval methods have been crucial for
information organization and accessibility in community question & answering
(CQA) platforms. Research in this area, however, has dealt with only the text
modality. With the increasing multimodal nature of web content, we focus on
extending these methods for CQA questions accompanied by images. Specifically,
we leverage the success of representation learning for text and images in the
visual question answering (VQA) domain, and adapt the underlying concept and
architecture for automated category classification and expert retrieval on
image-based questions posted on Yahoo! Chiebukuro, the Japanese counterpart of
Yahoo! Answers.
To the best of our knowledge, this is the first work to tackle the
multimodality challenge in CQA, and to adapt VQA models for tasks on a more
ecologically valid source of visual questions. Our analysis of the differences
between visual QA and community QA data drives our proposal of novel
augmentations of an attention method tailored for CQA, and use of auxiliary
tasks for learning better grounding features. Our final model markedly
outperforms the text-only and VQA model baselines for both tasks of
classification and expert retrieval on real-world multimodal CQA data.
| 2,019 | Computation and Language |
Neural Metaphor Detection in Context | We present end-to-end neural models for detecting metaphorical word use in
context. We show that relatively standard BiLSTM models which operate on
complete sentences work well in this setting, in comparison to previous work
that used more restricted forms of linguistic context. These models establish a
new state-of-the-art on existing verb metaphor detection benchmarks, and show
strong performance on jointly predicting the metaphoricity of all words in a
running text.
| 2,018 | Computation and Language |
APRIL: Interactively Learning to Summarise by Combining Active
Preference Learning and Reinforcement Learning | We propose a method to perform automatic document summarisation without using
reference summaries. Instead, our method interactively learns from users'
preferences. The merit of preference-based interactive summarisation is that
preferences are easier for users to provide than reference summaries. Existing
preference-based interactive learning methods suffer from high sample
complexity, i.e. they need to interact with the oracle for many rounds in order
to converge. In this work, we propose a new objective function, which enables
us to leverage active learning, preference learning and reinforcement learning
techniques in order to reduce the sample complexity. Both simulation and
real-user experiments suggest that our method significantly advances the state
of the art. Our source code is freely available at
https://github.com/UKPLab/emnlp2018-april.
| 2,018 | Computation and Language |
Context Mover's Distance & Barycenters: Optimal Transport of Contexts
for Building Representations | We present a framework for building unsupervised representations of entities
and their compositions, where each entity is viewed as a probability
distribution rather than a vector embedding. In particular, this distribution
is supported over the contexts which co-occur with the entity and are embedded
in a suitable low-dimensional space. This enables us to consider representation
learning from the perspective of Optimal Transport and take advantage of its
tools such as Wasserstein distance and barycenters. We elaborate how the method
can be applied for obtaining unsupervised representations of text and
illustrate the performance (quantitatively as well as qualitatively) on tasks
such as measuring sentence similarity, word entailment and similarity, where we
empirically observe significant gains (e.g., 4.1% relative improvement over
Sent2vec, GenSen).
The key benefits of the proposed approach include: (a) capturing uncertainty
and polysemy via modeling the entities as distributions, (b) utilizing the
underlying geometry of the particular task (with the ground cost), (c)
simultaneously providing interpretability with the notion of optimal transport
between contexts and (d) easy applicability on top of existing point embedding
methods. The code, as well as prebuilt histograms, are available under
https://github.com/context-mover/.
| 2,020 | Computation and Language |
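A minimal entropic-OT sketch of the core comparison (an assumed illustration using plain Sinkhorn iterations, not the authors' library): each entity is a histogram over embedded contexts, and two entities are compared through an approximate Wasserstein distance under a Euclidean ground cost.

```python
# Approximate Wasserstein distance between two entities viewed as
# distributions over embedded contexts, via Sinkhorn iterations.
import numpy as np

def sinkhorn_distance(a, b, X, Y, reg=0.5, n_iter=200):
    """a, b: histograms over contexts; X, Y: context embeddings (one per row)."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # ground cost
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):                    # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]            # approximate transport plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 10)), rng.normal(size=(7, 10))
a, b = np.full(5, 1 / 5), np.full(7, 1 / 7)
print(sinkhorn_distance(a, b, X, Y))
```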
An Operation Sequence Model for Explainable Neural Machine Translation | We propose to achieve explainable neural machine translation (NMT) by
changing the output representation to explain itself. We present a novel
approach to NMT which generates the target sentence by monotonically walking
through the source sentence. Word reordering is modeled by operations which
allow setting markers in the target sentence and moving a target-side write head
between those markers. In contrast to many modern neural models, our system
emits explicit word alignment information which is often crucial to practical
machine translation as it improves explainability. Our technique can outperform
a plain text system in terms of BLEU score under the recent Transformer
architecture on Japanese-English and Portuguese-English, and is within 0.5 BLEU
difference on Spanish-English.
| 2,018 | Computation and Language |
What can we learn from Semantic Tagging? | We investigate the effects of multi-task learning using the recently
introduced task of semantic tagging. We employ semantic tagging as an auxiliary
task for three different NLP tasks: part-of-speech tagging, Universal
Dependency parsing, and Natural Language Inference. We compare full neural
network sharing, partial neural network sharing, and what we term the learning
what to share setting where negative transfer between tasks is less likely. Our
findings show considerable improvements for all tasks, particularly in the
learning what to share setting, which shows consistent gains across all tasks.
| 2,018 | Computation and Language |
Characterizing the Influence of Features on Reading Difficulty
Estimation for Non-native Readers | In recent years, the number of people studying English as a second language
(ESL) has surpassed the number of native speakers. Recent work has
demonstrated the success of providing personalized content based on reading
difficulty, such as information retrieval and summarization. However, almost
all prior studies of reading difficulty are designed for native speakers,
rather than non-native readers. In this study, we investigate various features
for ESL readers, by conducting a linear regression to estimate the reading
level of English language sources. This estimation is based not only on the
complexity of lexical and syntactic features, but also several novel concepts,
including the age of word and grammar acquisition from several sources, word
sense from WordNet, and the implicit relation between sentences. By employing
Bayesian Information Criterion (BIC) to select the optimal model, we find that
the combination of the number of words, the age of word acquisition and the
height of the parsing tree generates better results than alternative competing
models. Thus, our results show that the proposed second-language reading
difficulty estimation outperforms first-language reading difficulty estimations.
| 2,018 | Computation and Language |
Identifying the sentiment styles of YouTube's vloggers | Vlogs provide a rich public source of data in a novel setting. This paper
examined the continuous sentiment styles employed in 27,333 vlogs using a
dynamic intra-textual approach to sentiment analysis. Using unsupervised
clustering, we identified seven distinct continuous sentiment trajectories
characterized by fluctuations of sentiment throughout a vlog's narrative time.
We provide a taxonomy of these seven continuous sentiment styles and found that
vlogs whose sentiment builds up towards a positive ending are the most
prevalent in our sample. Gender was associated with preferences for different
continuous sentiment trajectories. This paper discusses the findings with
respect to previous work and concludes with an outlook towards possible uses of
the corpus, method and findings of this paper for related areas of research.
| 2,018 | Computation and Language |
Development and Evaluation of a Personalized Computer-aided Question
Generation for English Learners to Improve Proficiency and Correct Mistakes | In the last several years, the field of computer assisted language learning
has increasingly focused on computer aided question generation. However, this
approach often provides test takers with an exhaustive amount of questions that
are not designed for any specific testing purpose. In this work, we present a
personalized computer aided question generation that generates multiple choice
questions at various difficulty levels and types, including vocabulary, grammar
and reading comprehension. In order to improve the weaknesses of test takers,
it selects questions depending on an estimated proficiency level and unclear
concepts behind incorrect responses. The results show that the students with
the personalized automatic quiz generation corrected their mistakes more
frequently than ones only with computer aided question generation. Moreover,
students demonstrated the most progress between the pre-test and post-test and
correctly answered more difficult questions. Finally, we investigated the
personalizing strategy and found that a student could make significant
progress if the proposed system offered vocabulary questions at the same
level as his or her proficiency, and grammar and reading comprehension
questions at a level below it.
| 2,018 | Computation and Language |
Distant Supervision from Disparate Sources for Low-Resource
Part-of-Speech Tagging | We introduce DsDs: a cross-lingual neural part-of-speech tagger that learns
from disparate sources of distant supervision, and realistically scales to
hundreds of low-resource languages. The model exploits annotation projection,
instance selection, tag dictionaries, morphological lexicons, and distributed
representations, all in a uniform framework. The approach is simple, yet
surprisingly effective, resulting in a new state of the art without access to
any gold annotated data.
| 2,018 | Computation and Language |
Rule induction for global explanation of trained models | Understanding the behavior of a trained network and finding explanations for
its outputs is important for improving the network's performance and
generalization ability, and for ensuring trust in automated systems. Several
approaches have previously been proposed to identify and visualize the most
important features by analyzing a trained network. However, the relations
between different features and classes are lost in most cases. We propose a
technique to induce sets of if-then-else rules that capture these relations to
globally explain the predictions of a network. We first calculate the
importance of the features in the trained network. We then weigh the original
inputs with these feature importance scores, simplify the transformed input
space, and finally fit a rule induction model to explain the model predictions.
We find that the output rule-sets can explain the predictions of a neural
network trained for 4-class text classification from the 20 newsgroups dataset
to a macro-averaged F-score of 0.80. We make the code available at
https://github.com/clips/interpret_with_rules.
| 2,018 | Computation and Language |
Notes on Deep Learning for NLP | My notes on Deep Learning for NLP.
| 2,018 | Computation and Language |
Neural Cross-Lingual Named Entity Recognition with Minimal Resources | For languages with no annotated resources, unsupervised transfer of natural
language processing models such as named-entity recognition (NER) from
resource-rich languages would be an appealing capability. However, differences
in words and word order across languages make it a challenging problem. To
improve mapping of lexical items across languages, we propose a method that
finds translations based on bilingual word embeddings. To improve robustness to
word order differences, we propose to use self-attention, which allows for a
degree of flexibility with respect to word order. We demonstrate that these
methods achieve state-of-the-art or competitive NER performance on commonly
tested languages under a cross-lingual setting, with much lower resource
requirements than past approaches. We also evaluate the challenges of applying
these methods to Uyghur, a low-resource language.
| 2,018 | Computation and Language |
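The translation-lookup step the abstract relies on can be sketched as a nearest-neighbor search in a shared bilingual embedding space (illustrative only; the embedding alignment itself and all names below are assumptions).

```python
# Find a word's translation as its cosine nearest neighbor in a bilingual
# embedding space that has already been aligned across the two languages.
import numpy as np

def translate(word, src_emb, tgt_emb, tgt_vocab):
    """src_emb: dict word -> vector; tgt_emb: (V, d) matrix aligned to the source space."""
    v = src_emb[word]
    v = v / np.linalg.norm(v)
    tgt_norm = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    scores = tgt_norm @ v                      # cosine similarity to every target word
    return tgt_vocab[int(scores.argmax())]

rng = np.random.default_rng(0)
src_emb = {"Haus": rng.normal(size=4)}         # toy, unaligned vectors for demonstration
tgt_emb = rng.normal(size=(3, 4))
print(translate("Haus", src_emb, tgt_emb, ["house", "cat", "dog"]))
```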
KDSL: a Knowledge-Driven Supervised Learning Framework for Word Sense
Disambiguation | We propose KDSL, a new word sense disambiguation (WSD) framework that
utilizes knowledge to automatically generate sense-labeled data for supervised
learning. First, from WordNet, we automatically construct a semantic knowledge
base called DisDict, which provides refined feature words that highlight the
differences among word senses, i.e., synsets. Second, we automatically generate
new sense-labeled data by DisDict from unlabeled corpora. Third, these
generated data, together with manually labeled data and unlabeled data, are fed
to a neural framework conducting supervised and unsupervised learning jointly
to model the semantic relations among synsets, feature words and their
contexts. The experimental results show that KDSL outperforms several
representative state-of-the-art methods on various major benchmarks.
Interestingly, it performs relatively well even when manually labeled data is
unavailable, thus providing a potential solution for similar tasks where
manual annotations are lacking.
| 2,018 | Computation and Language |
Zero-shot Transfer Learning for Semantic Parsing | While neural networks have shown impressive performance on large datasets,
applying these models to tasks where little data is available remains a
challenging problem.
In this paper we propose to use feature transfer in a zero-shot experimental
setting on the task of semantic parsing.
We first introduce a new method for learning the shared space between
multiple domains based on the prediction of the domain label for each example.
Our experiments support the superiority of this method in a zero-shot
experimental setting in terms of accuracy metrics compared to state-of-the-art
techniques.
In the second part of this paper we study the impact of individual domains
and examples on semantic parsing performance.
We use influence functions to this end and investigate the sensitivity of
domain-label classification loss on each example.
Our findings reveal that cross-domain adversarial attacks identify useful
examples for training even from the domains least similar to the target
domain. Augmenting our training data with these influential examples further
boosts our accuracy at both the token and the sequence level.
| 2,018 | Computation and Language |
An Adaptive Conversational Bot Framework | How can we enable users to heavily specify criteria for database queries in a
user-friendly way? This paper describes a general framework of a conversational
bot that extracts meaningful information from users' sentences, that asks
subsequent questions to complete missing information, and that adjusts its
questions and information-extraction parameters for later conversations
depending on users' behavior. Additionally, we provide a comparison of existing
tools and give novel techniques to implement such a framework. Finally, we
exemplify the framework with a bot to query movies in a database, whose code is
available for Microsoft employees.
| 2,018 | Computation and Language |
A Quantum Many-body Wave Function Inspired Language Modeling Approach | The recently proposed quantum language model (QLM) aims at a principled
approach to modeling term dependency by applying quantum probability
theory. The latest development for a more effective QLM has adopted word
embeddings as a kind of global dependency information and integrated the
quantum-inspired idea in a neural network architecture. While these
quantum-inspired LMs are theoretically more general and also practically
effective, they have two major limitations. First, they have not taken into
account the interaction among words with multiple meanings, which is common and
important in understanding natural language text. Second, the integration of
the quantum-inspired LM with the neural network was mainly for effective
training of parameters, yet lacking a theoretical foundation accounting for
such integration. To address these two issues, in this paper, we propose a
Quantum Many-body Wave Function (QMWF) inspired language modeling approach. The
QMWF inspired LM can adopt the tensor product to model the aforesaid
interaction among words. It also enables us to reveal the inherent necessity of
using Convolutional Neural Network (CNN) in QMWF language modeling.
Furthermore, our approach delivers a simple algorithm to represent and match
text/sentence pairs. Systematic evaluation shows the effectiveness of the
proposed QMWF-LM algorithm, in comparison with the state of the art
quantum-inspired LMs and a couple of CNN-based methods, on three typical
Question Answering (QA) datasets.
| 2,018 | Computation and Language |
Review Helpfulness Prediction with Embedding-Gated CNN | Product reviews, in the form of texts dominantly, significantly help
consumers finalize their purchasing decisions. Thus, it is important for
e-commerce companies to predict review helpfulness to present and recommend
reviews in a more informative manner. In this work, we introduce a
convolutional neural network model that is able to extract abstract features
from multi-granularity representations. Inspired by the fact that different
words contribute to the meaning of a sentence differently, we consider to learn
word-level embedding-gates for all the representations. Furthermore, it is
common that some product domains/categories have rich user reviews while
others do not. To help domains with insufficient data, we integrate our model
into a cross-domain relationship learning framework for effectively
transferring knowledge from other domains. Extensive experiments show that our
model yields better performance than the existing methods.
| 2,018 | Computation and Language |
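One plausible reading of the word-level embedding-gates (an assumption on our part, not the authors' code) is a scalar sigmoid gate computed from each word's embedding and multiplied back in before the convolutional layers.

```python
# Word-level embedding gate: each word embedding is scaled by a gate computed
# from the embedding itself, so less informative words are down-weighted.
import torch
import torch.nn as nn

class EmbeddingGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, emb):                    # emb: (batch, seq_len, dim)
        g = torch.sigmoid(self.gate(emb))      # (batch, seq_len, 1), one gate per word
        return emb * g

gated = EmbeddingGate(300)(torch.randn(4, 20, 300))
```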
Question Answering by Reasoning Across Documents with Graph
Convolutional Networks | Most research in reading comprehension has focused on answering questions
based on individual documents or even single paragraphs. We introduce a neural
model which integrates and reasons relying on information spread within
documents and across multiple documents. We frame it as an inference problem on
a graph. Mentions of entities are nodes of this graph while edges encode
relations between different mentions (e.g., within- and cross-document
co-reference). Graph convolutional networks (GCNs) are applied to these graphs
and trained to perform multi-step reasoning. Our Entity-GCN method is scalable
and compact, and it achieves state-of-the-art results on a multi-document
question answering dataset, WikiHop (Welbl et al., 2018).
| 2,019 | Computation and Language |
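The graph-convolution update that Entity-GCN-style models apply over the mention graph is, in its generic form (not the paper's relational variant), a degree-normalized neighborhood average followed by a linear map and nonlinearity.

```python
# Generic GCN layer over a graph of entity mentions: add self-loops, average
# neighbor representations, then apply a shared linear map and ReLU.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, H, A):                   # H: (nodes, dim), A: (nodes, nodes) adjacency
        A_hat = A + torch.eye(A.size(0))       # add self-loops
        deg = A_hat.sum(dim=1, keepdim=True)
        return torch.relu(self.linear(A_hat @ H / deg))

H = torch.randn(5, 64)                         # 5 entity mentions
A = torch.tensor([[0., 1, 0, 0, 1],
                  [1., 0, 1, 0, 0],
                  [0., 1, 0, 1, 0],
                  [0., 0, 1, 0, 1],
                  [1., 0, 0, 1, 0]])
H = GCNLayer(64)(H, A)
```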
A Neural Model of Adaptation in Reading | It has been argued that humans rapidly adapt their lexical and syntactic
expectations to match the statistics of the current linguistic context. We
provide further support to this claim by showing that the addition of a simple
adaptation mechanism to a neural language model improves our predictions of
human reading times compared to a non-adaptive model. We analyze the
performance of the model on controlled materials from psycholinguistic
experiments and show that it adapts not only to lexical items but also to
abstract syntactic structures.
| 2,018 | Computation and Language |
Neural Compositional Denotational Semantics for Question Answering | Answering compositional questions requiring multi-step reasoning is
challenging. We introduce an end-to-end differentiable model for interpreting
questions about a knowledge graph (KG), which is inspired by formal approaches
to semantics. Each span of text is represented by a denotation in a KG and a
vector that captures ungrounded aspects of meaning. Learned composition modules
recursively combine constituent spans, culminating in a grounding for the
complete sentence which answers the question. For example, to interpret "not
green", the model represents "green" as a set of KG entities and "not" as a
trainable ungrounded vector---and then uses this vector to parameterize a
composition function that performs a complement operation. For each sentence,
we build a parse chart subsuming all possible parses, allowing the model to
jointly learn both the composition operators and output structure by gradient
descent from end-task supervision. The model learns a variety of challenging
semantic operators, such as quantifiers, disjunctions and composed relations,
and infers latent syntactic structure. It also generalizes well to longer
questions than seen in its training data, in contrast to RNN, its tree-based
variants, and semantic parsing baselines.
| 2,018 | Computation and Language |
Revisiting Character-Based Neural Machine Translation with Capacity and
Compression | Translating characters instead of words or word-fragments has the potential
to simplify the processing pipeline for neural machine translation (NMT), and
improve results by eliminating hyper-parameters and manual feature engineering.
However, it results in longer sequences in which each symbol contains less
information, creating both modeling and computational challenges. In this
paper, we show that the modeling problem can be solved by standard
sequence-to-sequence architectures of sufficient depth, and that deep models
operating at the character level outperform identical models operating over
word fragments. This result implies that alternative architectures for handling
character input are better viewed as methods for reducing computation time than
as improved ways of modeling longer sequences. From this perspective, we
evaluate several techniques for character-level NMT, verify that they do not
match the performance of our deep character baseline model, and evaluate the
performance versus computation time tradeoffs they offer. Within this
framework, we also perform the first evaluation for NMT of conditional
computation over time, in which the model learns which timesteps can be
skipped, rather than having them be dictated by a fixed schedule specified
before training begins.
| 2,018 | Computation and Language |
Learning End-to-End Goal-Oriented Dialog with Multiple Answers | In a dialog, there can be multiple valid next utterances at any point. The
present end-to-end neural methods for dialog do not take this into account.
They learn with the assumption that at any time there is only one correct next
utterance. In this work, we focus on this problem in the goal-oriented dialog
setting where there are different paths to reach a goal. We propose a new
method, that uses a combination of supervised learning and reinforcement
learning approaches to address this issue. We also propose a new and more
effective testbed, permuted-bAbI dialog tasks, by introducing multiple valid
next utterances to the original-bAbI dialog tasks, which allows evaluation of
goal-oriented dialog systems in a more realistic setting. We show that there is
a significant drop in performance of existing end-to-end neural methods from
81.5% per-dialog accuracy on original-bAbI dialog tasks to 30.3% on
permuted-bAbI dialog tasks. We also show that our proposed method improves the
performance and achieves 47.3% per-dialog accuracy on permuted-bAbI dialog
tasks.
| 2,018 | Computation and Language |
Grammar Induction with Neural Language Models: An Unusual Replication | A substantial thread of recent work on latent tree learning has attempted to
develop neural network models with parse-valued latent variables and train them
on non-parsing tasks, in the hope of having them discover interpretable tree
structure. In a recent paper, Shen et al. (2018) introduce such a model and
report near-state-of-the-art results on the target task of language modeling,
and the first strong latent tree learning result on constituency parsing. In an
attempt to reproduce these results, we discover issues that make the original
results hard to trust, including tuning and even training on what is
effectively the test set. Here, we attempt to reproduce these results in a fair
experiment and to extend them to two new datasets. We find that the results of
this work are robust: All variants of the model under study outperform all
latent tree learning baselines, and perform competitively with symbolic grammar
induction systems. We find that this model represents the first empirical
success for latent tree learning, and that neural network language modeling
warrants further study as a setting for grammar induction.
| 2,018 | Computation and Language |
Correcting Length Bias in Neural Machine Translation | We study two problems in neural machine translation (NMT). First, in beam
search, whereas a wider beam should in principle help translation, it often
hurts NMT. Second, NMT has a tendency to produce translations that are too
short. Here, we argue that these problems are closely related and both rooted
in label bias. We show that correcting the brevity problem almost eliminates
the beam problem; we compare some commonly-used methods for doing this, finding
that a simple per-word reward works well; and we introduce a simple and quick
way to tune this reward using the perceptron algorithm.
| 2,018 | Computation and Language |
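The simple per-word reward the paper finds effective can be written as a constant bonus per output token added to the hypothesis score; the form below is the standard one, with the reward value itself being what the perceptron-style tuning would learn.

```python
# Per-word reward: add a constant bonus per generated token to counteract the
# bias toward short translations.
def rewarded_score(logprobs, reward=0.4):
    return sum(logprobs) + reward * len(logprobs)

short = [-1.0, -1.0]
long_ = [-0.5] * 6
assert sum(short) > sum(long_)                        # raw scores prefer the short output
assert rewarded_score(long_) > rewarded_score(short)  # the reward corrects the brevity bias
```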
Learning a Policy for Opportunistic Active Learning | Active learning identifies data points to label that are expected to be the
most useful in improving a supervised model. Opportunistic active learning
incorporates active learning into interactive tasks that constrain possible
queries during interactions. Prior work has shown that opportunistic active
learning can be used to improve grounding of natural language descriptions in
an interactive object retrieval task. In this work, we use reinforcement
learning for such an object retrieval task, to learn a policy that effectively
trades off task completion with model improvement that would benefit future
tasks.
| 2,018 | Computation and Language |
Hard Non-Monotonic Attention for Character-Level Transduction | Character-level string-to-string transduction is an important component of
various NLP tasks. The goal is to map an input string to an output string,
where the strings may be of different lengths and have characters taken from
different alphabets. Recent approaches have used sequence-to-sequence models
with an attention mechanism to learn which parts of the input string the model
should focus on during the generation of the output string. Both soft attention
and hard monotonic attention have been used, but hard non-monotonic attention
has only been used in other sequence modeling tasks such as image captioning
(Xu et al., 2015), and has required a stochastic approximation to compute the
gradient. In this work, we introduce an exact, polynomial-time algorithm for
marginalizing over the exponential number of non-monotonic alignments between
two strings, showing that hard attention models can be viewed as neural
reparameterizations of the classical IBM Model 1. We compare soft and hard
non-monotonic attention experimentally and find that the exact algorithm
significantly improves performance over the stochastic approximation and
outperforms soft attention. Code is available at
https://github.com/shijie-wu/neural-transducer.
| 2,024 | Computation and Language |
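The exact marginalization rests on an IBM-Model-1-style factorization: because each output position aligns to an input position independently, the sum over exponentially many alignments collapses into a product of per-position sums. A sketch under assumed tensor shapes (not the released code):

```python
# Exact marginal log-likelihood over non-monotonic alignments, assuming each
# output position aligns independently (IBM Model 1 factorization).
import torch

def marginal_log_likelihood(attn_logits, emission_logprobs):
    """attn_logits: (T_out, T_in) unnormalized alignment scores;
    emission_logprobs: (T_out, T_in) log p(y_j | x_i) for the observed output tokens."""
    align_logprobs = torch.log_softmax(attn_logits, dim=-1)
    # log sum_i p(a_j = i) p(y_j | x_i), then sum over output positions j
    per_position = torch.logsumexp(align_logprobs + emission_logprobs, dim=-1)
    return per_position.sum()

ll = marginal_log_likelihood(torch.randn(4, 6), torch.randn(4, 6).log_softmax(-1))
```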
Retrieval-Based Neural Code Generation | In models to generate program source code from natural language, representing
this code in a tree structure has been a common approach. However, existing
methods often fail to generate complex code correctly due to a lack of ability
to memorize large and complex structures. We introduce ReCode, a method based
on subtree retrieval that makes it possible to explicitly reference existing
code examples within a neural code generation model. First, we retrieve
sentences that are similar to input sentences using a dynamic-programming-based
sentence similarity scoring method. Next, we extract n-grams of action
sequences that build the associated abstract syntax tree. Finally, we increase
the probability of actions that cause the retrieved n-gram action subtree to be
in the predicted code. We show that our approach improves the performance on
two code generation tasks by up to +2.6 BLEU.
| 2,018 | Computation and Language |
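A toy sketch of the retrieve-then-boost idea (an assumed illustration, not ReCode itself): retrieve the most similar training sentence, take n-grams of its action sequence, and up-weight actions that extend a retrieved n-gram before renormalizing.

```python
# Retrieve a similar example, then boost the probability of actions whose
# (previous action, action) bigram appears in the retrieved action sequence.
from difflib import SequenceMatcher

def retrieve(query, training_pairs):
    # training_pairs: list of (sentence, action_sequence)
    return max(training_pairs, key=lambda p: SequenceMatcher(None, query, p[0]).ratio())

def boost(action_probs, prefix, retrieved_actions, n=2, bonus=0.2):
    ngrams = {tuple(retrieved_actions[i:i + n]) for i in range(len(retrieved_actions) - n + 1)}
    boosted = {a: p * (1 + bonus if prefix and (prefix[-1], a) in ngrams else 1)
               for a, p in action_probs.items()}
    total = sum(boosted.values())
    return {a: p / total for a, p in boosted.items()}

pairs = [("sort the list", ["Call[sort]", "Arg[list]"]),
         ("open a file", ["Call[open]", "Arg[path]"])]
_, acts = retrieve("sort this list", pairs)
print(boost({"Arg[list]": 0.4, "Arg[path]": 0.6}, ["Call[sort]"], acts))
```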
Zero-Shot Adaptive Transfer for Conversational Language Understanding | Conversational agents such as Alexa and Google Assistant constantly need to
increase their language understanding capabilities by adding new domains. A
massive amount of labeled data is required for training each new domain. While
domain adaptation approaches alleviate the annotation cost, prior approaches
suffer from increased training time and suboptimal concept alignments. To
tackle this, we introduce a novel Zero-Shot Adaptive Transfer method for slot
tagging that utilizes the slot description for transferring reusable concepts
across domains, and enjoys efficient training without any explicit concept
alignments. Extensive experimentation over a dataset of 10 domains relevant to
our commercial personal digital assistant shows that our model outperforms
previous state-of-the-art systems by a large margin, and achieves an even
higher improvement in the low data regime.
| 2,018 | Computation and Language |
Story Ending Generation with Incremental Encoding and Commonsense
Knowledge | Generating a reasonable ending for a given story context, i.e., story ending
generation, is a strong indication of story comprehension. This task requires
not only to understand the context clues which play an important role in
planning the plot but also to handle implicit knowledge to make a reasonable,
coherent story.
In this paper, we devise a novel model for story ending generation. The model
adopts an incremental encoding scheme to represent context clues that span
the story context. In addition, commonsense knowledge is applied through
multi-source attention to facilitate story comprehension, and thus to help
generate coherent and reasonable endings. By building on context clues and
using implicit knowledge, the model is able to produce reasonable story
endings.
Automatic and manual evaluation shows that our model can generate more
reasonable story endings than state-of-the-art baselines.
| 2,018 | Computation and Language |
Learning Neural Templates for Text Generation | While neural, encoder-decoder models have had significant empirical success
in text generation, there remain several unaddressed problems with this style
of generation. Encoder-decoder models are largely (a) uninterpretable, and (b)
difficult to control in terms of their phrasing or content. This work proposes
a neural generation system using a hidden semi-Markov model (HSMM) decoder,
which learns latent, discrete templates jointly with learning to generate. We
show that this model learns useful templates, and that these templates make
generation both more interpretable and controllable. Furthermore, we show that
this approach scales to real data sets and achieves strong performance nearing
that of encoder-decoder text generation models.
| 2,019 | Computation and Language |
Semi-Supervised Training for Improving Data Efficiency in End-to-End
Speech Synthesis | Although end-to-end text-to-speech (TTS) models such as Tacotron have shown
excellent results, they typically require a sizable set of high-quality <text,
audio> pairs for training, which are expensive to collect. In this paper, we
propose a semi-supervised training framework to improve the data efficiency of
Tacotron. The idea is to allow Tacotron to utilize textual and acoustic
knowledge contained in large, publicly-available text and speech corpora.
Importantly, these external data are unpaired and potentially noisy.
Specifically, first we embed each word in the input text into word vectors and
condition the Tacotron encoder on them. We then use an unpaired speech corpus
to pre-train the Tacotron decoder in the acoustic domain. Finally, we fine-tune
the model using available paired data. We demonstrate that the proposed
framework enables Tacotron to generate intelligible speech using less than half
an hour of paired training data.
| 2,018 | Computation and Language |
Direct Output Connection for a High-Rank Language Model | This paper proposes a state-of-the-art recurrent neural network (RNN)
language model that combines probability distributions computed not only from a
final RNN layer but also from middle layers. Our proposed method raises the
expressive power of a language model based on the matrix factorization
interpretation of language modeling introduced by Yang et al. (2018). The
proposed method improves the current state-of-the-art language model and
achieves the best score on the Penn Treebank and WikiText-2, which are the
standard benchmark datasets. Moreover, we show that our proposed method
contributes to two application tasks: machine translation and headline
generation. Our code is publicly available at:
https://github.com/nttcslab-nlp/doc_lm.
| 2,018 | Computation and Language |
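To make the idea of combining probability distributions computed from several layers concrete, here is a small, hypothetical PyTorch sketch of a mixture of per-layer softmaxes. It illustrates the general direct-output-connection idea only; it is not the authors' implementation (their released code is at the URL in the abstract above), and all names and sizes are ours.

```python
import torch
import torch.nn as nn

class DirectOutputConnection(nn.Module):
    """Toy sketch: mix output distributions computed from several RNN layers."""
    def __init__(self, hidden_sizes, vocab_size):
        super().__init__()
        # one softmax head per tapped layer
        self.heads = nn.ModuleList([nn.Linear(h, vocab_size) for h in hidden_sizes])
        self.mix = nn.Parameter(torch.zeros(len(hidden_sizes)))  # mixture logits

    def forward(self, layer_states):
        # layer_states: list of (batch, hidden_i) tensors, one per tapped layer
        weights = torch.softmax(self.mix, dim=0)
        probs = sum(w * torch.softmax(head(h), dim=-1)
                    for w, head, h in zip(weights, self.heads, layer_states))
        return torch.log(probs + 1e-9)  # log-probabilities of the mixture
```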
Towards a Better Metric for Evaluating Question Generation Systems | There has always been criticism of using $n$-gram based similarity metrics,
such as BLEU, NIST, etc., for evaluating the performance of NLG systems.
However, these metrics continue to remain popular and are recently being used
for evaluating the performance of systems which automatically generate
questions from documents, knowledge graphs, images, etc. Given the rising
interest in such automatic question generation (AQG) systems, it is important
to objectively examine whether these metrics are suitable for this task. In
particular, it is important to verify whether such metrics used for evaluating
AQG systems focus on answerability of the generated question by preferring
questions which contain all relevant information such as question type
(Wh-types), entities, relations, etc. In this work, we show that current
automatic evaluation metrics based on $n$-gram similarity do not always
correlate well with human judgments about answerability of a question. To
alleviate this problem and as a first step towards better evaluation metrics
for AQG, we introduce a scoring function to capture answerability and show that
when this scoring function is integrated with existing metrics, they correlate
significantly better with human judgments. The scripts and data developed as a
part of this work are made publicly available at
https://github.com/PrekshaNema25/Answerability-Metric
| 2,018 | Computation and Language |
Pronoun Translation in English-French Machine Translation: An Analysis
of Error Types | Pronouns are a long-standing challenge in machine translation. We present a
study of the performance of a range of rule-based, statistical and neural MT
systems on pronoun translation based on an extensive manual evaluation using
the PROTEST test suite, which enables a fine-grained analysis of different
pronoun types and sheds light on the difficulties of the task. We find that the
rule-based approaches in our corpus perform poorly as a result of
oversimplification, whereas SMT and early NMT systems exhibit significant
shortcomings due to a lack of awareness of the functional and referential
properties of pronouns. A recent Transformer-based NMT system with
cross-sentence context shows very promising results on non-anaphoric pronouns
and intra-sentential anaphora, but there is still considerable room for
improvement in examples with cross-sentence dependencies.
| 2,018 | Computation and Language |
Learning to adapt: a meta-learning approach for speaker adaptation | The performance of automatic speech recognition systems can be improved by
adapting an acoustic model to compensate for the mismatch between training and
testing conditions, for example by adapting to unseen speakers. The success of
speaker adaptation methods relies on selecting weights that are suitable for
adaptation and using good adaptation schedules to update these weights in order
not to overfit to the adaptation data. In this paper we investigate a
principled way of adapting all the weights of the acoustic model using
meta-learning. We show that the meta-learner can learn to perform supervised
and unsupervised speaker adaptation and that it outperforms a strong baseline
adapting LHUC parameters when adapting a DNN AM with 1.5M parameters. We also
report initial experiments on adapting TDNN AMs, where the meta-learner
achieves comparable performance with LHUC.
| 2,018 | Computation and Language |
Comparative Studies of Detecting Abusive Language on Twitter | The context-dependent nature of online aggression makes annotating large
collections of data extremely difficult. Previously studied datasets in abusive
language detection have been insufficient in size to efficiently train deep
learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much
greater in size and reliability, has been released. However, this dataset has
not yet been studied to its full potential. In this paper, we conduct
the first comparative study of various learning models on Hate and Abusive
Speech on Twitter, and discuss the possibility of using additional features and
context data for improvements. Experimental results show that a bidirectional
GRU network trained on word-level features with Latent Topic Clustering modules
is the most accurate model, scoring 0.805 F1.
| 2,018 | Computation and Language |
Multi-Source Syntactic Neural Machine Translation | We introduce a novel multi-source technique for incorporating source syntax
into neural machine translation using linearized parses. This is achieved by
employing separate encoders for the sequential and parsed versions of the same
source sentence; the resulting representations are then combined using a
hierarchical attention mechanism. The proposed model improves over both seq2seq
and parsed baselines by over 1 BLEU on the WMT17 English-German task. Further
analysis shows that our multi-source syntactic model is able to translate
successfully without any parsed input, unlike standard parsed methods. In
addition, performance does not deteriorate as much on long sentences as for the
baselines.
| 2,018 | Computation and Language |
Acquiring Annotated Data with Cross-lingual Explicitation for Implicit
Discourse Relation Classification | Implicit discourse relation classification is one of the most challenging and
important tasks in discourse parsing, due to the lack of connectives as strong
linguistic cues. A principal bottleneck to further improvement is the shortage
of training data (ca.~16k instances in the PDTB). Shi et al. (2017) proposed to
acquire additional data by exploiting connectives in translation: human
translators mark discourse relations which are implicit in the source language
explicitly in the translation. Using back-translations of such explicitated
connectives improves discourse relation parsing performance. This paper
addresses the open question of whether the choice of the translation language
matters, and whether multiple translations into different languages can be
effectively used to improve the quality of the additional data.
| 2,019 | Computation and Language |
Generalize Symbolic Knowledge With Neural Rule Engine | As neural networks have dominated the state-of-the-art results in a wide
range of NLP tasks, improving the performance of neural models by integrating
symbolic knowledge has attracted considerable attention. Different from
existing works, this paper investigates the combination of these two powerful
paradigms from the knowledge-driven side. We propose Neural Rule Engine (NRE),
which can learn knowledge explicitly from logic rules and then generalize them
implicitly with neural networks. NRE is implemented with neural module networks
in which each module represents an action of a logic rule. The experiments show
that NRE could greatly improve the generalization abilities of logic rules with
a significant increase in recall. Meanwhile, the precision is still maintained
at a high level.
| 2,019 | Computation and Language |
Modeling Empathy and Distress in Reaction to News Stories | Computational detection and understanding of empathy is an important factor
in advancing human-computer interaction. Yet to date, text-based empathy
prediction has the following major limitations: It underestimates the
psychological complexity of the phenomenon, adheres to a weak notion of ground
truth where empathic states are ascribed by third parties, and lacks a shared
corpus. In contrast, this contribution presents the first publicly available
gold standard for empathy prediction. It is constructed using a novel
annotation methodology which reliably captures empathy assessments by the
writer of a statement using multi-item scales. This is also the first
computational work distinguishing between multiple forms of empathy, namely
empathic concern and personal distress, as recognized throughout psychology. Finally,
we present experimental results for three different predictive models, of which
a CNN performs the best.
| 2,018 | Computation and Language |
Attaining the Unattainable? Reassessing Claims of Human Parity in Neural
Machine Translation | We reassess a recent study (Hassan et al., 2018) that claimed that machine
translation (MT) has reached human parity for the translation of news from
Chinese into English, using pairwise ranking and considering three variables
that were not taken into account in that previous study: the language in which
the source side of the test set was originally written, the translation
proficiency of the evaluators, and the provision of inter-sentential context.
If we consider only original source text (i.e. not translated from another
language, or translationese), then we find evidence showing that human parity
has not been achieved. We compare the judgments of professional translators
against those of non-experts and discover that those of the experts result in
higher inter-annotator agreement and better discrimination between human and
machine translations. In addition, we analyse the human translations of the
test set and identify important translation issues. Finally, based on these
findings, we provide a set of recommendations for future human evaluations of
MT.
| 2,018 | Computation and Language |
Syntactic Scaffolds for Semantic Structures | We introduce the syntactic scaffold, an approach to incorporating syntactic
information into semantic tasks. Syntactic scaffolds avoid expensive syntactic
processing at runtime, only making use of a treebank during training, through a
multitask objective. We improve over strong baselines on PropBank semantics,
frame semantics, and coreference resolution, achieving competitive performance
on all three tasks.
| 2,018 | Computation and Language |
Iterative Recursive Attention Model for Interpretable Sequence
Classification | Natural language processing has greatly benefited from the introduction of
the attention mechanism. However, standard attention models are of limited
interpretability for tasks that involve a series of inference steps. We
describe an iterative recursive attention model, which constructs incremental
representations of input data through reusing results of previously computed
queries. We train our model on sentiment classification datasets and
demonstrate its capacity to identify and combine different aspects of the input
in an easily interpretable manner, while obtaining performance close to the
state of the art.
| 2,018 | Computation and Language |
AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale | AISHELL-1 is by far the largest open-source speech corpus available for
Mandarin speech recognition research. It was released with a baseline system
containing solid training and testing pipelines for Mandarin ASR. In AISHELL-2,
1000 hours of clean read-speech data from iOS is published, which is free for
academic usage. On top of AISHELL-2 corpus, an improved recipe is developed and
released, containing key components for industrial applications, such as
Chinese word segmentation, flexible vocabulary expansion and phone set
transformation, etc. Pipelines support various state-of-the-art techniques, such
as time-delay neural networks and the Lattice-Free MMI objective function. In
addition, we also release dev and test data from other channels (Android and
Mic). For the research community, we hope that the AISHELL-2 corpus can be a
solid resource for topics like transfer learning and robust ASR. For industry,
we hope the AISHELL-2 recipe can be a helpful reference for building meaningful
industrial systems and products.
| 2,018 | Computation and Language |
Learning to Describe Differences Between Pairs of Similar Images | In this paper, we introduce the task of automatically generating text to
describe the differences between two similar images. We collect a new dataset
by crowd-sourcing difference descriptions for pairs of image frames extracted
from video-surveillance footage. Annotators were asked to succinctly describe
all the differences in a short paragraph. As a result, our novel dataset
provides an opportunity to explore models that align language and vision, and
capture visual salience. The dataset may also be a useful benchmark for
coherent multi-sentence generation. We perform a first-pass visual analysis that
exposes clusters of differing pixels as a proxy for object-level differences.
We propose a model that captures visual salience by using a latent variable to
align clusters of differing pixels with output sentences. We find that, for
both single-sentence and multi-sentence generation, the
proposed model outperforms the models that use attention alone.
| 2,018 | Computation and Language |
Ensemble Sequence Level Training for Multimodal MT: OSU-Baidu WMT18
Multimodal Machine Translation System Report | This paper describes multimodal machine translation systems developed jointly
by Oregon State University and Baidu Research for WMT 2018 Shared Task on
multimodal translation. In this paper, we introduce a simple approach to
incorporate image information by feeding image features to the decoder side. We
also explore different sequence level training methods including scheduled
sampling and reinforcement learning which lead to substantial improvements. Our
systems ensemble several models using different architectures and training
methods and achieve the best performance for three subtasks: En-De and En-Cs in
task 1 and (En+De+Fr)-Cs in task 1B.
| 2,018 | Computation and Language |
Explicit State Tracking with Semi-Supervision for Neural Dialogue
Generation | The task of dialogue generation aims to automatically provide responses given
previous utterances. Tracking dialogue states is an important ingredient in
dialogue generation for estimating users' intention. However, the
\emph{expensive nature of state labeling} and the \emph{weak interpretability}
make the dialogue state tracking a challenging problem for both task-oriented
and non-task-oriented dialogue generation: For generating responses in
task-oriented dialogues, state tracking is usually learned from manually
annotated corpora, where the human annotation is expensive for training; for
generating responses in non-task-oriented dialogues, most existing work
neglects the explicit state tracking due to the unlimited number of dialogue
states.
In this paper, we propose the \emph{semi-supervised explicit dialogue state
tracker} (SEDST) for neural dialogue generation. To this end, our approach has
two core ingredients: \emph{CopyFlowNet} and \emph{posterior regularization}.
Specifically, we propose an encoder-decoder architecture, named
\emph{CopyFlowNet}, to represent an explicit dialogue state with a
probabilistic distribution over the vocabulary space. To optimize the training
procedure, we apply a posterior regularization strategy to integrate indirect
supervision. Extensive experiments conducted on both task-oriented and
non-task-oriented dialogue corpora demonstrate the effectiveness of our
proposed model. Moreover, we find that our proposed semi-supervised dialogue
state tracker achieves performance comparable to state-of-the-art supervised
learning baselines in the state tracking procedure.
| 2,018 | Computation and Language |
Do Language Models Understand Anything? On the Ability of LSTMs to
Understand Negative Polarity Items | In this paper, we attempt to link the inner workings of a neural language
model to linguistic theory, focusing on a complex phenomenon well discussed in
formal linguistics: (negative) polarity items. We briefly discuss the leading
hypotheses about the licensing contexts that allow negative polarity items and
evaluate to what extent a neural language model has the ability to correctly
process a subset of such constructions. We show that the model finds a relation
between the licensing context and the negative polarity item and appears to be
aware of the scope of this context, which we extract from a parse tree of the
sentence. With this research, we hope to pave the way for other studies linking
formal linguistics to deep learning.
| 2,018 | Computation and Language |
Retrieve-and-Read: Multi-task Learning of Information Retrieval and
Reading Comprehension | This study considers the task of machine reading at scale (MRS) wherein,
given a question, a system first performs the information retrieval (IR) task
of finding relevant passages in a knowledge source and then carries out the
reading comprehension (RC) task of extracting an answer span from the passages.
Previous MRS studies, in which the IR component was trained without considering
answer spans, struggled to accurately find a small number of relevant passages
from a large set of passages. In this paper, we propose a simple and effective
approach that incorporates the IR and RC tasks by using supervised multi-task
learning so that the IR component can be trained by considering answer
spans. Experimental results on the standard benchmark, answering SQuAD
questions using the full Wikipedia as the knowledge source, showed that our
model achieved state-of-the-art performance. Moreover, we thoroughly evaluated
the individual contributions of our model components with our new Japanese
dataset and SQuAD. The results showed significant improvements in the IR task
and provided a new perspective on IR for RC: it is effective to teach which
part of the passage answers the question rather than to give only a relevance
score to the whole passage.
| 2,018 | Computation and Language |
An Empirical Analysis of the Role of Amplifiers, Downtoners, and
Negations in Emotion Classification in Microblogs | The effect of amplifiers, downtoners, and negations has been studied in
general and particularly in the context of sentiment analysis. However, there
is only limited work which aims at transferring the results and methods to
discrete classes of emotions, e.g., joy, anger, fear, sadness, surprise, and
disgust. For instance, it is not straightforward to interpret which emotion
the phrase "not happy" expresses. With this paper, we aim at obtaining a better
understanding of such modifiers in the context of emotion-bearing words and
their impact on document-level emotion classification, namely, microposts on
Twitter. We select an appropriate scope detection method for modifiers of
emotion words, incorporate it in a document-level emotion classification model
as an additional bag of words and show that this approach improves the performance
of emotion classification. In addition, we build a term weighting approach
based on the different modifiers into a lexical model for the analysis of the
semantics of modifiers and their impact on emotion meaning. We show that
amplifiers separate emotions expressed with an emotion-bearing word more
clearly from other secondary connotations. Downtoners have the opposite effect.
In addition, we discuss the meaning of negations of emotion-bearing words. For
instance, we show empirically that "not happy" is closer to sadness than to
anger and that fear-expressing words in the scope of downtoners often express
surprise.
| 2,018 | Computation and Language |
Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural
Machine Translation | Tying the weights of the target word embeddings with the target word
classifiers of neural machine translation models leads to faster training and
often to better translation quality. Given the success of this parameter
sharing, we investigate other forms of sharing in between no sharing and hard
equality of parameters. In particular, we propose a structure-aware output
layer which captures the semantic structure of the output space of words within
a joint input-output embedding. The model is a generalized form of weight tying
which shares parameters but allows learning a more flexible relationship with
input word embeddings and allows the effective capacity of the output layer to
be controlled. In addition, the model shares weights across output classifiers
and translation contexts which allows it to better leverage prior knowledge
about them. Our evaluation on English-to-Finnish and English-to-German datasets
shows the effectiveness of the method against strong encoder-decoder baselines
trained with or without weight tying.
| 2,018 | Computation and Language |
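As one way to picture a "more flexible relationship" between input embeddings and the output classifier than hard weight tying, the hypothetical PyTorch sketch below reuses the input embedding matrix but maps decoder states into the embedding space through a learned transform. It illustrates the general idea only and is not the paper's exact structure-aware output layer; all names are ours.

```python
import torch
import torch.nn as nn

class JointInputOutputEmbedding(nn.Module):
    """
    Illustrative generalized weight tying: the output classifier reuses the
    input embedding matrix, but scores are computed through a learned
    transform instead of requiring hard equality of parameters.
    """
    def __init__(self, embedding: nn.Embedding, hidden_size: int):
        super().__init__()
        self.embedding = embedding                        # (V, d), shared with the input side
        self.transform = nn.Linear(hidden_size, embedding.embedding_dim)
        self.bias = nn.Parameter(torch.zeros(embedding.num_embeddings))

    def forward(self, decoder_state):                     # (batch, hidden_size)
        query = torch.tanh(self.transform(decoder_state))           # map into embedding space
        logits = query @ self.embedding.weight.t() + self.bias      # (batch, V)
        return logits
```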
Extracting Keywords from Open-Ended Business Survey Questions | Open-ended survey data constitute an important basis in research as well as
for making business decisions. Collecting and manually analysing free-text
survey data is generally more costly than collecting and analysing survey data
consisting of answers to multiple-choice questions. Yet free-text data allow
for new content to be expressed beyond predefined categories and are a very
valuable source of new insights into people's opinions. At the same time,
surveys always make ontological assumptions about the nature of the entities
that are researched, and this has vital ethical consequences. Human
interpretations and opinions can only be properly ascertained in their richness
using textual data sources; if these sources are analyzed appropriately, the
essential linguistic nature of humans and social entities is safeguarded.
Natural Language Processing (NLP) offers possibilities for meeting this ethical
business challenge by automating the analysis of natural language and thus
allowing for insightful investigations of human judgements. We present a
computational pipeline for analysing large amounts of responses to open-ended
questions in surveys and extract keywords that appropriately represent people's
opinions. This pipeline addresses the need to perform such tasks outside the
scope of both commercial software and bespoke analysis, exceeds the performance
of state-of-the-art systems, and performs this task in a transparent way that
allows for scrutinising and exposing potential biases in the analysis.
Following the principle of Open Data Science, our code is open-source and
generalizable to other datasets.
| 2,020 | Computation and Language |
How agents see things: On visual representations in an emergent language
game | There is growing interest in the language developed by agents interacting in
emergent-communication settings. Earlier studies have focused on the agents'
symbol usage, rather than on their representation of visual input. In this
paper, we consider the referential games of Lazaridou et al. (2017) and
investigate the representations the agents develop during their evolving
interaction. We find that the agents establish successful communication by
inducing visual representations that almost perfectly align with each other,
but, surprisingly, do not capture the conceptual properties of the objects
depicted in the input images. We conclude that, if we are interested in
developing language-like communication systems, we must pay more attention to
the visual semantics agents associate with the symbols they use.
| 2,018 | Computation and Language |
Imitation Learning for Neural Morphological String Transduction | We employ imitation learning to train a neural transition-based string
transducer for morphological tasks such as inflection generation and
lemmatization. Previous approaches to training this type of model either rely
on an external character aligner for the production of gold action sequences,
which results in a suboptimal model due to the unwarranted dependence on a
single gold action sequence despite spurious ambiguity, or require warm
starting with an MLE model. Our approach only requires a simple expert policy,
eliminating the need for a character aligner or warm start. It also addresses
familiar MLE training biases and leads to strong and state-of-the-art
performance on several benchmarks.
| 2,018 | Computation and Language |
Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs | In human conversational interactions, turn-taking exchanges can be
coordinated using cues from multiple modalities. To design spoken dialog
systems that can conduct fluid interactions it is desirable to incorporate cues
from separate modalities into turn-taking models. We propose that there is an
appropriate temporal granularity at which modalities should be modeled. We
design a multiscale RNN architecture to model modalities at separate timescales
in a continuous manner. Our results show that modeling linguistic and acoustic
features at separate temporal rates can be beneficial for turn-taking modeling.
We also show that our approach can be used to incorporate gaze features into
turn-taking models.
| 2,018 | Computation and Language |
Cognate-aware morphological segmentation for multilingual neural
translation | This article describes the Aalto University entry to the WMT18 News
Translation Shared Task. We participate in the multilingual subtrack with a
system trained under the constrained condition to translate from English to
both Finnish and Estonian. The system is based on the Transformer model. We
focus on improving the consistency of morphological segmentation for words that
are similar orthographically, semantically, and distributionally; such words
include etymological cognates, loan words, and proper names. For this, we
introduce Cognate Morfessor, a multilingual variant of the Morfessor method. We
show that our approach improves the translation quality particularly for
Estonian, which has fewer resources for training the translation model.
| 2,018 | Computation and Language |
Bottom-Up Abstractive Summarization | Neural network-based methods for abstractive summarization produce outputs
that are more fluent than other techniques, but which can be poor at content
selection. This work proposes a simple technique for addressing this issue: use
a data-efficient content selector to over-determine phrases in a source
document that should be part of the summary. We use this selector as a
bottom-up attention step to constrain the model to likely phrases. We show that
this approach improves the ability to compress text, while still generating
fluent summaries. This two-step process is both simpler and higher performing
than other end-to-end content selection models, leading to significant
improvements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the
content selector can be trained with as little as 1,000 sentences, making it
easy to transfer a trained summarizer to a new domain.
| 2,018 | Computation and Language |
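A minimal way to picture the bottom-up attention step is to mask and renormalize the decoder's source attention using the content selector's per-token probabilities. The sketch below is our own simplified illustration (the threshold is a hypothetical hyperparameter), not the paper's implementation.

```python
import torch

def bottom_up_mask(attention_probs, selector_probs, threshold=0.5, eps=1e-9):
    """
    Illustrative bottom-up attention step: source tokens whose content-selector
    probability falls below a threshold are masked out of the attention
    distribution, which is then renormalized.
    attention_probs: (batch, src_len) decoder attention over source tokens
    selector_probs:  (batch, src_len) per-token selection probabilities
    """
    mask = (selector_probs >= threshold).float()
    masked = attention_probs * mask
    return masked / (masked.sum(dim=-1, keepdim=True) + eps)
```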
The MeMAD Submission to the WMT18 Multimodal Translation Task | This paper describes the MeMAD project entry to the WMT Multimodal Machine
Translation Shared Task.
We propose adapting the Transformer neural machine translation (NMT)
architecture to a multi-modal setting. In this paper, we also describe the
preliminary experiments with text-only translation systems leading us up to
this choice.
We have the top scoring system for both English-to-German and
English-to-French, according to the automatic metrics for flickr18.
Our experiments show that the effect of the visual features in our system is
small. Our largest gains come from the quality of the underlying text-only NMT
system. We find that appropriate use of additional data is effective.
| 2,018 | Computation and Language |
Spherical Latent Spaces for Stable Variational Autoencoders | A hallmark of variational autoencoders (VAEs) for text processing is their
combination of powerful encoder-decoder models, such as LSTMs, with simple
latent distributions, typically multivariate Gaussians. These models pose a
difficult optimization problem: there is an especially bad local optimum where
the variational posterior always equals the prior and the model does not use
the latent variable at all, a kind of "collapse" which is encouraged by the KL
divergence term of the objective. In this work, we experiment with another
choice of latent distribution, namely the von Mises-Fisher (vMF) distribution,
which places mass on the surface of the unit hypersphere. With this choice of
prior and posterior, the KL divergence term now only depends on the variance of
the vMF distribution, giving us the ability to treat it as a fixed
hyperparameter. We show that doing so not only averts the KL collapse, but
consistently gives better likelihoods than Gaussians across a range of modeling
conditions, including recurrent language modeling and bag-of-words document
modeling. An analysis of the properties of our vMF representations shows that
they learn richer and more nuanced structures in their latent representations
than their Gaussian counterparts.
| 2,018 | Computation and Language |
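In equation form, the point about the KL term is that with a vMF posterior of fixed concentration and a uniform prior on the hypersphere (a vMF with concentration zero), the KL divergence depends only on the concentration and is therefore constant during training. A sketch of the objective, in our own notation rather than the paper's:

```latex
% ELBO with a vMF posterior q(z|x) = vMF(mu_phi(x), kappa), kappa fixed,
% and a uniform prior on the unit hypersphere (vMF with concentration 0).
\mathcal{L}(\theta,\phi;x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\!\left(\mathrm{vMF}(\mu_\phi(x),\kappa)\,\middle\|\,\mathrm{vMF}(\cdot,0)\right),
\qquad
\mathrm{KL} = f(\kappa)\ \text{only, independent of } \mu_\phi(x).
```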
Gromov-Wasserstein Alignment of Word Embedding Spaces | Cross-lingual or cross-domain correspondences play key roles in tasks ranging
from machine translation to transfer learning. Recently, purely unsupervised
methods operating on monolingual embeddings have become effective alignment
tools. Current state-of-the-art methods, however, involve multiple steps,
including heuristic post-hoc refinement strategies. In this paper, we cast the
correspondence problem directly as an optimal transport (OT) problem, building
on the idea that word embeddings arise from metric recovery algorithms. Indeed,
we exploit the Gromov-Wasserstein distance that measures how similarities
between pairs of words relate across languages. We show that our OT objective
can be estimated efficiently, requires little or no tuning, and results in
performance comparable with the state-of-the-art in various unsupervised word
translation tasks.
| 2,018 | Computation and Language |
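The correspondence-as-optimal-transport idea can be prototyped with the POT (Python Optimal Transport) library, which provides a Gromov-Wasserstein solver. The sketch below is our own minimal illustration, assuming POT is installed; it is not the authors' pipeline and omits their refinements.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def gw_word_alignment(emb_src, emb_tgt):
    """
    Illustrative Gromov-Wasserstein alignment of two monolingual embedding
    spaces: only intra-language similarity structures are compared, so no
    seed dictionary is needed.
    emb_src: (n, d1) source-language word embeddings
    emb_tgt: (m, d2) target-language word embeddings
    """
    # intra-language dissimilarity matrices
    C1 = ot.dist(emb_src, emb_src, metric='cosine')
    C2 = ot.dist(emb_tgt, emb_tgt, metric='cosine')
    p = np.full(len(emb_src), 1.0 / len(emb_src))   # uniform marginals
    q = np.full(len(emb_tgt), 1.0 / len(emb_tgt))
    # coupling matrix: soft word-to-word correspondences across languages
    T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
    return T
```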
What do RNN Language Models Learn about Filler-Gap Dependencies? | RNN language models have achieved state-of-the-art perplexity results and
have proven useful in a suite of NLP tasks, but it is as yet unclear what
syntactic generalizations they learn. Here we investigate whether
state-of-the-art RNN language models represent long-distance filler-gap
dependencies and constraints on them. Examining RNN behavior on experimentally
controlled sentences designed to expose filler-gap dependencies, we show that
RNNs can represent the relationship in multiple syntactic positions and over
large spans of text. Furthermore, we show that RNNs learn a subset of the known
restrictions on filler-gap dependencies, known as island constraints: RNNs show
evidence for wh-islands, adjunct islands, and complex NP islands. These studies
demonstrate that state-of-the-art RNN models are able to learn and generalize
about empty syntactic positions.
| 2,018 | Computation and Language |
Generalizing Procrustes Analysis for Better Bilingual Dictionary
Induction | Most recent approaches to bilingual dictionary induction find a linear
alignment between the word vector spaces of two languages. We show that
projecting the two languages onto a third, latent space, rather than directly
onto each other, while equivalent in terms of expressivity, makes it easier to
learn approximate alignments. Our modified approach also allows for supporting
languages to be included in the alignment process, to obtain an even better
performance in low resource settings.
| 2,020 | Computation and Language |
Indicatements that character language models learn English
morpho-syntactic units and regularities | Character language models have access to surface morphological patterns, but
it is not clear whether or how they learn abstract morphological regularities.
We instrument a character language model with several probes, finding that it
can develop a specific unit to identify word boundaries and, by extension,
morpheme boundaries, which allows it to capture linguistic properties and
regularities of these units. Our language model proves surprisingly good at
identifying the selectional restrictions of English derivational morphemes, a
task that requires both morphological and syntactic awareness. Thus we conclude
that, when morphemes overlap extensively with the words of a language, a
character language model can perform morphological abstraction.
| 2,018 | Computation and Language |
Denoising Neural Machine Translation Training with Trusted Data and
Online Data Selection | Measuring domain relevance of data and identifying or selecting well-fit
domain data for machine translation (MT) is a well-studied topic, but denoising
is not yet. Denoising is concerned with a different type of data quality and
tries to reduce the negative impact of data noise on MT training, in
particular, neural MT (NMT) training. This paper generalizes methods for
measuring and selecting data for domain MT and applies them to denoising NMT
training. The proposed approach uses trusted data and a denoising curriculum
realized by online data selection. Intrinsic and extrinsic evaluations of the
approach show its significant effectiveness for NMT to train on data with
severe noise.
| 2,018 | Computation and Language |
When to Finish? Optimal Beam Search for Neural Text Generation (modulo
beam size) | In neural text generation such as neural machine translation, summarization,
and image captioning, beam search is widely used to improve the output text
quality. However, in the neural generation setting, hypotheses can finish in
different steps, which makes it difficult to decide when to end beam search to
ensure optimality. We propose a provably optimal beam search algorithm that
will always return the optimal-score complete hypothesis (modulo beam size),
and finish as soon as the optimality is established (finishing no later than
the baseline). To counter neural generation's tendency for shorter hypotheses,
we also introduce a bounded length reward mechanism which allows a modified
version of our beam search algorithm to remain optimal. Experiments on neural
machine translation demonstrate that our principled beam search algorithm leads
to improvement in BLEU score over previously proposed alternatives.
| 2,018 | Computation and Language |
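The optimality argument sketched above rests on the fact that, with non-positive log-probability scores, continuing a hypothesis can only lower its score, or raise it by at most a bounded length reward per remaining step, so search can stop as soon as the best finished hypothesis dominates that bound. Below is a hedged Python sketch of such a stopping test, with a hypothetical hypothesis object exposing `score` and `length`; it illustrates the idea rather than reproducing the paper's exact algorithm.

```python
def should_stop(best_finished_score, active_hypotheses, length_reward=0.0, max_len=200):
    """
    Stop beam search once no still-active hypothesis can overtake the best
    finished one. With length_reward = 0, scores (sums of log-probabilities)
    can only decrease; with a bounded per-step reward, the possible gain is
    bounded by length_reward * (max_len - current length).
    """
    if best_finished_score is None:
        return False
    best_possible = max(
        (h.score + length_reward * (max_len - h.length) for h in active_hypotheses),
        default=float('-inf'),
    )
    return best_finished_score >= best_possible
```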
Nightmare at test time: How punctuation prevents parsers from
generalizing | Punctuation is a strong indicator of syntactic structure, and parsers trained
on text with punctuation often rely heavily on this signal. Punctuation is a
diversion, however, since human language processing does not rely on
punctuation to the same extent, and in informal texts, we therefore often leave
out punctuation. We also use punctuation ungrammatically for emphatic or
creative purposes, or simply by mistake. We show that (a) dependency parsers
are sensitive to both absence of punctuation and to alternative uses; (b)
neural parsers tend to be more sensitive than vintage parsers; (c) training
neural parsers without punctuation outperforms all out-of-the-box parsers
across all scenarios where punctuation departs from standard punctuation. Our
main experiments are on synthetically corrupted data to study the effect of
punctuation in isolation and avoid potential confounds, but we also show
effects on out-of-domain data.
| 2,018 | Computation and Language |
Hierarchical CVAE for Fine-Grained Hate Speech Classification | Existing work on automated hate speech detection typically focuses on binary
classification or on differentiating among a small set of categories. In this
paper, we propose a novel method on a fine-grained hate speech classification
task, which focuses on differentiating among 40 hate groups of 13 different
hate group categories. We first explore the Conditional Variational Autoencoder
(CVAE) as a discriminative model and then extend it to a hierarchical
architecture to utilize the additional hate category information for more
accurate prediction. Experimentally, we show that incorporating the hate
category information for training can significantly improve the classification
performance and our proposed model outperforms commonly-used discriminative
models.
| 2,018 | Computation and Language |
Dependency-based Hybrid Trees for Semantic Parsing | We propose a novel dependency-based hybrid tree model for semantic parsing,
which converts natural language utterances into machine-interpretable meaning
representations. Unlike previous state-of-the-art models, the semantic
information is interpreted as the latent dependency between the natural
language words in our joint representation. Such dependency information can
capture the interactions between the semantics and natural language words. We
integrate a neural component into our model and propose an efficient
dynamic-programming algorithm to perform tractable inference. Through extensive
experiments on the standard multilingual GeoQuery dataset with eight languages,
we demonstrate that our proposed approach is able to achieve state-of-the-art
performance across several languages. Analysis also justifies the effectiveness
of using our new dependency-based representation.
| 2,018 | Computation and Language |
Beyond Error Propagation in Neural Machine Translation: Characteristics
of Language Also Matter | Neural machine translation usually adopts autoregressive models and suffers
from exposure bias as well as the consequent error propagation problem. Many
previous works have discussed the relationship between error propagation and
the \emph{accuracy drop} (i.e., the left part of the translated sentence is
often better than its right part in left-to-right decoding models) problem. In
this paper, we conduct a series of analyses to deeply understand this problem
and get several interesting findings. (1) The role of error propagation on
accuracy drop is overstated in the literature, although it indeed contributes
to the accuracy drop problem. (2) Characteristics of a language play a more
important role in causing the accuracy drop: the left part of the translation
result in a right-branching language (e.g., English) is more likely to be more
accurate than its right part, while the right part is more accurate for a
left-branching language (e.g., Japanese). Our discoveries are confirmed on
different model structures including Transformer and RNN, and in other sequence
generation tasks such as text summarization.
| 2,018 | Computation and Language |
Simple Fusion: Return of the Language Model | Neural Machine Translation (NMT) typically leverages monolingual data in
training through backtranslation. We investigate an alternative simple method
to use monolingual data for NMT training: We combine the scores of a
pre-trained and fixed language model (LM) with the scores of a translation
model (TM) while the TM is trained from scratch. To achieve that, we train the
translation model to predict the residual probability of the training data
added to the prediction of the LM. This enables the TM to focus its capacity on
modeling the source sentence since it can rely on the LM for fluency. We show
that our method outperforms previous approaches to integrate LMs into NMT while
the architecture is simpler as it does not require gating networks to balance
TM and LM. We observe gains of between +0.24 and +2.36 BLEU on all four test
sets (English-Turkish, Turkish-English, Estonian-English, Xhosa-English) on top
of ensembles without LM. We compare our method with alternative ways to utilize
monolingual data such as backtranslation, shallow fusion, and cold fusion.
| 2,019 | Computation and Language |
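The "predict the residual on top of a fixed LM" training idea can be written in one line: the translation model's unnormalized scores are added to the frozen language model's log-probabilities before the output softmax, so the TM only has to model what the LM cannot. A minimal PyTorch sketch of one plausible variant (the paper discusses more than one normalization scheme, and this is our illustration rather than its code):

```python
import torch.nn.functional as F

def simple_fusion_logprobs(tm_logits, lm_logprobs):
    """
    tm_logits:   (batch, vocab) unnormalized translation-model scores
    lm_logprobs: (batch, vocab) log-probabilities from the pre-trained, frozen LM
    Returns fused log-probabilities; the TM learns the residual on top of the LM.
    """
    return F.log_softmax(tm_logits + lm_logprobs, dim=-1)

# Training uses the usual cross-entropy on the fused log-probabilities,
# keeping the LM parameters frozen, e.g.:
# loss = F.nll_loss(simple_fusion_logprobs(tm_logits, lm_logprobs), targets)
```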
Contextual Encoding for Translation Quality Estimation | The task of word-level quality estimation (QE) consists of taking a source
sentence and machine-generated translation, and predicting which words in the
output are correct and which are wrong.
In this paper, we propose a method to effectively encode the local and global
contextual information for each target word using a three-part neural network
approach.
The first part uses an embedding layer to represent words and their
part-of-speech tags in both languages. The second part leverages a
one-dimensional convolution layer to integrate local context information for
each target word. The third part applies a stack of feed-forward and recurrent
neural networks to further encode the global context in the sentence before
making the predictions. This model was submitted as the CMU entry to the
WMT2018 shared task on QE, and achieves strong results, ranking first in three
of the six tracks.
| 2,018 | Computation and Language |
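Below is a rough, self-contained PyTorch sketch of the three-part architecture described above: embeddings, a 1-D convolution for local context, and a recurrent plus feed-forward stack for global context. All sizes are illustrative, the POS-tag and source-side inputs are omitted, and this is not the CMU submission's actual code.

```python
import torch
import torch.nn as nn

class QEWordTagger(nn.Module):
    """Sketch of a word-level QE tagger: embed -> 1-D conv -> BiLSTM -> FFN."""
    def __init__(self, vocab_size, emb_dim=64, conv_channels=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(conv_channels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))      # OK / BAD tag per word

    def forward(self, token_ids):                            # (batch, seq_len)
        x = self.embed(token_ids)                            # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # local context
        x, _ = self.rnn(x)                                    # global context
        return self.out(x)                                    # (batch, seq_len, 2)
```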
Why is unsupervised alignment of English embeddings from different
algorithms so hard? | This paper presents a challenge to the community: Generative adversarial
networks (GANs) can perfectly align independent English word embeddings induced
using the same algorithm, based on distributional information alone, but fail
to do so for two different embedding algorithms. Why is that? We believe
understanding why is key to understanding both modern word embedding algorithms
and the limitations and instability dynamics of GANs. This paper shows that (a)
in all these cases where alignment fails, there exists a linear transform
between the two embeddings (so algorithm biases do not lead to non-linear
differences), and (b) similar effects cannot easily be obtained by varying
hyper-parameters. One plausible suggestion based on our initial experiments is
that the differences in the inductive biases of the embedding algorithms lead
to an optimization landscape that is riddled with local optima, leading to a
very small basin of convergence, but we present this more as a challenge paper
than a technical contribution.
| 2,018 | Computation and Language |
LIUM-CVC Submissions for WMT18 Multimodal Translation Task | This paper describes the multimodal Neural Machine Translation systems
developed by LIUM and CVC for WMT18 Shared Task on Multimodal Translation. This
year we propose several modifications to our previous multimodal attention
architecture in order to better integrate convolutional features and refine
them using encoder-side information. Our final constrained submissions ranked
first for English-French and second for English-German language pairs among the
constrained submissions according to the automatic evaluation metric METEOR.
| 2,018 | Computation and Language |