Titles (stringlengths 6–220) | Abstracts (stringlengths 37–3.26k) | Years (int64, 1.99k–2.02k) | Categories (stringclasses, 1 value) |
---|---|---|---|
Leap-LSTM: Enhancing Long Short-Term Memory for Text Categorization | Recurrent Neural Networks (RNNs) are widely used in the field of natural
language processing (NLP), ranging from text categorization to question
answering and machine translation. However, RNNs generally read the whole text
from beginning to end, or sometimes in reverse, which makes it inefficient to
process long texts. When reading a long document for a categorization task,
such as topic categorization, large quantities of words are irrelevant and can
be skipped. To this end, we propose Leap-LSTM, an LSTM-enhanced model which
dynamically leaps between words while reading texts. At each step, we utilize
several feature encoders to extract information from the preceding text, the
following text, and the current word, and then determine whether to skip the current
word. We evaluate Leap-LSTM on several text categorization tasks: sentiment
analysis, news categorization, ontology classification and topic
classification, with five benchmark data sets. The experimental results show
that our model reads faster and predicts better than standard LSTM. Compared to
previous models which can also skip words, our model achieves better trade-offs
between performance and efficiency.
| 2,019 | Computation and Language |
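A minimal sketch of the skip mechanism described above, in PyTorch. Here the skip probability is computed only from the current word embedding and the previous hidden state, and skipping is applied as a soft interpolation so the example stays differentiable; the paper's separate encoders for preceding and following text are not reproduced, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class SkipLSTMSketch(nn.Module):
    """Illustrative LSTM that can softly skip tokens while reading.

    Simplified sketch of the skip idea only: the skip decision depends on the
    current embedding and the previous hidden state, not on the richer
    preceding/following-text encoders used by Leap-LSTM.
    """

    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.cell = nn.LSTMCell(emb_dim, hidden_dim)
        self.skip_gate = nn.Linear(emb_dim + hidden_dim, 1)

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (batch, seq_len, emb_dim)
        batch, seq_len, _ = embeddings.shape
        h = embeddings.new_zeros(batch, self.cell.hidden_size)
        c = embeddings.new_zeros(batch, self.cell.hidden_size)
        skip_probs = []
        for t in range(seq_len):
            x_t = embeddings[:, t]
            p_skip = torch.sigmoid(self.skip_gate(torch.cat([x_t, h], dim=-1)))
            h_new, c_new = self.cell(x_t, (h, c))
            # Soft interpolation: with probability p_skip keep the old state.
            h = p_skip * h + (1 - p_skip) * h_new
            c = p_skip * c + (1 - p_skip) * c_new
            skip_probs.append(p_skip)
        return h, torch.stack(skip_probs, dim=1)

# Usage: encode a batch of 2 "texts" of 7 tokens with 32-dim embeddings.
model = SkipLSTMSketch(emb_dim=32, hidden_dim=64)
final_state, skips = model(torch.randn(2, 7, 32))
print(final_state.shape, skips.shape)  # torch.Size([2, 64]) torch.Size([2, 7, 1])
```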
Unsupervised End-to-End Learning of Discrete Linguistic Units for Voice
Conversion | We present an unsupervised end-to-end training scheme where we discover
discrete subword units from speech without using any labels. The discrete
subword units are learned under an ASR-TTS autoencoder reconstruction setting,
where an ASR-Encoder is trained to discover a set of common linguistic units
given a variety of speakers, and a TTS-Decoder is trained to project the
discovered units back to the designated speech. We propose a discrete encoding
method, Multilabel-Binary Vectors (MBV), to make the ASR-TTS autoencoder
differentiable. We found that the proposed encoding method offers automatic
extraction of speech content from speaker style, and is sufficient to cover
full linguistic content in a given language. Therefore, the TTS-Decoder can
synthesize speech with the same content as the input of ASR-Encoder but with
different speaker characteristics, which achieves voice conversion (VC). We
further improve the quality of VC using adversarial training, where we train a
TTS-Patcher that augments the output of TTS-Decoder. Objective and subjective
evaluations show that the proposed approach offers strong VC results as it
eliminates speaker identity while preserving content within speech. In the
ZeroSpeech 2019 Challenge, we achieved outstanding performance in terms of low
bitrate.
| 2,019 | Computation and Language |
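The abstract does not spell out how the Multilabel-Binary Vectors are made differentiable, so the sketch below uses a straight-through estimator, a common way to backpropagate through a hard binary code, purely as an assumed mechanism.

```python
import torch

def multilabel_binary_code(logits: torch.Tensor) -> torch.Tensor:
    """Binarize encoder logits into a multilabel 0/1 vector.

    A straight-through estimator is used so gradients can flow back to the
    encoder: the forward pass is a hard threshold, the backward pass behaves
    like the underlying sigmoid. This is an assumed mechanism, not a
    description of the paper's exact formulation.
    """
    probs = torch.sigmoid(logits)
    hard = (probs > 0.5).float()
    # hard values in the forward pass, sigmoid gradients in the backward pass
    return hard + probs - probs.detach()

logits = torch.randn(4, 16, requires_grad=True)  # e.g. a 16-bit code per frame
code = multilabel_binary_code(logits)
code.sum().backward()
print(code[0], logits.grad.shape)
```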
DSReg: Using Distant Supervision as a Regularizer | In this paper, we aim at tackling a general issue in NLP tasks where some of
the negative examples are highly similar to the positive examples, i.e.,
hard-negative examples. We propose the distant supervision as a regularizer
(DSReg) approach to tackle this issue. The original task is converted to a
multi-task learning problem, in which distant supervision is used to retrieve
hard-negative examples. The obtained hard-negative examples are then used as a
regularizer. The original target objective of distinguishing positive examples
from negative examples is jointly optimized with the auxiliary task objective
of distinguishing softened positive (i.e., hard-negative examples plus positive
examples) from easy-negative examples. In the neural context, this can be done
by feeding the same representation from the last neural layer into different
softmax functions. Using this strategy, we can improve the performance of
baseline models in a range of different NLP tasks, including text
classification, sequence labeling and reading comprehension.
| 2,019 | Computation and Language |
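A minimal sketch of the joint objective described above: one shared representation from the encoder's last layer is fed into two softmax classifiers, one for the original positive-vs-negative task and one for the auxiliary softened-positive-vs-easy-negative task. The encoder, the distant-supervision step that retrieves hard negatives, and the loss weight are outside the snippet and assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSRegHead(nn.Module):
    """Shared representation feeding two softmax classifiers (sketch).

    Head 1: original task (positive vs. negative).
    Head 2: auxiliary task (softened-positive, i.e. positive plus hard-negative,
            vs. easy-negative); hard negatives come from distant supervision
            outside this snippet.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.main_head = nn.Linear(hidden_dim, 2)
        self.aux_head = nn.Linear(hidden_dim, 2)

    def forward(self, h, main_labels, aux_labels, aux_weight=0.5):
        loss_main = F.cross_entropy(self.main_head(h), main_labels)
        loss_aux = F.cross_entropy(self.aux_head(h), aux_labels)
        return loss_main + aux_weight * loss_aux

# h would come from any sentence encoder's last layer.
head = DSRegHead(hidden_dim=128)
h = torch.randn(8, 128)
main_y = torch.randint(0, 2, (8,))   # positive vs. negative
aux_y = torch.randint(0, 2, (8,))    # softened-positive vs. easy-negative
print(head(h, main_y, aux_y))
```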
On Measuring Gender Bias in Translation of Gender-neutral Pronouns | Social bias has recently raised striking ethical issues in natural
language processing. Especially for gender-related topics, the need for a
system that reduces the model bias has grown in areas such as image captioning,
content recommendation, and automated employment. However, detection and
evaluation of gender bias in the machine translation systems are not yet
thoroughly investigated, because the task is cross-lingual and challenging to
define. In this paper, we propose a scheme for constructing a test set that
evaluates the gender bias in a machine translation system, with Korean, a
language with gender-neutral pronouns. Three word/phrase sets are primarily
constructed, each incorporating positive/negative expressions or occupations;
all the terms are gender-independent or at least not severely biased toward one
side. Then, additional sentence lists are constructed concerning the formality
of the pronouns and politeness of the sentences. With the generated sentence
set of size 4,236 in total, we evaluate gender bias in conventional machine
translation systems using the proposed measure, which we term the
translation gender bias index (TGBI). The corpus and the code for evaluation are
available online.
| 2,019 | Computation and Language |
Extracting adverse drug reactions and their context using sequence
labelling ensembles in TAC2017 | Adverse drug reactions (ADRs) are unwanted or harmful effects experienced
after the administration of a certain drug or a combination of drugs,
presenting a challenge for drug development and drug administration. In this
paper, we present a set of taggers for extracting adverse drug reactions and
related entities, including factors, severity, negations, drug class and
animal. The systems used a mix of rule-based, machine learning (CRF) and deep
learning (BLSTM with word2vec embeddings) methodologies in order to annotate
the data. The systems were submitted to the adverse drug reaction shared task,
organised during the Text Analytics Conference in 2017 by the National Institute of
Standards and Technology, achieving F1-scores of 76.00 and 75.61, respectively.
| 2,017 | Computation and Language |
An Incremental Turn-Taking Model For Task-Oriented Dialog Systems | In a human-machine dialog scenario, deciding the appropriate time for the
machine to take the turn is an open research problem. In contrast, humans
engaged in conversations are able to timely decide when to interrupt the
speaker for competitive or non-competitive reasons. In state-of-the-art
turn-by-turn dialog systems the decision on the next dialog action is taken at
the end of the utterance. In this paper, we propose a token-by-token prediction
of the dialog state from incremental transcriptions of the user utterance. To
identify the point of maximal understanding in an ongoing utterance, we a)
implement an incremental Dialog State Tracker which is updated on a token basis
(iDST), b) re-label the Dialog State Tracking Challenge 2 (DSTC2) dataset, and c)
adapt it to the incremental turn-taking experimental scenario. The re-labeling
consists of assigning a binary value to each token in the user utterance that
makes it possible to identify the appropriate point for taking the turn. Finally, we
implement an incremental Turn Taking Decider (iTTD) that is trained on these
new labels for the turn-taking decision. We show that the proposed model can
achieve a better performance compared to a deterministic handcrafted
turn-taking algorithm.
| 2,019 | Computation and Language |
Interpreting and improving natural-language processing (in machines)
with natural language-processing (in the brain) | Neural network models for NLP are typically implemented without the explicit
encoding of language rules and yet they are able to break one performance
record after another. This has generated a lot of research interest in
interpreting the representations learned by these networks. We propose here a
novel interpretation approach that relies on the only processing system we have
that does understand language: the human brain. We use brain imaging recordings
of subjects reading complex natural text to interpret word and sequence
embeddings from 4 recent NLP models - ELMo, USE, BERT and Transformer-XL. We
study how their representations differ across layer depth, context length, and
attention type. Our results reveal differences in the context-related
representations across these models. Further, in the transformer models, we
find an interaction between layer depth and context length, and between layer
depth and attention type. We finally hypothesize that altering BERT to better
align with brain recordings would enable it to also better understand language.
Probing the altered BERT using syntactic NLP tasks reveals that the model with
increased brain-alignment outperforms the original model. Cognitive
neuroscientists have already begun using NLP networks to study the brain, and
this work closes the loop to allow the interaction between NLP and cognitive
neuroscience to be a true cross-pollination.
| 2,019 | Computation and Language |
Specific polysemy of the brief sapiential units | In this paper we explain how we deal with the problems related to the
constitution of the Aliento database, the complexity of which stems from the
type of phrases we work with, the differences between languages, and the type
of information we want to see emerge. Correctly tagging the specific
polysemy of brief sapiential units is an important step in preparing the
text within the corpus, which will then be processed to compute similarities and
the posterity of the units.
| 2,010 | Computation and Language |
Miss Tools and Mr Fruit: Emergent communication in agents learning about
object affordances | Recent research studies communication emergence in communities of deep
network agents assigned a joint task, hoping to gain insights into human language
evolution. We propose here a new task capturing crucial aspects of the human
environment, such as natural object affordances, and of human conversation,
such as full symmetry among the participants. By conducting a thorough
pragmatic and semantic analysis of the emergent protocol, we show that the
agents solve the shared task through genuine bilateral, referential
communication. However, the agents develop multiple idiolects, which makes us
conclude that full symmetry is not a sufficient condition for a common language
to emerge.
| 2,019 | Computation and Language |
Revisiting Low-Resource Neural Machine Translation: A Case Study | It has been shown that the performance of neural machine translation (NMT)
drops starkly in low-resource conditions, underperforming phrase-based
statistical machine translation (PBSMT) and requiring large amounts of
auxiliary data to achieve competitive results. In this paper, we re-assess the
validity of these results, arguing that they are the result of a lack of system
adaptation to low-resource settings. We discuss some pitfalls to be aware of
when training low-resource NMT systems, and recent techniques that have been
shown to be especially helpful in low-resource settings, resulting in a set of best
practices for low-resource NMT. In our experiments on German--English with
different amounts of IWSLT14 training data, we show that, without the use of
any auxiliary monolingual or multilingual data, an optimized NMT system can
outperform PBSMT with far less data than previously claimed. We also apply
these techniques to a low-resource Korean-English dataset, surpassing
previously reported results by 4 BLEU.
| 2,019 | Computation and Language |
A Cross-Domain Transferable Neural Coherence Model | Coherence is an important aspect of text quality and is crucial for ensuring
its readability. One important limitation of existing coherence models is that
training on one domain does not easily generalize to unseen categories of text.
Previous work advocates for generative models for cross-domain generalization,
because for discriminative models, the space of incoherent sentence orderings
to discriminate against during training is prohibitively large. In this work,
we propose a local discriminative neural model with a much smaller negative
sampling space that can efficiently learn against incorrect orderings. The
proposed coherence model is simple in structure, yet it significantly
outperforms previous state-of-the-art methods on a standard benchmark dataset on
the Wall Street Journal corpus, as well as in multiple new challenging settings
of transfer to unseen categories of discourse on Wikipedia articles.
| 2,019 | Computation and Language |
On Variational Learning of Controllable Representations for Text without
Supervision | The variational autoencoder (VAE) can learn the manifold of natural images on
certain datasets, as evidenced by meaningful interpolation or extrapolation in
the continuous latent space. However, on discrete data such as text, it is
unclear whether unsupervised learning can discover a similar latent space that allows
controllable manipulation. In this work, we find that sequence VAEs trained on
text fail to properly decode when the latent codes are manipulated, because the
modified codes often land in holes or vacant regions in the aggregated
posterior latent space, where the decoding network fails to generalize. Both as
a validation of the explanation and as a fix to the problem, we propose to
constrain the posterior mean to a learned probability simplex, and to perform
manipulation within this simplex. Our proposed method mitigates the latent
vacancy problem and achieves the first success in unsupervised learning of
controllable representations for text. Empirically, our method outperforms
unsupervised baselines and strong supervised approaches on text style transfer,
and is capable of performing more flexible fine-grained control over text
generation than existing methods.
| 2,020 | Computation and Language |
Automatic Ambiguity Detection | Most work on sense disambiguation presumes that one knows beforehand -- e.g.
from a thesaurus -- a set of polysemous terms. But published lists invariably
give only partial coverage. For example, the English word tan has several
obvious senses, but one may overlook the abbreviation for tangent. In this
paper, we present an algorithm for identifying interesting polysemous terms and
measuring their degree of polysemy, given an unlabeled corpus. The algorithm
involves: (i) collecting all terms within a k-term window of the target term;
(ii) computing the inter-term distances of the contextual terms, and reducing
the multi-dimensional distance space to two dimensions using standard methods;
(iii) converting the two-dimensional representation into radial coordinates and
using isotonic/antitonic regression to compute the degree to which the
distribution deviates from a single-peak model. The amount of deviation is the
proposed polysemy index.
| 1,998 | Computation and Language |
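A rough sketch of steps (ii) and (iii), assuming the inter-term distance matrix from step (i) has already been computed; the bin count, the unimodal-fit procedure, and the normalisation are illustrative choices rather than the paper's exact ones.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.isotonic import IsotonicRegression

def polysemy_index(distances: np.ndarray, n_bins: int = 12) -> float:
    """Sketch of the three-step procedure described in the abstract.

    `distances` is a square matrix of inter-term distances for the context
    terms collected in a k-term window around the target (step i, assumed to
    be computed elsewhere).
    """
    # (ii) reduce the multi-dimensional distance space to two dimensions
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(distances)
    # (iii) convert to radial coordinates around the centroid
    centered = coords - coords.mean(axis=0)
    angles = np.arctan2(centered[:, 1], centered[:, 0])
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    # Fit the best single-peaked (increasing-then-decreasing) profile by
    # trying every peak position with isotonic/antitonic regression.
    best_err = np.inf
    x = np.arange(n_bins)
    for peak in range(n_bins):
        up = IsotonicRegression(increasing=True).fit_transform(x[:peak + 1], hist[:peak + 1])
        down = IsotonicRegression(increasing=False).fit_transform(x[peak:], hist[peak:])
        fit = np.concatenate([up[:-1], down])
        best_err = min(best_err, float(np.abs(hist - fit).sum()))
    # Deviation from the single-peak model, normalised by total mass.
    return best_err / max(hist.sum(), 1)

# Toy usage: two well-separated context clusters give a higher index.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(4, 1, (20, 5))])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(round(polysemy_index(dist), 3))
```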
SEMA: an Extended Semantic Evaluation Metric for AMR | Abstract Meaning Representation (AMR) is a recently designed semantic
representation language intended to capture the meaning of a sentence, which
may be represented as a single-rooted directed acyclic graph with labeled nodes
and edges. The automatic evaluation of this structure plays an important role
in the development of better systems, as well as for semantic annotation.
Although there is one available metric, smatch, it has some drawbacks. For
instance, smatch creates a self-relation on the root of the graph, has weights
for different error types, and does not take into account the dependence of the
elements in the AMR structure. With these drawbacks, smatch masks several
problems of the AMR parsers and distorts the evaluation of the AMRs. In view of
this, in this paper, we introduce an extended metric to evaluate AMR parsers,
which deals with the drawbacks of the smatch metric. Finally, we compare both
metrics, using four well-known AMR parsers, and we argue that our metric is
more refined, robust, fairer, and faster than smatch.
| 2,019 | Computation and Language |
Parallax: Visualizing and Understanding the Semantics of Embedding
Spaces via Algebraic Formulae | Embeddings are a fundamental component of many modern machine learning and
natural language processing models. Understanding them and visualizing them is
essential for gathering insights about the information they capture and the
behavior of the models. The state of the art in analyzing embeddings consists of
projecting them onto two-dimensional planes without any interpretable semantics
associated with the axes of the projection, which makes detailed analyses and
comparison among multiple sets of embeddings challenging. In this work, we
propose to use explicit axes defined as algebraic formulae over embeddings to
project them into a lower dimensional, but semantically meaningful subspace, as
a simple yet effective analysis and visualization methodology. This methodology
assigns an interpretable semantics to the measures of variability and the axes
of visualizations, allowing for both comparisons among different sets of
embeddings and fine-grained inspection of the embedding spaces. We demonstrate
the power of the proposed methodology through a series of case studies that
make use of visualizations constructed around the underlying methodology and
through a user study. The results show how the methodology is effective at
providing more profound insights than classical projection methods and how it
is widely applicable to many other use cases.
| 2,019 | Computation and Language |
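A small sketch of the core idea: define explicit axes as algebraic formulae over embeddings (for example, a normalised difference of two word vectors) and project words onto them, so every coordinate of the visualization has an interpretable meaning. The toy random vectors and formulae are placeholders for real pre-trained embeddings.

```python
import numpy as np

# Toy embedding table; in practice these would be pre-trained vectors.
rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman", "paris", "france", "apple"]
emb = {w: rng.normal(size=50) for w in vocab}

def axis(formula_terms):
    """Build an explicit axis as an algebraic formula over embeddings,
    e.g. [("king", +1), ("queen", -1)] for a gender-like direction."""
    v = sum(sign * emb[w] for w, sign in formula_terms)
    return v / np.linalg.norm(v)

def project(words, x_axis, y_axis):
    """Project words onto the two interpretable axes (cosine with each axis)."""
    out = {}
    for w in words:
        v = emb[w] / np.linalg.norm(emb[w])
        out[w] = (float(v @ x_axis), float(v @ y_axis))
    return out

x = axis([("king", +1), ("queen", -1)])     # illustrative "royal gender" axis
y = axis([("paris", +1), ("france", -1)])   # illustrative "capital-of" axis
print(project(["man", "woman", "apple"], x, y))
```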
Ensuring Readability and Data-fidelity using Head-modifier Templates in
Deep Type Description Generation | A type description is a succinct noun compound which helps humans and machines
to quickly grasp the informative and distinctive information about an entity.
Entities in most knowledge graphs (KGs) still lack such descriptions, thus
calling for automatic methods to supplement such information. However, existing
generative methods either overlook the grammatical structure or make factual
mistakes in generated texts. To solve these problems, we propose a
head-modifier template-based method to ensure the readability and data fidelity
of generated type descriptions. We also propose a new dataset and two automatic
metrics for this task. Experiments show that our method improves substantially
compared with baselines and achieves state-of-the-art performance on both
datasets.
| 2,019 | Computation and Language |
Guided Source Separation Meets a Strong ASR Backend: Hitachi/Paderborn
University Joint Investigation for Dinner Party ASR | In this paper, we present Hitachi and Paderborn University's joint effort for
automatic speech recognition (ASR) in a dinner party scenario. The main
challenges of ASR systems for dinner party recordings obtained by multiple
microphone arrays are (1) heavy speech overlaps, (2) severe noise and
reverberation, (3) very natural conversational content, and possibly (4)
insufficient training data. As an example of a dinner party scenario, we have
chosen the data presented during the CHiME-5 speech recognition challenge,
where the baseline ASR had a 73.3% word error rate (WER), and even the best
performing system at the CHiME-5 challenge had a 46.1% WER. We extensively
investigated a combination of the guided source separation-based speech
enhancement technique and an already proposed strong ASR backend and found that
a tight combination of these techniques provided substantial accuracy
improvements. Our final system achieved WERs of 39.94% and 41.64% for the
development and evaluation data, respectively, both of which are the best
published results for the dataset. We also experimented with additional
training data beyond the official small data in the CHiME-5 corpus to assess the
intrinsic difficulty of this ASR task.
| 2,019 | Computation and Language |
Learning Multilingual Word Embeddings Using Image-Text Data | There has been significant interest recently in learning multilingual word
embeddings -- in which semantically similar words across languages have similar
embeddings. State-of-the-art approaches have relied on expensive labeled data,
which is unavailable for low-resource languages, or have involved post-hoc
unification of monolingual embeddings. In the present paper, we investigate the
efficacy of multilingual embeddings learned from weakly-supervised image-text
data. In particular, we propose methods for learning multilingual embeddings
using image-text data by enforcing similarity between the representation of
the image and that of the text. Our experiments reveal that even without using
any expensive labeled data, a bag-of-words-based embedding model trained on
image-text data achieves performance comparable to the state-of-the-art on
crosslingual semantic similarity tasks.
| 2,020 | Computation and Language |
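A minimal sketch of the training signal described above: a bag-of-words text encoder and an image-feature projection are pushed into a shared space by encouraging each caption (in whatever language) to be most similar to its own image. The symmetric InfoNCE-style loss, the feature sizes, and all names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoWImageAligner(nn.Module):
    """Sketch: align bag-of-words caption embeddings (any language) with image
    features in a shared space. Vocabulary size, feature sizes and the use of
    an InfoNCE-style loss are illustrative assumptions, not the paper's setup."""

    def __init__(self, vocab_size: int, img_feat_dim: int, dim: int = 128):
        super().__init__()
        self.word_emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")  # bag of words
        self.img_proj = nn.Linear(img_feat_dim, dim)

    def forward(self, caption_token_ids, image_feats, temperature=0.07):
        t = F.normalize(self.word_emb(caption_token_ids), dim=-1)
        v = F.normalize(self.img_proj(image_feats), dim=-1)
        logits = t @ v.t() / temperature      # similarity of every caption to every image
        targets = torch.arange(len(t))
        # Symmetric contrastive loss: each caption matches its own image and vice versa.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

model = BoWImageAligner(vocab_size=10_000, img_feat_dim=2048)
captions = torch.randint(0, 10_000, (4, 12))   # 4 captions (any language), 12 tokens each
images = torch.randn(4, 2048)                  # pre-extracted image features
print(model(captions, images))
```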
Learning Task-specific Representation for Novel Words in Sequence
Labeling | Word representation is a key component in neural-network-based sequence
labeling systems. However, representations of unseen or rare words trained on
the end task are usually too poor to yield appreciable performance. This is commonly
referred to as the out-of-vocabulary (OOV) problem. In this work, we address
the OOV problem in sequence labeling using only training data of the task. To
this end, we propose a novel method to predict representations for OOV words
from their surface-forms (e.g., character sequence) and contexts. The method is
specifically designed to avoid the error propagation problem suffered by
existing approaches in the same paradigm. To evaluate its effectiveness, we
performed extensive empirical studies on four part-of-speech tagging (POS)
tasks and four named entity recognition (NER) tasks. Experimental results show
that the proposed method can achieve better or competitive performance on the
OOV problem compared with existing state-of-the-art methods.
| 2,019 | Computation and Language |
Revision in Continuous Space: Unsupervised Text Style Transfer without
Adversarial Learning | Typical methods for unsupervised text style transfer often rely on two key
ingredients: 1) seeking the explicit disentanglement of the content and the
attributes, and 2) troublesome adversarial learning. In this paper, we show
that neither of these components is indispensable. We propose a new framework
that utilizes the gradients to revise the sentence in a continuous space during
inference to achieve text style transfer. Our method consists of three key
components: a variational auto-encoder (VAE), some attribute predictors (one
for each attribute), and a content predictor. The VAE and the two types of
predictors enable us to perform gradient-based optimization in the continuous
space, which is mapped from sentences in a discrete space, to find the
representation of a target sentence with the desired attributes and preserved
content. Moreover, the proposed method naturally has the ability to
simultaneously manipulate multiple fine-grained attributes, such as sentence
length and the presence of specific words, when performing text style transfer
tasks. Compared with previous adversarial learning based methods, the proposed
method is more interpretable, controllable and easier to train. Extensive
experimental studies on three popular text style transfer tasks show that the
proposed method significantly outperforms five state-of-the-art methods.
| 2,019 | Computation and Language |
Word-order biases in deep-agent emergent communication | Sequence-processing neural networks have led to remarkable progress on many NLP
tasks. As a consequence, there has been increasing interest in understanding to
what extent they process language as humans do. We aim here to uncover which
biases such models display with respect to "natural" word-order constraints. We
train models to communicate about paths in a simple gridworld, using miniature
languages that reflect or violate various natural language trends, such as the
tendency to avoid redundancy or to minimize long-distance dependencies. We
study how the controlled characteristics of our miniature languages affect
individual learning and their stability across multiple network generations.
The results draw a mixed picture. On the one hand, neural networks show a
strong tendency to avoid long-distance dependencies. On the other hand, there
is no clear preference for the efficient, non-redundant encoding of information
that is widely attested in natural language. We thus suggest injecting a
notion of "effort" into neural networks, as a possible way to make their
linguistic behavior more human-like.
| 2,019 | Computation and Language |
TopExNet: Entity-Centric Network Topic Exploration in News Streams | The recent introduction of entity-centric implicit network representations of
unstructured text offers novel ways for exploring entity relations in document
collections and streams efficiently and interactively. Here, we present
TopExNet as a tool for exploring entity-centric network topics in streams of
news articles. The application is available as a web service at
https://topexnet.ifi.uni-heidelberg.de/ .
| 2,019 | Computation and Language |
Racial Bias in Hate Speech and Abusive Language Detection Datasets | Technologies for abusive language detection are being developed and applied
with little consideration of their potential biases. We examine racial bias in
five different sets of Twitter data annotated for hate speech and abusive
language. We train classifiers on these datasets and compare the predictions of
these classifiers on tweets written in African-American English with those
written in Standard American English. The results show evidence of systematic
racial bias in all datasets, as classifiers trained on them tend to predict
that tweets written in African-American English are abusive at substantially
higher rates. If these abusive language detection systems are used in the field,
they will therefore have a disproportionate negative impact on African-American
social media users. Consequently, these systems may discriminate against the
groups who are often the targets of the abuse we are trying to detect.
| 2,019 | Computation and Language |
Anti-efficient encoding in emergent communication | Despite renewed interest in emergent language simulations with neural
networks, little is known about the basic properties of the induced code, and
how they compare to human language. One fundamental characteristic of the
latter, known as Zipf's Law of Abbreviation (ZLA), is that more frequent words
are efficiently associated with shorter strings. We study whether the same
pattern emerges when two neural networks, a "speaker" and a "listener", are
trained to play a signaling game. Surprisingly, we find that networks develop
an \emph{anti-efficient} encoding scheme, in which the most frequent inputs are
associated with the longest messages, and messages in general are skewed towards
the maximum length threshold. This anti-efficient code appears easier to
discriminate for the listener, and, unlike in human communication, the speaker
does not impose a contrasting least-effort pressure towards brevity. Indeed,
when the cost function includes a penalty for longer messages, the resulting
message distribution starts respecting ZLA. Our analysis stresses the
importance of studying the basic features of emergent communication in a highly
controlled setup, to ensure the latter will not stray too far from human
language. Moreover, we present a concrete illustration of how different
functional pressures can lead to successful communication codes that lack basic
properties of human language, thus highlighting the role such pressures play in
the latter.
| 2,019 | Computation and Language |
Towards better substitution-based word sense induction | Word sense induction (WSI) is the task of unsupervised clustering of word
usages within a sentence to distinguish senses. Recent work obtains strong
results by clustering lexical substitutes derived from pre-trained RNN language
models (ELMo). Adapting the method to BERT improves the scores even further. We
extend the previous method to support a dynamic rather than a fixed number of
clusters as supported by other prominent methods, and propose a method for
interpreting the resulting clusters by associating them with their most
informative substitutes. We then perform extensive error analysis revealing the
remaining sources of errors in the WSI task.
Our code is available at https://github.com/asafamr/bertwsi.
| 2,019 | Computation and Language |
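A small sketch of the substitution-based pipeline: each usage of a target word is represented by the top substitutes a masked LM predicts for it, usages are clustered with a distance threshold (so the number of sense clusters is dynamic), and each cluster is labelled with its most frequent substitutes. The model name, top-k, and threshold are illustrative choices.

```python
from collections import Counter
from transformers import pipeline
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import AgglomerativeClustering

# Represent each usage of a target word by its top lexical substitutes.
fill = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "I deposited the money at the bank .",
    "The bank approved my loan application .",
    "We had a picnic on the bank of the river .",
    "Fish gather near the muddy bank of the stream .",
]

substitute_bags = []
for sent in sentences:
    masked = sent.replace("bank", fill.tokenizer.mask_token, 1)
    preds = fill(masked, top_k=20)
    substitute_bags.append(Counter(p["token_str"].strip() for p in preds))

X = DictVectorizer().fit_transform(substitute_bags).toarray()
# distance_threshold (instead of n_clusters) gives a dynamic number of senses.
labels = AgglomerativeClustering(n_clusters=None, distance_threshold=5.0,
                                 linkage="average").fit_predict(X)
print(labels)

# Interpret each cluster by its most informative (frequent) substitutes.
for sense in set(labels):
    merged = Counter()
    for bag, lab in zip(substitute_bags, labels):
        if lab == sense:
            merged.update(bag)
    print(sense, [w for w, _ in merged.most_common(5)])
```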
Defending Against Neural Fake News | Recent progress in natural language generation has raised dual-use concerns.
While applications like summarization and translation are positive, the
underlying technology also might enable adversaries to generate neural fake
news: targeted propaganda that closely mimics the style of real news.
Modern computer security relies on careful threat modeling: identifying
potential threats and vulnerabilities from an adversary's point of view, and
exploring potential mitigations to these threats. Likewise, developing robust
defenses against neural fake news requires us first to carefully investigate
and characterize the risks of these models. We thus present a model for
controllable text generation called Grover. Given a headline like `Link Found
Between Vaccines and Autism,' Grover can generate the rest of the article;
humans find these generations to be more trustworthy than human-written
disinformation.
Developing robust verification techniques against generators like Grover is
critical. We find that the best current discriminators can classify neural fake
news from real, human-written, news with 73% accuracy, assuming access to a
moderate level of training data. Counterintuitively, the best defense against
Grover turns out to be Grover itself, with 92% accuracy, demonstrating the
importance of public release of strong generators. We investigate these results
further, showing that exposure bias -- and sampling strategies that alleviate
its effects -- both leave artifacts that similar discriminators can pick up on.
We conclude by discussing ethical issues regarding the technology, and plan to
release Grover publicly, helping pave the way for better detection of neural
fake news.
| 2,020 | Computation and Language |
The (Non-)Utility of Structural Features in BiLSTM-based Dependency
Parsers | Classical non-neural dependency parsers put considerable effort into the design
of feature functions. In particular, they benefit from information coming from
structural features, such as features drawn from neighboring tokens in the
dependency tree. In contrast, their BiLSTM-based successors achieve
state-of-the-art performance without explicit information about the structural
context. In this paper we aim to answer the question: How much structural
context are the BiLSTM representations able to capture implicitly? We show that
features drawn from partial subtrees become redundant when the BiLSTMs are
used. We provide a deep insight into information flow in transition- and
graph-based neural architectures to demonstrate where the implicit information
comes from when the parsers make their decisions. Finally, with model ablations
we demonstrate that the structural context is not only present in the models,
but it significantly influences their performance.
| 2,019 | Computation and Language |
Choosing Transfer Languages for Cross-Lingual Learning | Cross-lingual transfer, where a high-resource transfer language is used to
improve the accuracy of a low-resource task language, is now an invaluable tool
for improving performance of natural language processing (NLP) on low-resource
languages. However, given a particular task language, it is not clear which
language to transfer from, and the standard strategy is to select languages
based on ad hoc criteria, usually the intuition of the experimenter. Since a
large number of features contribute to the success of cross-lingual transfer
(including phylogenetic similarity, typological properties, lexical overlap, or
size of available data), even the most enlightened experimenter rarely
considers all these factors for the particular task at hand. In this paper, we
consider this task of automatically selecting optimal transfer languages as a
ranking problem, and build models that consider the aforementioned features to
perform this prediction. In experiments on representative NLP tasks, we
demonstrate that our model predicts good transfer languages much better than ad
hoc baselines considering single features in isolation, and glean insights on
what features are most informative for each different NLP task, which may
inform future ad hoc selection even without use of our method. Code, data, and
pre-trained models are available at https://github.com/neulab/langrank
| 2,019 | Computation and Language |
Geolocating Political Events in Text | This work introduces a general method for automatically finding the locations
where political events in text occurred. Using a novel set of 8,000 labeled
sentences, I create a method to link automatically extracted events and
locations in text. The model achieves human level performance on the annotation
task and outperforms previous event geolocation systems. It can be applied to
most event extraction systems across geographic contexts. I formalize the
event--location linking task, describe the neural network model, describe the
potential uses of such a system in political science, and demonstrate a
workflow to answer an open question on the role of conventional military
offensives in causing civilian casualties in the Syrian civil war.
| 2,019 | Computation and Language |
Large Scale Question Paraphrase Retrieval with Smoothed Deep Metric
Learning | The goal of a Question Paraphrase Retrieval (QPR) system is to retrieve
equivalent questions that result in the same answer as the original question.
Such a system can be used to understand and answer rare and noisy
reformulations of common questions by mapping them to a set of canonical forms.
This has large-scale applications for community Question Answering (cQA) and
open-domain spoken language question answering systems. In this paper we
describe a new QPR system implemented as a Neural Information Retrieval (NIR)
system consisting of a neural network sentence encoder and an approximate
k-Nearest Neighbour index for efficient vector retrieval. We also describe our
mechanism to generate an annotated dataset for question paraphrase retrieval
experiments automatically from question-answer logs via distant supervision. We
show that the standard loss function in NIR, triplet loss, does not perform
well with noisy labels. We propose smoothed deep metric loss (SDML) and with
our experiments on two QPR datasets we show that it significantly outperforms
triplet loss in the noisy label setting.
| 2,019 | Computation and Language |
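The exact form of the smoothed deep metric loss is not given in the abstract; the sketch below contrasts a standard triplet loss with one plausible smoothed alternative, an in-batch softmax over similarities with label smoothing, purely as an assumed variant.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet loss over L2-normalised question encodings."""
    pos = (anchor - positive).pow(2).sum(-1)
    neg = (anchor - negative).pow(2).sum(-1)
    return F.relu(pos - neg + margin).mean()

def smoothed_batch_softmax_loss(anchor, positive, smoothing=0.1):
    """Sketch of a 'smoothed' metric loss in the spirit of SDML: treat the
    other in-batch positives as negatives and soften the target distribution
    with label smoothing so occasional noisy labels hurt less. The exact SDML
    formulation is not given in the abstract; this is an assumed variant."""
    logits = anchor @ positive.t()            # batch x batch similarities
    targets = torch.arange(len(anchor))
    return F.cross_entropy(logits, targets, label_smoothing=smoothing)

q = F.normalize(torch.randn(8, 64), dim=-1)   # anchor questions
p = F.normalize(torch.randn(8, 64), dim=-1)   # their paraphrases
n = F.normalize(torch.randn(8, 64), dim=-1)   # sampled negatives
print(triplet_loss(q, p, n), smoothed_batch_softmax_loss(q, p))
```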
Reducing Gender Bias in Word-Level Language Models with a
Gender-Equalizing Loss Function | Gender bias exists in natural language datasets which neural language models
tend to learn, resulting in biased text generation. In this research, we
propose a debiasing approach based on the loss function modification. We
introduce a new term to the loss function which attempts to equalize the
probabilities of male and female words in the output. Using an array of bias
evaluation metrics, we provide empirical evidence that our approach
successfully mitigates gender bias in language models without increasing
perplexity. In comparison to existing debiasing strategies (data augmentation
and word embedding debiasing), our method performs better in several aspects,
especially in reducing gender bias in occupation words. Finally, we introduce a
combination of data augmentation and our approach, and show that it outperforms
existing strategies in all bias evaluation metrics.
| 2,019 | Computation and Language |
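A minimal sketch of adding a bias-equalizing term to the language-model loss: for a list of female/male word pairs, penalise the gap between their predicted log-probabilities at every prediction step. The absolute log-ratio form and the weighting are assumptions; the paper's exact term may differ.

```python
import torch

def gender_equalizing_penalty(log_probs: torch.Tensor,
                              female_ids: torch.Tensor,
                              male_ids: torch.Tensor) -> torch.Tensor:
    """Push the output probabilities of paired female/male words (e.g.
    "she"/"he", "woman"/"man") towards each other at every step.

    log_probs: (batch, steps, vocab) log-softmax outputs of the LM.
    An absolute log-ratio is assumed here for illustration.
    """
    diff = log_probs[..., female_ids] - log_probs[..., male_ids]
    return diff.abs().mean()

vocab, pairs = 1000, 5
log_probs = torch.log_softmax(torch.randn(2, 7, vocab), dim=-1)
female_ids = torch.randint(0, vocab, (pairs,))
male_ids = torch.randint(0, vocab, (pairs,))
lm_loss = torch.tensor(3.2)    # ordinary LM cross-entropy (placeholder value)
total = lm_loss + 0.5 * gender_equalizing_penalty(log_probs, female_ids, male_ids)
print(total)
```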
A Simple but Effective Method to Incorporate Multi-turn Context with
BERT for Conversational Machine Comprehension | Conversational machine comprehension (CMC) requires understanding the context
of multi-turn dialogue. Using BERT, a pre-trained language model, has been
successful for single-turn machine comprehension, while modeling multiple turns
of question answering with BERT has not been established because BERT has a
limit on the number and the length of input sequences. In this paper, we
propose a simple but effective method with BERT for CMC. Our method uses BERT
to encode a paragraph independently conditioned on each question and each
answer in a multi-turn context. Then, the method predicts an answer on the
basis of the paragraph representations encoded with BERT. The experiments with
representative CMC datasets, QuAC and CoQA, show that our method outperformed
recently published methods (+0.8 F1 on QuAC and +2.1 F1 on CoQA). In addition,
we conducted a detailed analysis of the effects of the number and types of
dialogue history on the accuracy of CMC, and we found that the gold answer
history, which may not be given in an actual conversation, contributed to the
model performance most on both datasets.
| 2,019 | Computation and Language |
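A sketch of the encoding idea only: the paragraph is encoded by BERT once per history utterance, and the independently conditioned paragraph representations are then merged before a span-prediction head. The mean pooling, padding, and model name are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

paragraph = "The castle was built in 1210 and restored after the fire of 1873."
history = ["When was the castle built?", "In 1210.", "When was it restored?"]

# Encode the paragraph once per history utterance (earlier questions/answers
# plus the current question), each as an independent sentence pair.
per_turn_states = []
for utterance in history:
    enc = tok(utterance, paragraph, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = bert(**enc).last_hidden_state        # (1, seq_len, hidden)
    per_turn_states.append(out)

# Merge the independently conditioned encodings (here: mean over turns after
# padding to a common length); a span head then scores start/end tokens.
max_len = max(s.size(1) for s in per_turn_states)
padded = [F.pad(s, (0, 0, 0, max_len - s.size(1))) for s in per_turn_states]
merged = torch.stack(padded, dim=0).mean(dim=0)     # (1, max_len, hidden)
span_head = torch.nn.Linear(bert.config.hidden_size, 2)
start_logits, end_logits = span_head(merged).split(1, dim=-1)
print(start_logits.shape, end_logits.shape)
```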
Semantically Conditioned Dialog Response Generation via Hierarchical
Disentangled Self-Attention | Semantically controlled neural response generation on limited domains has
achieved great performance. However, moving towards multi-domain, large-scale
scenarios is shown to be difficult because the possible combinations of
semantic inputs grow exponentially with the number of domains. To alleviate
this scalability issue, we exploit the structure of dialog acts to build a
multi-layer hierarchical graph, where each act is represented as a root-to-leaf
route on the graph. Then, we incorporate such graph structure prior as an
inductive bias to build a hierarchical disentangled self-attention network,
where we disentangle attention heads to model designated nodes on the dialog
act graph. By activating different (disentangled) heads at each layer,
combinatorially many dialog act semantics can be modeled to control the neural
response generation. On the large-scale Multi-Domain-WOZ dataset, our model can
yield a significant improvement over the baselines on various automatic and
human evaluation metrics.
| 2,019 | Computation and Language |
M-GWAP: An Online and Multimodal Game With A Purpose in WordPress for
Mental States Annotation | M-GWAP is a multimodal game with a purpose that leverages the wisdom-of-crowds
phenomenon for the annotation of multimedia data in terms of mental
states. The game is developed in WordPress to allow users to
implement the game without programming skills. The game adopts motivational
strategies to keep the player engaged, such as a score system, text
motivators while playing, a ranking system to foster competition, and mechanics
for identity building. The current version of the game was deployed after alpha
and beta testing helped refine it accordingly.
| 2,019 | Computation and Language |
A Compare-Aggregate Model with Latent Clustering for Answer Selection | In this paper, we propose a novel method for a sentence-level
answer-selection task that is a fundamental problem in natural language
processing. First, we explore the effect of additional information by adopting
a pretrained language model to compute the vector representation of the input
text and by applying transfer learning from a large-scale corpus. Second, we
enhance the compare-aggregate model by proposing a novel latent clustering
method to compute additional information within the target corpus and by
changing the objective function from listwise to pointwise. To evaluate the
performance of the proposed approaches, experiments are performed with the
WikiQA and TREC-QA datasets. The empirical results demonstrate the superiority
of our proposed approach, which achieves state-of-the-art performance for both
datasets.
| 2,019 | Computation and Language |
Controllable Unsupervised Text Attribute Transfer via Editing Entangled
Latent Representation | Unsupervised text attribute transfer automatically transforms a text to alter
a specific attribute (e.g. sentiment) without using any parallel data, while
simultaneously preserving its attribute-independent content. The dominant
approaches try to model the content-independent attribute separately,
e.g., learning different attributes' representations or using multiple
attribute-specific decoders. However, this may lead to inflexibility from the
perspective of controlling the degree of transfer or transferring over multiple
aspects at the same time. To address the above problems, we propose a more
flexible unsupervised text attribute transfer framework which replaces the
process of modeling attribute with minimal editing of latent representations
based on an attribute classifier. Specifically, we first propose a
Transformer-based autoencoder to learn an entangled latent representation for a
discrete text, then we transform the attribute transfer task to an optimization
problem and propose the Fast-Gradient-Iterative-Modification algorithm to edit
the latent representation until conforming to the target attribute. Extensive
experimental results demonstrate that our model achieves very competitive
performance on three public data sets. Furthermore, we also show that our model
can not only control the degree of transfer freely but also transfer
over multiple aspects at the same time.
| 2,019 | Computation and Language |
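A small sketch of the gradient-based latent editing loop suggested by the abstract: starting from the entangled code of a sentence, repeatedly step against the gradient of an attribute classifier's loss until the classifier is confident the target attribute holds. The weight schedule, step count, and threshold are illustrative, not the paper's exact values.

```python
import torch
import torch.nn as nn

def fast_gradient_iterative_modification(z, attr_classifier, target_label,
                                         weights=(1.0, 2.0, 4.0), steps=30, tol=0.9):
    """Edit a latent code z until attr_classifier predicts target_label with
    confidence above tol, trying increasingly aggressive step sizes."""
    target = torch.full((z.size(0),), target_label, dtype=torch.long)
    for w in weights:
        z_edit = z.clone().detach().requires_grad_(True)
        for _ in range(steps):
            logits = attr_classifier(z_edit)
            if torch.softmax(logits, -1)[:, target_label].min() > tol:
                return z_edit.detach()
            loss = nn.functional.cross_entropy(logits, target)
            grad, = torch.autograd.grad(loss, z_edit)
            z_edit = (z_edit - w * grad).detach().requires_grad_(True)
    return z_edit.detach()

# Toy usage: a random "attribute classifier" over a 16-dim latent space.
clf = nn.Linear(16, 2)
z = torch.randn(3, 16)                       # would come from the VAE encoder
z_new = fast_gradient_iterative_modification(z, clf, target_label=1)
print(torch.softmax(clf(z_new), -1)[:, 1])   # pushed towards the target attribute
```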
Unbabel's Submission to the WMT2019 APE Shared Task: BERT-based
Encoder-Decoder for Automatic Post-Editing | This paper describes Unbabel's submission to the WMT2019 APE Shared Task for
the English-German language pair. Following the recent rise of large, powerful,
pre-trained models, we adapt the BERT pretrained model to perform Automatic
Post-Editing in an encoder-decoder framework. Analogously to dual-encoder
architectures we develop a BERT-based encoder-decoder (BED) model in which a
single pretrained BERT encoder receives both the source src and machine
translation tgt strings. Furthermore, we explore a conservativeness factor to
constrain the APE system to perform fewer edits. As the official results show,
when trained on a weighted combination of in-domain and artificial training
data, our BED system with the conservativeness penalty significantly improves
the translations of a strong Neural Machine Translation system by $-0.78$ and
$+1.23$ in terms of TER and BLEU, respectively. Finally, our submission
achieves a new state-of-the-art, ex-aequo, in English-German APE of NMT.
| 2,019 | Computation and Language |
Lattice-based lightly-supervised acoustic model training | In the broadcast domain there is an abundance of related text data and
partial transcriptions, such as closed captions and subtitles. This text data
can be used for lightly supervised training, in which text matching the audio
is selected using an existing speech recognition model. Current approaches to
light supervision typically filter the data based on matching error rates
between the transcriptions and biased decoding hypotheses. In contrast,
semi-supervised training does not require matching text data, instead
generating a hypothesis using a background language model. State-of-the-art
semi-supervised training uses lattice-based supervision with the lattice-free
MMI (LF-MMI) objective function. We propose a technique to combine inaccurate
transcriptions with the lattices generated for semi-supervised training, thus
preserving uncertainty in the lattice where appropriate. We demonstrate that
this combined approach reduces the expected error rates over the lattices, and
reduces the word error rate (WER) on a broadcast task.
| 2,019 | Computation and Language |
Hierarchical Transformers for Multi-Document Summarization | In this paper, we develop a neural summarization model which can effectively
process multiple input documents, extending the Transformer architecture with the
ability to encode documents in a hierarchical manner. We represent
cross-document relationships via an attention mechanism which allows sharing
information, as opposed to simply concatenating text spans and processing them
as a flat sequence. Our model learns latent dependencies among textual units,
but can also take advantage of explicit graph representations focusing on
similarity or discourse relations. Empirical results on the WikiSum dataset
demonstrate that the proposed architecture brings substantial improvements over
several strong baselines.
| 2,019 | Computation and Language |
MathQA: Towards Interpretable Math Word Problem Solving with
Operation-Based Formalisms | We introduce a large-scale dataset of math word problems and an interpretable
neural math problem solver that learns to map problems to operation programs.
Due to annotation challenges, current datasets in this domain have been either
relatively small in scale or did not offer precise operational annotations over
diverse problem types. We introduce a new representation language to model
precise operation programs corresponding to each math problem, which aims to
improve both the performance and the interpretability of the learned models.
Using this representation language, our new dataset, MathQA, significantly
enhances the AQuA dataset with fully-specified operational programs. We
additionally introduce a neural sequence-to-program model enhanced with
automatic problem categorization. Our experiments show improvements over
competitive baselines in our MathQA as well as the AQuA dataset. The results
are still significantly lower than human performance, indicating that the
dataset poses new challenges for future research. Our dataset is available at:
https://math-qa.github.io/math-QA/
| 2,019 | Computation and Language |
Assessing The Factual Accuracy of Generated Text | We propose a model-based metric to estimate the factual accuracy of generated
text that is complementary to typical scoring schemes like ROUGE
(Recall-Oriented Understudy for Gisting Evaluation) and BLEU (Bilingual
Evaluation Understudy). We introduce and release a new large-scale dataset
based on Wikipedia and Wikidata to train relation classifiers and end-to-end
fact extraction models. The end-to-end models are shown to be able to extract
complete sets of facts from datasets with full pages of text. We then analyse
multiple models that estimate factual accuracy on a Wikipedia text
summarization task, and show their efficacy compared to ROUGE and other
model-free variants by conducting a human evaluation study.
| 2,019 | Computation and Language |
A Lightweight Recurrent Network for Sequence Modeling | Recurrent networks have achieved great success on various sequential tasks
with the assistance of complex recurrent units, but suffer from severe
computational inefficiency due to weak parallelization. One direction to
alleviate this issue is to shift heavy computations outside the recurrence. In
this paper, we propose a lightweight recurrent network, or LRN. LRN uses input
and forget gates to handle long-range dependencies as well as gradient
vanishing and explosion, with all parameter-related calculations factored
outside the recurrence. The recurrence in LRN only manipulates the weight
assigned to each token, tightly connecting LRN with self-attention networks. We
apply LRN as a drop-in replacement of existing recurrent units in several
neural sequential models. Extensive experiments on six NLP tasks show that LRN
yields the best running efficiency with little or no loss in model performance.
| 2,019 | Computation and Language |
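A sketch of the design principle stated above, under the assumption that all parameterised projections are computed once outside the loop while the recurrence itself is parameter-free elementwise gating; the exact equations may differ from the paper's LRN.

```python
import torch
import torch.nn as nn

class LRNSketch(nn.Module):
    """Lightweight recurrent unit sketch: the three input projections (input
    gate, forget gate, value) are computed in one matmul outside the loop, and
    the recurrence only applies elementwise gating."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(input_dim, 3 * hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim); the heavy computation happens once, here.
        i_raw, f_raw, v = self.proj(x).chunk(3, dim=-1)
        i, f = torch.sigmoid(i_raw), torch.sigmoid(f_raw)
        h = x.new_zeros(x.size(0), self.hidden_dim)
        outputs = []
        for t in range(x.size(1)):          # parameter-free recurrence
            h = f[:, t] * h + i[:, t] * v[:, t]
            outputs.append(torch.tanh(h))
        return torch.stack(outputs, dim=1)

lrn = LRNSketch(input_dim=32, hidden_dim=64)
print(lrn(torch.randn(4, 10, 32)).shape)   # torch.Size([4, 10, 64])
```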
Grammar-based Neural Text-to-SQL Generation | The sequence-to-sequence paradigm employed by neural text-to-SQL models
typically performs token-level decoding and does not consider generating SQL
hierarchically from a grammar. Grammar-based decoding has shown significant
improvements for other semantic parsing tasks, but SQL and other general
programming languages have complexities not present in logical formalisms that
make writing hierarchical grammars difficult. We introduce techniques to handle
these complexities, showing how to construct a schema-dependent grammar with
minimal over-generation. We analyze these techniques on ATIS and Spider, two
challenging text-to-SQL datasets, demonstrating that they yield 14--18\%
relative reductions in error.
| 2,019 | Computation and Language |
DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine
Translation | We present a new English-French test set for the evaluation of Machine
Translation (MT) for informal, written bilingual dialogue. The test set
contains 144 spontaneous dialogues (5,700+ sentences) between native English
and French speakers, mediated by one of two neural MT systems in a range of
role-play settings. The dialogues are accompanied by fine-grained
sentence-level judgments of MT quality, produced by the dialogue participants
themselves, as well as by manually normalised versions and reference
translations produced a posteriori. The motivation for the corpus is two-fold:
to provide (i) a unique resource for evaluating MT models, and (ii) a corpus
for the analysis of MT-mediated communication. We provide a preliminary
analysis of the corpus to confirm that the participants' judgments reveal
perceptible differences in MT quality between the two MT systems used.
| 2,019 | Computation and Language |
Multi-modal Discriminative Model for Vision-and-Language Navigation | Vision-and-Language Navigation (VLN) is a natural language grounding task
where agents have to interpret natural language instructions in the context of
visual scenes in a dynamic environment to achieve prescribed navigation goals.
Successful agents must have the ability to parse natural language of varying
linguistic styles, ground them in potentially unfamiliar scenes, plan and react
with ambiguous environmental feedback. Generalization ability is limited by the
amount of human annotated data. In particular, \emph{paired} vision-language
sequence data is expensive to collect. We develop a discriminator that
evaluates how well an instruction explains a given path in VLN task using
multi-modal alignment. Our study reveals that only a small fraction of the
high-quality augmented data from \citet{Fried:2018:Speaker}, as scored by our
discriminator, is useful for training VLN agents with similar performance on
previously unseen environments. We also show that a VLN agent warm-started with
pre-trained components from the discriminator outperforms the benchmark success
rate of 35.5 by 10\% in relative terms on previously unseen environments.
| 2,019 | Computation and Language |
Leveraging Pretrained Word Embeddings for Part-of-Speech Tagging of Code
Switching Data | Linguistic Code Switching (CS) is a phenomenon that occurs when multilingual
speakers alternate between two or more languages/dialects within a single
conversation. Processing CS data is especially challenging in intra-sentential
data given state-of-the-art monolingual NLP technologies since such
technologies are geared toward the processing of one language at a time. In
this paper, we address the problem of Part-of-Speech tagging (POS) in the
context of linguistic code switching (CS). We explore leveraging multiple
neural network architectures to measure the impact of different pre-trained
embedding methods on POS tagging CS data. We investigate the landscape in four
CS language pairs: Spanish-English, Hindi-English, Modern Standard
Arabic-Egyptian Arabic dialect (MSA-EGY), and Modern Standard Arabic-Levantine Arabic
dialect (MSA-LEV). Our results show that multilingual embedding (e.g., MSA-EGY
and MSA-LEV) helps closely related languages (EGY/LEV) but adds noise to the
languages that are distant (SPA/HIN). Finally, we show that our proposed models
outperform state-of-the-art CS taggers for MSA-EGY language pair.
| 2,019 | Computation and Language |
Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement
Learning | Our work involves enriching the Stack-LSTM transition-based AMR parser
(Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning
and rewarding the Smatch score of sampled graphs. In addition, we also combined
several AMR-to-text alignments with an attention mechanism and we supplemented
the parser with pre-processed concept identification, named entities and
contextualized embeddings. We achieve a highly competitive performance that is
comparable to the best published results. We present an in-depth study ablating
each of the new components of the parser.
| 2,019 | Computation and Language |
Improving Open Information Extraction via Iterative Rank-Aware Learning | Open information extraction (IE) is the task of extracting open-domain
assertions from natural language sentences. A key step in open IE is confidence
modeling, ranking the extractions based on their estimated quality to adjust
precision and recall of extracted assertions. We found that the extraction
likelihood, a confidence measure used by current supervised open IE systems, is
not well calibrated when comparing the quality of assertions extracted from
different sentences. We propose an additional binary classification loss to
calibrate the likelihood to make it more globally comparable, and an iterative
learning process, where extractions generated by the open IE model are
incrementally included as training samples to help the model learn from trial
and error. Experiments on OIE2016 demonstrate the effectiveness of our method.
Code and data are available at https://github.com/jzbjyb/oie_rank.
| 2,019 | Computation and Language |
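A minimal sketch of the calibration idea: keep the usual sequence-likelihood objective and add a binary classification loss on whether each extraction is correct, so that confidence scores become comparable across sentences. The weighting, shapes, and names are assumptions; the iterative inclusion of model extractions as training samples is not shown.

```python
import torch
import torch.nn.functional as F

def calibrated_confidence_loss(scores, gold_labels, token_logprobs, alpha=1.0):
    """Combine the usual extraction likelihood with a binary calibration loss.

    scores: per-extraction confidence logits.
    gold_labels: binary correctness of each extraction (from labels or from the
                 iterative trial-and-error loop described in the abstract).
    token_logprobs: per-token log-probabilities of each extraction, i.e. the
                    ordinary open IE generation objective.
    """
    likelihood_loss = -token_logprobs.mean()
    calibration_loss = F.binary_cross_entropy_with_logits(scores, gold_labels)
    return likelihood_loss + alpha * calibration_loss

scores = torch.randn(6)                          # one confidence logit per extraction
gold = torch.tensor([1., 0., 1., 1., 0., 1.])    # binary correctness labels
token_logprobs = -torch.rand(6, 20)              # per-token log-probs of each extraction
print(calibrated_confidence_loss(scores, gold, token_logprobs))
```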
Fine-Grained Spoiler Detection from Large-Scale Review Corpora | This paper presents computational approaches for automatically detecting
critical plot twists in reviews of media products. First, we created a
large-scale book review dataset that includes fine-grained spoiler annotations
at the sentence-level, as well as book and (anonymized) user information.
Second, we carefully analyzed this dataset, and found that: spoiler language
tends to be book-specific; spoiler distributions vary greatly across books and
review authors; and spoiler sentences tend to jointly appear in the latter part
of reviews. Third, inspired by these findings, we developed an end-to-end
neural network architecture to detect spoiler sentences in review corpora.
Quantitative and qualitative results demonstrate that the proposed method
substantially outperforms existing baselines.
| 2,019 | Computation and Language |
Constructive Type-Logical Supertagging with Self-Attention Networks | We propose a novel application of self-attention networks towards grammar
induction. We present an attention-based supertagger for a refined type-logical
grammar, trained on constructing types inductively. In addition to achieving a
high overall type accuracy, our model is able to learn the syntax of the
grammar's type system along with its denotational semantics. This lifts the
closed world assumption commonly made by lexicalized grammar supertaggers,
greatly enhancing its generalization potential. This is evidenced both by its
adequate accuracy over sparse word types and its ability to correctly construct
complex types never seen during training, which, to the best of our knowledge,
had not previously been accomplished.
| 2,019 | Computation and Language |
Content Word-based Sentence Decoding and Evaluating for Open-domain
Neural Response Generation | Various encoder-decoder models have been applied to response generation in
open-domain dialogs, but a majority of conventional models directly learn a
mapping from lexical input to lexical output without explicitly modeling
intermediate representations. Utilizing language hierarchy and modeling
intermediate information have been shown to benefit many language understanding
and generation tasks. Motivated by Broca's aphasia, we propose to use a content
word sequence as an intermediate representation for open-domain response
generation. Experimental results show that the proposed method improves content
relatedness of produced responses, and our models can often choose correct
grammar for generated content words. Meanwhile, instead of evaluating complete
sentences, we propose to compute conventional metrics on content word
sequences, which is a better indicator of content relevance.
| 2,019 | Computation and Language |
Symbol Emergence as an Interpersonal Multimodal Categorization | This study focuses on category formation for individual agents and the
dynamics of symbol emergence in a multi-agent system through semiotic
communication. Semiotic communication is defined, in this study, as the
generation and interpretation of signs associated with the categories formed
through the agent's own sensory experience or by exchange of signs with other
agents. From the viewpoint of language evolution and symbol emergence,
organization of a symbol system in a multi-agent system is considered as a
bottom-up and dynamic process, where individual agents share the meaning of
signs and categorize sensory experience. A constructive computational model can
explain the mutual dependency of the two processes and has mathematical support
that guarantees a symbol system's emergence and sharing within the multi-agent
system. In this paper, we describe a new computational model that represents
symbol emergence in a two-agent system based on a probabilistic generative
model for multimodal categorization. It models semiotic communication via a
probabilistic rejection based on the receiver's own belief. We have found that
the dynamics by which cognitively independent agents create a symbol system
through their semiotic communication can be regarded as the inference process
of a hidden variable in an interpersonal multimodal categorizer, if we define
the rejection probability based on the Metropolis-Hastings algorithm. The
validity of the proposed model and algorithm for symbol emergence is also
verified in an experiment with two agents observing daily objects in the
real-world environment. The experimental results demonstrate that our model
reproduces the phenomena of symbol emergence, which does not require a teacher
who would know a pre-existing symbol system. Instead, the multi-agent system
can form and use a symbol system without having pre-existing categories.
| 2,019 | Computation and Language |
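The abstract above frames sign acceptance as a Metropolis-Hastings-style decision made by the listener using its own belief. Below is a minimal, hypothetical sketch of such an acceptance rule; the belief dictionary, sign names, and the exact acceptance ratio are illustrative assumptions rather than the authors' implementation.

```python
import random

def mh_accept(listener_belief, proposed_sign, current_sign):
    """Accept the speaker's proposed sign with a Metropolis-Hastings-style probability.

    listener_belief: dict mapping sign -> probability under the listener's own model
    (a stand-in for P(sign | the listener's own perception of the object)).
    """
    p_new = listener_belief.get(proposed_sign, 1e-12)
    p_old = listener_belief.get(current_sign, 1e-12)
    accept_prob = min(1.0, p_new / p_old)   # rejection probability is 1 - accept_prob
    return random.random() < accept_prob

# Toy usage: the listener's own categorization favors "dax" over the current sign "wug".
belief = {"dax": 0.7, "wug": 0.2, "blick": 0.1}
print("sign accepted:", mh_accept(belief, proposed_sign="dax", current_sign="wug"))
```

Repeated exchanges under a rule of this shape behave like a sampler over shared signs, which is the reading the abstract gives when it describes symbol emergence as inference of a hidden variable.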
MultiQA: An Empirical Investigation of Generalization and Transfer in
Reading Comprehension | A large number of reading comprehension (RC) datasets have been created
recently, but little analysis has been done on whether they generalize to one
another, and the extent to which existing datasets can be leveraged for
improving performance on new ones. In this paper, we conduct such an
investigation over ten RC datasets, training on one or more source RC datasets,
and evaluating generalization, as well as transfer to a target RC dataset. We
analyze the factors that contribute to generalization, and show that training
on a source RC dataset and transferring to a target dataset substantially
improves performance, even in the presence of powerful contextual
representations from BERT (Devlin et al., 2019). We also find that training on
multiple source RC datasets leads to robust generalization and transfer, and
can reduce the cost of example collection for a new RC dataset. Following our
analysis, we propose MultiQA, a BERT-based model, trained on multiple RC
datasets, which leads to state-of-the-art performance on five RC datasets. We
share our infrastructure for the benefit of the research community.
| 2,019 | Computation and Language |
Effective writing style imitation via combinatorial paraphrasing | Stylometry can be used to profile or deanonymize authors against their will
based on writing style. Style transfer provides a defence. Current techniques
typically use either encoder-decoder architectures or rule-based algorithms.
Crucially, style transfer must reliably retain original semantic content to be
actually deployable. We conduct a multifaceted evaluation of three
state-of-the-art encoder-decoder style transfer techniques, and show that all
fail at semantic retainment. In particular, they do not produce appropriate
paraphrases, but only retain original content in the trivial case of exactly
reproducing the text. To mitigate this problem we propose ParChoice: a
technique based on the combinatorial application of multiple paraphrasing
algorithms. ParChoice strongly outperforms the encoder-decoder baselines in
semantic retainment. Additionally, compared to baselines that achieve
non-negligible semantic retainment, ParChoice has superior style transfer
performance. We also apply ParChoice to multi-author style imitation (not
considered by prior work), where we achieve up to 75% imitation success among
five authors. Furthermore, when compared to two state-of-the-art rule-based
style transfer techniques, ParChoice has markedly better semantic retainment.
Combining ParChoice with the best performing rule-based baseline (Mutant-X)
also reaches the highest style transfer success on the Brennan-Greenstadt and
Extended-Brennan-Greenstadt corpora, with much less impact on original meaning
than when using the rule-based baseline techniques alone. Finally, we highlight
a critical problem that afflicts all current style transfer techniques: the
adversary can use the same technique for thwarting style transfer via
adversarial training. We show that adding randomness to style transfer helps to
mitigate the effectiveness of adversarial training.
| 2,020 | Computation and Language |
Attention Is (not) All You Need for Commonsense Reasoning | The recently introduced BERT model exhibits strong performance on several
language understanding benchmarks. In this paper, we describe a simple
re-implementation of BERT for commonsense reasoning. We show that the
attentions produced by BERT can be directly utilized for tasks such as the
Pronoun Disambiguation Problem and Winograd Schema Challenge. Our proposed
attention-guided commonsense reasoning method is conceptually simple yet
empirically powerful. Experimental analysis on multiple datasets demonstrates
that our proposed system performs remarkably well on all cases while
outperforming the previously reported state of the art by a margin. While
results suggest that BERT seems to implicitly learn to establish complex
relationships between entities, solving commonsense reasoning tasks might
require more than unsupervised models learned from huge text corpora.
| 2,019 | Computation and Language |
Using Natural Language Processing to Develop an Automated Orthodontic
Diagnostic System | We work on the task of automatically designing a treatment plan from the
findings included in the medical certificate written by the dentist. To develop
an artificial intelligence system that deals with free-form certificates
written by dentists, we annotated the findings and applied a natural language
processing approach. In an experiment using 990 certificates, an F1-score of
0.585 was achieved for the task of extracting orthodontic problems from the
findings, and a correlation coefficient of 0.584 with the human ranking was
achieved for the treatment prioritization task.
| 2,019 | Computation and Language |
Crowdsourcing and Validating Event-focused Emotion Corpora for German
and English | Sentiment analysis has a range of corpora available across multiple
languages. For emotion analysis, the situation is more limited, which hinders
potential research on cross-lingual modeling and the development of predictive
models for other languages. In this paper, we fill this gap for German by
constructing deISEAR, a corpus designed in analogy to the well-established
English ISEAR emotion dataset. Motivated by Scherer's appraisal theory, we
implement a crowdsourcing experiment which consists of two steps. In step 1,
participants create descriptions of emotional events for a given emotion. In
step 2, five annotators assess the emotion expressed by the texts. We show that
transferring an emotion classification model from the original English ISEAR to
the German crowdsourced deISEAR via machine translation does not, on average,
cause a performance drop.
| 2,019 | Computation and Language |
GSN: A Graph-Structured Network for Multi-Party Dialogues | Existing neural models for dialogue response generation assume that
utterances are sequentially organized. However, many real-world dialogues
involve multiple interlocutors (i.e., multi-party dialogues), where the
assumption does not hold as utterances from different interlocutors can occur
"in parallel." This paper generalizes existing sequence-based models to a
Graph-Structured neural Network (GSN) for dialogue modeling. The core of GSN is
a graph-based encoder that can model the information flow along the
graph-structured dialogues (two-party sequential dialogues are a special case).
Experimental results show that GSN significantly outperforms existing
sequence-based models.
| 2,019 | Computation and Language |
Investigating an Effective Character-level Embedding in Korean Sentence
Classification | Different from the writing systems of many Romance and Germanic languages,
some languages or language families show complex conjunct forms in character
composition. For such cases where the conjuncts consist of the components
representing consonant(s) and vowel, various character encoding schemes can be
adopted beyond merely making up a one-hot vector. However, there has been
little work done on intra-language comparison regarding performances using each
representation. In this study, utilizing the Korean language which is
character-rich and agglutinative, we investigate an encoding scheme that is the
most effective among Jamo-level one-hot, character-level one-hot,
character-level dense, and character-level multi-hot. Classification
performance with each scheme is evaluated on two corpora: one on binary
sentiment analysis of movie reviews, and the other on multi-class
identification of intention types. The results show that character-level
features yield higher performance in general, although Jamo-level features may
be compatible with attention-based models if an adequate parameter set size is
guaranteed.
| 2,019 | Computation and Language |
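The encoding schemes compared above (Jamo-level vs. character-level one-hot, dense, and multi-hot) all start from decomposing Hangul syllables into their Jamo. The sketch below shows the standard Unicode arithmetic for that decomposition and one possible multi-hot layout; the 68-dimensional vector layout is an illustrative assumption, not necessarily the paper's exact scheme.

```python
NUM_CHO, NUM_JUNG, NUM_JONG = 19, 21, 28  # initial consonants, vowels, finals (incl. "none")

def decompose(syllable: str):
    """Split one precomposed Hangul syllable into (cho, jung, jong) indices."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code < NUM_CHO * NUM_JUNG * NUM_JONG:
        raise ValueError(f"{syllable!r} is not a precomposed Hangul syllable")
    cho, rest = divmod(code, NUM_JUNG * NUM_JONG)
    jung, jong = divmod(rest, NUM_JONG)
    return cho, jung, jong

def multi_hot(syllable: str):
    """Character-level multi-hot vector: concatenated one-hots of the three Jamo slots."""
    cho, jung, jong = decompose(syllable)
    vec = [0] * (NUM_CHO + NUM_JUNG + NUM_JONG)
    vec[cho] = 1
    vec[NUM_CHO + jung] = 1
    vec[NUM_CHO + NUM_JUNG + jong] = 1
    return vec

print(decompose("한"))        # (18, 0, 4): ㅎ + ㅏ + ㄴ
print(sum(multi_hot("한")))   # exactly three active positions out of 68
```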
Entropy Minimization In Emergent Languages | There is growing interest in studying the languages that emerge when neural
agents are jointly trained to solve tasks requiring communication through a
discrete channel. We investigate here the information-theoretic complexity of
such languages, focusing on the basic two-agent, one-exchange setup. We find
that, under common training procedures, the emergent languages are subject to
an entropy minimization pressure that has also been detected in human language,
whereby the mutual information between the communicating agent's inputs and the
messages is minimized, within the range afforded by the need for successful
communication. That is, emergent languages are (nearly) as simple as the task
they are developed for allows them to be. This pressure is amplified as we
increase communication channel discreteness. Further, we observe that stronger
discrete-channel-driven entropy minimization leads to representations with
increased robustness to overfitting and adversarial attacks. We conclude by
discussing the implications of our findings for the study of natural and
artificial communication systems.
| 2,020 | Computation and Language |
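The entropy and mutual-information quantities discussed above can be estimated directly from a log of (input, message) pairs once both are discrete. A minimal plug-in sketch; the toy communication log is made up for illustration.

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (in bits) of a list of hashable symbols."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Toy communication log: sender inputs and the discrete messages it emitted.
inputs   = ["red-circle", "red-square", "blue-circle", "blue-square"] * 25
messages = ["m1",         "m1",         "m2",          "m2"] * 25

print("H(messages)     =", entropy(messages))                  # low: only two messages used
print("I(inputs; msgs) =", mutual_information(inputs, messages))
```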
Do Human Rationales Improve Machine Explanations? | Work on "learning with rationales" shows that humans providing explanations
to a machine learning system can improve the system's predictive accuracy.
However, this work has not been connected to work in "explainable AI" which
concerns machines explaining their reasoning to humans. In this work, we show
that learning with rationales can also improve the quality of the machine's
explanations as evaluated by human judges. Specifically, we present experiments
showing that, for CNN-based text classification, explanations generated using
"supervised attention" are judged superior to explanations generated using
normal unsupervised attention.
| 2,019 | Computation and Language |
Visual Understanding and Narration: A Deeper Understanding and
Explanation of Visual Scenes | We describe the task of Visual Understanding and Narration, in which a robot
(or agent) generates text for the images that it collects when navigating its
environment, by answering open-ended questions, such as 'what happens, or might
have happened, here?'
| 2,019 | Computation and Language |
Thinking Slow about Latency Evaluation for Simultaneous Machine
Translation | Simultaneous machine translation attempts to translate a source sentence
before it is finished being spoken, with applications to translation of spoken
language for live streaming and conversation. Since simultaneous systems trade
quality to reduce latency, having an effective and interpretable latency metric
is crucial. We introduce a variant of the recently proposed Average Lagging
(AL) metric, which we call Differentiable Average Lagging (DAL). It
distinguishes itself by being differentiable and internally consistent to its
underlying mathematical model.
| 2,019 | Computation and Language |
Improving the Similarity Measure of Determinantal Point Processes for
Extractive Multi-Document Summarization | The most important obstacles facing multi-document summarization include
excessive redundancy in source descriptions and the looming shortage of
training data. These obstacles prevent encoder-decoder models from being used
directly, but optimization-based methods such as determinantal point processes
(DPPs) are known to handle them well. In this paper we seek to strengthen a
DPP-based method for extractive multi-document summarization by presenting a
novel similarity measure inspired by capsule networks. The approach measures
redundancy between a pair of sentences based on surface form and semantic
information. We show that our DPP system with improved similarity measure
performs competitively, outperforming strong summarization baselines on
benchmark datasets. Our findings are particularly meaningful for summarizing
documents created by multiple authors containing redundant yet lexically
diverse expressions.
| 2,019 | Computation and Language |
Scoring Sentence Singletons and Pairs for Abstractive Summarization | When writing a summary, humans tend to choose content from one or two
sentences and merge them into a single summary sentence. However, the
mechanisms behind the selection of one or multiple source sentences remain
poorly understood. Sentence fusion assumes multi-sentence input; yet sentence
selection methods only work with single sentences and not combinations of them.
There is thus a crucial gap between sentence selection and fusion to support
summarizing by both compressing single sentences and fusing pairs. This paper
attempts to bridge the gap by ranking sentence singletons and pairs together in
a unified space. Our proposed framework attempts to model human methodology by
selecting either a single sentence or a pair of sentences, then compressing or
fusing the sentence(s) to produce a summary sentence. We conduct extensive
experiments on both single- and multi-document summarization datasets and
report findings on sentence selection and abstraction.
| 2,019 | Computation and Language |
Gmail Smart Compose: Real-Time Assisted Writing | In this paper, we present Smart Compose, a novel system for generating
interactive, real-time suggestions in Gmail that assists users in writing mails
by reducing repetitive typing. In the design and deployment of such a
large-scale and complicated system, we faced several challenges including model
selection, performance evaluation, serving and other practical issues. At the
core of Smart Compose is a large-scale neural language model. We leveraged
state-of-the-art machine learning techniques for language model training which
enabled high-quality suggestion prediction, and constructed novel serving
infrastructure for high-throughput and real-time inference. Experimental
results show the effectiveness of our proposed system design and deployment
approach. This system is currently being served in Gmail.
| 2,019 | Computation and Language |
Emotional Embeddings: Refining Word Embeddings to Capture Emotional
Content of Words | Word embeddings are one of the most useful tools in any modern natural
language processing expert's toolkit. They contain various types of information
about each word which makes them the best way to represent the terms in any NLP
task. But there are some types of information that cannot be learned by these
models. Emotional information about words is one of them. In this paper, we
present an approach to incorporate emotional information of words into these
models. We accomplish this by adding a secondary training stage which uses an
emotional lexicon and a psychological model of basic emotions. We show that
fitting an emotional model into pre-trained word vectors can increase the
performance of these models in emotional similarity metrics. Retrained models
perform better than their original counterparts, with improvements ranging from
13% for the Word2Vec model to 29% for GloVe vectors. This is the first such
model presented in the literature, and although preliminary, these
emotion-sensitive models can open the way to increased performance in a variety
of emotion detection
techniques.
| 2,019 | Computation and Language |
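The secondary training stage described above injects emotion-lexicon information into pre-trained vectors. Below is a retrofitting-style sketch of that general idea; the toy lexicon, update rule, and hyperparameters are illustrative assumptions and may differ from the paper's actual procedure.

```python
import numpy as np

def emotional_retrofit(vectors, emotion_lexicon, steps=10, alpha=0.1):
    """Nudge each word vector toward the centroid of words sharing its basic emotion.

    vectors: dict word -> np.ndarray (pre-trained embeddings; a modified copy is returned).
    emotion_lexicon: dict word -> basic-emotion label.
    """
    vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(steps):
        # Centroid of each emotion category under the current vectors.
        centroids = {}
        for emo in set(emotion_lexicon.values()):
            members = [vecs[w] for w, e in emotion_lexicon.items() if e == emo and w in vecs]
            if members:
                centroids[emo] = np.mean(members, axis=0)
        # Move every lexicon word a small step toward its emotion centroid.
        for w, emo in emotion_lexicon.items():
            if w in vecs and emo in centroids:
                vecs[w] += alpha * (centroids[emo] - vecs[w])
    return vecs

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage with random 5-d vectors and a tiny made-up lexicon.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=5) for w in ["happy", "joyful", "sad", "gloomy"]}
lex = {"happy": "joy", "joyful": "joy", "sad": "sadness", "gloomy": "sadness"}
new_emb = emotional_retrofit(emb, lex)
print("cos(happy, joyful) before:", cos(emb["happy"], emb["joyful"]))
print("cos(happy, joyful) after :", cos(new_emb["happy"], new_emb["joyful"]))
```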
Examining Structure of Word Embeddings with PCA | In this paper we compare structure of Czech word embeddings for English-Czech
neural machine translation (NMT), word2vec and sentiment analysis. We show that
although it is possible to successfully predict part of speech (POS) tags from
word embeddings of word2vec and various translation models, not all of the
embedding spaces show the same structure. The information about POS is present
in word2vec embeddings, but the high degree of organization by POS in the NMT
decoder suggests that this information is more important for machine
translation and therefore the NMT model represents it in a more direct way. Our
method is based on correlation of principal component analysis (PCA) dimensions
with categorical linguistic data. We also show that further examining
histograms of classes along the principal component is important to understand
the structure of representation of information in embeddings.
| 2,019 | Computation and Language |
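The analysis above correlates principal components of embeddings with categorical linguistic labels such as POS. A minimal sketch of that check; the synthetic embeddings and the binary noun-vs-other labels are made up for illustration, and the paper's exact correlation setup may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy "embeddings": nouns are offset along one latent direction plus noise.
n_words, dim = 200, 50
is_noun = rng.integers(0, 2, size=n_words)            # 1 = noun, 0 = other POS
direction = rng.normal(size=dim)
X = rng.normal(size=(n_words, dim)) + np.outer(is_noun, 3.0 * direction)

# Project onto principal components and correlate each with the POS indicator.
components = PCA(n_components=10).fit_transform(X)
for k in range(10):
    r = np.corrcoef(components[:, k], is_noun)[0, 1]  # point-biserial correlation
    print(f"PC{k + 1}: corr with noun indicator = {r:+.2f}")
```

Histograms of each class along a component, as the abstract suggests, can be drawn from the same `components` array.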
Efficient Adaptation of Pretrained Transformers for Abstractive
Summarization | Large-scale learning of transformer language models has yielded improvements
on a variety of natural language understanding tasks. Whether they can be
effectively adapted for summarization, however, has been less explored, as the
learned representations are less seamlessly integrated into existing neural
text production architectures. In this work, we propose two solutions for
efficiently adapting pretrained transformer language models as text
summarizers: source embeddings and domain-adaptive training. We test these
solutions on three abstractive summarization datasets, achieving new state of
the art performance on two of them. Finally, we show that these improvements
are achieved by producing more focused summaries with less superfluous content,
and that performance improvements are more pronounced on more abstractive
datasets.
| 2,019 | Computation and Language |
Multi-Turn Beam Search for Neural Dialogue Modeling | In neural dialogue modeling, a neural network is trained to predict the next
utterance, and at inference time, an approximate decoding algorithm is used to
generate next utterances given previous ones. While this autoregressive
framework allows us to model the whole conversation during training, inference
is highly suboptimal, as a wrong utterance can affect future utterances. While
beam search yields better results than greedy search does, we argue that it is
still greedy in the context of the entire conversation, in that it does not
consider future utterances. We propose a novel approach for conversation-level
inference by explicitly modeling the dialogue partner and running beam search
across multiple conversation turns. Given a set of candidates for next
utterance, we unroll the conversation for a number of turns and identify the
candidate utterance in the initial hypothesis set that gives rise to the most
likely sequence of future utterances. We empirically validate our approach by
conducting human evaluation using the Persona-Chat dataset, and find that our
multi-turn beam search generates significantly better dialogue responses. We
propose three approximations to the partner model, and observe that more
informed partner models give better performance.
| 2,019 | Computation and Language |
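The conversation-level inference described above unrolls each candidate with a partner model for several turns and keeps the candidate whose future looks most likely. A schematic sketch of that reranking loop; the `generate` interface on the models is a hypothetical placeholder, not the authors' API.

```python
def multi_turn_rerank(candidates, history, partner_model, self_model, n_turns=2, beam_size=3):
    """Rerank next-utterance candidates by the likelihood of the unrolled future conversation.

    candidates: list of (utterance, log_prob) pairs from ordinary beam search.
    partner_model / self_model are assumed to expose
    .generate(history, beam_size) -> list of (utterance, log_prob); this interface
    is an illustrative assumption for the sketch.
    """
    scored = []
    for cand, cand_lp in candidates:
        total_lp, ctx = cand_lp, history + [cand]
        for turn in range(n_turns):
            # Alternate speakers: the dialogue partner replies first, then ourselves.
            model = partner_model if turn % 2 == 0 else self_model
            reply, reply_lp = max(model.generate(ctx, beam_size), key=lambda x: x[1])
            total_lp += reply_lp
            ctx = ctx + [reply]
        scored.append((total_lp, cand))
    # Return the candidate whose most likely continuation scores highest overall.
    return max(scored)[1]
```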
Promotion of Answer Value Measurement with Domain Effects in Community
Question Answering Systems | In the area of community question answering (CQA), answer selection and
answer ranking are two tasks which are applied to help users quickly access
valuable answers. Existing solutions mainly exploit the syntactic or semantic
correlation between a question and its related answers (Q&A), where the
multi-facet domain effects in CQA are still underexplored. In this paper, we
propose a unified model, Enhanced Attentive Recurrent Neural Network (EARNN),
for both answer selection and answer ranking tasks by taking full advantage of
both Q&A semantics and multi-facet domain effects (i.e., topic effects and
timeliness). Specifically, we develop a serialized LSTM to learn the unified
representations of Q&A, where two attention mechanisms at either sentence-level
or word-level are designed for capturing the deep effects of topics. Meanwhile,
the emphasis of Q&A can be automatically distinguished. Furthermore, we design
a time-sensitive ranking function to model the timeliness in CQA. To
effectively train EARNN, a question-dependent pairwise learning strategy is
also developed. Finally, we conduct extensive experiments on a real-world
dataset from Quora. Experimental results validate the effectiveness and
interpretability of our proposed EARNN model.
| 2,019 | Computation and Language |
Adversarial Generation and Encoding of Nested Texts | In this paper we propose a new language model called AGENT, which stands for
Adversarial Generation and Encoding of Nested Texts. AGENT is designed for
encoding, generating and refining documents that consist of a long and coherent
text, such as an entire book, provided they are hierarchically annotated
(nested), i.e., divided into sentences, paragraphs, and chapters. The core idea
of our system is to learn vector representations for each level of the text
hierarchy (sentences, paragraphs, etc.) and to train each such representation
to perform three tasks: reconstructing the sequence of vectors from the lower
level that was used to create the representation, and generalized versions of
the Masked Language Modeling (MLM) and "Next Sentence Prediction" tasks from
BERT (Devlin et al., 2018). Additionally, we present a new adversarial
model for long text generation and suggest a way to improve the coherence of
the generated text by traversing its vector representation tree.
| 2,019 | Computation and Language |
COS960: A Chinese Word Similarity Dataset of 960 Word Pairs | Word similarity computation is a widely recognized task in the field of
lexical semantics. Most existing datasets test the similarity of word pairs
consisting of a single morpheme, while few works focus on words of two or more
morphemes. In this work, we propose COS960, a benchmark dataset with 960 pairs
of Chinese wOrd Similarity, where all the words have two morphemes in three
Part of Speech (POS) tags with their human annotated similarity rather than
relatedness. We give a detailed description of dataset construction and
annotation process, and test on a range of word embedding models. The dataset
of this paper can be obtained from https://github.com/thunlp/COS960.
| 2,019 | Computation and Language |
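A dataset like COS960 is typically used by correlating human similarity ratings with cosine similarity between word vectors. A minimal sketch with scipy; the toy pairs and random vectors below stand in for the real data and trained embeddings.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_word_similarity(pairs, embeddings):
    """Spearman correlation between human scores and embedding cosine similarity.

    pairs: list of (word1, word2, human_score); embeddings: dict word -> np.ndarray.
    Pairs with out-of-vocabulary words are skipped.
    """
    human, model = [], []
    for w1, w2, score in pairs:
        if w1 in embeddings and w2 in embeddings:
            a, b = embeddings[w1], embeddings[w2]
            model.append(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
            human.append(score)
    rho, _ = spearmanr(human, model)
    return rho, len(human)

# Toy usage with random vectors (real use would load the COS960 pairs instead).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["学生", "老师", "苹果", "香蕉"]}
toy_pairs = [("学生", "老师", 7.5), ("苹果", "香蕉", 8.0), ("学生", "苹果", 2.0)]
print(evaluate_word_similarity(toy_pairs, emb))
```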
How to best use Syntax in Semantic Role Labelling | There are many different ways in which external information might be used in
an NLP task. This paper investigates how external syntactic information can be
used most effectively in the Semantic Role Labeling (SRL) task. We evaluate
three different ways of encoding syntactic parses and three different ways of
injecting them into a state-of-the-art neural ELMo-based SRL sequence labelling
model. We show that using a constituency representation as input features
improves performance the most, achieving a new state-of-the-art for
non-ensemble SRL models on the in-domain CoNLL'05 and CoNLL'12 benchmarks.
| 2,019 | Computation and Language |
"President Vows to Cut <Taxes> Hair": Dataset and Analysis of Creative
Text Editing for Humorous Headlines | We introduce, release, and analyze a new dataset, called Humicroedit, for
research in computational humor. Our publicly available data consists of
regular English news headlines paired with versions of the same headlines that
contain simple replacement edits designed to make them funny. We carefully
curated crowdsourced editors to create funny headlines and judges to score a
total of 15,095 edited headlines, with five judges per headline. The simple
edits, usually just a single word replacement, mean we can apply
straightforward analysis techniques to determine what makes our edited
headlines humorous. We show how the data support classic theories of humor,
such as incongruity, superiority, and setup/punchline. Finally, we develop
baseline classifiers that can predict whether or not an edited headline is
funny, which is a first step toward automatically generating humorous headlines
as an approach to creating topical humor.
| 2,019 | Computation and Language |
Multimodal Transformer for Unaligned Multimodal Language Sequences | Human language is often multimodal, which comprises a mixture of natural
language, facial gestures, and acoustic behaviors. However, two major
challenges in modeling such multimodal human language time-series data exist:
1) inherent data non-alignment due to variable sampling rates for the sequences
from each modality; and 2) long-range dependencies between elements across
modalities. In this paper, we introduce the Multimodal Transformer (MulT) to
generically address the above issues in an end-to-end manner without explicitly
aligning the data. At the heart of our model is the directional pairwise
crossmodal attention, which attends to interactions between multimodal
sequences across distinct time steps and latently adapt streams from one
modality to another. Comprehensive experiments on both aligned and non-aligned
multimodal time-series show that our model outperforms state-of-the-art methods
by a large margin. In addition, empirical analysis suggests that correlated
crossmodal signals are able to be captured by the proposed crossmodal attention
mechanism in MulT.
| 2,019 | Computation and Language |
Latent Retrieval for Weakly Supervised Open Domain Question Answering | Recent work on open domain question answering (QA) assumes strong supervision
of the supporting evidence and/or assumes a blackbox information retrieval (IR)
system to retrieve evidence candidates. We argue that both are suboptimal,
since gold evidence is not always available, and QA is fundamentally different
from IR. We show for the first time that it is possible to jointly learn the
retriever and reader from question-answer string pairs and without any IR
system. In this setting, evidence retrieval from all of Wikipedia is treated as
a latent variable. Since this is impractical to learn from scratch, we
pre-train the retriever with an Inverse Cloze Task. We evaluate on open
versions of five QA datasets. On datasets where the questioner already knows
the answer, a traditional IR system such as BM25 is sufficient. On datasets
where a user is genuinely seeking an answer, we show that learned retrieval is
crucial, outperforming BM25 by up to 19 points in exact match.
| 2,019 | Computation and Language |
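The Inverse Cloze Task mentioned above pre-trains the retriever by treating a sentence as a pseudo-query and the rest of its passage as pseudo-evidence. A minimal sketch of generating such pairs; the 10% keep-the-sentence rate is an illustrative choice rather than the paper's exact recipe.

```python
import random

def inverse_cloze_pairs(passages, keep_sentence_prob=0.1, seed=0):
    """Build (pseudo-query, pseudo-evidence) pairs for Inverse Cloze Task pretraining.

    Each passage is a list of sentences. One sentence becomes the query; the rest of
    the passage is the positive evidence. Occasionally the query sentence is left in
    the evidence so that exact lexical overlap remains a usable signal.
    """
    rng = random.Random(seed)
    pairs = []
    for sentences in passages:
        if len(sentences) < 2:
            continue
        idx = rng.randrange(len(sentences))
        query = sentences[idx]
        if rng.random() < keep_sentence_prob:
            evidence = list(sentences)                        # keep the query sentence
        else:
            evidence = sentences[:idx] + sentences[idx + 1:]  # remove it
        pairs.append((query, " ".join(evidence)))
    return pairs

passages = [["Zebras have stripes.", "They live in Africa.", "Lions hunt them."]]
print(inverse_cloze_pairs(passages))
```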
Question Answering as an Automatic Evaluation Metric for News Article
Summarization | Recent work in the field of automatic summarization and headline generation
focuses on maximizing ROUGE scores for various news datasets. We present an
alternative, extrinsic evaluation metric for this task: Answering Performance
for Evaluation of Summaries (APES). APES utilizes recent progress in the field of
reading-comprehension to quantify the ability of a summary to answer a set of
manually created questions regarding central entities in the source article. We
first analyze the strength of this metric by comparing it to known manual
evaluation metrics. We then present an end-to-end neural abstractive model that
maximizes APES, while increasing ROUGE scores to competitive results.
| 2,019 | Computation and Language |
Are You Looking? Grounding to Multiple Modalities in Vision-and-Language
Navigation | Vision-and-Language Navigation (VLN) requires grounding instructions, such as
"turn right and stop at the door", to routes in a visual environment. The
actual grounding can connect language to the environment through multiple
modalities, e.g. "stop at the door" might ground into visual objects, while
"turn right" might rely only on the geometric structure of a route. We
investigate where the natural language empirically grounds under two recent
state-of-the-art VLN models. Surprisingly, we discover that visual features may
actually hurt these models: models which only use route structure, ablating
visual features, outperform their visual counterparts in unseen new
environments on the benchmark Room-to-Room dataset. To better use all the
available modalities, we propose to decompose the grounding procedure into a
set of expert models with access to different modalities (including object
detections) and ensemble them at prediction time, improving the performance of
state-of-the-art models on the VLN task.
| 2,019 | Computation and Language |
Domain Adaptation of Neural Machine Translation by Lexicon Induction | It has been previously noted that neural machine translation (NMT) is very
sensitive to domain shift. In this paper, we argue that this is a dual effect
of the highly lexicalized nature of NMT, resulting in failure for sentences
with large numbers of unknown words, and lack of supervision for
domain-specific words. To remedy this problem, we propose an unsupervised
adaptation method which fine-tunes a pre-trained out-of-domain NMT model using
a pseudo-in-domain corpus. Specifically, we perform lexicon induction to
extract an in-domain lexicon, and construct a pseudo-parallel in-domain corpus
by performing word-for-word back-translation of monolingual in-domain target
sentences. In five domains over twenty pairwise adaptation settings and two
model architectures, our method achieves consistent improvements without using
any in-domain parallel sentences, improving up to 14 BLEU over unadapted
models, and up to 2 BLEU over strong back-translation baselines.
| 2,019 | Computation and Language |
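The pseudo-in-domain corpus above is built by word-for-word back-translation of monolingual target sentences through an induced lexicon. A minimal sketch of that construction step; the toy lexicon and the copy-through fallback for out-of-lexicon words are assumptions for illustration.

```python
def word_for_word_back_translate(target_sentences, lexicon):
    """Build pseudo-source sentences by mapping each target word through an induced lexicon.

    lexicon: dict mapping a target-language word to its induced source-language translation.
    Words missing from the lexicon are copied through unchanged (an illustrative fallback).
    Returns (pseudo_source, target) pairs usable for fine-tuning a source->target NMT model.
    """
    pseudo_parallel = []
    for tgt in target_sentences:
        pseudo_src = " ".join(lexicon.get(tok, tok) for tok in tgt.split())
        pseudo_parallel.append((pseudo_src, tgt))
    return pseudo_parallel

# Toy English->German direction: an induced DE->EN lexicon fabricates "English" sources.
induced_lexicon = {"das": "the", "fieber": "fever", "sinkt": "drops"}
corpus = word_for_word_back_translate(["das fieber sinkt"], induced_lexicon)
print(corpus)  # [('the fever drops', 'das fieber sinkt')]
```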
Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal
Data | Bilingual lexicon induction, translating words from the source language to
the target language, is a long-standing natural language processing task.
Recent endeavors prove that it is promising to employ images as pivot to learn
the lexicon induction without reliance on parallel corpora. However, these
vision-based approaches simply associate words with entire images, which are
constrained to translate concrete words and require object-centered images.
Humans can understand words better when they appear within a sentence with
context. Therefore, in this paper, we propose to utilize images and their
associated captions to address the limitations of previous approaches. We
propose a multi-lingual caption model trained with different mono-lingual
multimodal data to map words in different languages into joint spaces. Two
types of word representation are induced from the multi-lingual caption model:
linguistic features and localized visual features. The linguistic feature is
learned from the sentence contexts with visual semantic constraints, which is
beneficial to learn translation for words that are less visual-relevant. The
localized visual feature is attended to the region in the image that correlates
to the word, so that it alleviates the image restriction for salient visual
representation. The two types of features are complementary for word
translation. Experimental results on multiple language pairs demonstrate the
effectiveness of our proposed method, which substantially outperforms previous
vision-based approaches without using any parallel sentences or supervision of
seed word pairs.
| 2,019 | Computation and Language |
Domain Adaptive Inference for Neural Machine Translation | We investigate adaptive ensemble weighting for Neural Machine Translation,
addressing the case of improving performance on a new and potentially unknown
domain without sacrificing performance on the original domain. We adapt
sequentially across two Spanish-English and three English-German tasks,
comparing unregularized fine-tuning, L2 and Elastic Weight Consolidation. We
then report a novel scheme for adaptive NMT ensemble decoding by extending
Bayesian Interpolation with source information, and show strong improvements
across test domains without access to the domain label.
| 2,019 | Computation and Language |
Pretraining Methods for Dialog Context Representation Learning | This paper examines various unsupervised pretraining objectives for learning
dialog context representations. Two novel methods of pretraining dialog context
encoders are proposed, and a total of four methods are examined. Each
pretraining objective is fine-tuned and evaluated on a set of downstream dialog
tasks using the MultiWoz dataset and strong performance improvement is
observed. Further evaluation shows that our pretraining objectives result in
not only better performance, but also better convergence, models that are less
data hungry and have better domain generalizability.
| 2,019 | Computation and Language |
Plain English Summarization of Contracts | Unilateral contracts, such as terms of service, play a substantial role in
modern digital life. However, few users read these documents before accepting
the terms within, as they are too long and the language too complicated. We
propose the task of summarizing such legal documents in plain English, which
would enable users to have a better understanding of the terms they are
accepting.
We propose an initial dataset of legal text snippets paired with summaries
written in plain English. We verify the quality of these summaries manually and
show that they involve heavy abstraction, compression, and simplification.
Initial experiments show that unsupervised extractive summarization methods do
not perform well on this task due to the level of abstraction and style
differences. We conclude with a call for resource and technique development for
simplification and style transfer for legal language.
| 2,019 | Computation and Language |
Deep Unknown Intent Detection with Margin Loss | Identifying the unknown (novel) user intents that have never appeared in the
training set is a challenging task in the dialogue system. In this paper, we
present a two-stage method for detecting unknown intents. We use bidirectional
long short-term memory (BiLSTM) network with the margin loss as the feature
extractor. With margin loss, we can learn discriminative deep features by
forcing the network to maximize inter-class variance and to minimize
intra-class variance. Then, we feed the feature vectors to the density-based
novelty detection algorithm, local outlier factor (LOF), to detect unknown
intents. Experiments on two benchmark datasets show that our method can yield
consistent improvements compared with the baseline methods.
| 2,019 | Computation and Language |
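The second stage above runs Local Outlier Factor on the learned deep features to flag unknown intents. A minimal sketch of that stage with scikit-learn, using random vectors in place of BiLSTM features trained with the margin loss; the feature extractor itself is omitted.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)

# Stand-ins for deep features: known-intent training features and a mixed test set.
train_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 32))   # known intents
test_known = rng.normal(loc=0.0, scale=1.0, size=(20, 32))
test_unknown = rng.normal(loc=6.0, scale=1.0, size=(20, 32))   # far from the training region
test_feats = np.vstack([test_known, test_unknown])

# novelty=True fits LOF on known data only and flags unseen outliers at predict time.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(train_feats)
pred = lof.predict(test_feats)   # +1 = known-intent region, -1 = unknown intent

print("flagged as unknown:", int((pred == -1).sum()), "of", len(test_feats))
```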
Budgeted Policy Learning for Task-Oriented Dialogue Systems | This paper presents a new approach that extends Deep Dyna-Q (DDQ) by
incorporating a Budget-Conscious Scheduling (BCS) to best utilize a fixed,
small amount of user interactions (budget) for learning task-oriented dialogue
agents. BCS consists of (1) a Poisson-based global scheduler to allocate budget
over different stages of training; (2) a controller to decide at each training
step whether the agent is trained using real or simulated experiences; (3) a
user goal sampling module to generate the experiences that are most effective
for policy learning. Experiments on a movie-ticket booking task with simulated
and real users show that our approach leads to significant improvements in
success rate over the state-of-the-art baselines given the fixed budget.
| 2,019 | Computation and Language |
A Survey of Natural Language Generation Techniques with a Focus on
Dialogue Systems - Past, Present and Future Directions | One of the hardest problems in the area of Natural Language Processing and
Artificial Intelligence is automatically generating language that is coherent
and understandable to humans. Teaching machines how to converse as humans do
falls under the broad umbrella of Natural Language Generation. Recent years
have seen unprecedented growth in the number of research articles published on
this subject in conferences and journals both by academic and industry
researchers. There have also been several workshops organized alongside
top-tier NLP conferences dedicated specifically to this problem. All this
activity makes it hard to clearly define the state of the field and reason
about its future directions. In this work, we provide an overview of this
important and thriving area, covering traditional approaches, statistical
approaches and also approaches that use deep neural networks. We provide a
comprehensive review towards building open domain dialogue systems, an
important application of natural language generation. We find that,
predominantly, the approaches for building dialogue systems use seq2seq or
language model architectures. Notably, we identify three important areas of
further research towards building more effective dialogue systems: 1)
incorporating larger context, including conversation context and world
knowledge; 2) adding personae or personality in the NLG system; and 3)
overcoming dull and generic responses that affect the quality of
system-produced responses. We provide pointers on how to tackle these open
problems through the use of cognitive architectures that mimic human language
understanding and generation capabilities.
| 2,019 | Computation and Language |
Sentiment Tagging with Partial Labels using Modular Architectures | Many NLP learning tasks can be decomposed into several distinct sub-tasks,
each associated with a partial label. In this paper we focus on a popular class
of learning problems, sequence prediction applied to several sentiment analysis
tasks, and suggest a modular learning approach in which different sub-tasks are
learned using separate functional modules, combined to perform the final task
while sharing information. Our experiments show this approach helps constrain
the learning process and can alleviate some of the supervision efforts.
| 2,019 | Computation and Language |
Know More about Each Other: Evolving Dialogue Strategy via Compound
Assessment | In this paper, a novel Generation-Evaluation framework is developed for
multi-turn conversations with the objective of letting both participants know
more about each other. For the sake of rational knowledge utilization and
coherent conversation flow, a dialogue strategy which controls knowledge
selection is instantiated and continuously adapted via reinforcement learning.
Under the deployed strategy, knowledge grounded conversations are conducted
with two dialogue agents. The generated dialogues are comprehensively evaluated
on aspects like informativeness and coherence, which are aligned with our
objective and human instinct. These assessments are integrated as a compound
reward to guide the evolution of dialogue strategy via policy gradient.
Comprehensive experiments have been carried out on the publicly available
dataset, demonstrating that the proposed method outperforms the other
state-of-the-art approaches significantly.
| 2,019 | Computation and Language |
Global Textual Relation Embedding for Relational Understanding | Pre-trained embeddings such as word embeddings and sentence embeddings are
fundamental tools facilitating a wide range of downstream NLP tasks. In this
work, we investigate how to learn a general-purpose embedding of textual
relations, defined as the shortest dependency path between entities. Textual
relation embedding provides a level of knowledge between word/phrase level and
sentence level, and we show that it can facilitate downstream tasks requiring
relational understanding of the text. To learn such an embedding, we create the
largest distant supervision dataset by linking the entire English ClueWeb09
corpus to Freebase. We use global co-occurrence statistics between textual and
knowledge base relations as the supervision signal to train the embedding.
Evaluation on two relational understanding tasks demonstrates the usefulness of
the learned textual relation embedding. The data and code can be found at
https://github.com/czyssrs/GloREPlus
| 2,019 | Computation and Language |
Fluent Translations from Disfluent Speech in End-to-End Speech
Translation | Spoken language translation applications for speech suffer due to
conversational speech phenomena, particularly the presence of disfluencies.
With the rise of end-to-end speech translation models, processing steps such as
disfluency removal that were previously an intermediate step between speech
recognition and machine translation need to be incorporated into model
architectures. We use a sequence-to-sequence model to translate from noisy,
disfluent speech to fluent text with disfluencies removed using the recently
collected `copy-edited' references for the Fisher Spanish-English dataset. We
are able to directly generate fluent translations and introduce considerations
about how to evaluate success on this task. This work provides a baseline for a
new task, the translation of conversational speech with joint removal of
disfluencies.
| 2,019 | Computation and Language |
Controllable Paraphrase Generation with a Syntactic Exemplar | Prior work on controllable text generation usually assumes that the
controlled attribute can take on one of a small set of values known a priori.
In this work, we propose a novel task, where the syntax of a generated sentence
is controlled rather by a sentential exemplar. To evaluate quantitatively with
standard metrics, we create a novel dataset with human annotations. We also
develop a variational model with a neural module specifically designed for
capturing syntactic knowledge and several multitask training objectives to
promote disentangled representation learning. Empirically, the proposed model
is observed to achieve improvements over baselines and learn to capture
desirable characteristics.
| 2,019 | Computation and Language |
Jointly Learning Semantic Parser and Natural Language Generator via Dual
Information Maximization | Semantic parsing aims to transform natural language (NL) utterances into
formal meaning representations (MRs), whereas an NL generator achieves the
reverse: producing a NL description for some given MRs. Despite this intrinsic
connection, the two tasks are often studied separately in prior work. In this
paper, we model the duality of these two tasks via a joint learning framework,
and demonstrate its effectiveness of boosting the performance on both tasks.
Concretely, we propose a novel method of dual information maximization (DIM) to
regularize the learning process, where DIM empirically maximizes the
variational lower bounds of expected joint distributions of NL and MRs. We
further extend DIM to a semi-supervision setup (SemiDIM), which leverages
unlabeled data of both tasks. Experiments on three datasets of dialogue
management and code generation (and summarization) show that performance on
both semantic parsing and NL generation can be consistently improved by DIM, in
both supervised and semi-supervised setups.
| 2,019 | Computation and Language |
Listening while Speaking and Visualizing: Improving ASR through
Multimodal Chain | Previously, a machine speech chain, which is based on sequence-to-sequence
deep learning, was proposed to mimic speech perception and production behavior.
Such chains separately processed listening and speaking by automatic speech
recognition (ASR) and text-to-speech synthesis (TTS) and simultaneously enabled
them to teach each other in semi-supervised learning when they received
unpaired data. Unfortunately, this speech chain study is limited to speech and
textual modalities. In fact, natural communication is actually multimodal and
involves both auditory and visual sensory systems. Although the said speech
chain reduces the requirement of having a full amount of paired data, in this
case we still need a large amount of unpaired data. In this research, we take a
further step and construct a multimodal chain and design a closely knit chain
architecture that combines ASR, TTS, image captioning, and image production
models into a single framework. The framework allows the training of each
component without requiring a large number of parallel multimodal data. Our
experimental results also show that an ASR can be further trained without
speech and text data and cross-modal data augmentation remains possible through
our proposed chain, which improves the ASR performance.
| 2,019 | Computation and Language |
Massive Styles Transfer with Limited Labeled Data | Language style transfer has attracted more and more attention in the past few
years. Recent research focuses on improving neural models that transfer from
one style to another using labeled data. However, transferring across multiple
styles is often very useful in real-life applications. Previous research on
language style transfer has two main deficiencies: dependency on massive
labeled data and neglect of mutual
influence among different style transfer tasks. In this paper, we propose a
multi-agent style transfer system (MAST) for addressing multiple style transfer
tasks with limited labeled data, by leveraging abundant unlabeled data and the
mutual benefit among the multiple styles. A style transfer agent in our system
not only learns from unlabeled data by using techniques like denoising
auto-encoder and back-translation, but also learns to cooperate with other
style transfer agents in a self-organization manner. We conduct our experiments
by simulating a set of real-world style transfer tasks with multiple versions
of the Bible. Our model significantly outperforms the other competitive
methods. Extensive results and analysis further verify the efficacy of our
proposed system.
| 2,019 | Computation and Language |
A Semi-Supervised Approach for Low-Resourced Text Generation | Recently, encoder-decoder neural models have achieved great success on text
generation tasks. However, one problem with this kind of model is that its
performance is usually limited by the scale of well-labeled data, which is very
expensive to obtain. The low-resource (labeled data) problem is quite common
across text generation tasks, but unlabeled data are usually
abundant. In this paper, we propose a method to make use of the unlabeled data
to improve the performance of such models in the low-resourced circumstances.
We use denoising auto-encoder (DAE) and language model (LM) based reinforcement
learning (RL) to enhance the training of encoder and decoder with unlabeled
data. Our method shows adaptability for different text generation tasks, and
makes significant improvements over basic text generation models.
| 2,019 | Computation and Language |
Evaluating Gender Bias in Machine Translation | We present the first challenge set and evaluation protocol for the analysis
of gender bias in machine translation (MT). Our approach uses two recent
coreference resolution datasets composed of English sentences which cast
participants into non-stereotypical gender roles (e.g., "The doctor asked the
nurse to help her in the operation"). We devise an automatic gender bias
evaluation method for eight target languages with grammatical gender, based on
morphological analysis (e.g., the use of female inflection for the word
"doctor"). Our analyses show that four popular industrial MT systems and two
recent state-of-the-art academic MT models are significantly prone to
gender-biased translation errors for all tested target languages. Our data and
code are made publicly available.
| 2,019 | Computation and Language |
Assessing the Ability of Self-Attention Networks to Learn Word Order | Self-attention networks (SAN) have attracted a lot of interests due to their
high parallelization and strong performance on a variety of NLP tasks, e.g.
machine translation. Due to the lack of recurrence structure such as recurrent
neural networks (RNN), SAN is ascribed to be weak at learning positional
information of words for sequence modeling. However, this speculation has
neither been empirically confirmed, nor have explanations for their strong
performance on machine translation when "lacking positional information" been
explored. To this end, we propose a novel word reordering detection task to
quantify how well word order information is learned by SAN and RNN.
Specifically, we randomly move one word to another position, and examine
whether a trained model can detect both the original and inserted positions.
Experimental results reveal that: 1) SAN trained on word reordering detection
indeed has difficulty learning the positional information even with the
position embedding; and 2) SAN trained on machine translation learns better
positional information than its RNN counterpart, in which position embedding
plays a critical role. Although the recurrence structure makes the model more
universally effective at learning word order, learning objectives matter more
in the downstream tasks such as machine translation.
| 2,019 | Computation and Language |
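The probing task above moves one word to a new position and asks the model to recover both the original and the inserted positions. A small sketch of generating such instances; the position/label conventions are an assumption and may differ from the paper's setup.

```python
import random

def make_reordering_example(tokens, seed=None):
    """Move one word to a new position and return the perturbed sentence plus both positions.

    Returns (perturbed_tokens, original_position, inserted_position): the original position
    indexes the source sentence, the inserted position indexes the perturbed sentence
    (an illustrative labeling convention).
    """
    rng = random.Random(seed)
    assert len(tokens) >= 2, "need at least two tokens to reorder"
    src = rng.randrange(len(tokens))
    word = tokens[src]
    remaining = tokens[:src] + tokens[src + 1:]
    tgt = rng.randrange(len(remaining) + 1)          # insertion slot in the reduced sequence
    perturbed = remaining[:tgt] + [word] + remaining[tgt:]
    return perturbed, src, tgt

sentence = "the cat sat on the mat".split()
print(make_reordering_example(sentence, seed=3))
```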
Semantically Constrained Multilayer Annotation: The Case of Coreference | We propose a coreference annotation scheme as a layer on top of the Universal
Conceptual Cognitive Annotation foundational layer, treating units in
predicate-argument structure as a basis for entity and event mentions. We argue
that this allows coreference annotators to sidestep some of the challenges
faced in other schemes, which do not enforce consistency with
predicate-argument structure and vary widely in what kinds of mentions they
annotate and how. The proposed approach is examined with a pilot annotation
study and compared with annotations from other schemes.
| 2,019 | Computation and Language |
Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic
Attention for Neural TTS | Neural TTS has demonstrated strong capabilities to generate human-like speech
with high quality and naturalness, while its generalization to out-of-domain
texts is still a challenging task, with regard to the design of attention-based
sequence-to-sequence acoustic modeling. Various errors occur in those inputs
with unseen context, including attention collapse, skipping, repeating, etc.,
which limits the broader applications. In this paper, we propose a novel
stepwise monotonic attention method in sequence-to-sequence acoustic modeling
to improve the robustness on out-of-domain inputs. The method utilizes the
strict monotonicity property in TTS and constrains monotonic hard attention so
that the alignment between input and output sequences must not only be
monotonic but also allow no skipping of inputs. Soft attention could be used to
evade mismatch between training and inference. The experimental results show
that the proposed method could achieve significant improvements in robustness
on out-of-domain scenarios for phoneme-based models, without any regression on
the in-domain naturalness test.
| 2,019 | Computation and Language |
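A small numpy sketch of the stepwise-monotonic constraint above: at each decoder step the expected alignment either stays on its current encoder position or advances by exactly one, so no input can be skipped. The per-step move probabilities are random placeholders for what a trained network would predict, and the exact recurrence in the paper may differ.

```python
import numpy as np

def stepwise_monotonic_alignment(move_probs):
    """Expected soft alignment when each decoder step may stay or advance by exactly one.

    move_probs: array of shape (T_dec, T_enc); move_probs[i, j] is the (assumed) probability
    of advancing from encoder position j to j + 1 at decoder step i.
    Returns alpha of shape (T_dec, T_enc) with alpha[i, j] = P(aligned to j at step i).
    """
    T_dec, T_enc = move_probs.shape
    p = move_probs.copy()
    p[:, -1] = 0.0                           # cannot advance past the last encoder position
    alpha = np.zeros((T_dec, T_enc))
    alpha[0, 0] = 1.0                        # decoding starts aligned to the first encoder position
    for i in range(1, T_dec):
        stay = alpha[i - 1] * (1.0 - p[i])
        move = np.zeros(T_enc)
        move[1:] = alpha[i - 1, :-1] * p[i, :-1]   # advance by exactly one position, never skipping
        alpha[i] = stay + move
    return alpha

rng = np.random.default_rng(0)
A = stepwise_monotonic_alignment(rng.uniform(0.3, 0.7, size=(6, 4)))
print(np.round(A, 2))   # each row sums to 1 and alignment mass only moves rightward
```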