Titles | Abstracts | Years | Categories |
---|---|---|---|
How2: A Large-scale Dataset for Multimodal Language Understanding | In this paper, we introduce How2, a multimodal collection of instructional
videos with English subtitles and crowdsourced Portuguese translations. We also
present integrated sequence-to-sequence baselines for machine translation,
automatic speech recognition, spoken language translation, and multimodal
summarization. By making available data and code for several multimodal natural
language tasks, we hope to stimulate more research on these and similar
challenges, to obtain a deeper understanding of multimodality in language
processing.
| 2018 | Computation and Language |
Latent Variable Model for Multi-modal Translation | In this work, we propose to model the interaction between visual and textual
features for multi-modal neural machine translation (MMT) through a latent
variable model. This latent variable can be seen as a multi-modal stochastic
embedding of an image and its description in a foreign language. It is used in
a target-language decoder and also to predict image features. Importantly, our
model formulation utilises visual and textual inputs during training but does
not require that images be available at test time. We show that our latent
variable MMT formulation improves considerably over strong baselines, including
a multi-task learning approach (Elliott and K\'ad\'ar, 2017) and a conditional
variational auto-encoder approach (Toyama et al., 2016). Finally, we show
improvements due to (i) predicting image features in addition to only
conditioning on them, (ii) imposing a constraint on the minimum amount of
information encoded in the latent variable, and (iii) by training on additional
target-language image descriptions (i.e. synthetic data).
| 2019 | Computation and Language |
Helping each Other: A Framework for Customer-to-Customer Suggestion
Mining using a Semi-supervised Deep Neural Network | Suggestion mining is increasingly becoming an important task along with
sentiment analysis. In today's cyberspace world, people not only express their
sentiments and dispositions towards some entities or services, but they also
spend considerable time sharing their experiences and advice with fellow
customers and the product/service providers, with a two-fold agenda: helping
fellow customers who are likely to have a similar experience, and motivating
the producer to make specific changes to their offerings that would be more
appreciated by customers. In our current work, we propose a hybrid deep
learning model to identify whether a review text contains any suggestion. The
model employs semi-supervised learning to leverage the useful information from
the large amount of unlabeled data. We evaluate the performance of our proposed
model on a benchmark customer review dataset comprising reviews from the Hotel
and Electronics domains. Our approach achieves F-scores of 65.6% and 65.5% on
the Hotel and Electronics review datasets, respectively. These results are
significantly better than those of the existing state-of-the-art system.
| 2018 | Computation and Language |
Addressing word-order Divergence in Multilingual Neural Machine
Translation for extremely Low Resource Languages | Transfer learning approaches for Neural Machine Translation (NMT) train an NMT
model on the assisting-target language pair (parent model) which is later
fine-tuned for the source-target language pair of interest (child model), with
the target language being the same. In many cases, the assisting language has a
different word order from the source language. We show that divergent word
order limits the benefits of transfer learning when little to no parallel
corpus is available between the source and target languages. To bridge this
divergence, we propose to pre-order the assisting-language sentences to match
the word order of the source language before training the parent model. Our
experiments on many language pairs show that bridging the word order gap leads
to significant improvement in the translation quality.
| 2019 | Computation and Language |
Truly unsupervised acoustic word embeddings using weak top-down
constraints in encoder-decoder models | We investigate unsupervised models that can map a variable-duration speech
segment to a fixed-dimensional representation. In settings where unlabelled
speech is the only available resource, such acoustic word embeddings can form
the basis for "zero-resource" speech search, discovery and indexing systems.
Most existing unsupervised embedding methods still use some supervision, such
as word or phoneme boundaries. Here we propose the encoder-decoder
correspondence autoencoder (EncDec-CAE), which, instead of true word segments,
uses automatically discovered segments: an unsupervised term discovery system
finds pairs of words of the same unknown type, and the EncDec-CAE is trained to
reconstruct one word given the other as input. We compare it to a standard
encoder-decoder autoencoder (AE), a variational AE with a prior over its latent
embedding, and downsampling. EncDec-CAE outperforms its closest competitor by
24% relative in average precision on two languages in a word discrimination
task.
| 2019 | Computation and Language |
DialogueRNN: An Attentive RNN for Emotion Detection in Conversations | Emotion detection in conversations is a necessary step for a number of
applications, including opinion mining over chat history, social media threads,
debates, argumentation mining, understanding consumer feedback in live
conversations, etc. Currently, systems do not treat the parties in the
conversation individually by adapting to the speaker of each utterance. In this
paper, we describe a new method based on recurrent neural networks that keeps
track of the individual party states throughout the conversation and uses this
information for emotion classification. Our model outperforms the state of the
art by a significant margin on two different datasets.
| 2019 | Computation and Language |
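A minimal sketch of the per-speaker state tracking described in the DialogueRNN entry above, assuming a shared GRU cell that updates one hidden state per conversation party and a linear emotion classifier. This is illustrative only; module names and dimensions are hypothetical rather than taken from the paper.

```python
import torch
import torch.nn as nn

class PartyStateTracker(nn.Module):
    """Illustrative sketch: one hidden state per conversation party,
    updated by a shared GRU cell and read out for emotion classification."""

    def __init__(self, utt_dim: int, state_dim: int, n_emotions: int):
        super().__init__()
        self.cell = nn.GRUCell(utt_dim, state_dim)        # shared across speakers
        self.classifier = nn.Linear(state_dim, n_emotions)
        self.state_dim = state_dim

    def forward(self, utterances, speaker_ids):
        # utterances: (T, utt_dim) utterance features; speaker_ids: list of ints
        states = {}                                        # speaker id -> hidden state
        logits = []
        for t, spk in enumerate(speaker_ids):
            h = states.get(spk, utterances.new_zeros(1, self.state_dim))
            h = self.cell(utterances[t:t + 1], h)          # update only this speaker's state
            states[spk] = h
            logits.append(self.classifier(h))
        return torch.cat(logits, dim=0)                    # (T, n_emotions)

# toy usage
model = PartyStateTracker(utt_dim=100, state_dim=64, n_emotions=6)
utts = torch.randn(5, 100)                                 # 5 utterance embeddings
print(model(utts, speaker_ids=[0, 1, 0, 1, 1]).shape)      # torch.Size([5, 6])
```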
Unsupervised Dual-Cascade Learning with Pseudo-Feedback Distillation for
Query-based Extractive Summarization | We propose Dual-CES -- a novel unsupervised, query-focused, multi-document
extractive summarizer. Dual-CES is designed to better handle the tradeoff
between saliency and focus in summarization. To this end, Dual-CES employs a
two-step dual-cascade optimization approach with saliency-based pseudo-feedback
distillation. Overall, Dual-CES significantly outperforms all other
state-of-the-art unsupervised alternatives. Dual-CES is even shown to be able
to outperform strong supervised summarizers.
| 2018 | Computation and Language |
A Corpus for Reasoning About Natural Language Grounded in Photographs | We introduce a new dataset for joint reasoning about natural language and
images, with a focus on semantic diversity, compositionality, and visual
reasoning challenges. The data contains 107,292 examples of English sentences
paired with web photographs. The task is to determine whether a natural
language caption is true about a pair of photographs. We crowdsource the data
using sets of visually rich images and a compare-and-contrast task to elicit
linguistically diverse language. Qualitative analysis shows the data requires
compositional joint reasoning, including about quantities, comparisons, and
relations. Evaluation using state-of-the-art visual reasoning methods shows the
data presents a strong challenge.
| 2019 | Computation and Language |
Multilingual NMT with a language-independent attention bridge | In this paper, we propose a multilingual encoder-decoder architecture capable
of obtaining multilingual sentence representations by means of incorporating an
intermediate {\em attention bridge} that is shared across all languages. That
is, we train the model with language-specific encoders and decoders that are
connected via self-attention with a shared layer that we call attention bridge.
This layer exploits the semantics from each language for performing translation
and develops into a language-independent meaning representation that can
efficiently be used for transfer learning. We present a new framework for the
efficient development of multilingual NMT using this model and scheduled
training. We have tested the approach in a systematic way with a multi-parallel
data set. We show that the model achieves substantial improvements over strong
bilingual models and that it also works well for zero-shot translation, which
demonstrates its capacity for abstraction and transfer learning.
| 2019 | Computation and Language |
Towards Coherent and Cohesive Long-form Text Generation | Generating coherent and cohesive long-form texts is a challenging task.
Previous works relied on large amounts of human-generated texts to train neural
language models. However, few attempted to explicitly improve neural language
models from the perspectives of coherence and cohesion. In this work, we
propose a new neural language model that is equipped with two neural
discriminators which provide feedback signals at the levels of sentence
(cohesion) and paragraph (coherence). Our model is trained using a simple yet
efficient variant of policy gradient, called negative-critical sequence
training, which eliminates the need to train a separate critic for estimating
the baseline. Results demonstrate the effectiveness of our approach,
showing improvements over the strong baseline -- recurrent attention-based
bidirectional MLE-trained neural language model.
| 2019 | Computation and Language |
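A hedged sketch of the negative-critical idea mentioned in the entry above: since the abstract states that no separate critic is trained to estimate the baseline, the sketch below uses the reward of a contrastive (negative) sample as the baseline in a REINFORCE-style update. The exact formulation in the paper may differ; function and variable names are hypothetical.

```python
import torch

def negative_critical_loss(log_probs_pos, reward_pos, reward_neg):
    """REINFORCE-style loss where the reward of a negative sample serves as the
    baseline, so no separate critic network has to be trained.

    log_probs_pos: (T,) token log-probabilities of the sampled sequence
    reward_pos:    scalar reward (e.g. discriminator score) of that sequence
    reward_neg:    scalar reward of a contrastive / negative sample
    """
    advantage = reward_pos - reward_neg          # baseline = negative sample's reward
    return -(advantage.detach() * log_probs_pos.sum())

# toy usage with fake values
log_probs = torch.log(torch.tensor([0.4, 0.6, 0.5]))
loss = negative_critical_loss(log_probs, torch.tensor(0.8), torch.tensor(0.3))
print(float(loss))
```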
Multiple-Attribute Text Style Transfer | The dominant approach to unsupervised "style transfer" in text is based on
the idea of learning a latent representation, which is independent of the
attributes specifying its "style". In this paper, we show that this condition
is not necessary and is not always met in practice, even with domain
adversarial training that explicitly aims at learning such disentangled
representations. We thus propose a new model that controls several factors of
variation in textual data where this condition on disentanglement is replaced
with a simpler mechanism based on back-translation. Our method allows control
over multiple attributes, like gender, sentiment, product type, etc., and a
more fine-grained control on the trade-off between content preservation and
change of style with a pooling operator in the latent space. Our experiments
demonstrate that the fully entangled model produces better generations, even
when tested on new and more challenging benchmarks comprising reviews with
multiple sentences and multiple attributes.
| 2019 | Computation and Language |
On Difficulties of Cross-Lingual Transfer with Order Differences: A Case
Study on Dependency Parsing | Different languages might have different word orders. In this paper, we
investigate cross-lingual transfer and posit that an order-agnostic model will
perform better when transferring to distant foreign languages. To test our
hypothesis, we train dependency parsers on an English corpus and evaluate their
transfer performance on 30 other languages. Specifically, we compare encoders
and decoders based on Recurrent Neural Networks (RNNs) and modified
self-attentive architectures. The former relies on sequential information while
the latter is more flexible at modeling word order. Rigorous experiments and
detailed analysis show that RNN-based architectures transfer well to languages
that are close to English, while self-attentive models have better overall
cross-lingual transferability and perform especially well on distant languages.
| 2019 | Computation and Language |
Multilingual Embeddings Jointly Induced from Contexts and Concepts:
Simple, Strong and Scalable | Word embeddings induced from local context are prevalent in NLP. A simple and
effective context-based multilingual embedding learner is Levy et al. (2017)'s
S-ID (sentence ID) method. Another line of work induces high-performing
multilingual embeddings from concepts (Dufter et al., 2018). In this paper, we
propose Co+Co, a simple and scalable method that combines context-based and
concept-based learning. From a sentence aligned corpus, concepts are extracted
via sampling; words are then associated with their concept ID and sentence ID
in embedding learning. This is the first work that successfully combines
context-based and concept-based embedding learning. We show that Co+Co performs
well for two different application scenarios: the Parallel Bible Corpus (1000+
languages, low-resource) and EuroParl (12 languages, high-resource). Among
methods applicable to both corpora, Co+Co performs best in our evaluation setup
of six tasks.
| 2020 | Computation and Language |
Shifting the Baseline: Single Modality Performance on Visual Navigation
& QA | We demonstrate the surprising strength of unimodal baselines in multimodal
domains, and make concrete recommendations for best practices in future
research. Where existing work often compares against random or majority class
baselines, we argue that unimodal approaches better capture and reflect dataset
biases and therefore provide an important comparison when assessing the
performance of multimodal techniques. We present unimodal ablations on three
recent datasets in visual navigation and QA, seeing up to a 29% absolute gain
in performance over published baselines.
| 2019 | Computation and Language |
Exploring Semantic Incrementality with Dynamic Syntax and Vector Space
Semantics | One of the fundamental requirements for models of semantic processing in
dialogue is incrementality: a model must reflect how people interpret and
generate language at least on a word-by-word basis, and handle phenomena such
as fragments, incomplete and jointly-produced utterances. We show that the
incremental word-by-word parsing process of Dynamic Syntax (DS) can be assigned
a compositional distributional semantics, with the composition operator of DS
corresponding to the general operation of tensor contraction from multilinear
algebra. We provide abstract semantic decorations for the nodes of DS trees, in
terms of vectors, tensors, and sums thereof; using the latter to model the
underspecified elements crucial to assigning partial representations during
incremental processing. As a working example, we give an instantiation of this
theory using plausibility tensors of compositional distributional semantics,
and show how our framework can incrementally assign a semantic plausibility
measure as it parses phrases and sentences.
| 2018 | Computation and Language |
Incorporating Structured Commonsense Knowledge in Story Completion | The ability to select an appropriate story ending is the first step towards
perfect narrative comprehension. Story ending prediction requires not only the
explicit clues within the context, but also the implicit knowledge (such as
commonsense) to construct a reasonable and consistent story. However, most
previous approaches do not explicitly use background commonsense knowledge. We
present a neural story ending selection model that integrates three types of
information: narrative sequence, sentiment evolution and commonsense knowledge.
Experiments show that our model outperforms state-of-the-art approaches on a
public dataset, ROCStory Cloze Task, and the performance gain from adding the
additional commonsense knowledge is significant.
| 2018 | Computation and Language |
Embedding Individual Table Columns for Resilient SQL Chatbots | Most of the world's data is stored in relational databases. Accessing these
requires specialized knowledge of the Structured Query Language (SQL), putting
them out of the reach of many people. A recent research thread in Natural
Language Processing (NLP) aims to alleviate this problem by automatically
translating natural language questions into SQL queries. While the proposed
solutions are a great start, they lack robustness and do not easily generalize:
the methods require high-quality descriptions of the database table columns,
and the most widely used training dataset, WikiSQL, is heavily biased towards
using those descriptions as part of the questions.
In this work, we propose solutions to both problems: we entirely eliminate
the need for column descriptions, by relying solely on their contents, and we
augment the WikiSQL dataset by paraphrasing column names to reduce bias. We
show that the accuracy of existing methods drops when trained on our augmented,
column-agnostic dataset, and that our own method reaches state of the art
accuracy, while relying on column contents only.
| 2018 | Computation and Language |
Analyzing and learning the language for different types of harassment | Disclaimer: This paper is concerned with violent online harassment. To
describe the subject at an adequate level of realism, examples of our collected
tweets involve violent, threatening, vulgar, and hateful language in the
context of racial, sexual, political, appearance-related, and intellectual harassment.
The presence of a significant amount of harassment in user-generated content
and its negative impact calls for robust automatic detection approaches. This
requires that we can identify different forms or types of harassment. Earlier
work has classified harassing language in terms of hurtfulness, abusiveness,
sentiment, and profanity. However, to identify and understand harassment more
accurately, it is essential to determine the context, i.e., the interrelated
conditions in which it occurs. In this paper, we introduce the notion of
contextual types of harassment, involving five categories: (i) sexual,
(ii) racial, (iii) appearance-related, (iv) intellectual and (v) political. We
utilize an annotated corpus from Twitter distinguishing these types of
harassment. To study the context of each type, which sheds light on its
linguistic meaning, interpretation, and distribution, we conduct two lines of
investigation: an extensive linguistic analysis, and a statistical analysis of
unigram distributions. We then build type-aware classifiers to automate the identification
of type-specific harassment. Our experiments demonstrate that these classifiers
provide competitive accuracy for identifying and analyzing harassment on social
media. We present extensive discussion and major observations about the
effectiveness of type-aware classifiers using a detailed comparison setup
providing insight into the role of type-dependent features.
| 2019 | Computation and Language |
Implicit Regularization of Stochastic Gradient Descent in Natural
Language Processing: Observations and Implications | Deep neural networks with remarkably strong generalization performances are
usually over-parameterized. Although explicit regularization strategies are
used by practitioners to avoid over-fitting, their impact is often small. Some
theoretical studies have analyzed the implicit regularization effect of
stochastic gradient descent (SGD) on simple machine learning models with
certain assumptions. However, how it behaves practically in state-of-the-art
models and real-world datasets is still unknown. To bridge this gap, we study
the role of SGD implicit regularization in deep learning systems. We show pure
SGD tends to converge to minima that have better generalization performance
in multiple natural language processing (NLP) tasks. This phenomenon coexists
with dropout, an explicit regularizer. In addition, a neural network's finite
learning capability does not impact the intrinsic nature of SGD's implicit
regularization effect. Specifically, under limited training samples or with
certain corrupted labels, the implicit regularization effect remains strong. We
further analyze the stability by varying the weight initialization range. We
corroborate these experimental findings with a decision boundary visualization
using a 3-layer neural network for interpretation. Altogether, our work enables
a deeper understanding of how implicit regularization affects deep learning
models and sheds light on future studies of over-parameterized models'
generalization ability.
| 2018 | Computation and Language |
Dialogue Natural Language Inference | Consistency is a long-standing issue faced by dialogue models. In this paper,
we frame the consistency of dialogue agents as natural language inference (NLI)
and create a new natural language inference dataset called Dialogue NLI. We
propose a method which demonstrates that a model trained on Dialogue NLI can be
used to improve the consistency of a dialogue model, and evaluate the method
with human evaluation and with automatic metrics on a suite of evaluation sets
designed to measure a dialogue model's consistency.
| 2019 | Computation and Language |
Meta-path Augmented Response Generation | We propose a chatbot, named Mocha, that makes good use of relevant entities
when generating responses. Augmented with meta-path information, Mocha is able
to mention appropriate entities that follow the conversation flow.
| 2018 | Computation and Language |
Sequence Generation with Guider Network | Sequence generation with reinforcement learning (RL) has received significant
attention recently. However, a challenge with such methods is the sparse-reward
problem in the RL training process, in which a scalar guiding signal is often
only available after an entire sequence has been generated. This type of sparse
reward tends to ignore the global structural information of a sequence, causing
generation of sequences that are semantically inconsistent. In this paper, we
present a model-based RL approach to overcome this issue. Specifically, we
propose a novel guider network to model the sequence-generation environment,
which can assist next-word prediction and provide intermediate rewards for
generator optimization. Extensive experiments show that the proposed method
leads to improved performance for both unconditional and conditional
sequence-generation tasks.
| 2018 | Computation and Language |
Training Neural Speech Recognition Systems with Synthetic Speech
Augmentation | Building an accurate automatic speech recognition (ASR) system requires a
large dataset that contains many hours of labeled speech samples produced by a
diverse set of speakers. The lack of such open free datasets is one of the main
issues preventing advancements in ASR research. To address this problem, we
propose to augment a natural speech dataset with synthetic speech. We train
very large end-to-end neural speech recognition models using the LibriSpeech
dataset augmented with synthetic speech. These new models achieve
state-of-the-art Word Error Rate (WER) for character-level models without an
external language model.
| 2018 | Computation and Language |
Semantically-Aligned Equation Generation for Solving and Reasoning Math
Word Problems | Solving math word problems is a challenging task that requires accurate
natural language understanding to bridge natural language texts and math
expressions. Motivated by the intuition about how humans generate equations
given problem texts, this paper presents a neural approach to automatically
solve math word problems by operating on symbols according to their semantic
meanings in the texts. This paper views the process of generating an equation as a
bridge between the semantic world and the symbolic world, where the proposed
neural math solver is based on an encoder-decoder framework. In the proposed
model, the encoder is designed to understand the semantics of problems, and the
decoder focuses on tracking semantic meanings of the generated symbols and then
deciding which symbol to generate next. Preliminary experiments are conducted
on the Math23K dataset, and our model significantly outperforms both the
state-of-the-art single model and the best non-retrieval-based model by about
10% accuracy, demonstrating the effectiveness of bridging the symbolic and
semantic worlds in math word problems.
| 2019 | Computation and Language |
Improving the Robustness of Speech Translation | Although neural machine translation (NMT) has achieved impressive progress
recently, it is usually trained on clean parallel data and hence cannot work
well when the input sentence is the output of an automatic speech recognition
(ASR) system, due to the numerous errors in the source. To solve
this problem, we propose a simple but effective method to improve the
robustness of NMT in the case of speech translation. We simulate the noise
present in realistic ASR output and inject it into the clean parallel data so
that NMT operates under similar word distributions during training and testing.
In addition, we incorporate a Chinese Pinyin feature, which is easy to obtain
in speech translation, to further improve the
translation performance. Experimental results show that our method has more
stable performance and outperforms the baseline by an average of 3.12 BLEU on
multiple noisy test sets, while also achieving a generalization improvement on
the WMT'17 Chinese-English test set.
| 2018 | Computation and Language |
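A minimal sketch of the noise-injection step described in the speech-translation entry above, assuming ASR-like errors are simulated with random deletions and homophone substitutions. The actual noise types, rates, homophone table, and Pinyin features used in the paper are not reproduced here.

```python
import random

def simulate_asr_noise(tokens, homophones, p_sub=0.1, p_del=0.05, seed=None):
    """Corrupt a clean source sentence with ASR-like errors: randomly substitute
    tokens with similar-sounding alternatives and randomly drop tokens.
    `homophones` maps a token to a list of plausible substitutions."""
    rng = random.Random(seed)
    noisy = []
    for tok in tokens:
        if rng.random() < p_del:
            continue                                  # simulated deletion error
        if rng.random() < p_sub and tok in homophones:
            tok = rng.choice(homophones[tok])         # simulated substitution error
        noisy.append(tok)
    return noisy

# toy usage with a hypothetical homophone table
table = {"their": ["there"], "to": ["two", "too"]}
print(simulate_asr_noise("they went to their house".split(), table, seed=0))
```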
An Empirical Exploration of Curriculum Learning for Neural Machine
Translation | Machine translation systems based on deep neural networks are expensive to
train. Curriculum learning aims to address this issue by choosing the order in
which samples are presented during training to help train better models faster.
We adopt a probabilistic view of curriculum learning, which lets us flexibly
evaluate the impact of curricula design, and perform an extensive exploration
on a German-English translation task. Results show that it is possible to
improve convergence time at no loss in translation quality. However, results
are highly sensitive to the choice of sample difficulty criteria, curriculum
schedule and other hyperparameters.
| 2018 | Computation and Language |
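A small sketch of the curriculum view described in the entry above, assuming a competence threshold that grows linearly over training and restricts sampling to sufficiently easy examples. The paper treats the difficulty criterion and schedule as hyperparameters to explore; the ones fixed here are only illustrative.

```python
import random

def curriculum_batch(examples, difficulties, step, total_steps, batch_size, rng=None):
    """Sample a training batch restricted to examples whose normalized difficulty
    is below the current 'competence', which grows linearly from 0.1 to 1.0."""
    rng = rng or random.Random(0)
    competence = min(1.0, 0.1 + 0.9 * step / total_steps)
    eligible = [ex for ex, d in zip(examples, difficulties) if d <= competence]
    return rng.sample(eligible, min(batch_size, len(eligible)))

# toy usage: difficulty = normalized sentence length (a common, simple criterion)
sents = ["a b", "a b c d", "a b c d e f g h"]
diffs = [len(s.split()) / 8 for s in sents]
print(curriculum_batch(sents, diffs, step=60, total_steps=100, batch_size=2))
```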
A Survey on Natural Language Processing for Fake News Detection | Fake news detection is a critical yet challenging problem in Natural Language
Processing (NLP). The rapid rise of social networking platforms has not only
yielded a vast increase in information accessibility but has also accelerated
the spread of fake news. Thus, the effect of fake news has been growing,
sometimes extending to the offline world and threatening public safety. Given
the massive amount of Web content, automatic fake news detection is a practical
NLP problem useful to all online content providers, in order to reduce the
human time and effort to detect and prevent the spread of fake news. In this
paper, we describe the challenges involved in fake news detection and also
describe related tasks. We systematically review and compare the task
formulations, datasets and NLP solutions that have been developed for this
task, and also discuss their potential and limitations. Based on our
insights, we outline promising research directions, including more
fine-grained, detailed, fair, and practical detection models. We also highlight
the difference between fake news detection and other related tasks, and the
importance of NLP solutions for fake news detection.
| 2020 | Computation and Language |
A Bayesian Approach for Sequence Tagging with Crowds | Current methods for sequence tagging, a core task in NLP, are data hungry,
which motivates the use of crowdsourcing as a cheap way to obtain labelled
data. However, annotators are often unreliable and current aggregation methods
cannot capture common types of span annotation errors. To address this, we
propose a Bayesian method for aggregating sequence tags that reduces errors by
modelling sequential dependencies between the annotations as well as the
ground-truth labels. By taking a Bayesian approach, we account for uncertainty
in the model due to both annotator errors and the lack of data for modelling
annotators who complete few tasks. We evaluate our model on crowdsourced data
for named entity recognition, information extraction and argument mining,
showing that our sequential model outperforms the previous state of the art. We
also find that our approach can reduce crowdsourcing costs through more
effective active learning, as it better captures uncertainty in the sequence
labels when there are few annotations.
| 2019 | Computation and Language |
Abstractive Summarization of Reddit Posts with Multi-level Memory
Networks | We address the problem of abstractive summarization in two directions:
proposing a novel dataset and a new model. First, we collect the Reddit TIFU
dataset, consisting of 120K posts from the online discussion forum Reddit. We
use such informal crowd-generated posts as the text source, in contrast with
existing datasets that mostly use formal documents, such as news articles, as
the source. Thus, our dataset suffers less from biases in which key sentences
are located at the beginning of the text and favorable summary candidates
already appear in the text in similar forms. Second, we propose a
novel abstractive summarization model named multi-level memory networks (MMN),
equipped with multi-level memory to store the information of text from
different levels of abstraction. With quantitative evaluation and user studies
via Amazon Mechanical Turk, we show the Reddit TIFU dataset is highly
abstractive and the MMN outperforms the state-of-the-art summarization models.
| 2019 | Computation and Language |
Adversarial Training of End-to-end Speech Recognition Using a
Criticizing Language Model | In this paper we propose a novel Adversarial Training (AT) approach for
end-to-end speech recognition using a Criticizing Language Model (CLM). In this
way the CLM and the automatic speech recognition (ASR) model can challenge and
learn from each other iteratively to improve the performance. Since the CLM
only takes the text as input, huge quantities of unpaired text data can be
utilized in this approach within end-to-end training. Moreover, AT can be
applied to any end-to-end ASR model using any deep-learning-based language
modeling framework, and is compatible with any existing end-to-end decoding
method. Initial results with an example experimental setup demonstrate that the
proposed approach is able to gain consistent improvements efficiently from
auxiliary text data under different scenarios.
| 2018 | Computation and Language |
Importance of Search and Evaluation Strategies in Neural Dialogue
Modeling | We investigate the impact of search strategies in neural dialogue modeling.
We first compare two standard search algorithms, greedy and beam search, as
well as our newly proposed iterative beam search which produces a more diverse
set of candidate responses. We evaluate these strategies in realistic full
conversations with humans and propose a model-based Bayesian calibration to
address annotator bias. These conversations are analyzed using two automatic
metrics: log-probabilities assigned by the model and utterance diversity. Our
experiments reveal that better search algorithms lead to higher rated
conversations. However, finding the optimal selection mechanism to choose from
a more diverse set of candidates is still an open question.
| 2019 | Computation and Language |
CommonsenseQA: A Question Answering Challenge Targeting Commonsense
Knowledge | When answering a question, people often draw upon their rich world knowledge
in addition to the particular context. Recent work has focused primarily on
answering questions given some relevant document or context, and required very
little general background. To investigate question answering with prior
knowledge, we present CommonsenseQA: a challenging new dataset for commonsense
question answering. To capture common sense beyond associations, we extract
from ConceptNet (Speer et al., 2017) multiple target concepts that have the
same semantic relation to a single source concept. Crowd-workers are asked to
author multiple-choice questions that mention the source concept and
discriminate in turn between each of the target concepts. This encourages
workers to create questions with complex semantics that often require prior
knowledge. We create 12,247 questions through this procedure and demonstrate
the difficulty of our task with a large number of strong baselines. Our best
baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy,
well below human performance, which is 89%.
| 2019 | Computation and Language |
Progress and Tradeoffs in Neural Language Models | In recent years, we have witnessed a dramatic shift towards techniques driven
by neural networks for a variety of NLP tasks. Undoubtedly, neural language
models (NLMs) have reduced perplexity by impressive amounts. This progress,
however, comes at a substantial cost in performance, in terms of inference
latency and energy consumption, which is particularly of concern in deployments
on mobile devices. This paper, which examines the quality-performance tradeoff
of various language modeling techniques, is to our knowledge the first to make
this observation. We compare state-of-the-art NLMs with "classic"
Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and
prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find
that orders-of-magnitude increases in latency and energy usage correspond to
comparatively small changes in perplexity, while the difference is much less
pronounced on a desktop.
| 2018 | Computation and Language |
Image Chat: Engaging Grounded Conversations | To achieve the long-term goal of machines being able to engage humans in
conversation, our models should captivate the interest of their speaking
partners. Communication grounded in images, whereby a dialogue is conducted
based on a given photo, is a setup naturally appealing to humans (Hu et al.,
2014). In this work we study large-scale architectures and datasets for this
goal. We test a set of neural architectures using state-of-the-art image and
text representations, considering various ways to fuse the components. To test
such models, we collect a dataset of grounded human-human conversations, where
speakers are asked to play roles given a provided emotional mood or style, as
the use of such traits is also a key factor in engagingness (Guo et al., 2019).
Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215
possible style traits. Automatic metrics and human evaluations of engagingness
show the efficacy of our approach; in particular, we obtain state-of-the-art
performance on the existing IGC task, and our best performing model is almost
on par with humans on the Image-Chat test set (preferred 47.7% of the time).
| 2020 | Computation and Language |
Improving the Coverage and the Generalization Ability of Neural Word
Sense Disambiguation through Hypernymy and Hyponymy Relationships | In Word Sense Disambiguation (WSD), the predominant approach generally
involves a supervised system trained on sense annotated corpora. The limited
quantity of such corpora however restricts the coverage and the performance of
these systems. In this article, we propose a new method that solves these
issues by taking advantage of the knowledge present in WordNet, and especially
the hypernymy and hyponymy relationships between synsets, in order to reduce
the number of different sense tags that are necessary to disambiguate all words
of the lexical database. Our method leads to state of the art results on most
WSD evaluation tasks, while improving the coverage of supervised systems,
reducing the training time and the size of the models, without additional
training data. In addition, we exhibit results that significantly outperform
the state of the art when our method is combined with an ensembling technique
and the WordNet Gloss Tagged corpus is added as training data.
| 2018 | Computation and Language |
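A minimal illustration of the sense-tag compression idea in the WSD entry above, using NLTK's WordNet interface to map a fine-grained synset onto a hypernym so that related senses can share a coarser tag. The paper's actual algorithm for deciding how far to climb is more involved; the synsets below are just examples (requires the NLTK WordNet data).

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def compress_sense(synset, levels_up=1):
    """Map a fine-grained synset to one of its hypernyms, so that several
    related senses can share a single, coarser sense tag."""
    current = synset
    for _ in range(levels_up):
        hypernyms = current.hypernyms()
        if not hypernyms:
            break
        current = hypernyms[0]            # follow the first hypernym path
    return current.name()

# toy usage: two unrelated senses of "mouse" keep distinct coarse tags,
# while near-duplicate fine-grained senses would collapse onto shared hypernyms
animal = wn.synset('mouse.n.01')          # the rodent sense
device = wn.synset('mouse.n.04')          # the pointing-device sense
print(compress_sense(animal), compress_sense(device))
```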
Neural Response Ranking for Social Conversation: A Data-Efficient
Approach | The overall objective of 'social' dialogue systems is to support engaging,
entertaining, and lengthy conversations on a wide variety of topics, including
social chit-chat. Apart from raw dialogue data, user-provided ratings are the
most common signal used to train such systems to produce engaging responses. In
this paper we show that social dialogue systems can be trained effectively from
raw unannotated data. Using a dataset of real conversations collected in the
2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good'
system responses to user utterances, i.e. responses which are likely to lead to
long and engaging conversations. We show that (1) our neural ranker
consistently outperforms several strong baselines when trained to optimise for
user ratings; (2) when trained on larger amounts of data and only using
conversation length as the objective, the ranker performs better than the one
trained using ratings -- ultimately reaching a Precision@1 of 0.87. This
advance will make data collection for social conversational agents simpler and
less expensive in the future.
| 2018 | Computation and Language |
Analysing Dropout and Compounding Errors in Neural Language Models | This paper carries out an empirical analysis of various dropout techniques
for language modelling, such as Bernoulli dropout, Gaussian dropout, Curriculum
Dropout, Variational Dropout and Concrete Dropout. Moreover, we propose an
extension of variational dropout to concrete dropout and curriculum dropout
with varying schedules. We find these extensions to perform well when compared
to standard dropout approaches, particularly variational curriculum dropout
with a linear schedule. The largest performance increases are obtained when
applying dropout to the decoder layer. Lastly, we analyze where most of the
errors occur at test time as a post-analysis step, to determine whether the
well-known problem of compounding errors is apparent and to what extent the
proposed methods mitigate
this issue for each dataset. We report results on a 2-hidden layer LSTM, GRU
and Highway network with embedding dropout, dropout on the gated hidden layers
and the output projection layer for each model. We report our results on
Penn-TreeBank and WikiText-2 word-level language modelling datasets, where the
former reduces the long-tail distribution through preprocessing while the
latter preserves rare words in the training and test sets.
| 2018 | Computation and Language |
On Evaluating the Generalization of LSTM Models in Formal Languages | Recurrent Neural Networks (RNNs) are theoretically Turing-complete and have
established themselves as a dominant model for language processing. Yet, there
remains uncertainty regarding their language learning capabilities. In
this paper, we empirically evaluate the inductive learning capabilities of Long
Short-Term Memory networks, a popular extension of simple RNNs, to learn simple
formal languages, in particular $a^nb^n$, $a^nb^nc^n$, and $a^nb^nc^nd^n$. We
investigate the influence of various aspects of learning, such as training data
regimes and model capacity, on the generalization to unobserved samples. We
find striking differences in model performances under different training
settings and highlight the need for careful analysis and assessment when making
claims about the learning capabilities of neural network models.
| 2018 | Computation and Language |
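For the formal-language entry above, a small sketch of the kind of data involved: generating strings of a^n b^n (and its k-symbol generalizations) and checking membership. The authors' exact training and test regimes are not reproduced here.

```python
def sample(n, symbols="ab"):
    """Return the string a^n b^n (or a^n b^n c^n, ... for more symbols)."""
    return "".join(sym * n for sym in symbols)

def in_language(s, symbols="ab"):
    """Check membership in a^n b^n (or its k-symbol generalizations)."""
    n, k = len(s), len(symbols)
    if n % k:
        return False
    block = n // k
    return s == "".join(sym * block for sym in symbols)

# toy usage: train on short strings, test generalization to longer, unseen ones
train = [sample(n) for n in range(1, 11)]          # a^1b^1 ... a^10b^10
test = [sample(n) for n in range(11, 16)]          # longer strings
print(train[2], test[0], in_language("aabb"), in_language("aab"))
```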
Simple Attention-Based Representation Learning for Ranking Short Social
Media Posts | This paper explores the problem of ranking short social media posts with
respect to user queries using neural networks. Instead of starting with a
complex architecture, we proceed from the bottom up and examine the
effectiveness of a simple, word-level Siamese architecture augmented with
attention-based mechanisms for capturing semantic "soft" matches between query
and post tokens. Extensive experiments on datasets from the TREC Microblog
Tracks show that our simple models not only achieve better effectiveness than
existing approaches that are far more complex or exploit a more diverse set of
relevance signals, but are also much faster. Implementations of our samCNN
(Simple Attention-based Matching CNN) models are shared with the community to
support future work.
| 2019 | Computation and Language |
Augmenting Compositional Models for Knowledge Base Completion Using
Gradient Representations | Neural models of Knowledge Base data have typically employed compositional
representations of graph objects: entity and relation embeddings are
systematically combined to evaluate the truth of a candidate Knowledge Base
entry. Using a model inspired by Harmonic Grammar, we propose to tokenize
triplet embeddings by subjecting them to a process of optimization with respect
to learned well-formedness conditions on Knowledge Base triplets. The resulting
model, known as Gradient Graphs, leads to sizable improvements when implemented
as a companion to compositional models. Also, we show that the
"supracompositional" triplet token embeddings it produces have interpretable
properties that prove helpful in performing inference on the resulting triplet
representations.
| 2019 | Computation and Language |
Augmenting Neural Response Generation with Context-Aware Topical
Attention | Sequence-to-Sequence (Seq2Seq) models have witnessed a notable success in
generating natural conversational exchanges. Notwithstanding the syntactically
well-formed responses generated by these neural network models, they are prone
to be acontextual, short and generic. In this work, we introduce a Topical
Hierarchical Recurrent Encoder Decoder (THRED), a novel, fully data-driven,
multi-turn response generation system intended to produce contextual and
topic-aware responses. Our model is built upon the basic Seq2Seq model by
augmenting it with a hierarchical joint attention mechanism that incorporates
topical concepts and previous interactions into the response generation. To
train our model, we provide a clean and high-quality conversational dataset
mined from Reddit comments. We evaluate THRED on two novel automated metrics,
dubbed Semantic Similarity and Response Echo Index, as well as with human
evaluation. Our experiments demonstrate that the proposed model is able to
generate more diverse and contextually relevant responses compared to the
strong baselines.
| 2019 | Computation and Language |
Neural Machine Translation into Language Varieties | Both research and commercial machine translation have so far neglected the
importance of properly handling the spelling, lexical and grammar divergences
occurring among language varieties. Notable cases are standard national
varieties such as Brazilian and European Portuguese, and Canadian and European
French, which popular online machine translation services are not keeping
distinct. We show that an evident side effect of modeling such varieties as
unique classes is the generation of inconsistent translations. In this work, we
investigate the problem of training neural machine translation from English to
specific pairs of language varieties, assuming both labeled and unlabeled
parallel texts, and low-resource conditions. We report experiments from English
to two pairs of dialects, European-Brazilian Portuguese and European-Canadian
French, and two pairs of standardized varieties, Croatian-Serbian and
Indonesian-Malay. We show significant BLEU score improvements over baseline
systems when translation into similar languages is learned as a multilingual
task with shared representations.
| 2018 | Computation and Language |
Sentence Encoders on STILTs: Supplementary Training on Intermediate
Labeled-data Tasks | Pretraining sentence encoders with language modeling and related unsupervised
tasks has recently been shown to be very effective for language understanding
tasks. By supplementing language model-style pretraining with further training
on data-rich supervised tasks, such as natural language inference, we obtain
additional performance improvements on the GLUE benchmark. Applying
supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of
81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over
BERT. We also observe reduced variance across random restarts in this setting.
Our approach yields similar improvements when applied to ELMo (Peters et al.,
2018a) and Radford et al. (2018)'s model. In addition, the benefits of
supplementary training are particularly pronounced in data-constrained regimes,
as we show in experiments with artificially limited training data.
| 2019 | Computation and Language |
Value-based Search in Execution Space for Mapping Instructions to
Programs | Training models to map natural language instructions to programs, given target
world supervision only, requires searching for good programs at training time.
Search is commonly done using beam search in the space of partial programs or
program trees, but as the length of the instructions grows finding a good
program becomes difficult. In this work, we propose a search algorithm that
uses the target world state, known at training time, to train a critic network
that predicts the expected reward of every search state. We then score search
states on the beam by interpolating their expected reward with the likelihood
of programs represented by the search state. Moreover, we search not in the
space of programs but in a more compressed space of program executions,
augmented with recent entities and actions. On the SCONE dataset, we show that
our algorithm dramatically improves performance on all three domains compared
to standard beam search and other baselines.
| 2019 | Computation and Language |
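A minimal sketch of the scoring rule described in the entry above: beam states are ranked by interpolating program log-likelihood with a critic's predicted expected reward for the execution state. The interpolation weight, the critic, and the execution-state representation below are placeholders, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class SearchState:
    log_prob: float        # log-likelihood of programs represented by the state
    exec_state: object     # compressed program-execution state (placeholder)

def score_states(states, critic, alpha=0.5):
    """Rank beam states by interpolating program likelihood with the critic's
    predicted expected reward for the execution state."""
    scored = [(alpha * s.log_prob + (1 - alpha) * critic(s.exec_state), s)
              for s in states]
    return [s for _, s in sorted(scored, key=lambda x: x[0], reverse=True)]

# toy usage with a hypothetical critic that prefers shorter execution traces
critic = lambda exec_state: -0.1 * len(exec_state)
beam = [SearchState(-2.0, "ABC"), SearchState(-3.0, "A")]
print([s.exec_state for s in score_states(beam, critic)])
```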
Prior Knowledge Integration for Neural Machine Translation using
Posterior Regularization | Although neural machine translation has made significant progress recently,
how to integrate multiple overlapping, arbitrary prior knowledge sources
remains a challenge. In this work, we propose to use posterior regularization
to provide a general framework for integrating prior knowledge into neural
machine translation. We represent prior knowledge sources as features in a
log-linear model, which guides the learning process of the neural translation
model. Experiments on Chinese-English translation show that our approach leads
to significant improvements.
| 2018 | Computation and Language |
Neural Task Representations as Weak Supervision for Model Agnostic
Cross-Lingual Transfer | Natural language processing is heavily Anglo-centric, while the demand for
models that work in languages other than English is greater than ever. Yet, the
task of transferring a model from one language to another can be expensive in
terms of annotation costs, engineering time and effort. In this paper, we
present a general framework for easily and effectively transferring neural
models from English to other languages. The framework, which relies on task
representations as a form of weak supervision, is model and task agnostic,
meaning that many existing neural architectures can be ported to other
languages with minimal effort. The only requirement is unlabeled parallel data,
and a loss defined over task representations. We evaluate our framework by
transferring an English sentiment classifier to three different languages. On a
battery of tests, we show that our models outperform a number of strong
baselines and rival state-of-the-art results, which rely on more complex
approaches and significantly more resources and data. Additionally, we find
that the framework proposed in this paper is able to capture semantically rich
and meaningful representations across languages, despite the lack of direct
supervision.
| 2018 | Computation and Language |
Bi-Directional Differentiable Input Reconstruction for Low-Resource
Neural Machine Translation | We aim to better exploit the limited amounts of parallel text available in
low-resource settings by introducing a differentiable reconstruction loss for
neural machine translation (NMT). This loss compares original inputs to
reconstructed inputs, obtained by back-translating translation hypotheses into
the input language. We leverage differentiable sampling and bi-directional NMT
to train models end-to-end, without introducing additional parameters. This
approach achieves small but consistent BLEU improvements on four language pairs
in both translation directions, and outperforms an alternative differentiable
reconstruction strategy based on hidden states.
| 2019 | Computation and Language |
Unsupervised Hyperalignment for Multilingual Word Embeddings | We consider the problem of aligning continuous word representations, learned
in multiple languages, to a common space. It was recently shown that, in the
case of two languages, it is possible to learn such a mapping without
supervision. This paper extends this line of work to the problem of aligning
multiple languages to a common space. A solution is to independently map all
languages to a pivot language. Unfortunately, this degrades the quality of
indirect word translation. We thus propose a novel formulation that ensures
composable mappings, leading to better alignments. We evaluate our method by
jointly aligning word vectors in eleven languages, showing consistent
improvement with indirect mappings while maintaining competitive performance on
direct word translation.
| 2019 | Computation and Language |
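The multilingual-alignment entry above builds on mapping embedding spaces onto one another. As background, the sketch below shows the standard orthogonal Procrustes step such methods typically use to map one language's vectors onto another's (or onto a pivot), given paired vectors; the paper's unsupervised, composable multilingual formulation is not reproduced here.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal matrix W minimizing ||X W - Y||_F for paired embeddings
    X, Y of shape (n_pairs, dim): the basic step for mapping one language's
    word vectors onto another's (or onto a shared/pivot space)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# toy usage: recover a known rotation from paired vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
theta = np.deg2rad(30.0)
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
Y = X @ R
W = procrustes(X, Y)
print(np.allclose(X @ W, Y))   # expected: True
```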
Exploiting Explicit Paths for Multi-hop Reading Comprehension | We propose a novel, path-based reasoning approach for the multi-hop reading
comprehension task where a system needs to combine facts from multiple passages
to answer a question. Although inspired by multi-hop reasoning over knowledge
graphs, our proposed approach operates directly over unstructured text. It
generates potential paths through passages and scores them without any direct
path supervision. The proposed model, named PathNet, attempts to extract
implicit relations from text through entity pair representations, and compose
them to encode each path. To capture additional context, PathNet also composes
the passage representations along each path to compute a passage-based
representation. Unlike previous approaches, our model is then able to explain
its reasoning via these explicit paths through the passages. We show that our
approach outperforms prior models on the multi-hop WikiHop dataset, and can
also be generalized to the OpenBookQA dataset, matching
state-of-the-art performance.
| 2019 | Computation and Language |
Content preserving text generation with attribute controls | In this work, we address the problem of modifying textual attributes of
sentences. Given an input sentence and a set of attribute labels, we attempt to
generate sentences that are compatible with the conditioning information. To
ensure that the model generates content compatible sentences, we introduce a
reconstruction loss which interpolates between auto-encoding and
back-translation loss components. We propose an adversarial loss to enforce
generated samples to be attribute compatible and realistic. Through
quantitative, qualitative and human evaluations we demonstrate that our model
is capable of generating fluent sentences that better reflect the conditioning
information compared to prior methods. We further demonstrate that the model is
capable of simultaneously controlling multiple attributes.
| 2018 | Computation and Language |
Margin-based Parallel Corpus Mining with Multilingual Sentence
Embeddings | Machine translation is highly sensitive to the size and quality of the
training data, which has led to an increasing interest in collecting and
filtering large parallel corpora. In this paper, we propose a new method for
this task based on multilingual sentence embeddings. In contrast to previous
approaches, which rely on nearest neighbor retrieval with a hard threshold over
cosine similarity, our proposed method accounts for the scale inconsistencies
of this measure, considering the margin between a given sentence pair and its
closest candidates instead. Our experiments show large improvements over
existing methods. We outperform the best published results on the BUCC mining
task and the UN reconstruction task by more than 10 F1 and 30 precision points,
respectively. Filtering the English-German ParaCrawl corpus with our approach,
we obtain 31.2 BLEU points on newstest2014, an improvement of more than one
point over the best official filtered version.
| 2021 | Computation and Language |
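A minimal NumPy sketch of a ratio-margin score of the kind described in the entry above: each candidate pair's cosine similarity is normalized by the average similarity of the two sentences to their k nearest neighbors in the other language. The embedding model, neighborhood size, and exact margin variant used in the paper are assumptions here.

```python
import numpy as np

def margin_scores(src_emb, tgt_emb, k=4):
    """Ratio-margin score for every source/target sentence embedding pair:
    cos(x, y) divided by the mean cosine similarity of x and y to their
    k nearest neighbors in the other language. Embeddings: (n, dim)."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                                     # cosine similarities
    # mean similarity to the k nearest neighbors, per row (source) and column (target)
    knn_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # (n_src,)
    knn_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # (n_tgt,)
    denom = (knn_src[:, None] + knn_tgt[None, :]) / 2.0
    return sims / denom

# toy usage: mine the best target candidate for each source sentence
rng = np.random.default_rng(0)
scores = margin_scores(rng.normal(size=(5, 16)), rng.normal(size=(6, 16)), k=3)
print(scores.shape, scores.argmax(axis=1))
```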
Transfer Learning in Multilingual Neural Machine Translation with
Dynamic Vocabulary | We propose a method to transfer knowledge across neural machine translation
(NMT) models by means of a shared dynamic vocabulary. Our approach allows an
initial model for a given language pair to be extended to cover new languages
by adapting its vocabulary as new data become available (i.e., introducing
new vocabulary items if they are not included in the initial model). The
parameter transfer mechanism is evaluated in two scenarios: i) adapting a
trained single-language-pair NMT system to work with a new language pair and
ii) continuously adding new language pairs to grow into a multilingual NMT
system. In both scenarios our goal is to improve the translation performance, while
minimizing the training convergence time. Preliminary experiments spanning five
languages with different training data sizes (i.e., 5k and 50k parallel
sentences) show a significant performance gain ranging from +3.85 up to +13.63
BLEU in different language directions. Moreover, compared with training an NMT
model from scratch, our transfer-learning approach allows us to reach higher
performance after at most 4% of the total training steps.
| 2018 | Computation and Language |
Identifying and Controlling Important Neurons in Neural Machine
Translation | Neural machine translation (NMT) models learn representations containing
substantial linguistic information. However, it is not clear if such
information is fully distributed or if some of it can be attributed to
individual neurons. We develop unsupervised methods for discovering important
neurons in NMT models. Our methods rely on the intuition that different models
learn similar properties, and do not require any costly external supervision.
We show experimentally that translation quality depends on the discovered
neurons, and find that many of them capture common linguistic phenomena.
Finally, we show how to control NMT translations in predictable ways, by
modifying activations of individual neurons.
| 2018 | Computation and Language |
Unsupervised Identification of Study Descriptors in Toxicology Research:
An Experimental Study | Identifying and extracting data elements such as study descriptors in
publication full texts is a critical yet manual and labor-intensive step
required in a number of tasks. In this paper we address the question of
identifying data elements in an unsupervised manner. Specifically, provided a
set of criteria describing specific study parameters, such as species, route of
administration, and dosing regimen, we develop an unsupervised approach to
identify text segments (sentences) relevant to the criteria. A binary
classifier trained to identify publications that met the criteria performs
better when trained on the candidate sentences than when trained on sentences
randomly picked from the text, supporting the intuition that our method is able
to accurately identify study descriptors.
| 2018 | Computation and Language |
Relation Mention Extraction from Noisy Data with Hierarchical
Reinforcement Learning | In this paper we address a task of relation mention extraction from noisy
data: extracting representative phrases for a particular relation from noisy
sentences that are collected via distant supervision. Despite its significance
and value in many downstream applications, this task is less studied on noisy
data. The major challenges lie in 1) the lack of annotations for mention
phrases and, more severely, 2) handling noisy sentences which do not express a
relation at all. To address the two challenges, we formulate the task as a
semi-Markov decision process and propose a novel hierarchical reinforcement
learning model. Our model consists of a top-level sentence selector to remove
noisy sentences, a low-level mention extractor to extract relation mentions,
and a reward estimator to provide signals to guide data denoising and mention
extraction without explicit annotations. Experimental results show that our
model is effective to extract relation mentions from noisy data.
| 2018 | Computation and Language |
Wizard of Wikipedia: Knowledge-Powered Conversational agents | In open-domain dialogue, intelligent agents should exhibit the use of
knowledge; however, there are few convincing demonstrations of this to date. The
most popular sequence to sequence models typically "generate and hope" generic
utterances that can be memorized in the weights of the model when mapping from
input utterance(s) to output, rather than employing recalled knowledge as
context. Use of knowledge has so far proved difficult, in part because of the
lack of a supervised learning benchmark task which exhibits knowledgeable open
dialogue with clear grounding. To that end we collect and release a large
dataset with conversations directly grounded with knowledge retrieved from
Wikipedia. We then design architectures capable of retrieving knowledge,
reading and conditioning on it, and finally generating natural responses. Our
best performing dialogue models are able to conduct knowledgeable discussions
on open-domain topics as evaluated by automatic metrics and human evaluations,
while our new benchmark allows for measuring further improvements in this
important research direction.
| 2,019 | Computation and Language |
Challenges in detecting evolutionary forces in language change using
diachronic corpora | Newberry et al. (Detecting evolutionary forces in language change, Nature
551, 2017) tackle an important but difficult problem in linguistics, the
testing of selective theories of language change against a null model of drift.
Having applied a test from population genetics (the Frequency Increment Test)
to a number of relevant examples, they suggest stochasticity has a previously
under-appreciated role in language evolution. We replicate their results and
find that while the overall observation holds, results produced by this
approach on individual time series can be sensitive to how the corpus is
organized into temporal segments (binning). Furthermore, we use a large set of
simulations in conjunction with binning to systematically explore the range of
applicability of the Frequency Increment Test. We conclude that care should be
exercised with interpreting results of tests like the Frequency Increment Test
on individual series, given the researcher degrees of freedom available when
applying the test to corpus data, and fundamental differences between genetic
and linguistic data. Our findings have implications for selection testing and
temporal binning in general, as well as demonstrating the usefulness of
simulations for evaluating methods newly introduced to the field.
| 2,020 | Computation and Language |
SimplerVoice: A Key Message & Visual Description Generator System for
Illiteracy | We introduce SimplerVoice: a key message and visual description generator
system to help low-literate adults navigate the information-dense world with
confidence, on their own. SimplerVoice can automatically generate sensible
sentences describing an unknown object, extract semantic meanings of the object
usage in the form of a query string, and then represent the string as multiple
types of visual guidance (pictures, pictographs, etc.). We demonstrate
the SimplerVoice system in a case study of generating grocery products' manuals
through a mobile application. To evaluate, we conducted a user study on
SimplerVoice's generated description in comparison to the information
interpreted by users from other methods: the original product package and
search engines' top result, in which SimplerVoice achieved the highest
performance score: 4.82 on a 5-point mean opinion score scale. Our results show
that SimplerVoice is able to provide low-literate end-users with simple yet
informative components to help them understand how to use the grocery products,
and that the system may potentially provide benefits in other real-world use
cases.
| 2,018 | Computation and Language |
ColNet: Embedding the Semantics of Web Tables for Column Type Prediction | Automatically annotating column types with knowledge base (KB) concepts is a
critical task to gain a basic understanding of web tables. Current methods rely
on either table metadata such as column names or entity correspondences of cells
in the KB, and may fail to deal with the growing number of web tables with
incomplete metadata. In this paper we propose a neural network based column type
annotation framework named ColNet which is able to integrate KB reasoning and
lookup with machine learning and can automatically train Convolutional Neural
Networks for prediction. The prediction model not only considers the contextual
semantics within a cell using word representation, but also embeds the
semantics of a column by learning locality features from multiple cells. The
method is evaluated with DBPedia and two different web table datasets, T2Dv2
from the general Web and Limaye from Wikipedia pages, and achieves higher
performance than the state-of-the-art approaches.
| 2,018 | Computation and Language |
Towards Unsupervised Speech-to-Text Translation | We present a framework for building speech-to-text translation (ST) systems
using only monolingual speech and text corpora, in other words, speech
utterances from a source language and independent text from a target language.
As opposed to traditional cascaded systems and end-to-end architectures, our
system does not require any labeled data (i.e., transcribed source audio or
parallel source and target text corpora) during training, making it especially
applicable to language pairs with very few or even zero bilingual resources.
The framework initializes the ST system with a cross-modal bilingual dictionary
inferred from the monolingual corpora, which maps every source speech segment
corresponding to a spoken word to its target text translation. For unseen
source speech utterances, the system first performs word-by-word translation on
each speech segment in the utterance. The translation is improved by leveraging
a language model and a sequence denoising autoencoder to provide prior
knowledge about the target language. Experimental results show that our
unsupervised system achieves comparable BLEU scores to supervised end-to-end
models despite the lack of supervision. We also provide an ablation analysis to
examine the utility of each component in our system.
| 2,018 | Computation and Language |
Elastic CRFs for Open-ontology Slot Filling | Slot filling, a crucial component in task-oriented dialog systems, is the task of
parsing (user) utterances into semantic concepts called slots. An ontology is
defined by the collection of slots and the values that each slot can take. The
widely-used practice of treating slot filling as a sequence labeling task
suffers from two drawbacks. First, the ontology is usually pre-defined and
fixed. Most current methods are unable to predict new labels for unseen slots.
Second, the one-hot encoding of slot labels ignores the semantic meanings and
relations for slots, which are implicit in their natural language descriptions.
These observations motivate us to propose a novel model called elastic
conditional random field (eCRF), for open-ontology slot filling. eCRFs can
leverage the neural features of both the utterance and the slot descriptions,
and are able to model the interactions between different slots. Experimental
results show that eCRFs outperform existing models on both the in-domain and
the cross-domain tasks, especially in predictions of unseen slots and values.
| 2,021 | Computation and Language |
Semi-Supervised Confidence Network aided Gated Attention based Recurrent
Neural Network for Clickbait Detection | Clickbaits are catchy headlines that are frequently used by social media
outlets to lure their viewers into clicking them, thus leading them
to dubious content. Such venal schemes thrive on exploiting the curiosity of
naive social media users, directing traffic to web pages that would not otherwise
be visited. In this paper, we propose a novel, semi-supervised classification
based approach, that employs attentions sampled from a Gumbel-Softmax
distribution to distill contexts that are fairly important in clickbait
detection. An additional loss over the attention weights is used to encode
prior knowledge. Furthermore, we propose a confidence network that enables
learning over weak labels and improves robustness to noisy labels. We show that
with merely 30% of the strongly labeled samples we can achieve over 97% of the
accuracy of current state-of-the-art methods in clickbait detection.
| 2,018 | Computation and Language |
Improving Zero-Shot Translation of Low-Resource Languages | Recent work on multilingual neural machine translation reported competitive
performance with respect to bilingual models and surprisingly good performance
even on (zero-shot) translation directions not observed at training time. Here we
investigate zero-shot translation in a particularly low-resource
multilingual setting. We propose a simple iterative training procedure that
leverages a duality of translations directly generated by the system for the
zero-shot directions. The translations produced by the system (sub-optimal
since they contain mixed language from the shared vocabulary), are then used
together with the original parallel data to feed and iteratively re-train the
multilingual network. Over time, this allows the system to learn from its own
generated and increasingly better output. Our approach proves effective in
improving the two zero-shot directions of our multilingual model. In
particular, we observed gains of about 9 BLEU points over a baseline
multilingual model and up to 2.08 BLEU over a pivoting mechanism using two
bilingual models. Further analysis shows that there is also a slight
improvement in the non-zero-shot language directions.
| 2,018 | Computation and Language |
Medical code prediction with multi-view convolution and
description-regularized label-dependent attention | A ubiquitous task in processing electronic medical data is the assignment of
standardized codes representing diagnoses and/or procedures to free-text
documents such as medical reports. This is a difficult natural language
processing task that requires parsing long, heterogeneous documents and
selecting a set of appropriate codes from tens of thousands of
possibilities---many of which have very few positive training samples. We
present a deep learning system that advances the state of the art for the
MIMIC-III dataset, achieving a new best micro F1-measure of 55.85\%,
significantly outperforming the previous best result (Mullenbach et al. 2018).
We achieve this through a number of enhancements, including two major novel
contributions: multi-view convolutional channels, which effectively learn to
adjust kernel sizes throughout the input; and attention regularization,
mediated by natural-language code descriptions, which helps overcome sparsity
for thousands of uncommon codes. These and other modifications are selected to
address difficulties inherent to both automated coding specifically and deep
learning generally. Finally, we investigate our accuracy results in detail to
individually measure the impact of these contributions and point the way
towards future algorithmic improvements.
| 2,018 | Computation and Language |
Cycle-consistency training for end-to-end speech recognition | This paper presents a method to train end-to-end automatic speech recognition
(ASR) models using unpaired data. Although the end-to-end approach can
eliminate the need for expert knowledge such as pronunciation dictionaries to
build ASR systems, it still requires a large amount of paired data, i.e.,
speech utterances and their transcriptions. Cycle-consistency losses have been
recently proposed as a way to mitigate the problem of limited paired data.
These approaches compose a reverse operation with a given transformation, e.g.,
text-to-speech (TTS) with ASR, to build a loss that only requires unsupervised
data, speech in this example. Applying cycle consistency to ASR models is not
trivial since fundamental information, such as speaker traits, is lost in the
intermediate text bottleneck. To solve this problem, this work presents a loss
that is based on the speech encoder state sequence instead of the raw speech
signal. This is achieved by training a Text-To-Encoder model and defining a
loss based on the encoder reconstruction error. Experimental results on the
LibriSpeech corpus show that the proposed cycle-consistency training reduced
the word error rate by 14.7% from an initial model trained with 100-hour paired
data, using an additional 360 hours of audio data without transcriptions. We
also investigate the use of text-only data mainly for language modeling to
further improve the performance in the unpaired data training scenario.
| 2,019 | Computation and Language |
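As a rough illustration of the encoder-level cycle-consistency idea in the abstract above, the sketch below compares the speech encoder's state sequence with the states reconstructed by a text-to-encoder (TTE) model from the ASR hypothesis. The three model callables and the naive length alignment are placeholders of my own, not the authors' implementation.

```python
import numpy as np

def cycle_consistency_loss(speech_encoder, asr_decoder, text_to_encoder, speech):
    """Unpaired loss sketch: only the speech input is needed, no transcription."""
    enc_states = np.asarray(speech_encoder(speech))        # (T, D) encoder states
    hypothesis = asr_decoder(enc_states)                    # decoded token sequence
    tte_states = np.asarray(text_to_encoder(hypothesis))    # (T', D) reconstructed states
    # naive truncation alignment; the paper presumably aligns sequences more carefully
    T = min(len(enc_states), len(tte_states))
    return float(np.mean((enc_states[:T] - tte_states[:T]) ** 2))
```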
Learning to Explicitate Connectives with Seq2Seq Network for Implicit
Discourse Relation Classification | Implicit discourse relation classification is one of the most difficult steps
in discourse parsing. The difficulty stems from the fact that the coherence
relation must be inferred based on the content of the discourse relational
arguments. Therefore, an effective encoding of the relational arguments is of
crucial importance. We here propose a new model for implicit discourse relation
classification, which consists of a classifier, and a sequence-to-sequence
model which is trained to generate a representation of the discourse relational
arguments by trying to predict the relational arguments including a suitable
implicit connective. Training is possible because such implicit connectives
have been annotated as part of the PDTB corpus. Together with a memory network,
our model can generate more refined representations for the task. On the
now-standard 11-way classification, our method outperforms previous
state-of-the-art systems on the PDTB benchmark across multiple settings,
including cross validation.
| 2,019 | Computation and Language |
Weakly Supervised Grammatical Error Correction using Iterative Decoding | We describe an approach to Grammatical Error Correction (GEC) that is
effective at making use of models trained on large amounts of weakly supervised
bitext. We train the Transformer sequence-to-sequence model on 4B tokens of
Wikipedia revisions and employ an iterative decoding strategy that is tailored
to the loosely-supervised nature of the Wikipedia training corpus. Finetuning
on the Lang-8 corpus and ensembling yields an F0.5 of 58.3 on the CoNLL'14
benchmark and a GLEU of 62.4 on JFLEG. The combination of weakly supervised
training and iterative decoding obtains an F0.5 of 48.2 on CoNLL'14 even
without using any labeled GEC data.
| 2,018 | Computation and Language |
Word Mover's Embedding: From Word2Vec to Document Embedding | While the celebrated Word2Vec technique yields semantically rich
representations for individual words, there has been relatively less success in
extending it to generate unsupervised sentence or document embeddings. Recent
work has demonstrated that a distance measure between documents called
\emph{Word Mover's Distance} (WMD) that aligns semantically similar words,
yields unprecedented KNN classification accuracy. However, WMD is expensive to
compute, and it is hard to extend its use beyond a KNN classifier. In this
paper, we propose the \emph{Word Mover's Embedding} (WME), a novel approach to
building an unsupervised document (sentence) embedding from pre-trained word
embeddings. In our experiments on 9 benchmark text classification datasets and
22 textual similarity tasks, the proposed technique consistently matches or
outperforms state-of-the-art techniques, with significantly higher accuracy on
problems of short length.
| 2,018 | Computation and Language |
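To make the distance underlying this abstract concrete, here is a minimal sketch of a *relaxed* Word Mover's Distance (each word is matched to its nearest counterpart instead of solving the full optimal-transport problem) and a WME-style feature map built from distances to reference documents. The uniform word weights, the exponential feature transform, and the parameter gamma are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

def relaxed_wmd(doc_a, doc_b, embeddings):
    """Relaxed WMD between two token lists, given a dict of word -> vector."""
    A = np.stack([embeddings[w] for w in doc_a])
    B = np.stack([embeddings[w] for w in doc_b])
    dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise costs
    return 0.5 * (dist.min(axis=1).mean() + dist.min(axis=0).mean())

def wme_features(doc, reference_docs, embeddings, gamma=1.0):
    """WME-flavoured document embedding: one feature per reference document."""
    return np.array([np.exp(-gamma * relaxed_wmd(doc, r, embeddings))
                     for r in reference_docs])
```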
AttentionXML: Label Tree-based Attention-Aware Deep Model for
High-Performance Extreme Multi-Label Text Classification | Extreme multi-label text classification (XMTC) is an important problem in the
era of big data, for tagging a given text with the most relevant multiple
labels from an extremely large-scale label set. XMTC can be found in many
applications, such as item categorization, web page tagging, and news
annotation. Traditionally most methods used bag-of-words (BOW) as inputs,
ignoring word context as well as deep semantic information. Recent attempts to
overcome the problems of BOW by deep learning still suffer from 1) failing to
capture the important subtext for each label and 2) lack of scalability against
the huge number of labels. We propose a new label tree-based deep learning
model for XMTC, called AttentionXML, with two unique features: 1) a multi-label
attention mechanism with raw text as input, which allows it to capture the most
relevant part of the text for each label; and 2) a shallow and wide probabilistic
label tree (PLT), which allows it to handle millions of labels, especially for
"tail labels". We empirically compared the performance of AttentionXML with
those of eight state-of-the-art methods over six benchmark datasets, including
Amazon-3M with around 3 million labels. AttentionXML outperformed all competing
methods under all experimental settings. Experimental results also show that
AttentionXML achieved the best performance against tail labels among label
tree-based methods. The code and datasets are available at
http://github.com/yourh/AttentionXML .
| 2,019 | Computation and Language |
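A compact sketch of the multi-label attention component described in the abstract above: each label attends over the token representations and receives its own document vector and score. The shapes, scoring head, and parameter names are illustrative assumptions, and the probabilistic label tree machinery for millions of labels is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_label_attention(token_states, label_embeddings, w_out, b_out):
    """token_states: (T, D); label_embeddings, w_out: (L, D); b_out: (L,)."""
    scores = label_embeddings @ token_states.T      # (L, T) label-token relevance
    attn = softmax(scores, axis=-1)                 # attention weights per label
    label_docs = attn @ token_states                # (L, D) label-specific doc vectors
    logits = (label_docs * w_out).sum(axis=-1) + b_out
    return 1.0 / (1.0 + np.exp(-logits))            # per-label probabilities
```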
Transductive Learning with String Kernels for Cross-Domain Text
Classification | For many text classification tasks, there is a major problem posed by the
lack of labeled data in a target domain. Although classifiers for a target
domain can be trained on labeled text data from a related source domain, the
accuracy of such classifiers is usually lower in the cross-domain setting.
Recently, string kernels have obtained state-of-the-art results in various text
classification tasks such as native language identification or automatic essay
scoring. Moreover, classifiers based on string kernels have been found to be
robust to the distribution gap between different domains. In this paper, we
formally describe an algorithm composed of two simple yet effective
transductive learning approaches to further improve the results of string
kernels in cross-domain settings. By adapting string kernels to the test set
without using the ground-truth test labels, we report significantly better
accuracy rates in cross-domain English polarity classification.
| 2,018 | Computation and Language |
The Knowref Coreference Corpus: Removing Gender and Number Cues for
Difficult Pronominal Anaphora Resolution | We introduce a new benchmark for coreference resolution and NLI, Knowref,
that targets common-sense understanding and world knowledge. Previous
coreference resolution tasks can largely be solved by exploiting the number and
gender of the antecedents, or have been handcrafted and do not reflect the
diversity of naturally occurring text. We present a corpus of over 8,000
annotated text passages with ambiguous pronominal anaphora. These instances are
both challenging and realistic. We show that various coreference systems,
whether rule-based, feature-rich, or neural, perform significantly worse on the
task than humans, who display high inter-annotator agreement. To explain this
performance gap, we show empirically that state-of-the-art models often fail to
capture context, instead relying on the gender or number of candidate
antecedents to make a decision. We then use problem-specific insights to
propose a data-augmentation trick called antecedent switching to alleviate this
tendency in models. Finally, we show that antecedent switching yields promising
results on other tasks as well: we use it to achieve state-of-the-art results
on the GAP coreference task.
| 2,019 | Computation and Language |
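The antecedent-switching augmentation mentioned in this abstract amounts to swapping the two candidate antecedents while leaving the rest of the passage (and the pronoun) untouched, so that models keying on lexical identity rather than context are penalised. The string-level sketch below is a naive illustration of that idea; a real implementation would operate on tokens and handle overlapping mentions.

```python
def antecedent_switch(text, antecedent_a, antecedent_b):
    """Swap all occurrences of the two candidate antecedents in the passage."""
    placeholder = "\u0000"                  # temporary marker unlikely to occur in text
    swapped = text.replace(antecedent_a, placeholder)
    swapped = swapped.replace(antecedent_b, antecedent_a)
    return swapped.replace(placeholder, antecedent_b)

# Example: antecedent_switch("John called Mark because he needed help.", "John", "Mark")
# -> "Mark called John because he needed help."
```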
A human-editable Sign Language representation for software editing---and
a writing system? | To equip SL with software properly, we need an input system to represent and
manipulate signed contents in the same way that everyday software allows us to
process written text. Refuting the claim that video is a good enough medium to
serve the purpose, we propose to build a representation that is: editable,
queryable, synthesisable and user-friendly---we define those terms upfront. The
issue being functionally and conceptually linked to that of writing, we study
existing writing systems, namely those in use for vocal languages, those
designed and proposed for SLs, and more spontaneous ways in which SL users put
their language in writing. Observing each paradigm in turn, we move on to
propose a new approach to satisfy our goals of integration in software. We
finally open the prospect of our proposition being used outside of this
restricted scope, as a writing system in itself, and compare its properties to
the other writing systems presented.
| 2,018 | Computation and Language |
Do RNNs learn human-like abstract word order preferences? | RNN language models have achieved state-of-the-art results on various tasks,
but what exactly they are representing about syntax is as yet unclear. Here we
investigate whether RNN language models learn human-like word order preferences
in syntactic alternations. We collect language model surprisal scores for
controlled sentence stimuli exhibiting major syntactic alternations in English:
heavy NP shift, particle shift, the dative alternation, and the genitive
alternation. We show that RNN language models reproduce human preferences in
these alternations based on NP length, animacy, and definiteness. We collect
human acceptability ratings for our stimuli, in the first acceptability
judgment experiment directly manipulating the predictors of syntactic
alternations. We show that the RNNs' performance is similar to the human
acceptability ratings and is not matched by an n-gram baseline model. Our
results show that RNNs learn the abstract features of weight, animacy, and
definiteness which underlie soft constraints on syntactic alternations.
| 2,018 | Computation and Language |
Evolutionary Data Measures: Understanding the Difficulty of Text
Classification Tasks | Classification tasks are usually analysed and improved through new model
architectures or hyperparameter optimisation but the underlying properties of
datasets are discovered on an ad-hoc basis as errors occur. However,
understanding the properties of the data is crucial in perfecting models. In
this paper we analyse exactly which characteristics of a dataset best determine
how difficult that dataset is for the task of text classification. We then
propose an intuitive measure of difficulty for text classification datasets
which is simple and fast to calculate. We show that this measure generalises to
unseen data by comparing it to state-of-the-art datasets and results. This
measure can be used to analyse the precise source of errors in a dataset and
allows fast estimation of how difficult a dataset is to learn. We searched for
this measure by training 12 classical and neural network based models on 78
real-world datasets, then used a genetic algorithm to discover the best measure
of difficulty. Our difficulty-calculating code ( https://github.com/Wluper/edm
) and datasets ( http://data.wluper.com ) are publicly available.
| 2,018 | Computation and Language |
A personal model of trumpery: Deception detection in a real-world
high-stakes setting | Language use reveals information about who we are and how we feel [1-3]. One of
the pioneers in text analysis, Walter Weintraub, manually counted which types
of words people used in medical interviews and showed that the frequency of
first-person singular pronouns (i.e., I, me, my) was a reliable indicator of
depression, with depressed people using I more often than people who are not
depressed [4]. Several studies have demonstrated that language use also differs
between truthful and deceptive statements [5-7], but not all differences are
consistent across people and contexts, making prediction difficult [8]. Here we
show how well linguistic deception detection performs at the individual level
by developing a model tailored to a single individual: the current US
president. Using tweets fact-checked by an independent third party (Washington
Post), we found substantial linguistic differences between factually correct
and incorrect tweets and developed a quantitative model based on these
differences. Next, we predicted whether out-of-sample tweets were either
factually correct or incorrect and achieved a 73% overall accuracy. Our results
demonstrate the power of linguistic analysis in real-world deception research
when applied at the individual level and provide evidence that factually
incorrect tweets are not random mistakes of the sender.
| 2,018 | Computation and Language |
Compact Personalized Models for Neural Machine Translation | We propose and compare methods for gradient-based domain adaptation of
self-attentive neural machine translation models. We demonstrate that a large
proportion of model parameters can be frozen during adaptation with minimal or
no reduction in translation quality by encouraging structured sparsity in the
set of offset tensors during learning via group lasso regularization. We
evaluate this technique for both batch and incremental adaptation across
multiple data sets and language pairs. Our system architecture - combining a
state-of-the-art self-attentive model with compact domain adaptation - provides
high quality personalized machine translation that is both space and time
efficient.
| 2,018 | Computation and Language |
Leveraging Weakly Supervised Data to Improve End-to-End Speech-to-Text
Translation | End-to-end Speech Translation (ST) models have many potential advantages when
compared to the cascade of Automatic Speech Recognition (ASR) and text Machine
Translation (MT) models, including lowered inference latency and the avoidance
of error compounding. However, the quality of end-to-end ST is often limited by
a paucity of training data, since it is difficult to collect large parallel
corpora of speech and translated transcript pairs. Previous studies have
proposed the use of pre-trained components and multi-task learning in order to
benefit from weakly supervised training data, such as speech-to-transcript or
text-to-foreign-text pairs. In this paper, we demonstrate that using
pre-trained MT or text-to-speech (TTS) synthesis models to convert weakly
supervised data into speech-to-translation pairs for ST training can be more
effective than multi-task learning. Furthermore, we demonstrate that a high
quality end-to-end ST model can be trained using only weakly supervised
datasets, and that synthetic data sourced from unlabeled monolingual text or
speech can be used to improve performance. Finally, we discuss methods for
avoiding overfitting to synthetic speech with a quantitative ablation study.
| 2,019 | Computation and Language |
The Marchex 2018 English Conversational Telephone Speech Recognition
System | In this paper, we describe recent performance improvements to the production
Marchex speech recognition system for our spontaneous customer-to-business
telephone conversations. In our previous work, we focused on in-domain language
and acoustic model training. In this work we employ a state-of-the-art
semi-supervised lattice-free maximum mutual information (LF-MMI) training
process, which can supervise over full lattices from unlabeled audio. On Marchex
English (ME), a modern evaluation set of conversational North American English,
we observed a 3.3% (3.2% for agent, 3.6% for caller) reduction in absolute word
error rate (WER) with 3x faster decoding speed over the performance of the 2017
production system. We expect this improvement to boost Marchex Call Analytics
system performance, especially for the natural language processing pipeline.
| 2,019 | Computation and Language |
End-to-End Monaural Multi-speaker ASR System without Pretraining | Recently, end-to-end models have become a popular approach as an alternative
to traditional hybrid models in automatic speech recognition (ASR). The
multi-speaker speech separation and recognition task is a central task in
the cocktail party problem. In this paper, we present a state-of-the-art monaural
multi-speaker end-to-end automatic speech recognition model. In contrast to
previous studies on the monaural multi-speaker speech recognition, this
end-to-end framework is trained to recognize multiple label sequences
completely from scratch. The system only requires the speech mixture and
corresponding label sequences, without needing any indeterminate supervisions
obtained from non-mixture speech or corresponding labels/alignments. Moreover,
we exploit an individual attention module for each separated speaker
and scheduled sampling to further improve the performance. Finally, we
evaluate the proposed model on the 2-speaker mixed speech generated from the
WSJ corpus and the wsj0-2mix dataset, which is a speech separation and
recognition benchmark. The experiments demonstrate that the proposed methods
can improve the performance of the end-to-end model in separating the
overlapping speech and recognizing the separated streams. From the results, the
proposed model leads to ~10.0% relative performance gains in terms of CER and
WER.
| 2,018 | Computation and Language |
Improving Span-based Question Answering Systems with Coarsely Labeled
Data | We study approaches to improve fine-grained short answer Question Answering
models by integrating coarse-grained data annotated for paragraph-level
relevance and show that coarsely annotated data can bring significant
performance gains. Experiments demonstrate that the standard multi-task
learning approach of sharing representations is not the most effective way to
leverage coarse-grained annotations. Instead, we can explicitly model the
latent fine-grained short answer variables and optimize the marginal
log-likelihood directly or use a newly proposed \emph{posterior distillation}
learning objective. Since these latent-variable methods have explicit access to
the relationship between the fine and coarse tasks, they result in
significantly larger improvements from coarse supervision.
| 2,018 | Computation and Language |
Robust and fine-grained prosody control of end-to-end speech synthesis | We propose prosody embeddings for emotional and expressive speech synthesis
networks. The proposed methods introduce temporal structures in the embedding
networks, thus enabling fine-grained control of the speaking style of the
synthesized speech. The temporal structures can be designed either on the
speech side or the text side, leading to different control resolutions in time.
The prosody embedding networks are plugged into end-to-end speech synthesis
networks and trained without any other supervision except for the target speech
to be synthesized. It is demonstrated that the prosody embedding networks
learned to extract prosodic features. By adjusting the learned prosody
features, we could change the pitch and amplitude of the synthesized speech
both at the frame level and the phoneme level. We also introduce the temporal
normalization of prosody embeddings, which shows better robustness against
speaker perturbations during prosody transfer tasks.
| 2,019 | Computation and Language |
Transfer learning of language-independent end-to-end ASR with language
model fusion | This work explores better adaptation methods to low-resource languages using
an external language model (LM) under the framework of transfer learning. We
first build a language-independent ASR system in a unified sequence-to-sequence
(S2S) architecture with a shared vocabulary among all languages. During
adaptation, we perform LM fusion transfer, where an external LM is integrated
into the decoder network of the attention-based S2S model in the whole
adaptation stage, to effectively incorporate linguistic context of the target
language. We also investigate various seed models for transfer learning.
Experimental evaluations using the IARPA BABEL data set show that LM fusion
transfer improves performance on all five target languages compared with
simple transfer learning when the external text data is available. Our final
system drastically reduces the performance gap from the hybrid systems.
| 2,019 | Computation and Language |
DIAG-NRE: A Neural Pattern Diagnosis Framework for Distantly Supervised
Neural Relation Extraction | Pattern-based labeling methods have achieved promising results in alleviating
the inevitable labeling noises of distantly supervised neural relation
extraction. However, these methods require significant expert labor to write
relation-specific patterns, which makes them too sophisticated to generalize
quickly. To ease the labor-intensive workload of pattern writing and enable the
quick generalization to new relation types, we propose a neural pattern
diagnosis framework, DIAG-NRE, that can automatically summarize and refine
high-quality relational patterns from noisy data with human experts in the
loop. To demonstrate the effectiveness of DIAG-NRE, we apply it to two
real-world datasets and present both significant and interpretable improvements
over state-of-the-art methods.
| 2,019 | Computation and Language |
Neural Phrase-to-Phrase Machine Translation | In this paper, we propose Neural Phrase-to-Phrase Machine Translation
(NP$^2$MT). Our model uses a phrase attention mechanism to discover relevant
input (source) segments that are used by a decoder to generate output (target)
phrases. We also design an efficient dynamic programming algorithm to decode
segments that allows the model to be trained faster than the existing neural
phrase-based machine translation method by Huang et al. (2018). Furthermore,
our method can naturally integrate with external phrase dictionaries during
decoding. Empirical experiments show that our method achieves comparable
performance with the state-of-the-art methods on benchmark datasets. However,
when the training and testing data are from different distributions or domains,
our method performs better.
| 2,018 | Computation and Language |
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for
Speech Recognition | Many speech enhancement methods try to learn the relationship between noisy
and clean speech, obtained using an acoustic room simulator. We point out
several limitations of enhancement methods relying on clean speech targets; the
goal of this work is to propose an alternative learning algorithm, called
acoustic and adversarial supervision (AAS). AAS makes the enhanced output both
maximize the likelihood of the transcription under the pre-trained acoustic model
and exhibit the general characteristics of clean speech, which improves
generalization to unseen noisy speech. We employ the connectionist temporal
classification and the unpaired conditional boundary equilibrium generative
adversarial network as the loss function of AAS. AAS is tested on two datasets
including additive noise without and with reverberation, Librispeech + DEMAND
and CHiME-4. By visualizing the enhanced speech with different loss
combinations, we demonstrate the role of each supervision. AAS achieves a lower
word error rate than other state-of-the-art methods using the clean speech
target in both datasets.
| 2,018 | Computation and Language |
CIS at TAC Cold Start 2015: Neural Networks and Coreference Resolution
for Slot Filling | This paper describes the CIS slot filling system for the TAC Cold Start
evaluations 2015. It extends and improves the system we have built for the
evaluation last year. This paper mainly describes the changes to our last
year's system. Especially, it focuses on the coreference and classification
component. For coreference, we have performed several analyses and prepared a
resource to simplify our end-to-end system and improve its runtime. For
classification, we propose to use neural networks. We have trained
convolutional and recurrent neural networks and combined them with traditional
evaluation methods, namely patterns and support vector machines. Our runs for
the 2015 evaluation have been designed to directly assess the effect of each
network on the end-to-end performance of the system. The CIS system achieved
rank 3 among all slot filling systems participating in the task.
| 2,018 | Computation and Language |
Off-the-Shelf Unsupervised NMT | We frame unsupervised machine translation (MT) in the context of multi-task
learning (MTL), combining insights from both directions. We leverage
off-the-shelf neural MT architectures to train unsupervised MT models with no
parallel data and show that such models can achieve reasonably good
performance, competitive with models purpose-built for unsupervised MT.
Finally, we propose improvements that allow us to apply our models to
English-Turkish, a truly low-resource language pair.
| 2,018 | Computation and Language |
Recurrent Skipping Networks for Entity Alignment | We consider the problem of learning knowledge graph (KG) embeddings for
entity alignment (EA). Current methods use the embedding models mainly focusing
on triple-level learning, which lacks the ability of capturing long-term
dependencies existing in KGs. Consequently, the embedding-based EA methods
heavily rely on the amount of prior (known) alignment, because the identity
information in the prior alignment cannot be efficiently propagated from one KG
to another. In this paper, we propose RSN4EA (recurrent skipping networks for
EA), which leverages biased random walk sampling for generating long paths
across KGs and models the paths with a novel recurrent skipping network (RSN).
RSN integrates the conventional recurrent neural network (RNN) with residual
learning and can largely improve the convergence speed and performance with
only a few more parameters. We evaluated RSN4EA on a series of datasets
constructed from real-world KGs. Our experimental results showed that it
outperformed a number of state-of-the-art embedding-based EA methods and also
achieved comparable performance for KG completion.
| 2,018 | Computation and Language |
Hierarchical Neural Network Architecture In Keyword Spotting | Keyword Spotting (KWS) provides the start signal for the ASR problem, and thus it
is essential to ensure a high recall rate. However, its real-time property
requires low computation complexity. This contradiction inspires people to find
a suitable model which is small enough to perform well in multiple environments.
To deal with this contradiction, we implement the Hierarchical Neural
Network (HNN), which has proved effective in many speech recognition
problems. HNN outperforms traditional DNNs and CNNs even though its model size
and computation complexity are slightly lower. Also, its simple topology
makes it easy to deploy on any device.
| 2,018 | Computation and Language |
Learning to Embed Sentences Using Attentive Recursive Trees | Sentence embedding is an effective feature representation for most deep
learning-based NLP tasks. One prevailing line of methods is using recursive
latent tree-structured networks to embed sentences with task-specific
structures. However, existing models have no explicit mechanism to emphasize
task-informative words in the tree structure. To this end, we propose an
Attentive Recursive Tree model (AR-Tree), where the words are dynamically
located according to their importance in the task. Specifically, we construct
the latent tree for a sentence in a proposed important-first strategy, and
place more attentive words nearer to the root; thus, AR-Tree can inherently
emphasize important words during the bottom-up composition of the sentence
embedding. We propose an end-to-end reinforced training strategy for AR-Tree,
which is demonstrated to consistently outperform, or be at least comparable to,
the state-of-the-art sentence embedding methods on three sentence understanding
tasks.
| 2,018 | Computation and Language |
Code-switching Sentence Generation by Generative Adversarial Networks
and its Application to Data Augmentation | Code-switching is about dealing with alternative languages in speech or text.
It is partially speaker-dependent and domain-related, so completely explaining the
phenomenon by linguistic rules is challenging. Compared to most monolingual
tasks, insufficient data is an issue for code-switching. To mitigate the issue
without expensive human annotation, we proposed an unsupervised method for
code-switching data augmentation. By utilizing a generative adversarial
network, we can generate intra-sentential code-switching sentences from
monolingual sentences. We applied the proposed method to two corpora, and the
results show that the generated code-switching sentences improve the
performance of code-switching language models.
| 2,019 | Computation and Language |
Effective Subword Segmentation for Text Comprehension | Representation learning is the foundation of machine reading comprehension
and inference. In state-of-the-art models, character-level representations have
been broadly adopted to alleviate the problem of effectively representing rare
or complex words. However, the character itself is not a natural minimal linguistic
unit for representation or word embedding composition, as it ignores the
linguistic coherence of consecutive characters inside a word. This paper presents
a general subword-augmented embedding framework for learning and composing
computationally-derived subword-level representations. We survey a series of
unsupervised segmentation methods for subword acquisition and different
subword-augmented strategies for text understanding, showing that
subword-augmented embedding significantly improves our baselines in various
types of text understanding tasks on both English and Chinese benchmarks.
| 2,019 | Computation and Language |
DeepChannel: Salience Estimation by Contrastive Learning for Extractive
Document Summarization | We propose DeepChannel, a robust, data-efficient, and interpretable neural
model for extractive document summarization. Given any document-summary pair,
we estimate a salience score, which is modeled using an attention-based deep
neural network, to represent the salience degree of the summary for yielding
the document. We devise a contrastive training strategy to learn the salience
estimation network, and then use the learned salience score as a guide and
iteratively extract the most salient sentences from the document as our
generated summary. In experiments, our model not only achieves state-of-the-art
ROUGE scores on CNN/Daily Mail dataset, but also shows strong robustness in the
out-of-domain test on DUC2007 test set. Moreover, our model reaches a ROUGE-1
F-1 score of 39.41 on CNN/Daily Mail test set with merely $1 / 100$ training
set, demonstrating tremendous data efficiency.
| 2,018 | Computation and Language |
WordNet-feelings: A linguistic categorisation of human feelings | In this article, we present the first in-depth linguistic study of human
feelings. While there has been substantial research on incorporating some
affective categories into linguistic analysis (e.g. sentiment, and to a lesser
extent, emotion), the more diverse category of human feelings has thus far not
been investigated. We surveyed the extensive interdisciplinary literature
around feelings to construct a working definition of what constitutes a feeling
and propose 9 broad categories of feeling. We identified potential feeling
words based on their pointwise mutual information with morphological variants
of the word `feel' in the Google n-gram corpus, and present a manual annotation
exercise where 317 WordNet senses of one hundred of these words were
categorised as `not a feeling' or as one of the 9 proposed categories of
feeling. We then proceeded to annotate 11386 WordNet senses of all these words
to create WordNet-feelings, a new affective dataset that identifies 3664 word
senses as feelings, and associates each of these with one of the 9 categories
of feeling. WordNet-feelings can be used in conjunction with other datasets
such as SentiWordNet that annotate word senses with complementary affective
properties such as valence and intensity.
| 2,018 | Computation and Language |
Semantic Term "Blurring" and Stochastic "Barcoding" for Improved
Unsupervised Text Classification | The abundance of text data being produced in the modern age makes it
increasingly important to intuitively group, categorize, or classify text data
by theme for efficient retrieval and search. Yet, the high dimensionality and
imprecision of text data, or more generally language as a whole, prove to be
challenging when attempting to perform unsupervised document clustering. In
this thesis, we present two novel methods for improving unsupervised document
clustering/classification by theme. The first is to improve document
representations. We look to exploit "term neighborhoods" and "blur" semantic
weight across neighboring terms. These neighborhoods are located in the
semantic space afforded by "word embeddings." The second method is for cluster
revision, based on what we deem as "stochastic barcoding", or "S-Barcode"
patterns. Text data is inherently high dimensional, yet clustering typically
takes place in a low dimensional representation space. Our method utilizes
lower dimension clustering results as initial cluster configurations, and
iteratively revises the configuration in the high dimensional space. We show
with experimental results how both of the two methods improve the quality of
document clustering. While this thesis elaborates on the two new conceptual
contributions, a joint thesis by David Yan details the feature transformation
and software architecture we developed for unsupervised document
classification.
| 2,018 | Computation and Language |
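One way to read the "term blurring" idea in the abstract above is as redistributing part of each term's weight (e.g. its tf-idf score) to its nearest neighbours in the word-embedding space. The sketch below encodes that reading; the neighbourhood size k and blur factor alpha are illustrative knobs, not values from the thesis.

```python
import numpy as np

def blur_term_weights(terms, weights, embeddings, k=3, alpha=0.5):
    """terms: list of words; weights: their scores; embeddings: word -> vector."""
    vecs = np.stack([embeddings[t] for t in terms])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T                              # cosine similarities
    np.fill_diagonal(sims, -np.inf)                   # never blur a term onto itself
    blurred = np.asarray(weights, dtype=float) * (1.0 - alpha)
    for i, w in enumerate(weights):
        neighbours = np.argsort(sims[i])[-k:]         # k most similar terms
        blurred[neighbours] += alpha * w / k          # spread the remaining weight
    return dict(zip(terms, blurred))
```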
Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement
in Multi-Talker Environments | In this paper, we address the problem of enhancing the speech of a speaker of
interest in a cocktail party scenario when visual information of the speaker of
interest is available. Contrary to most previous studies, we do not learn
visual features on the typically small audio-visual datasets, but use an
already available face landmark detector (trained on a separate image dataset).
The landmarks are used by LSTM-based models to generate time-frequency masks
which are applied to the acoustic mixed-speech spectrogram. Results show that:
(i) landmark motion features are very effective features for this task, (ii)
similarly to previous work, reconstruction of the target speaker's spectrogram
mediated by masking is significantly more accurate than direct spectrogram
reconstruction, and (iii) the best masks depend on both motion landmark
features and the input mixed-speech spectrogram. To the best of our knowledge,
our proposed models are the first models trained and evaluated on the
limited-size GRID and TCD-TIMIT datasets that achieve speaker-independent speech
enhancement in a multi-talker setting.
| 2,021 | Computation and Language |
UAlacant machine translation quality estimation at WMT 2018: a simple
approach using phrase tables and feed-forward neural networks | We describe the Universitat d'Alacant submissions to the word- and
sentence-level machine translation (MT) quality estimation (QE) shared task at
WMT 2018. Our approach to word-level MT QE builds on previous work to mark the
words in the machine-translated sentence as \textit{OK} or \textit{BAD}, and is
extended to determine if a word or sequence of words needs to be inserted in the
gap after each word. Our sentence-level submission simply uses the edit
operations predicted by the word-level approach to approximate TER. The method
presented ranked first in the sub-task of identifying insertions in gaps for
three out of the six datasets, and second in the rest of them.
| 2,018 | Computation and Language |
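The sentence-level submission described above simply turns the word-level predictions into an edit count and normalises it, TER-style. The mapping below (BAD words as one edit each, flagged gaps as insertions) is an assumed reading of the abstract, not necessarily the exact formula used in the shared-task system.

```python
def approximate_ter(word_tags, gap_tags, target_length):
    """word_tags: OK/BAD per MT word; gap_tags: OK/BAD per gap (BAD = insertion
    needed); target_length: sentence length used for normalisation."""
    edits = sum(tag == "BAD" for tag in word_tags)
    edits += sum(tag == "BAD" for tag in gap_tags)
    return edits / max(target_length, 1)

# approximate_ter(["OK", "BAD", "OK"], ["OK", "OK", "BAD", "OK"], 3) -> 0.666...
```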
Discriminative training of RNNLMs with the average word error criterion | In automatic speech recognition (ASR), recurrent neural language models
(RNNLM) are typically used to refine hypotheses in the form of lattices or
n-best lists, which are generated by a beam search decoder with a weaker
language model. The RNNLMs are usually trained generatively using the
perplexity (PPL) criterion on large corpora of grammatically correct text.
However, the hypotheses are noisy, and the RNNLM doesn't always make the
choices that minimise the metric we optimise for, the word error rate (WER). To
address this mismatch we propose to use a task specific loss to train an RNNLM
to discriminate between multiple hypotheses within a lattice rescoring scenario.
By fine-tuning the RNNLM on lattices with the average edit distance loss, we
show that we obtain a 1.9% relative improvement in word error rate over a
purely generatively trained model.
| 2,018 | Computation and Language |
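The "average edit distance" loss in this abstract builds on the ordinary Levenshtein distance between a hypothesis and the reference, averaged over the hypotheses in the lattice or n-best list. The sketch below shows that quantity for an n-best list; the posterior weighting is an assumption on my part, and how the gradient is pushed through the RNNLM scores is omitted.

```python
def edit_distance(hyp, ref):
    """Token-level Levenshtein distance (substitutions, insertions, deletions)."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

def expected_word_errors(hypotheses, posteriors, reference):
    """Posterior-weighted average edit distance over an n-best list of strings."""
    return sum(p * edit_distance(h.split(), reference.split())
               for h, p in zip(hypotheses, posteriors))
```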
Language GANs Falling Short | Generating high-quality text with sufficient diversity is essential for a
wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE)
models trained with teacher forcing have consistently been reported as weak
baselines, where poor performance is attributed to exposure bias (Bengio et
al., 2015; Ranzato et al., 2015); at inference time, the model is fed its own
prediction instead of a ground-truth token, which can lead to accumulating
errors and poor samples. This line of reasoning has led to an outbreak of
adversarial-based approaches for NLG, on the grounds that GANs do not suffer
from exposure bias. In this work, we make several surprising observations which
contradict common beliefs. First, we revisit the canonical evaluation framework
for NLG, and point out fundamental flaws with quality-only evaluation: we show
that one can outperform such metrics using a simple, well-known temperature
parameter to artificially reduce the entropy of the model's conditional
distributions. Second, we leverage the control over the quality / diversity
trade-off given by this parameter to evaluate models over the whole
quality-diversity spectrum and find MLE models constantly outperform the
proposed GAN variants over the whole quality-diversity space. Our results have
several implications: 1) The impact of exposure bias on sample quality is less
severe than previously thought, 2) temperature tuning provides a better quality
/ diversity trade-off than adversarial training while being easier to train,
easier to cross-validate, and less computationally expensive. Code to reproduce
the experiments is available at github.com/pclucas14/GansFallingShort
| 2,020 | Computation and Language |
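The temperature knob this abstract leans on is a one-line change to the sampling procedure: divide the logits by a temperature before the softmax, so values below 1 sharpen the conditional distribution (higher quality, lower diversity) and values above 1 flatten it. A minimal sketch, with an illustrative default value:

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.7, rng=None):
    """Sample one token id from temperature-scaled logits."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())              # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```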
Fast Neural Chinese Word Segmentation for Long Sentences | Rapidly developed neural models have achieved performance in
Chinese word segmentation (CWS) competitive with their traditional counterparts.
However, most of these methods suffer from computational inefficiency, especially
for long sentences, because of increasing model complexity and slower decoders. This
paper presents a simple neural segmenter which directly labels the gap
existence between adjacent characters to alleviate the existing drawback. Our
segmenter is fully end-to-end and capable of performing segmentation very fast.
We also show a performance difference with different tag sets. The experiments
show that our segmenter can provide performance comparable with the
state of the art.
| 2,019 | Computation and Language |
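The gap-labelling formulation in this abstract can be made concrete with a few lines: the model emits one binary decision per gap between adjacent characters, and segmentation is simply grouping the characters between positive gaps. The decoding sketch below illustrates that formulation and is not the authors' code.

```python
def segment_by_gap_labels(chars, gap_labels):
    """chars: list of characters; gap_labels[i] == 1 means a word boundary lies
    between chars[i] and chars[i + 1]."""
    words, current = [], [chars[0]]
    for ch, boundary in zip(chars[1:], gap_labels):
        if boundary:
            words.append("".join(current))
            current = [ch]
        else:
            current.append(ch)
    words.append("".join(current))
    return words

# segment_by_gap_labels(list("ABCD"), [1, 0, 1]) -> ["A", "BC", "D"]
```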