Titles | Abstracts | Years | Categories |
---|---|---|---|
A Study of Enhancement, Augmentation, and Autoencoder Methods for Domain
Adaptation in Distant Speech Recognition | Speech recognizers trained on close-talking speech do not generalize to
distant speech and the word error rate degradation can be as large as 40%
absolute. Most studies focus on tackling distant speech recognition as a
separate problem, with little effort devoted to adapting close-talking speech
recognizers to distant speech. In this work, we review several approaches from
a domain adaptation perspective. These approaches, including speech
enhancement, multi-condition training, data augmentation, and autoencoders, all
involve a transformation of the data between domains. We conduct experiments on
the AMI data set, where these approaches can be realized under the same
controlled setting. These approaches lead to different amounts of improvement
under their respective assumptions. The purpose of this paper is to quantify
and characterize the performance gap between the two domains, setting up the
basis for studying adaptation of speech recognizers from close-talking speech
to distant speech. Our results also have implications for improving distant
speech recognition.
| 2018 | Computation and Language |
Double Path Networks for Sequence to Sequence Learning | Encoder-decoder based Sequence to Sequence learning (S2S) has made remarkable
progress in recent years. Different network architectures have been used in the
encoder/decoder. Among them, Convolutional Neural Networks (CNN) and Self
Attention Networks (SAN) are the prominent ones. The two architectures achieve
similar performances but use very different ways to encode and decode context:
CNN uses convolutional layers to focus on the local connectivity of the
sequence, while SAN uses self-attention layers to focus on global semantics. In
this work we propose Double Path Networks for Sequence to Sequence learning
(DPN-S2S), which leverage the advantages of both models by using double path
information fusion. During the encoding step, we develop a double path
architecture to maintain the information coming from different paths with
convolutional layers and self-attention layers separately. To effectively use
the encoded context, we develop a cross attention module with gating and use it
to automatically pick up the information needed during the decoding step. By
deeply integrating the two paths with cross attention, both types of
information are combined and well exploited. Experiments show that our proposed
method can significantly improve the performance of sequence to sequence
learning over state-of-the-art systems.
| 2018 | Computation and Language |
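The gated fusion idea in the DPN-S2S abstract above can be illustrated with a minimal sketch: a learned gate picks, per dimension, between the two encoder paths. The single-layer gate, shapes, and random weights are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(cnn_ctx, san_ctx, W, b):
    """Fuse two encoder paths with a learned gate (hypothetical shapes).

    cnn_ctx, san_ctx: (seq_len, d) context from the CNN and self-attention paths.
    W: (2*d, d) gate weights, b: (d,) gate bias.
    """
    gate = sigmoid(np.concatenate([cnn_ctx, san_ctx], axis=-1) @ W + b)
    # Convex combination: the gate decides, per dimension, which path to trust.
    return gate * cnn_ctx + (1.0 - gate) * san_ctx

d = 8
rng = np.random.default_rng(0)
fused = gated_fusion(rng.normal(size=(5, d)), rng.normal(size=(5, d)),
                     rng.normal(size=(2 * d, d)) * 0.1, np.zeros(d))
print(fused.shape)  # (5, 8)
```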
Unsupervised Adaptation with Interpretable Disentangled Representations
for Distant Conversational Speech Recognition | The current trend in automatic speech recognition is to leverage large
amounts of labeled data to train supervised neural network models.
Unfortunately, obtaining data for a wide range of domains to train robust
models can be costly. However, it is relatively inexpensive to collect large
amounts of unlabeled data from domains that we want the models to generalize
to. In this paper, we propose a novel unsupervised adaptation method that
learns to synthesize labeled data for the target domain from unlabeled
in-domain data and labeled out-of-domain data. We first learn without
supervision an interpretable latent representation of speech that encodes
linguistic and nuisance factors (e.g., speaker and channel) using different
latent variables. To transform a labeled out-of-domain utterance without
altering its transcript, we transform the latent nuisance variables while
maintaining the linguistic variables. To demonstrate our approach, we focus on
a channel mismatch setting, where the domain of interest is distant
conversational speech, and labels are only available for close-talking speech.
Our proposed method is evaluated on the AMI dataset, outperforming all
baselines and bridging the gap between unadapted and in-domain models by over
77% without using any parallel data.
| 2018 | Computation and Language |
On Accurate Evaluation of GANs for Language Generation | Generative Adversarial Networks (GANs) are a promising approach to language
generation. The latest works introducing novel GAN models for language
generation use n-gram based metrics for evaluation and only report single
scores of the best run. In this paper, we argue that this often misrepresents
the true picture and does not tell the full story, as GAN models can be
extremely sensitive to the random initialization and small deviations from the
best hyperparameter choice. In particular, we demonstrate that the previously
used BLEU score is not sensitive to semantic deterioration of generated texts
and propose alternative metrics that better capture the quality and diversity
of the generated samples. We also conduct a set of experiments comparing a
number of GAN models for text with a conventional Language Model (LM) and find
that neither of the considered models performs convincingly better than the LM.
| 2019 | Computation and Language |
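The evaluation concern raised above — reporting only the single best run — can be made concrete with a small sketch that scores several runs and reports the score distribution, here using NLTK's smoothed sentence-level BLEU. The toy sentences and the choice of NLTK are assumptions for illustration.

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["the", "cat", "sat", "on", "the", "mat"]]
# Hypothetical outputs of the same GAN trained with different random seeds.
runs = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "cat", "is", "on", "a", "mat"],
    ["a", "dog", "sat", "on", "the", "mat"],
]
smooth = SmoothingFunction().method1
scores = [sentence_bleu(references, hyp, smoothing_function=smooth) for hyp in runs]
# Report the distribution across runs, not just the single best score.
print(f"BLEU mean={np.mean(scores):.3f} std={np.std(scores):.3f} best={max(scores):.3f}")
```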
OpenEDGAR: Open Source Software for SEC EDGAR Analysis | OpenEDGAR is an open source Python framework designed to rapidly construct
research databases based on the Electronic Data Gathering, Analysis, and
Retrieval (EDGAR) system operated by the US Securities and Exchange Commission
(SEC). OpenEDGAR is built on the Django application framework, supports
distributed compute across one or more servers, and includes functionality to
(i) retrieve and parse index and filing data from EDGAR, (ii) build tables for
key metadata like form type and filer, (iii) retrieve, parse, and update CIK to
ticker and industry mappings, (iv) extract content and metadata from filing
documents, and (v) search filing document contents. OpenEDGAR is designed for
use in both academic research and industrial applications, and is distributed
under MIT License at https://github.com/LexPredict/openedgar.
| 2018 | Computation and Language |
Visually grounded cross-lingual keyword spotting in speech | Recent work considered how images paired with speech can be used as
supervision for building speech systems when transcriptions are not available.
We ask whether visual grounding can be used for cross-lingual keyword spotting:
given a text keyword in one language, the task is to retrieve spoken utterances
containing that keyword in another language. This could enable searching
through speech in a low-resource language using text queries in a high-resource
language. As a proof-of-concept, we use English speech with German queries: we
use a German visual tagger to add keyword labels to each training image, and
then train a neural network to map English speech to German keywords. Without
seeing parallel speech-transcriptions or translations, the model achieves a
precision at ten of 58%. We show that most erroneous retrievals contain
equivalent or semantically relevant keywords; excluding these would improve
P@10 to 91%.
| 2018 | Computation and Language |
Graph-Based Decoding for Event Sequencing and Coreference Resolution | Events in text documents are interrelated in complex ways. In this paper, we
study two types of relation: Event Coreference and Event Sequencing. We show
that the popular tree-like decoding structure for automated Event Coreference
is not suitable for Event Sequencing. To this end, we propose a graph-based
decoding algorithm that is applicable to both tasks. The new decoding algorithm
supports flexible feature sets for both tasks. Empirically, our event
coreference system has achieved state-of-the-art performance on the TAC-KBP
2015 event coreference task and our event sequencing system beats a strong
temporal-based, oracle-informed baseline. We discuss the challenges of studying
these event relations.
| 2018 | Computation and Language |
Generative Neural Machine Translation | We introduce Generative Neural Machine Translation (GNMT), a latent variable
architecture which is designed to model the semantics of the source and target
sentences. We modify an encoder-decoder translation model by adding a latent
variable as a language agnostic representation which is encouraged to learn the
meaning of the sentence. GNMT achieves competitive BLEU scores on pure
translation tasks, and is superior when there are missing words in the source
sentence. We augment the model to facilitate multilingual translation and
semi-supervised learning without adding parameters. This framework
significantly reduces overfitting when there is limited paired data available,
and is effective for translating between pairs of languages not seen during
training.
| 2018 | Computation and Language |
Generating Sentences Using a Dynamic Canvas | We introduce the Attentive Unsupervised Text (W)riter (AUTR), which is a word
level generative model for natural language. It uses a recurrent neural network
with a dynamic attention and canvas memory mechanism to iteratively construct
sentences. By viewing the state of the memory at intermediate stages and where
the model is placing its attention, we gain insight into how it constructs
sentences. We demonstrate that AUTR learns a meaningful latent representation
for each sentence, and achieves competitive log-likelihood lower bounds whilst
being computationally efficient. It is effective at generating and
reconstructing sentences, as well as imputing missing words.
| 2018 | Computation and Language |
An Evaluation of Neural Machine Translation Models on Historical
Spelling Normalization | In this paper, we apply different NMT models to the problem of historical
spelling normalization for five languages: English, German, Hungarian,
Icelandic, and Swedish. The NMT models operate at different levels and have
different attention mechanisms and neural network architectures. Our results
show that NMT models are much better than SMT models in terms of character
error rate. Vanilla RNNs are competitive with GRUs/LSTMs in historical
spelling normalization. Transformer models perform better only when provided
with more training data. We also find that subword-level models with a small
subword vocabulary are better than character-level models for low-resource
languages. In addition, we propose a hybrid method which further improves the
performance of historical spelling normalization.
| 2018 | Computation and Language |
Bringing replication and reproduction together with generalisability in
NLP: Three reproduction studies for Target Dependent Sentiment Analysis | Lack of repeatability and lack of generalisability are two significant threats to
continuing scientific development in Natural Language Processing. Language
models and learning methods are so complex that scientific conference papers no
longer contain enough space for the technical depth required for replication or
reproduction. Taking Target Dependent Sentiment Analysis as a case study, we
show how recent work in the field has not consistently released code, or
described settings for learning methods in enough detail, and lacks
comparability and generalisability in train, test or validation data. To
investigate generalisability and to enable state of the art comparative
evaluations, we carry out the first reproduction studies of three groups of
complementary methods and perform the first large-scale mass evaluation on six
different English datasets. Reflecting on our experiences, we recommend that
future replication or reproduction experiments should always consider a variety
of datasets alongside documenting and releasing their methods and published
code in order to minimise the barriers to both repeatability and
generalisability. We have released our code with a model zoo on GitHub with
Jupyter Notebooks to aid understanding and full documentation, and we recommend
that others do the same with their papers at submission time through an
anonymised GitHub account.
| 2018 | Computation and Language |
Beyond Bags of Words: Inferring Systemic Nets | Textual analytics based on representations of documents as bags of words have
been reasonably successful. However, analysis that requires deeper insight into
language, into author properties, or into the contexts in which documents were
created requires a richer representation. Systemic nets are one such
representation. They have not been extensively used because they required human
effort to construct. We show that systemic nets can be algorithmically inferred
from corpora, that the resulting nets are plausible, and that they can provide
practical benefits for knowledge discovery problems. This opens up a new class
of practical analysis techniques for textual analytics.
| 2018 | Computation and Language |
SMHD: A Large-Scale Resource for Exploring Online Language Usage for
Multiple Mental Health Conditions | Mental health is a significant and growing public health concern. As language
usage can be leveraged to obtain crucial insights into mental health
conditions, there is a need for large-scale, labeled, mental health-related
datasets of users who have been diagnosed with one or more of such conditions.
In this paper, we investigate the creation of high-precision patterns to
identify self-reported diagnoses of nine different mental health conditions,
and obtain high-quality labeled data without the need for manual labeling. We
introduce the SMHD (Self-reported Mental Health Diagnoses) dataset and make it
available. SMHD is a novel large dataset of social media posts from users with
one or multiple mental health conditions along with matched control users. We
examine distinctions in users' language, as measured by linguistic and
psychological variables. We further explore text classification methods to
identify individuals with mental conditions through their language.
| 2018 | Computation and Language |
How Predictable is Your State? Leveraging Lexical and Contextual
Information for Predicting Legislative Floor Action at the State Level | Modeling U.S. Congressional legislation and roll-call votes has received
significant attention in previous literature. However, while legislators across
50 state governments and D.C. propose over 100,000 bills each year, and on
average enact over 30% of them, state level analysis has received relatively
less attention due in part to the difficulty in obtaining the necessary data.
Because each state legislature is guided by its own procedures, politics, and
issues, it is difficult to qualitatively assess the factors that affect
the likelihood of a legislative initiative succeeding. Herein, we present
several methods for modeling the likelihood of a bill receiving floor action
across all 50 states and D.C. We utilize the lexical content of over 1 million
bills, along with contextual legislature and legislator derived features to
build our predictive models, allowing a comparison of the factors that are
important to the lawmaking process. Furthermore, we show that these signals
hold complementary predictive power, together achieving an average improvement
in accuracy of 18% over state specific baselines.
| 2018 | Computation and Language |
Urdu Word Segmentation using Conditional Random Fields (CRFs) | State-of-the-art Natural Language Processing algorithms rely heavily on
efficient word segmentation. Urdu is among the languages for which word
segmentation is a complex task, as it exhibits both space omission and space
insertion issues. This is partly due to the Arabic script, which, although
cursive in nature, consists of characters that have inherent joining and
non-joining attributes regardless of word boundary. This paper presents a word
segmentation system for Urdu which uses a Conditional Random Field sequence
modeler with orthographic, linguistic and morphological features. Our proposed
model automatically learns to predict white space as word boundary as well as
Zero Width Non-Joiner (ZWNJ) as sub-word boundary. Using a manually annotated
corpus, our model achieves an F1 score of 0.97 for word boundary identification
and 0.85 for sub-word boundary identification tasks. We have made our code and
corpus publicly available to make our results reproducible.
| 2018 | Computation and Language |
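As a rough illustration of the CRF-based boundary labelling described above, the sketch below trains a character-level CRF with simple orthographic features using the sklearn-crfsuite package (assumed installed). The feature set, label scheme, and toy data are hypothetical, not the paper's.

```python
import sklearn_crfsuite  # assumed installed: pip install sklearn-crfsuite

def char_features(text, i):
    """Illustrative orthographic features for character i (not the paper's set)."""
    ch = text[i]
    return {
        "char": ch,
        "is_digit": ch.isdigit(),
        "prev": text[i - 1] if i > 0 else "<s>",
        "next": text[i + 1] if i < len(text) - 1 else "</s>",
    }

def featurize(text):
    return [char_features(text, i) for i in range(len(text))]

# Toy training pair: label each character with B (word boundary follows)
# or O (no boundary); a ZWNJ sub-word label could be added the same way.
X = [featurize("thisisatoy")]
y = [["O", "O", "O", "B", "O", "B", "B", "O", "O", "B"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])
```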
Transfer Learning for Context-Aware Question Matching in
Information-seeking Conversations in E-commerce | Building multi-turn information-seeking conversation systems is an important
and challenging research topic. Although several advanced neural text matching
models have been proposed for this task, they are generally not efficient for
industrial applications. Furthermore, they rely on a large amount of labeled
data, which may not be available in real-world applications. To alleviate these
problems, we study transfer learning for multi-turn information seeking
conversations in this paper. We first propose an efficient and effective
multi-turn conversation model based on convolutional neural networks. After
that, we extend our model to adapt the knowledge learned from a resource-rich
domain to enhance the performance. Finally, we deployed our model in an
industrial chatbot called AliMe Assist
(https://consumerservice.taobao.com/online-help) and observed a significant
improvement over the existing online model.
| 2018 | Computation and Language |
Learning Cross-lingual Distributed Logical Representations for Semantic
Parsing | With the development of several multilingual datasets used for semantic
parsing, recent research efforts have looked into the problem of learning
semantic parsers in a multilingual setup. However, how to improve the
performance of a monolingual semantic parser for a specific language by
leveraging data annotated in different languages remains a research question
that is under-explored. In this work, we present a study to show how learning
distributed representations of the logical forms from data annotated in
different languages can be used for improving the performance of a monolingual
semantic parser. We extend two existing monolingual semantic parsers to
incorporate such cross-lingual distributed logical representations as features.
Experiments show that our proposed approach is able to yield improved semantic
parsing results on the standard multilingual GeoQuery dataset.
| 2018 | Computation and Language |
Automatic Language Identification for Romance Languages using Stop Words
and Diacritics | Automatic language identification is a natural language processing problem
that tries to determine the natural language of a given content. In this paper
we present a statistical method for automatic language identification of
written text using dictionaries containing stop words and diacritics. We
propose different approaches that combine the two dictionaries to accurately
determine the language of textual corpora. This method was chosen because stop
words and diacritics are very specific to a language; although some languages
share similar words and special characters, these are not common to all. The
languages taken into account were Romance languages, because they are very
similar and it is usually hard to distinguish between them from a computational
point of view. We have tested our method using a Twitter corpus and a news
article corpus. Both corpora consist of UTF-8 encoded text, so the diacritics
could be taken into account; in the case that the text has no diacritics, only
the stop words are used to determine the language of the text. The experimental
results show that the proposed method has an accuracy of over 90% for small
texts and over 99.8% for large texts.
| 2018 | Computation and Language |
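A minimal sketch of the combined stop-word and diacritics voting scheme described above; the tiny word lists below are illustrative samples, not the dictionaries used in the paper.

```python
# Hypothetical miniature dictionaries; real ones would be far larger.
STOP_WORDS = {
    "romanian": {"si", "de", "la", "cu", "pe"},
    "french": {"et", "de", "la", "le", "les"},
    "spanish": {"y", "de", "la", "el", "los"},
}
DIACRITICS = {
    "romanian": set("ăâîșț"),
    "french": set("éèêëàçôû"),
    "spanish": set("áéíóúñ"),
}

def identify_language(text):
    tokens = text.lower().split()
    scores = {}
    for lang in STOP_WORDS:
        stop_hits = sum(tok in STOP_WORDS[lang] for tok in tokens)
        diac_hits = sum(ch in DIACRITICS[lang] for ch in text.lower())
        # If the text carries no diacritics, only stop words contribute.
        scores[lang] = stop_hits + diac_hits
    return max(scores, key=scores.get)

print(identify_language("la școală și acasă"))  # romanian
```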
Morphological and Language-Agnostic Word Segmentation for NMT | The state of the art of handling rich morphology in neural machine
translation (NMT) is to break word forms into subword units, so that the
overall vocabulary size of these units fits the practical limits given by the
NMT model and GPU memory capacity. In this paper, we compare two common but
linguistically uninformed methods of subword construction (BPE and STE, the
method implemented in Tensor2Tensor toolkit) and two linguistically-motivated
methods: Morfessor and one novel method, based on a derivational dictionary.
Our experiments with German-to-Czech translation, both languages being
morphologically rich, document that so far, the non-motivated methods perform
better. Furthermore, we identify a critical difference between BPE and STE and
show a simple pre-processing step for BPE that considerably increases
translation quality as
evaluated by automatic measures.
| 2018 | Computation and Language |
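For reference, byte pair encoding (BPE), one of the subword construction methods compared above, reduces to a short loop: repeatedly merge the most frequent adjacent symbol pair. The toy corpus below is hypothetical.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Minimal BPE sketch: repeatedly merge the most frequent symbol pair.

    words: dict mapping a word (as a tuple of symbols) to its corpus frequency.
    """
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])  # apply the merge
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        words = merged
    return merges, words

merges, vocab = bpe_merges({("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2}, 2)
print(merges)  # [('l', 'o'), ('lo', 'w')]
print(vocab)
```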
Nearly Zero-Shot Learning for Semantic Decoding in Spoken Dialogue
Systems | This paper presents two ways of dealing with scarce data in semantic decoding
using N-Best speech recognition hypotheses. First, we learn features by using a
deep learning architecture in which the weights for the unknown and known
categories are jointly optimised. Second, an unsupervised method is used for
further tuning the weights. Sharing weights injects prior knowledge to unknown
categories. The unsupervised tuning (i.e. the risk minimisation) improves the
F-Measure when recognising nearly zero-shot data on the DSTC3 corpus. This
unsupervised method can be applied subject to two assumptions: the rank of the
class marginal is assumed to be known and the class-conditional scores of the
classifier are assumed to follow a Gaussian distribution.
| 2018 | Computation and Language |
Aspect Sentiment Model for Micro Reviews | This paper aims at an aspect sentiment model for aspect-based sentiment
analysis (ABSA) focused on micro reviews. This task is important in order to
understand the short reviews that the majority of users write, while existing topic
models are targeted for expert-level long reviews with sufficient co-occurrence
patterns to observe. Current methods on aggregating micro reviews using
metadata information may not be effective as well due to metadata absence,
topical heterogeneity, and cold start problems. To this end, we propose a model
called Micro Aspect Sentiment Model (MicroASM). MicroASM is based on the
observation that short reviews 1) are viewed with sentiment-aspect word pairs
as building blocks of information, and 2) can be clustered into larger reviews.
When compared to the current state-of-the-art aspect sentiment models,
experiments show that our model provides better performance on aspect-level
tasks such as aspect term extraction and document-level tasks such as sentiment
classification.
| 2017 | Computation and Language |
Entity Commonsense Representation for Neural Abstractive Summarization | A major proportion of a text summary includes important entities found in the
original text. These entities build up the topic of the summary. Moreover, they
hold commonsense information once they are linked to a knowledge base. Based on
these observations, this paper investigates the usage of linked entities to
guide the decoder of a neural text summarizer to generate concise and better
summaries. To this end, we leverage an off-the-shelf entity linking system
(ELS) to extract linked entities and propose Entity2Topic (E2T), a module
easily attachable to a sequence-to-sequence model that transforms a list of
entities into a vector representation of the topic of the summary. Currently
available ELSs are still not sufficiently effective, possibly introducing
unresolved ambiguities and irrelevant entities. We resolve the imperfections of
the ELS by (a) encoding entities with selective disambiguation, and (b) pooling
entity vectors using firm attention. By applying E2T to a simple
sequence-to-sequence model with attention mechanism as base model, we see
significant improvements of the performance in the Gigaword (sentence to title)
and CNN (long document to multi-sentence highlights) summarization datasets by
at least 2 ROUGE points.
| 2018 | Computation and Language |
Cold-Start Aware User and Product Attention for Sentiment Classification | The use of user/product information in sentiment analysis is important,
especially for cold-start users/products, whose number of reviews is very
limited. However, current models do not deal with the cold-start problem which
is typical in review websites. In this paper, we present Hybrid Contextualized
Sentiment Classifier (HCSC), which contains two modules: (1) a fast word
encoder that returns word vectors embedded with short and long range dependency
features; and (2) Cold-Start Aware Attention (CSAA), an attention mechanism
that considers the existence of cold-start problem when attentively pooling the
encoded word vectors. HCSC introduces shared vectors that are constructed from
similar users/products, and are used when the original distinct vectors do not
have sufficient information (i.e. cold-start). This is decided by a
frequency-guided selective gate vector. Our experiments show that, in terms of
RMSE, HCSC performs significantly better than previous models on popular datasets,
despite having less complexity, and thus can be trained much faster. More
importantly, our model performs significantly better than previous models when
the training data is sparse and has cold-start problems.
| 2018 | Computation and Language |
Humor Detection in English-Hindi Code-Mixed Social Media Content :
Corpus and Baseline System | The tremendous amount of user-generated data from social networking sites
has led to the growing popularity of automatic text classification in the field of
computational linguistics over the past decade. Within this domain, one problem
that has drawn the attention of many researchers is automatic humor detection
in texts. In depth semantic understanding of the text is required to detect
humor, which makes the problem difficult to automate. With the increase in the
number of social media users, many multilingual speakers often switch
between languages while posting on social media, a practice called code-mixing. It
introduces some challenges in the field of linguistic analysis of social media
content (Barman et al., 2014), like spelling variations and non-grammatical
structures in a sentence. Past research includes detecting puns in texts (Kao
et al., 2016) and humor in one-liners (Mihalcea et al., 2010) in a single
language, but with the tremendous amount of code-mixed data available online,
there is a need to develop techniques which detect humor in code-mixed tweets.
In this paper, we analyze the task of humor detection in texts and describe a
freely available corpus containing English-Hindi code-mixed tweets annotated
with humorous (H) or non-humorous (N) tags. We also tagged the words in the
tweets with Language tags (English/Hindi/Others). Moreover, we describe the
experiments carried out on the corpus and provide a baseline classification
system which distinguishes between humorous and non-humorous texts.
| 2018 | Computation and Language |
Translations as Additional Contexts for Sentence Classification | In sentence classification tasks, additional contexts, such as the
neighboring sentences, may improve the accuracy of the classifier. However,
such contexts are domain-dependent and thus cannot be used for another
classification task with an inappropriate domain. In contrast, we propose the
use of translated sentences as context that is always available regardless of
the domain. We find that naive feature expansion of translations gains only
marginal improvements and may decrease the performance of the classifier, due
to possibly inaccurate translations producing noisy sentence vectors. To
this end, we present multiple context fixing attachment (MCFA), a series of
modules attached to multiple sentence vectors to fix the noise in the vectors
using the other sentence vectors as context. We show that our method performs
competitively compared to previous models, achieving best classification
performance on multiple data sets. We are the first to use translations as
domain-free contexts for sentence classification.
| 2018 | Computation and Language |
SemAxis: A Lightweight Framework to Characterize Domain-Specific Word
Semantics Beyond Sentiment | Because word semantics can substantially change across communities and
contexts, capturing domain-specific word semantics is an important challenge.
Here, we propose SemAxis, a simple yet powerful framework to characterize word
semantics using many semantic axes in word-vector spaces beyond sentiment. We
demonstrate that SemAxis can capture nuanced semantic representations in
multiple online communities. We also show that, when the sentiment axis is
examined, SemAxis outperforms state-of-the-art approaches in building
domain-specific sentiment lexicons.
| 2018 | Computation and Language |
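The core SemAxis operation — building an axis from two pole word sets and scoring words by cosine similarity to it — is compact enough to sketch; the random vectors below stand in for pretrained word embeddings.

```python
import numpy as np

def semaxis_score(word_vec, pos_vecs, neg_vecs):
    """Score a word along a semantic axis built from two pole word sets.

    The axis is the difference of the pole centroids; the score is the
    cosine between the word vector and the axis.
    """
    axis = np.mean(pos_vecs, axis=0) - np.mean(neg_vecs, axis=0)
    return float(word_vec @ axis /
                 (np.linalg.norm(word_vec) * np.linalg.norm(axis)))

rng = np.random.default_rng(1)
dim = 50
# Hypothetical vectors; in practice these come from word2vec/GloVe models.
good = rng.normal(size=(3, dim))   # e.g. vectors for "good", "great", "fine"
bad = rng.normal(size=(3, dim))    # e.g. vectors for "bad", "awful", "poor"
word = rng.normal(size=dim)
print(semaxis_score(word, good, bad))
```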
Extracting Parallel Sentences with Bidirectional Recurrent Neural
Networks to Improve Machine Translation | Parallel sentence extraction is a task addressing the data sparsity problem
found in multilingual natural language processing applications. We propose a
bidirectional recurrent neural network based approach to extract parallel
sentences from collections of multilingual texts. Our experiments with noisy
parallel corpora show that we can achieve promising results against a
competitive baseline while removing the need for specific feature engineering or
additional external resources. To justify the utility of our approach, we
extract sentence pairs from Wikipedia articles to train machine translation
systems and show significant improvements in translation performance.
| 2018 | Computation and Language |
A Survey on Open Information Extraction | We provide a detailed overview of the various approaches that were proposed
to date to solve the task of Open Information Extraction. We present the major
challenges that such systems face, show the evolution of the suggested
approaches over time and depict the specific issues they address. In addition,
we provide a critique of the commonly applied evaluation procedures for
assessing the performance of Open IE systems and highlight some directions for
future work.
| 2018 | Computation and Language |
Gender Prediction in English-Hindi Code-Mixed Social Media Content :
Corpus and Baseline System | The rapid expansion in the usage of social media networking sites leads to a
huge amount of unprocessed user generated data which can be used for text
mining. Author profiling, the problem of automatically determining
aspects like the author's gender and age group from a text, is gaining much
popularity in computational linguistics. Most of the past research in author
profiling is concentrated on English texts \cite{1,2}. However, many users often
change the language while posting on social media, which is called code-mixing,
and this introduces some challenges in the field of text classification and author
profiling like variations in spelling, non-grammatical structure and
transliteration \cite{3}. There are very few English-Hindi code-mixed annotated
datasets of social media content present online \cite{4}. In this paper, we
analyze the task of author's gender prediction in code-mixed content and
present a corpus of English-Hindi texts collected from Twitter which is
annotated with author's gender. We also explore language identification of
every word in this corpus. We present a supervised classification baseline
system which uses various machine learning algorithms to identify the gender of
an author using a text, based on character and word level features.
| 2018 | Computation and Language |
NCRF++: An Open-source Neural Sequence Labeling Toolkit | This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++
is designed for quick implementation of different neural sequence labeling
models with a CRF inference layer. It provides users with an interface for
building custom model structures through a configuration file, with flexible
neural feature design and utilization. Built on PyTorch, the core operations
are calculated in batch, making the toolkit efficient with GPU acceleration.
It also includes implementations of most state-of-the-art neural
sequence labeling models such as LSTM-CRF, facilitating reproduction and
refinement of those methods.
| 2018 | Computation and Language |
Grounded Textual Entailment | Capturing semantic relations between sentences, such as entailment, is a
long-standing challenge for computational semantics. Logic-based models analyse
entailment in terms of possible worlds (interpretations, or situations) where a
premise P entails a hypothesis H iff in all worlds where P is true, H is also
true. Statistical models view this relationship probabilistically, addressing
it in terms of whether a human would likely infer H from P. In this paper, we
wish to bridge these two perspectives, by arguing for a visually-grounded
version of the Textual Entailment task. Specifically, we ask whether models can
perform better if, in addition to P and H, there is also an image
(corresponding to the relevant "world" or "situation"). We use a multimodal
version of the SNLI dataset (Bowman et al., 2015) and we compare "blind" and
visually-augmented models of textual entailment. We show that visual
information is beneficial, but we also conduct an in-depth error analysis that
reveals that current multimodal models are not performing "grounding" in an
optimal fashion.
| 2018 | Computation and Language |
Abstract Meaning Representation for Multi-Document Summarization | Generating an abstract from a collection of documents is a desirable
capability for many real-world applications. However, abstractive approaches to
multi-document summarization have not been thoroughly investigated. This paper
studies the feasibility of using Abstract Meaning Representation (AMR), a
semantic representation of natural language grounded in linguistic theory, as a
form of content representation. Our approach condenses source documents to a
set of summary graphs following the AMR formalism. The summary graphs are then
transformed to a set of summary sentences in a surface realization step. The
framework is fully data-driven and flexible. Each component can be optimized
independently using small-scale, in-domain training data. We perform
experiments on benchmark summarization datasets and report promising results.
We also describe opportunities and challenges for advancing this line of
research.
| 2018 | Computation and Language |
Structure-Infused Copy Mechanisms for Abstractive Summarization | Seq2seq learning has produced promising results on summarization. However, in
many cases, system summaries still struggle to keep the meaning of the original
intact. They may omit important words or relations that play critical roles
in the syntactic structure of source sentences. In this paper, we present
structure-infused copy mechanisms to facilitate copying important words and
relations from the source sentence to summary sentence. The approach naturally
combines source dependency structure with the copy mechanism of an abstractive
sentence summarizer. Experimental results demonstrate the effectiveness of
incorporating source-side syntactic information in the system, and our proposed
approach compares favorably to state-of-the-art methods.
| 2018 | Computation and Language |
The Road to Success: Assessing the Fate of Linguistic Innovations in
Online Communities | We investigate the birth and diffusion of lexical innovations in a large
dataset of online social communities. We build on sociolinguistic theories and
focus on the relation between the spread of a novel term and the social role of
the individuals who use it, uncovering characteristics of innovators and
adopters. Finally, we perform a prediction task that allows us to anticipate
whether an innovation will successfully spread within a community.
| 2018 | Computation and Language |
Semantic Variation in Online Communities of Practice | We introduce a framework for quantifying semantic variation of common words
in Communities of Practice and in sets of topic-related communities. We show
that while some meaning shifts are shared across related communities, others
are community-specific, and therefore independent from the discussed topic. We
propose such findings as evidence in favour of sociolinguistic theories of
socially-driven semantic variation. Results are evaluated using an independent
language modelling task. Furthermore, we investigate extralinguistic features
and show that factors such as prominence and dissemination of words are related
to semantic variation.
| 2018 | Computation and Language |
An Empirical Analysis of the Correlation of Syntax and Prosody | The relation of syntax and prosody (the syntax--prosody interface) has been
an active area of research, mostly in linguistics and typically studied under
controlled conditions. More recently, prosody has also been successfully used
in the data-based training of syntax parsers. However, there is a gap between
the controlled and detailed study of the individual effects between syntax and
prosody and the large-scale application of prosody in syntactic parsing with
only a shallow analysis of the respective influences. In this paper, we close
the gap by investigating the significance of correlations of prosodic
realization with specific syntactic functions using linear mixed effects models
in a very large corpus of read-out German encyclopedic texts. Using this
corpus, we are able to analyze prosodic structuring performed by a diverse set
of speakers while they try to optimize factual content delivery. After
normalization by speaker, we obtain significant effects, e.g. confirming that
the subject function, as compared to the object function, has a positive effect
on pitch and duration of a word, but a negative effect on loudness.
| 2018 | Computation and Language |
Discovering User Groups for Natural Language Generation | We present a model which predicts how individual users of a dialog system
understand and produce utterances based on user groups. In contrast to previous
work, these user groups are not specified beforehand, but learned in training.
We evaluate on two referring expression (RE) generation tasks; our experiments
show that our model can identify user groups and learn how to most effectively
talk to them, and can dynamically assign unseen users to the correct groups as
they interact with the system.
| 2018 | Computation and Language |
A Dataset for Building Code-Mixed Goal Oriented Conversation Systems | There is an increasing demand for goal-oriented conversation systems which
can assist users in various day-to-day activities such as booking tickets,
restaurant reservations, shopping, etc. Most of the existing datasets for
building such conversation systems focus on monolingual conversations and there
is hardly any work on multilingual and/or code-mixed conversations. Such
datasets and systems thus do not cater to the multilingual regions of the
world, such as India, where it is very common for people to speak more than one
language and seamlessly switch between them resulting in code-mixed
conversations. For example, a Hindi speaking user looking to book a restaurant
would typically ask, "Kya tum is restaurant mein ek table book karne mein meri
help karoge?" ("Can you help me in booking a table at this restaurant?"). To
facilitate the development of such code-mixed conversation models, we build a
goal-oriented dialog dataset containing code-mixed conversations. Specifically,
we take the text from the DSTC2 restaurant reservation dataset and create
code-mixed versions of it in Hindi-English, Bengali-English, Gujarati-English
and Tamil-English. We also establish initial baselines on this dataset using
existing state of the art models. This dataset along with our baseline
implementations is made publicly available for research purposes.
| 2018 | Computation and Language |
Scheduled Policy Optimization for Natural Language Communication with
Intelligent Agents | We investigate the task of learning to follow natural language instructions
by jointly reasoning with visual observations and language inputs. In contrast
to existing methods which start with learning from demonstrations (LfD) and
then use reinforcement learning (RL) to fine-tune the model parameters, we
propose a novel policy optimization algorithm which dynamically schedules
demonstration learning and RL. The proposed training paradigm provides
efficient exploration and better generalization beyond existing methods.
Compared to existing ensemble models, the best single model based on our
proposed method tremendously decreases the execution error by over 50% on a
block-world environment. To further illustrate the exploration strategy of our
RL algorithm, we also include systematic studies on the evolution of policy
entropy during training.
| 2018 | Computation and Language |
Study of Semi-supervised Approaches to Improving English-Mandarin
Code-Switching Speech Recognition | In this paper, we present our overall efforts to improve the performance of a
code-switching speech recognition system using semi-supervised training methods
from lexicon learning to acoustic modeling, on the South East Asian
Mandarin-English (SEAME) data. We first investigate a semi-supervised lexicon
learning approach to adapt the canonical lexicon, which is meant to alleviate
the heavily accented pronunciation issue within the code-switching conversation
of the local area. As a result, the learned lexicon yields improved
performance. Furthermore, we attempt to use semi-supervised training to deal
with those transcriptions that are highly mismatched between human transcribers
and ASR system. Specifically, we conduct semi-supervised training assuming
those poorly transcribed data as unsupervised data. We found that
semi-supervised acoustic modeling can lead to improved results. Finally, to
make up for the limitation of the conventional n-gram language models due to
the data sparsity issue, we perform lattice rescoring using neural network language
models, and significant WER reduction is obtained.
| 2018 | Computation and Language |
GILE: A Generalized Input-Label Embedding for Text Classification | Neural text classification models typically treat output labels as
categorical variables which lack description and semantics. This forces their
parametrization to be dependent on the label set size, and, hence, they are
unable to scale to large label sets and generalize to unseen ones. Existing
joint input-label text models overcome these issues by exploiting label
descriptions, but they are unable to capture complex label relationships, have
rigid parametrization, and their gains on unseen labels happen often at the
expense of weak performance on the labels seen during training. In this paper,
we propose a new input-label model which generalizes over previous such models,
addresses their limitations and does not compromise performance on seen labels.
The model consists of a joint non-linear input-label embedding with
controllable capacity and a joint-space-dependent classification unit which is
trained with cross-entropy loss to optimize classification performance. We
evaluate models on full-resource and low- or zero-resource text classification
of multilingual news and biomedical text with a large label set. Our model
outperforms monolingual and multilingual models which do not leverage label
semantics and previous joint input-label space models in both scenarios.
| 2019 | Computation and Language |
Multimodal Sentiment Analysis using Hierarchical Fusion with Context
Modeling | Multimodal sentiment analysis is a very actively growing field of research. A
promising area of opportunity in this field is to improve the multimodal fusion
mechanism. We present a novel feature fusion strategy that proceeds in a
hierarchical fashion, first fusing the modalities two at a time and only then
fusing all three modalities. On multimodal sentiment analysis of individual
utterances, our strategy outperforms conventional concatenation of features by
1%, which amounts to a 5% reduction in error rate. On utterance-level multimodal
sentiment analysis of multi-utterance video clips, for which current
state-of-the-art techniques incorporate contextual information from other
utterances of the same clip, our hierarchical fusion gives up to a 2.4%
improvement (almost 10% error rate reduction) over the currently used
concatenation. The implementation
of our method is publicly available in the form of open-source code.
| 2018 | Computation and Language |
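A schematic numpy sketch of the hierarchical fusion order described above: fuse the modalities two at a time, then fuse the pairwise results. The fixed-projection fuse_pair below is a stand-in for the paper's learned fusion layers.

```python
import numpy as np

def fuse_pair(a, b):
    # Hypothetical pairwise fusion: concatenation plus a fixed projection;
    # in the paper this would be a learned layer.
    w = np.ones((a.size + b.size, a.size)) / (a.size + b.size)
    return np.tanh(np.concatenate([a, b]) @ w)

def hierarchical_fusion(text, audio, video):
    """Fuse modalities two at a time, then fuse the pairwise results."""
    ta = fuse_pair(text, audio)
    tv = fuse_pair(text, video)
    av = fuse_pair(audio, video)
    return fuse_pair(fuse_pair(ta, tv), av)

d = 4
rng = np.random.default_rng(0)
out = hierarchical_fusion(rng.normal(size=d), rng.normal(size=d),
                          rng.normal(size=d))
print(out.shape)  # (4,)
```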
Evaluation of sentence embeddings in downstream and linguistic probing
tasks | Despite the fast developmental pace of new sentence embedding methods, it is
still challenging to find comprehensive evaluations of these different
techniques. In the past years, we saw significant improvements in the field of
sentence embeddings and especially towards the development of universal
sentence encoders that could provide inductive transfer to a wide variety of
downstream tasks. In this work, we perform a comprehensive evaluation of recent
methods using a wide variety of downstream and linguistic feature probing
tasks. We show that a simple approach using bag-of-words with a recently
introduced language model for deep context-dependent word embeddings yields
better results in many tasks when compared to sentence encoders trained
on entailment datasets. We also show, however, that we are still far away from
a universal encoder that can perform consistently across several downstream
tasks.
| 2018 | Computation and Language |
Biased Embeddings from Wild Data: Measuring, Understanding and Removing | Many modern Artificial Intelligence (AI) systems make use of data embeddings,
particularly in the domain of Natural Language Processing (NLP). These
embeddings are learnt from data that has been gathered "from the wild" and have
been found to contain unwanted biases. In this paper we make three
contributions towards measuring, understanding, and removing such biases. We
present a rigorous way to measure some of these biases, based on the use of
word lists created for social psychology applications; we observe how gender
bias in occupations reflects actual gender bias in the same occupations in the
real world; and finally we demonstrate how a simple projection can
significantly reduce the effects of embedding bias. All this is part of an
ongoing effort to understand how trust can be built into AI systems.
| 2018 | Computation and Language |
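The "simple projection" the abstract above reports as effective can be sketched directly: remove a vector's component along an estimated bias direction. How that direction is estimated (e.g., from he-she style word pairs) is outside this sketch.

```python
import numpy as np

def debias(vec, bias_direction):
    """Remove the component of vec along a bias direction (simple projection)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vec - (vec @ b) * b

rng = np.random.default_rng(0)
v, b = rng.normal(size=50), rng.normal(size=50)
v_debiased = debias(v, b)
# The debiased vector has (numerically) zero component along the bias direction.
print(abs(v_debiased @ b / np.linalg.norm(b)))  # ~0
```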
Incorporating Chinese Characters of Words for Lexical Sememe Prediction | Sememes are minimum semantic units of concepts in human languages, such that
each word sense is composed of one or multiple sememes. Words are usually
manually annotated with their sememes by linguists, and form linguistic
common-sense knowledge bases widely used in various NLP tasks. Recently, the
lexical sememe prediction task has been introduced. It consists of
automatically recommending sememes for words, which is expected to improve
annotation efficiency and consistency. However, existing methods of lexical
sememe prediction typically rely on the external context of words to represent
the meaning, which usually fails to deal with low-frequency and
out-of-vocabulary words. To address this issue for Chinese, we propose a novel
framework to take advantage of both internal character information and external
context information of words. We experiment on HowNet, a Chinese sememe
knowledge base, and demonstrate that our framework outperforms state-of-the-art
baselines by a large margin, and maintains a robust performance even for
low-frequency words.
| 2018 | Computation and Language |
Multimodal Grounding for Language Processing | This survey discusses how recent developments in multimodal processing
facilitate conceptual grounding of language. We categorize the information flow
in multimodal processing with respect to cognitive models of human information
processing and analyze different methods for combining multimodal
representations. Based on this methodological inventory, we discuss the benefit
of multimodal grounding for a variety of language processing tasks and the
challenges that arise. We particularly focus on multimodal grounding of verbs
which play a crucial role for the compositional power of language.
| 2019 | Computation and Language |
An Improved Text Sentiment Classification Model Using TF-IDF and Next
Word Negation | With the rapid growth of text sentiment analysis, the demand for automatic
classification of electronic documents has increased by leaps and bounds. The
paradigm of text classification or text mining has been the subject of many
research works in recent times. In this paper we propose a technique for text
sentiment classification using term frequency-inverse document frequency
(TF-IDF) along with Next Word Negation (NWN). We have also compared the
performances of binary bag of words model, TF-IDF model and TF-IDF with next
word negation (TF-IDF-NWN) model for text classification. Our proposed model is
then applied to three different text mining algorithms, and we found the linear
support vector machine (LSVM) to be the most appropriate to work with our proposed
model. The achieved results show a significant increase in accuracy compared to
earlier methods.
| 2018 | Computation and Language |
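A minimal sketch of a TF-IDF-NWN pipeline in the spirit of the abstract above, using scikit-learn; the negation word list, the NOT_ marking scheme, and the toy data are assumptions, not the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def next_word_negation(text):
    """Prefix the word following a negation with NOT_, a common NWN variant."""
    tokens, out, negate = text.lower().split(), [], False
    for tok in tokens:
        out.append("NOT_" + tok if negate else tok)
        negate = tok in {"not", "no", "never", "n't"}  # assumed negation list
    return " ".join(out)

docs = ["I do not like this movie", "I like this movie",
        "never buy this", "great buy"]
labels = [0, 1, 0, 1]  # toy sentiment labels

vec = TfidfVectorizer()
X = vec.fit_transform([next_word_negation(d) for d in docs])
clf = LinearSVC().fit(X, labels)
print(clf.predict(vec.transform([next_word_negation("do not like")])))
```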
Measuring Semantic Coherence of a Conversation | Conversational systems have become increasingly popular as a way for humans
to interact with computers. To be able to provide intelligent responses,
conversational systems must correctly model the structure and semantics of a
conversation. We introduce the task of measuring semantic (in)coherence in a
conversation with respect to background knowledge, which relies on the
identification of semantic relations between concepts introduced during a
conversation. We propose and evaluate graph-based and machine learning-based
approaches for measuring semantic coherence using knowledge graphs, their
vector space embeddings and word embedding models, as sources of background
knowledge. We demonstrate how these approaches are able to uncover different
coherence patterns in conversations on the Ubuntu Dialogue Corpus.
| 2018 | Computation and Language |
Semi-tied Units for Efficient Gating in LSTM and Highway Networks | Gating is a key technique used for integrating information from multiple
sources by long short-term memory (LSTM) models and has recently also been
applied to other models such as the highway network. Although gating is
powerful, it is rather expensive in terms of both computation and storage as
each gating unit uses a separate full weight matrix. This issue can be severe
since several gates can be used together in e.g. an LSTM cell. This paper
proposes a semi-tied unit (STU) approach to solve this efficiency issue, which
uses one shared weight matrix to replace those in all the units in the same
layer. The approach is termed "semi-tied" since extra parameters are used to
separately scale each of the shared output values. These extra scaling factors
are associated with the network activation functions and result in the use of
parameterised sigmoid, hyperbolic tangent, and rectified linear unit functions.
Speech recognition experiments using British English multi-genre broadcast data
showed that using STUs can reduce the calculation and storage cost by a factor
of three for highway networks and four for LSTMs, while giving similar word
error rates to the original models.
| 2018 | Computation and Language |
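The semi-tied unit idea above — one shared projection reused by all gates in a layer, with cheap per-gate scaling inside a parameterised sigmoid — can be sketched as follows; the shapes and random initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semi_tied_gates(x, W_shared, scales, biases):
    """Compute several gates from ONE shared projection (the semi-tied idea).

    Instead of a full weight matrix per gate, all gates reuse W_shared @ x and
    differ only in cheap per-unit scaling factors inside the sigmoid.
    """
    shared = W_shared @ x                      # computed once for all gates
    return [sigmoid(s * shared + b) for s, b in zip(scales, biases)]

d_in, d_out, n_gates = 6, 4, 3                 # e.g. input/forget/output gates
rng = np.random.default_rng(0)
gates = semi_tied_gates(
    rng.normal(size=d_in),
    rng.normal(size=(d_out, d_in)),            # one matrix instead of n_gates
    rng.normal(size=(n_gates, d_out)),         # per-gate scaling vectors
    np.zeros((n_gates, d_out)),
)
print([g.shape for g in gates])  # [(4,), (4,), (4,)]
```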
SubGram: Extending Skip-gram Word Representation with Substrings | Skip-gram (word2vec) is a recent method for creating vector representations
of words ("distributed word representations") using a neural network. The
representation gained popularity in various areas of natural language
processing, because it seems to capture syntactic and semantic information
about words without any explicit supervision in this respect. We propose
SubGram, a refinement of the Skip-gram model that also considers word
structure during the training process, achieving large gains on the original
Skip-gram test set.
| 2016 | Computation and Language |
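Although the abstract does not spell out the exact substring scheme, the general idea of enriching Skip-gram input with word-internal substrings can be sketched like this; the boundary markers and n-gram range are assumptions.

```python
def substrings(word, min_n=3, max_n=6):
    """Enumerate character substrings of a word, plus the word itself.

    Boundary markers distinguish prefixes/suffixes from word-internal spans.
    """
    w = f"<{word}>"
    subs = {w}
    for n in range(min_n, max_n + 1):
        for i in range(len(w) - n + 1):
            subs.add(w[i:i + n])
    return sorted(subs)

print(substrings("where"))
# ['<wh', '<whe', '<wher', '<where', '<where>', 'ere', 'ere>', ...]
```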
Nonparametric Topic Modeling with Neural Inference | This work focuses on combining nonparametric topic models with Auto-Encoding
Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, where the
topics are treated as trainable parameters and the document-specific topic
proportions are obtained by a stick-breaking construction. The inference of
iTM-VAE is modeled by neural networks such that it can be computed in a simple
feed-forward manner. We also describe how to introduce a hyper-prior into
iTM-VAE so as to model the uncertainty of the prior parameter. Actually, the
hyper-prior technique is quite general and we show that it can be applied to
other AEVB-based models to alleviate the collapse-to-prior problem
elegantly. Moreover, we also propose HiTM-VAE, where the document-specific
topic distributions are generated in a hierarchical manner. HiTM-VAE is even
more flexible and can generate topic distributions with better variability.
Experimental results on 20News and Reuters RCV1-V2 datasets show that the
proposed models outperform the state-of-the-art baselines significantly. The
advantages of the hyper-prior technique and the hierarchical model construction
are also confirmed by experiments.
| 2018 | Computation and Language |
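The stick-breaking construction used above for the document-specific topic proportions works as follows; in iTM-VAE-style models the stick fractions would come from the inference network, while here they are fixed numbers for illustration.

```python
import numpy as np

def stick_breaking(v):
    """Turn stick fractions v_k in (0,1) into topic proportions.

    pi_k = v_k * prod_{j<k} (1 - v_j); the last piece takes the remainder,
    so the proportions sum to one.
    """
    pieces = []
    remaining = 1.0
    for vk in v:
        pieces.append(vk * remaining)
        remaining *= (1.0 - vk)
    pieces.append(remaining)
    return np.array(pieces)

pi = stick_breaking([0.5, 0.3, 0.8])
print(pi, pi.sum())  # [0.5 0.15 0.28 0.07], sums to ~1.0
```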
On Enhancing Speech Emotion Recognition using Generative Adversarial
Networks | Generative Adversarial Networks (GANs) have gained a lot of attention from
machine learning community due to their ability to learn and mimic an input
data distribution. GANs consist of a discriminator and a generator working in
tandem, playing a min-max game to learn a target underlying data distribution
when fed with data points sampled from a simpler distribution (like a uniform or
Gaussian distribution). Once trained, they allow synthetic generation of
examples sampled from the target distribution. We investigate the application
of GANs to generate synthetic feature vectors used for speech emotion
recognition. Specifically, we investigate two setups: (i) a vanilla GAN that
learns the distribution of a lower dimensional representation of the actual
higher dimensional feature vector and, (ii) a conditional GAN that learns the
distribution of the higher dimensional feature vectors conditioned on the
labels, or the emotional class to which they belong. As a potential practical
application of these synthetically generated samples, we measure any
improvement in a classifier's performance when the synthetic data is used along
with real data for training. We perform cross-validation analyses followed by a
cross-corpus study.
| 2018 | Computation and Language |
Unsupervised Word Segmentation from Speech with Attention | We present a first attempt to perform attentional word segmentation directly
from the speech signal, with the final goal to automatically identify lexical
units in a low-resource, unwritten language (UL). Our methodology assumes a
pairing between recordings in the UL with translations in a well-resourced
language. It uses Acoustic Unit Discovery (AUD) to convert speech into a
sequence of pseudo-phones that is segmented using neural soft-alignments
produced by a neural machine translation model. Evaluation uses an actual Bantu
UL, Mboshi; comparisons to monolingual and bilingual baselines illustrate the
potential of attentional word segmentation for language documentation.
| 2018 | Computation and Language |
Combining Word Feature Vector Method with the Convolutional Neural
Network for Slot Filling in Spoken Language Understanding | Slot filling is an important problem in Spoken Language Understanding (SLU)
and Natural Language Processing (NLP), which involves identifying a user's
intent and assigning a semantic concept to each word in a sentence. This paper
presents a word feature vector method and combines it into the convolutional
neural network (CNN). We consider 18 word features and each word feature is
constructed by merging similar word labels. By introducing the concept of an
external library, we propose a feature set approach that is beneficial for
building the relationship between a word from the training dataset and the
feature. Computational results are reported using the ATIS dataset and
comparisons with traditional CNN as well as bi-directional sequential CNN are
also presented.
| 2018 | Computation and Language |
GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model
Shrinking | Model compression is essential for serving large deep neural nets on devices
with limited resources or applications that require real-time responses. As a
case study, a state-of-the-art neural language model usually consists of one or
more recurrent layers sandwiched between an embedding layer used for
representing input tokens and a softmax layer for generating output tokens. For
problems with a very large vocabulary size, the embedding and the softmax
matrices can account for more than half of the model size. For instance, the
bigLSTM model achieves state-of-the-art performance on the One-Billion-Word
(OBW) dataset with a vocabulary of around 800k words, and its word embedding
and softmax matrices use more than 6 GB of space and are responsible for over
90% of the model parameters. In this paper, we propose GroupReduce, a novel
compression method for neural language models based on vocabulary-partition
(block) low-rank matrix approximation and the inherent frequency distribution
of tokens (the power-law distribution of words). The experimental results show
our method can significantly outperform traditional compression methods such
as low-rank approximation and pruning. On the OBW dataset, our method achieved
a 6.6-times compression rate for the embedding and softmax matrices, and when
combined with quantization it can achieve a 26-times compression rate, which
translates to a factor of 12.8 times compression for the entire model with very
little degradation in perplexity.
| 2,018 | Computation and Language |
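A minimal numpy sketch of the block-wise idea: partition embedding rows by token frequency and assign frequent blocks a higher SVD rank. The block count and rank schedule are assumptions for illustration, not GroupReduce's actual settings.

```python
import numpy as np

def block_low_rank(embedding, token_freq, n_blocks=4, base_rank=8):
    """Approximate an embedding matrix block by block.

    Rows are grouped by descending token frequency; more frequent blocks
    receive a larger SVD rank. The rank schedule is an illustrative choice.
    """
    order = np.argsort(-token_freq)                # frequent tokens first
    blocks = np.array_split(order, n_blocks)
    approx = np.empty_like(embedding)
    for i, rows in enumerate(blocks):
        rank = max(1, base_rank * (n_blocks - i))  # frequent block -> higher rank
        U, s, Vt = np.linalg.svd(embedding[rows], full_matrices=False)
        approx[rows] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return approx

# Toy usage: 10k-word vocabulary, 64-dim embeddings, Zipf-like frequencies.
rng = np.random.default_rng(0)
E = rng.standard_normal((10_000, 64))
freq = 1.0 / np.arange(1, 10_001)
E_hat = block_low_rank(E, freq)
print(np.linalg.norm(E - E_hat) / np.linalg.norm(E))  # relative error
```

Because only the truncated factors per block need to be stored, frequent words keep more capacity while the long tail is compressed aggressively, mirroring the power-law argument above.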
A Comparison of Transformer and Recurrent Neural Networks on
Multilingual Neural Machine Translation | Recently, neural machine translation (NMT) has been extended to
multilinguality, that is to handle more than one translation direction with a
single system. Multilingual NMT showed competitive performance against pure
bilingual systems. Notably, in low-resource settings, it proved to work
effectively and efficiently, thanks to a shared representation space that is
forced across languages and induces a sort of transfer learning. Furthermore,
multilingual NMT enables so-called zero-shot inference across language pairs
never seen at training time. Despite the increasing interest in this framework,
an in-depth analysis of what a multilingual NMT model is capable of and what it
is not is still missing. Motivated by this, our work (i) provides a
quantitative and comparative analysis of the translations produced by
bilingual, multilingual and zero-shot systems; (ii) investigates the
translation quality of two of the currently dominant neural architectures in
MT, which are the Recurrent and the Transformer ones; and (iii) quantitatively
explores how the closeness between languages influences the zero-shot
translation. Our analysis leverages multiple professional post-edits of
automatic translations by several different systems and focuses both on
standard automatic metrics (BLEU and TER) and on widely used error categories,
namely lexical, morphological, and word order errors.
| 2,018 | Computation and Language |
Comparative Analysis of Neural QA models on SQuAD | The task of Question Answering has gained prominence in the past few decades
for testing the ability of machines to understand natural language. Large
datasets for Machine Reading have led to the development of neural models that
cater to deeper language understanding compared to information retrieval tasks.
Different components in these neural architectures are intended to tackle
different challenges. As a first step towards achieving generalization across
multiple domains, we attempt to understand and compare the peculiarities of
existing end-to-end neural models on the Stanford Question Answering Dataset
(SQuAD) by performing quantitative as well as qualitative analysis of the
results attained by each of them. We observed that prediction errors reflect
certain model-specific biases, which we further discuss in this paper.
| 2,018 | Computation and Language |
Private Text Classification | Confidential text corpora exist in many forms, but do not allow arbitrary
sharing. We explore how such private corpora can be used through
privacy-preserving text analytics. We construct typical text processing
applications using
appropriate privacy preservation techniques (including homomorphic encryption,
Rademacher operators and secure computation). We set out the preliminary
materials from Rademacher operators for binary classifiers, and then construct
basic text processing approaches to match those binary classifiers.
| 2,018 | Computation and Language |
A Syntactically Constrained Bidirectional-Asynchronous Approach for
Emotional Conversation Generation | Traditional neural language models tend to generate generic replies with poor
logic and no emotion. In this paper, a syntactically constrained
bidirectional-asynchronous approach for emotional conversation generation
(E-SCBA) is proposed to address this issue. In our model, pre-generated emotion
keywords and topic keywords are asynchronously introduced into the process of
decoding. This differs markedly from most existing methods, which generate
replies from the first word to the last. Experimental results indicate that
our approach not only improves the diversity of replies but also gains a boost
in both logic and emotion compared with the baselines.
| 2,018 | Computation and Language |
EmotionX-DLC: Self-Attentive BiLSTM for Detecting Sequential Emotions in
Dialogue | In this paper, we propose a self-attentive bidirectional long short-term
memory (SA-BiLSTM) network to predict multiple emotions for the EmotionX
challenge. The BiLSTM models word dependencies and extracts the most relevant
features for emotion classification. Building on top of the BiLSTM, the
self-attentive network can model the contextual dependencies between
utterances, which helps classify ambiguous emotions. We achieve unweighted
accuracy scores of 59.6 and 55.0 on the Friends and EmotionPush test sets,
respectively.
| 2,018 | Computation and Language |
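A minimal PyTorch sketch of a self-attentive BiLSTM classifier in the spirit described above; the vocabulary size, dimensions, and single-head additive attention are assumed for illustration and do not reproduce the submitted system.

```python
import torch
import torch.nn as nn

class SABiLSTM(nn.Module):
    """BiLSTM over utterance tokens with a self-attentive pooling layer.

    Sizes and single-head attention are illustrative assumptions.
    """
    def __init__(self, vocab=10_000, emb=100, hidden=128, n_emotions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # scores each time step
        self.out = nn.Linear(2 * hidden, n_emotions)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h, _ = self.bilstm(self.embed(tokens))    # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        context = (weights * h).sum(dim=1)        # attention-weighted summary
        return self.out(context)

model = SABiLSTM()
logits = model(torch.randint(0, 10_000, (2, 12)))  # two utterances of 12 tokens
print(logits.shape)                                # torch.Size([2, 4])
```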
Response Generation by Context-aware Prototype Editing | Open domain response generation has achieved remarkable progress in recent
years, but sometimes yields short and uninformative responses. We propose a new
paradigm for response generation, that is response generation by editing, which
significantly increases the diversity and informativeness of the generation
results. Our assumption is that a plausible response can be generated by
slightly revising an existing response prototype. The prototype is retrieved
from a pre-defined index and provides a good starting point for generation because
it is grammatical and informative. We design a response editing model, where an
edit vector is formed by considering differences between a prototype context
and a current context, and then the edit vector is fed to a decoder to revise
the prototype response for the current context. Experimental results on a large
scale dataset demonstrate that the response editing model outperforms
generative and retrieval-based models on various aspects.
| 2,018 | Computation and Language |
End-to-End Speech Recognition From the Raw Waveform | State-of-the-art speech recognition systems rely on fixed, hand-crafted
features such as mel-filterbanks to preprocess the waveform before the training
pipeline. In this paper, we study end-to-end systems trained directly from the
raw waveform, building on two alternatives for trainable replacements of
mel-filterbanks that use a convolutional architecture. The first one is
inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al., 2015),
and the second one by the scattering transform (Zeghidour et al., 2017). We
propose two modifications to these architectures and systematically compare
them to mel-filterbanks, on the Wall Street Journal dataset. The first
modification is the addition of an instance normalization layer, which greatly
improves on the gammatone-based trainable filterbanks and speeds up the
training of the scattering-based filterbanks. The second one relates to the
low-pass filter used in these approaches. These modifications consistently
improve performances for both approaches, and remove the need for a careful
initialization in scattering-based trainable filterbanks. In particular, we
show a consistent improvement in word error rate of the trainable filterbanks
relative to comparable mel-filterbanks. This is the first time that end-to-end
models trained from the raw signal significantly outperform mel-filterbanks on
a large-vocabulary task under clean recording conditions.
| 2,018 | Computation and Language |
Using J-K fold Cross Validation to Reduce Variance When Tuning NLP
Models | K-fold cross validation (CV) is a popular method for estimating the true
performance of machine learning models, allowing model selection and parameter
tuning. However, the very process of CV requires random partitioning of the
data and so our performance estimates are in fact stochastic, with variability
that can be substantial for natural language processing tasks. We demonstrate
that these unstable estimates cannot be relied upon for effective parameter
tuning. The resulting tuned parameters are highly sensitive to how our data is
partitioned, meaning that we often select sub-optimal parameter choices and
have serious reproducibility issues.
Instead, we propose to use the less variable J-K-fold CV, in which J
independent K-fold cross validations are used to assess performance. Our main
contributions are extending J-K-fold CV from performance estimation to
parameter tuning and investigating how to choose J and K. We argue that
variability is more important than bias for effective tuning and so advocate
lower choices of K than are typically seen in the NLP literature, instead
using the saved computation to increase J. To demonstrate the generality of
our recommendations, we investigate a wide range of case studies: sentiment
classification (both general and target-specific), part-of-speech tagging and
document classification.
| 2,018 | Computation and Language |
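A short sketch of J-K-fold CV with scikit-learn: run J independent K-fold partitions and average, so the tuning signal is less hostage to any single random split. The dataset and hyper-parameter values are toy assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

def jk_fold_score(model, X, y, J=5, K=3):
    """Average J independent K-fold CV estimates to reduce partition variance."""
    scores = []
    for j in range(J):
        cv = KFold(n_splits=K, shuffle=True, random_state=j)  # fresh partition
        scores.append(cross_val_score(model, X, y, cv=cv).mean())
    return np.mean(scores), np.std(scores)

# Toy hyper-parameter sweep: compare regularization strengths.
X, y = make_classification(n_samples=300, random_state=0)
for C in (0.01, 1.0, 100.0):
    mean, std = jk_fold_score(LogisticRegression(C=C, max_iter=1000), X, y)
    print(f"C={C:>6}: {mean:.3f} +/- {std:.3f}")
```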
Learning from Chunk-based Feedback in Neural Machine Translation | We empirically investigate learning from partial feedback in neural machine
translation (NMT), when partial feedback is collected by asking users to
highlight a correct chunk of a translation. We propose a simple and effective
way of utilizing such feedback in NMT training. We demonstrate how the common
machine translation problem of domain mismatch between training and deployment
can be reduced solely based on chunk-level user feedback. We conduct a series
of simulation experiments to test the effectiveness of the proposed method. Our
results show that chunk-level feedback outperforms sentence-based feedback by
up to 2.61% BLEU absolute.
| 2,018 | Computation and Language |
Recurrent DNNs and its Ensembles on the TIMIT Phone Recognition Task | In this paper, we have investigated recurrent deep neural networks (DNNs) in
combination with regularization techniques such as dropout, zoneout, and
regularization post-layer. As a benchmark, we chose the TIMIT phone recognition
task due to its popularity and broad availability in the community. It also
simulates a low-resource scenario that is helpful in minor languages. Also, we
prefer the phone recognition task because it is much more sensitive to an
acoustic model quality than a large vocabulary continuous speech recognition
task. In recent years, recurrent DNNs have pushed down the error rates in
automatic speech recognition, but there has been no clear winner among the
proposed architectures. Dropout was used as the regularization technique in
most cases, while its combination with other regularization techniques and
with model ensembles was left unexplored. In our experiments, an ensemble of
recurrent DNNs performed best, achieving an average phone error rate of 14.84%
over 10 experiments (minimum 14.69%) on the core test set, which is slightly
lower than the best-published PER to date, to our knowledge. Finally, in
contrast to most papers, we publish open-source scripts to easily replicate
the results and to help continue the development.
| 2,018 | Computation and Language |
Dynamic Multi-Level Multi-Task Learning for Sentence Simplification | Sentence simplification aims to improve readability and understandability,
based on several operations such as splitting, deletion, and paraphrasing.
However, a valid simplified sentence should also be logically entailed by its
input sentence. In this work, we first present a strong pointer-copy mechanism
based sequence-to-sequence sentence simplification model, and then improve its
entailment and paraphrasing capabilities via multi-task learning with related
auxiliary tasks of entailment and paraphrase generation. Moreover, we propose a
novel 'multi-level' layered soft sharing approach where each auxiliary task
shares different (higher versus lower) level layers of the sentence
simplification model, depending on the task's semantic versus lexico-syntactic
nature. We also introduce a novel multi-armed bandit based training approach
that dynamically learns how to effectively switch across tasks during
multi-task learning. Experiments on multiple popular datasets demonstrate that
our model outperforms competitive simplification systems on the SARI and FKGL
automatic metrics as well as in human evaluation. Further, we present several ablation
analyses on alternative layer sharing methods, soft versus hard sharing,
dynamic multi-armed bandit sampling approaches, and our model's learned
entailment and paraphrasing skills.
| 2,018 | Computation and Language |
A Scalable Machine Learning Approach for Inferring Probabilistic
US-LI-RADS Categorization | We propose a scalable computerized approach for large-scale inference of
Liver Imaging Reporting and Data System (LI-RADS) final assessment categories
in narrative ultrasound (US) reports. Although our model was trained on reports
created using a LI-RADS template, it was also able to infer LI-RADS scoring for
unstructured reports that were created before the LI-RADS guidelines were
established. No human-labelled data was required in any step of this study; for
training, LI-RADS scores were automatically extracted from those reports that
contained structured LI-RADS scores, and the model transferred the derived
knowledge to reasoning on unstructured radiology reports. By providing automated LI-RADS
categorization, our approach may enable standardizing screening recommendations
and treatment planning of patients at risk for hepatocellular carcinoma, and it
may facilitate AI-based healthcare research with US images by offering large
scale text mining and data gathering opportunities from standard hospital
clinical data repositories.
| 2,018 | Computation and Language |
Speaker Adapted Beamforming for Multi-Channel Automatic Speech
Recognition | This paper presents, in the context of multi-channel ASR, a method to adapt a
mask-based, statistically optimal beamforming approach to a speaker of
interest. The beamforming vector of the statistically optimal beamformer is
computed by utilizing speech and noise masks, which are estimated by a neural
network. The proposed adaptation approach is based on the integration of the
beamformer, which includes the mask estimation network, and the acoustic model
of the ASR system. This allows for the propagation of the training error, from
the acoustic modeling cost function, all the way through the beamforming
operation and through the mask estimation network. By using the results of a
first-pass recognition and by keeping all other parameters fixed, the mask
estimation network can be fine-tuned by retraining. Utterances of a speaker of
interest can thus be used in a two-pass approach to optimize the
beamforming for the speech characteristics of that specific speaker. It is
shown that this approach improves the ASR performance of a state-of-the-art
multi-channel ASR system on the CHiME-4 data. Furthermore, the effect of the
adaptation on the estimated speech masks is discussed.
| 2,018 | Computation and Language |
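For orientation, here is a generic mask-based MVDR beamformer in numpy; it illustrates how neural masks yield the speech and noise covariances from which beamforming weights are computed. The paper's exact statistically optimal beamformer and its adaptation loop are not reproduced; the steering-vector choice and diagonal loading below are common assumptions.

```python
import numpy as np

def mvdr_from_masks(stft, speech_mask, noise_mask):
    """Per-frequency MVDR beamformer from spectral masks.

    stft: (channels, frames, freqs) complex STFT of the array signals.
    Masks: (frames, freqs) in [0, 1], e.g., predicted by a neural network.
    The steering vector is taken as the principal eigenvector of the speech
    covariance -- one common choice, assumed here for illustration.
    """
    C, T, F = stft.shape
    out = np.zeros((T, F), dtype=complex)
    for f in range(F):
        X = stft[:, :, f]                                   # (C, T)
        phi_s = (speech_mask[:, f] * X) @ X.conj().T / T    # speech covariance
        phi_n = (noise_mask[:, f] * X) @ X.conj().T / T     # noise covariance
        phi_n += 1e-6 * np.eye(C)                           # diagonal loading
        d = np.linalg.eigh(phi_s)[1][:, -1]                 # steering vector
        num = np.linalg.solve(phi_n, d)
        w = num / (d.conj() @ num)                          # MVDR weights
        out[:, f] = w.conj() @ X                            # beamformed STFT
    return out
```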
Joint Neural Entity Disambiguation with Output Space Search | In this paper, we present a novel model for entity disambiguation that
combines both local contextual information and global evidences through Limited
Discrepancy Search (LDS). Given an input document, we start from a complete
solution constructed by a local model and conduct a search in the space of
possible corrections to improve the local solution from a global viewpoint.
Our search utilizes a heuristic function to focus more on the least confident
local decisions and a pruning function to score the global solutions based on
their local fitness and the global coherence among the predicted entities.
Experimental results on CoNLL 2003 and TAC 2010 benchmarks verify the
effectiveness of our model.
| 2,018 | Computation and Language |
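A compact, generic sketch of Limited Discrepancy Search over a sequence of ranked local decisions; the global `score` callable and the toy coherence example are hypothetical stand-ins for the paper's heuristic and pruning functions.

```python
def lds(ranked_choices, score, max_discrepancies=2):
    """Limited Discrepancy Search over a sequence of local decisions.

    ranked_choices[i] lists candidates for position i, best-first according
    to a local model. A 'discrepancy' means taking a non-top candidate.
    `score` evaluates a complete assignment globally (hypothetical signature).
    Iterative deepening over the budget re-explores some paths; that
    simplification is accepted here for brevity.
    """
    n = len(ranked_choices)
    best, best_score = None, float("-inf")

    def search(i, partial, budget):
        nonlocal best, best_score
        if i == n:
            s = score(partial)
            if s > best_score:
                best, best_score = partial[:], s
            return
        for rank, cand in enumerate(ranked_choices[i]):
            cost = 1 if rank > 0 else 0      # deviating from the top choice
            if cost > budget:
                break
            partial.append(cand)
            search(i + 1, partial, budget - cost)
            partial.pop()

    for d in range(max_discrepancies + 1):   # allow progressively more deviations
        search(0, [], d)
    return best, best_score

# Toy usage: pick entities so that neighbours agree (a stand-in for coherence).
choices = [["Paris_FR", "Paris_TX"], ["France", "Texas"]]
coherent = {("Paris_FR", "France"), ("Paris_TX", "Texas")}
print(lds(choices, lambda a: 1.0 if tuple(a) in coherent else 0.0))
```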
Automated Fact Checking: Task formulations, methods and future
directions | The recently increased focus on misinformation has stimulated research in
fact checking, the task of assessing the truthfulness of a claim. Research in
automating this task has been conducted in a variety of disciplines including
natural language processing, machine learning, knowledge representation,
databases, and journalism. While there has been substantial progress, relevant
papers and articles have been published in research communities that are often
unaware of each other and use inconsistent terminology, thus impeding
understanding and further progress. In this paper we survey automated fact
checking research stemming from natural language processing and related
disciplines, unifying the task formulations and methodologies across papers and
authors. Furthermore, we highlight the use of evidence as an important
distinguishing factor among them, cutting across task formulations and methods.
We conclude by proposing avenues for future NLP research on automated fact
checking.
| 2,018 | Computation and Language |
Word Tagging with Foundational Ontology Classes: Extending the
WordNet-DOLCE Mapping to Verbs | Semantic annotation is fundamental to deal with large-scale lexical
information, mapping the information to an enumerable set of categories over
which rules and algorithms can be applied, and foundational ontology classes
can be used as a formal set of categories for such tasks. A previous alignment
between WordNet noun synsets and DOLCE provided a starting point for
ontology-based annotation, but in NLP tasks verbs are also of substantial
importance. This work presents an extension to the WordNet-DOLCE noun mapping,
aligning verbs according to their links to nouns denoting perdurants,
transferring to the verb the DOLCE class assigned to the noun that best
represents that verb's occurrence. To evaluate the usefulness of this resource,
we implemented a foundational ontology-based semantic annotation framework
that assigns a high-level foundational category to each word or phrase in a
text, and compared it to a similar annotation tool, obtaining an increase of
9.05% in accuracy.
| 2,016 | Computation and Language |
Categorization of Semantic Roles for Dictionary Definitions | Understanding the semantic relationships between terms is a fundamental task
in natural language processing applications. While structured resources that
can express those relationships in a formal way, such as ontologies, are still
scarce, a large number of linguistic resources gathering dictionary
definitions are becoming available. Understanding the semantic structure of
natural language definitions, however, is fundamental to making them useful in
semantic interpretation tasks. Based on an analysis of a subset of WordNet's glosses, we
propose a set of semantic roles that compose the semantic structure of a
dictionary definition, and show how they are related to the definition's
syntactic configuration, identifying patterns that can be used in the
development of information extraction frameworks and semantic models.
| 2,016 | Computation and Language |
Using Neural Network for Identifying Clickbaits in Online News Media | Online news media sometimes use misleading headlines to lure users to open
the news article. These catchy headlines, which attract users but disappoint
them in the end, are called clickbait. Because of the importance of automatic
clickbait detection in online media, many machine learning methods have been
proposed and employed to find clickbait headlines. In this research, a model
using deep learning methods is proposed to find the clickbait in the Clickbait
Challenge 2017 dataset. The proposed model achieved first place in
the Clickbait Challenge 2017 in terms of Mean Squared Error. Also, data
analytics and visualization techniques are employed to explore and discover the
provided dataset to get more insight from the data.
| 2,018 | Computation and Language |
Semantic Relation Classification: Task Formalisation and Refinement | The identification of semantic relations between terms within texts is a
fundamental task in Natural Language Processing which can support applications
requiring a lightweight semantic interpretation model. Currently, semantic
relation classification concentrates on relations which are evaluated over
open-domain data. This work provides a critique on the set of abstract
relations used for semantic relation classification with regard to their
ability to express relationships between terms found in domain-specific
corpora. Based on this analysis, this work proposes an
alternative semantic relation model based on reusing and extending the set of
abstract relations present in the DOLCE ontology. The resulting set of
relations is well grounded, captures a wide range of relations, and
could thus be used as a foundation for automatic classification of semantic
relations.
| 2,016 | Computation and Language |
Building a Knowledge Graph from Natural Language Definitions for
Interpretable Text Entailment Recognition | Natural language definitions of terms can serve as a rich source of
knowledge, but structuring them into a comprehensible semantic model is
essential to enable them to be used in semantic interpretation tasks. We
propose a method and provide a set of tools for automatically building a
graph-structured world knowledge base from natural language definitions. Adopting a conceptual
model composed of a set of semantic roles for dictionary definitions, we
trained a classifier for automatically labeling definitions, preparing the data
to be later converted to a graph representation. WordNetGraph, a knowledge
graph built out of noun and verb WordNet definitions according to this
methodology, was successfully used in an interpretable text entailment
recognition approach which uses paths in this graph to provide clear
justifications for entailment decisions.
| 2,018 | Computation and Language |
Opinion Dynamics Modeling for Movie Review Transcripts Classification
with Hidden Conditional Random Fields | In this paper, the main goal is to detect a movie reviewer's opinion using
hidden conditional random fields. This model allows us to capture the dynamics
of the reviewer's opinion in the transcripts of long unsegmented audio reviews
that are analyzed by our system. High level linguistic features are computed at
the level of inter-pausal segments. The features include syntactic features, a
statistical word embedding model and subjectivity lexicons. The proposed system
is evaluated on the ICT-MMMO corpus. We obtain an F1-score of 82%, which is
better than logistic regression and recurrent neural network approaches. We
also offer a discussion that sheds some light on the capacity of our system to
adapt the word embedding model learned from general written text data to
spoken movie reviews and thus model the dynamics of the opinion.
| 2,018 | Computation and Language |
StructVAE: Tree-structured Latent Variable Models for Semi-supervised
Semantic Parsing | Semantic parsing is the task of transducing natural language (NL) utterances
into formal meaning representations (MRs), commonly represented as tree
structures. Annotating NL utterances with their corresponding MRs is expensive
and time-consuming, and thus the limited availability of labeled data often
becomes the bottleneck of data-driven, supervised models. We introduce
StructVAE, a variational auto-encoding model for semi-supervised semantic
parsing, which learns both from limited amounts of parallel data, and
readily-available unlabeled NL utterances. StructVAE models latent MRs not
observed in the unlabeled data as tree-structured latent variables. Experiments
on semantic parsing on the ATIS domain and Python code generation show that
with extra unlabeled data, StructVAE outperforms strong supervised models.
| 2,018 | Computation and Language |
Multi-Layer Ensembling Techniques for Multilingual Intent Classification | In this paper we determine how multi-layer ensembling improves performance on
multilingual intent classification. We develop a novel multi-layer ensembling
approach that ensembles both different model initializations and different
model architectures. We also introduce a new banking domain dataset and compare
results against the standard ATIS dataset and the Chinese SMP2017 dataset to
determine ensembling performance in multilingual and multi-domain contexts. We
run ensemble experiments across all three datasets, and conclude that
ensembling provides significant performance increases, and that multi-layer
ensembling is a no-risk way to improve performance on intent classification. We
also find that a diverse ensemble of simple models can reach performance
comparable to that of much more sophisticated state-of-the-art models. Our
best F1 scores on ATIS, Banking, and SMP are 97.54%, 91.79%, and 93.55%,
respectively, which
compare well with the state-of-the-art on ATIS and best submission to the
SMP2017 competition. The total ensembling performance increases we achieve are
0.23%, 1.96%, and 4.04% F1, respectively.
| 2,018 | Computation and Language |
RSDD-Time: Temporal Annotation of Self-Reported Mental Health Diagnoses | Self-reported diagnosis statements have been widely employed in studying
language related to mental health in social media. However, existing research
has largely ignored the temporality of mental health diagnoses. In this work,
we introduce RSDD-Time: a new dataset of 598 manually annotated self-reported
depression diagnosis posts from Reddit that include temporal information about
the diagnosis. Annotations include whether a mental health condition is present
and how recently the diagnosis happened. Furthermore, we include exact temporal
spans that relate to the date of diagnosis. This information is valuable for
various computational methods to examine mental health through social media
because one's mental health state is not static. We also test several baseline
classification and extraction approaches, which suggest that extracting
temporal information from self-reported diagnosis statements is challenging.
| 2,018 | Computation and Language |
A Survey of Recent DNN Architectures on the TIMIT Phone Recognition Task | In this survey paper, we have evaluated several recent deep neural network
(DNN) architectures on a TIMIT phone recognition task. We chose the TIMIT
corpus due to its popularity and broad availability in the community. It also
simulates a low-resource scenario that is helpful in minor languages. Also, we
prefer the phone recognition task because it is much more sensitive to an
acoustic model quality than a large vocabulary continuous speech recognition
(LVCSR) task. In recent years, many published DNN papers have reported results
on TIMIT. However, the reported phone error rates (PERs) were often much
higher than the PER of a simple feed-forward (FF) DNN. That was the main
motivation of this paper: to provide baseline DNNs with open-source scripts so
that future papers can easily replicate the baseline results with the lowest
possible PERs. To our knowledge, the best-achieved PER of this survey is better than
the best-published PER to date.
| 2,018 | Computation and Language |
Ontology Alignment in the Biomedical Domain Using Entity Definitions and
Context | Ontology alignment is the task of identifying semantically equivalent
entities from two given ontologies. Different ontologies have different
representations of the same entity, resulting in a need to de-duplicate
entities when merging ontologies. We propose a method for enriching entities in
an ontology with external definition and context information, and use this
additional information for ontology alignment. We develop a neural architecture
capable of encoding the additional information when available, and show that
the addition of external data results in an F1-score of 0.69 on the Ontology
Alignment Evaluation Initiative (OAEI) largebio SNOMED-NCI subtask, comparable
with the entity-level matchers in a SOTA system.
| 2,018 | Computation and Language |
TxPI-u: A Resource for Personality Identification of Undergraduates | Resources such as labeled corpora are necessary to train automatic models
within the natural language processing (NLP) field. Historically, a large
number of resources regarding a broad range of problems are available, mostly
in English. One such problem is known as Personality Identification, where,
based on a psychological model (e.g., the Big Five Model), the goal is to find
the traits of a subject's personality given, for instance, a text written by
the same subject. In this paper we introduce a new corpus in Spanish called
Texts for Personality Identification (TxPI). This corpus will help to develop
models to automatically assign a personality trait to an author of a text
document. Our corpus, TxPI-u, contains information on 416 Mexican
undergraduate students, along with demographic information such as age,
gender, and the academic program in which they are enrolled. Finally, as an additional contribution, we
present a set of baselines to provide a comparison scheme for further research.
| 2,018 | Computation and Language |
A Supervised Approach To The Interpretation Of Imperative To-Do Lists | To-do lists are a popular medium for personal information management. As
to-do tasks are increasingly tracked in electronic form with mobile and
desktop organizers, so grows the potential for software support of the
corresponding tasks by means of intelligent agents. While there has been work in the area of
personal assistants for to-do tasks, no work has focused on classifying user
intention and information extraction as we do. We show that our methods perform
well across two corpora that span sub-domains, one of which we released.
| 2,018 | Computation and Language |
Injecting Relational Structural Representation in Neural Networks for
Question Similarity | Effectively using full syntactic parsing information in Neural Networks (NNs)
to solve relational tasks, e.g., question similarity, is still an open problem.
In this paper, we propose to inject structural representations in NNs by (i)
learning an SVM model using Tree Kernels (TKs) on relatively few pairs of
questions (a few thousand), as gold standard (GS) training data is typically
scarce, (ii) predicting labels on a very large corpus of question pairs, and
(iii) pre-training NNs on such large corpus. The results on Quora and SemEval
question similarity datasets show that NNs trained with our approach can learn
more accurate models, especially after fine-tuning on GS.
| 2,018 | Computation and Language |
An empirical study on the names of points of interest and their changes
with geographic distance | While Points Of Interest (POIs), such as restaurants, hotels, and barber
shops, are part of urban areas irrespective of their specific locations, the
names of these POIs often reveal valuable information related to local culture,
landmarks, influential families, figures, events, and so on. Place names have
long been studied by geographers, e.g., to understand their origins and
relations to family names. However, there is a lack of large-scale empirical
studies that examine the localness of place names and their changes with
geographic distance. In addition to enhancing our understanding of the
coherence of geographic regions, such empirical studies are also significant
for geographic information retrieval where they can inform computational models
and improve the accuracy of place name disambiguation. In this work, we conduct
an empirical study based on 112,071 POIs in seven US metropolitan areas
extracted from an open Yelp dataset. We propose to adopt term frequency and
inverse document frequency in geographic contexts to identify local terms used
in POI names and to analyze their usages across different POI types. Our
results show an uneven usage of local terms across POI types, which is highly
consistent among different geographic regions. We also examine the decaying
effect of POI name similarity with the increase of distance among POIs. While
our analysis focuses on urban POI names, the presented methods can be
generalized to other place types as well, such as mountain peaks and streets.
| 2,018 | Computation and Language |
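A small Python sketch of the geographic TF-IDF idea: term frequency within one region's POI names, with each region treated as a "document" for the IDF term. The region names and POI lists below are invented toy data, not the Yelp dataset.

```python
import math
from collections import Counter

def local_terms(poi_names_by_region, region, top_k=10):
    """Rank terms in one region's POI names by a geographic TF-IDF.

    TF counts a term within the target region's POI names; IDF treats each
    region as a 'document'. A simple sketch of the idea, not the paper's code.
    """
    tf = Counter(
        term for name in poi_names_by_region[region] for term in name.lower().split()
    )
    n_regions = len(poi_names_by_region)
    scored = {}
    for term, count in tf.items():
        df = sum(
            any(term in name.lower().split() for name in names)
            for names in poi_names_by_region.values()
        )
        scored[term] = count * math.log(n_regions / df)
    return sorted(scored.items(), key=lambda kv: -kv[1])[:top_k]

regions = {
    "pittsburgh": ["Primanti Bros", "Steel City Salads", "Steel City Coffee"],
    "phoenix":    ["Desert Bloom Cafe", "Cactus Coffee", "Desert Salads"],
}
print(local_terms(regions, "pittsburgh"))  # 'steel'/'city' outrank 'coffee'
```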
Coherence Models for Dialogue | Coherence across multiple turns is a major challenge for state-of-the-art
dialogue models. Arguably the most successful approach to automatically
learning text coherence is the entity grid, which relies on modelling patterns
of distribution of entities across multiple sentences of a text. Originally
applied to the evaluation of automatic summaries and the news genre, this
model, among its many extensions, has also been successfully used to assess
dialogue coherence. Nevertheless, neither the original grid nor its extensions model
intents, a crucial aspect that has been studied widely in the literature in
connection to dialogue structure. We propose to augment the original grid
document representation for dialogue with the intentional structure of the
conversation. Our models outperform the original grid representation on both
text discrimination and insertion, the two main standard tasks for coherence
assessment across three different dialogue datasets, confirming that intents
play a key role in modelling dialogue coherence.
| 2,018 | Computation and Language |
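For readers unfamiliar with the entity grid, the following minimal Python sketch builds one over dialogue turns. Real grids use a parser to assign syntactic roles (S/O/X) and coreference to track entities; here capitalized tokens and a bare presence marker stand in for both, purely for illustration.

```python
def entity_grid(turns):
    """Build a minimal entity grid: one row per dialogue turn, one column per
    entity, '_' when the entity is absent. Entities are approximated by
    capitalized tokens with presence marker 'X' -- a simplification of the
    full model, which uses syntactic roles from a parser.
    """
    mentions = [set(tok.strip(".,") for tok in t.split() if tok[0].isupper())
                for t in turns]
    entities = sorted(set().union(*mentions))
    grid = [["X" if e in row else "_" for e in entities] for row in mentions]
    return entities, grid

turns = [
    "John booked a flight to Boston .",
    "He asked whether Boston hotels were expensive .",
    "The agent suggested a hotel near Boston Common .",
]
entities, grid = entity_grid(turns)
print(entities)
for row in grid:
    print(" ".join(row))
# Transition patterns down each column (e.g., X X X for 'Boston')
# are the features coherence models are trained on.
```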
Dictionary-Guided Editing Networks for Paraphrase Generation | An intuitive way for a human to write paraphrase sentences is to replace
words or phrases in the original sentence with their corresponding synonyms and
make necessary changes to ensure the new sentences are fluent and grammatically
correct. We propose a novel approach to modeling the process with
dictionary-guided editing networks which effectively conduct rewriting on the
source sentence to generate paraphrase sentences. It jointly learns the
selection of the appropriate word-level and phrase-level paraphrase pairs in
the context of the original sentence from an off-the-shelf dictionary as well
as the generation of fluent natural language sentences. Specifically, the
system retrieves a set of word-level and phrase-level paraphrase pairs derived
from the Paraphrase Database (PPDB) for the original sentence, which is used to
guide the decision of which words should be deleted or inserted with the
soft attention mechanism under the sequence-to-sequence framework. We conduct
experiments on two benchmark datasets for paraphrase generation, namely the
MSCOCO and Quora datasets. The evaluation results demonstrate that our
dictionary-guided editing networks outperform the baseline methods.
| 2,018 | Computation and Language |
BFGAN: Backward and Forward Generative Adversarial Networks for
Lexically Constrained Sentence Generation | Incorporating prior knowledge like lexical constraints into the model's
output to generate meaningful and coherent sentences has many applications in
dialogue systems, machine translation, image captioning, etc. However, existing
RNN-based models incrementally generate sentences from left to right via beam
search, which makes it difficult to directly introduce lexical constraints into
the generated sentences. In this paper, we propose a new algorithmic framework,
dubbed BFGAN, to address this challenge. Specifically, we employ a backward
generator and a forward generator to generate lexically constrained sentences
together, and use a discriminator to guide the joint training of two generators
by assigning them reward signals. Due to the difficulty of BFGAN training, we
propose several training techniques to make the training process more stable
and efficient. Our extensive experiments on two large-scale datasets with human
evaluation demonstrate that BFGAN has significant improvements over previous
methods.
| 2,019 | Computation and Language |
Modeling Word Emotion in Historical Language: Quantity Beats Supposed
Stability in Seed Word Selection | To understand historical texts, we must be aware that language -- including
the emotional connotation attached to words -- changes over time. In this
paper, we aim at estimating the emotion which is associated with a given word
in former language stages of English and German. Emotion is represented
following the popular Valence-Arousal-Dominance (VAD) annotation scheme. While
being more expressive than polarity alone, existing word emotion induction
methods are typically not suited for addressing it. To overcome this
limitation, we present adaptations of two popular algorithms to VAD. To measure
their effectiveness in diachronic settings, we present the first gold standard
for historical word emotions, which was created by scholars with proficiency in
the respective language stages and covers both English and German. In contrast
to claims in previous work, our findings indicate that hand-selecting small
sets of seed words with supposedly stable emotional meaning is actually harmful
rather than helpful.
| 2,019 | Computation and Language |
Par4Sim -- Adaptive Paraphrasing for Text Simplification | Learning from a real-world data stream and continuously updating the model
without explicit supervision is a new challenge for NLP applications with
machine learning components. In this work, we have developed an adaptive
learning system for text simplification, which improves the underlying
learning-to-rank model from usage data, i.e. how users have employed the system
for the task of simplification. Our experimental results show that, over a
period of time, the performance of the embedded paraphrase ranking model
improves steadily from a score of 62.88% to 75.70% on the NDCG@10 evaluation
metric. To our knowledge, this is the first study where an
NLP component is adaptively improved through usage.
| 2,018 | Computation and Language |
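For reference, here is a small numpy implementation of the NDCG@k metric cited above, using the common linear-gain, log2-discount formulation (some variants use an exponential gain instead).

```python
import numpy as np

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one ranked list of graded relevance scores.

    `relevances` is in the system's ranked order; the ideal ranking sorts
    the same scores in descending order. Standard log2 position discount.
    """
    rel = np.asarray(relevances, dtype=float)[:k]
    dcg = (rel / np.log2(np.arange(2, rel.size + 2))).sum()
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = (ideal / np.log2(np.arange(2, ideal.size + 2))).sum()
    return dcg / idcg if idcg > 0 else 0.0

# A paraphrase ranker that puts a mediocre candidate first scores below 1.0:
print(round(ndcg_at_k([1, 3, 2, 0, 0], k=10), 4))
```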
End-to-End Audio Visual Scene-Aware Dialog using Multimodal
Attention-Based Video Features | Dialog systems need to understand dynamic visual scenes in order to have
conversations with users about the objects and events around them. Scene-aware
dialog systems for real-world applications could be developed by integrating
state-of-the-art technologies from multiple research areas, including:
end-to-end dialog technologies, which generate system responses using models
trained from dialog data; visual question answering (VQA) technologies, which
answer questions about images using learned image features; and video
description technologies, in which descriptions/captions are generated from
videos using multimodal information. We introduce a new dataset of dialogs
about videos of human behaviors. Each dialog is a typed conversation that
consists of a sequence of 10 question-and-answer (QA) pairs between two Amazon
Mechanical Turk (AMT) workers. In total, we collected dialogs on roughly 9,000
videos. Using this new dataset for Audio Visual Scene-aware dialog (AVSD), we
trained an end-to-end conversation model that generates responses in a dialog
about a video. Our experiments demonstrate that using multimodal features that
were developed for multimodal attention-based video description enhances the
quality of generated dialog about dynamic scenes (videos). Our dataset, model
code and pretrained models will be publicly available for a new Video
Scene-Aware Dialog challenge.
| 2,018 | Computation and Language |
Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation | The variational autoencoder (VAE) imposes a probabilistic distribution
(typically Gaussian) on the latent space and penalizes the Kullback--Leibler
(KL) divergence between the posterior and prior. In NLP, VAEs are extremely
difficult to train due to the problem of KL collapsing to zero. One has to
implement various heuristics such as KL weight annealing and word dropout in a
carefully engineered manner to successfully train a VAE for text. In this
paper, we propose to use the Wasserstein autoencoder (WAE) for probabilistic
sentence generation, where the encoder could be either stochastic or
deterministic. We show theoretically and empirically that, in the original WAE,
the stochastically encoded Gaussian distribution tends to become a Dirac-delta
function, and we propose a variant of WAE that encourages the stochasticity of
the encoder. Experimental results show that the latent space learned by WAE
exhibits properties of continuity and smoothness as in VAEs, while
simultaneously achieving much higher BLEU scores for sentence reconstruction.
| 2,019 | Computation and Language |
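As a sketch of the WAE family's key ingredient, the following computes an RBF-kernel MMD penalty between encoder codes and prior samples; in MMD-based WAEs this term replaces the VAE's KL divergence, which is what sidesteps KL collapse. The single kernel bandwidth is an illustrative assumption, and the paper's specific stochasticity-encouraging variant is not reproduced here.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel between two sample batches.

    In a WAE, x are codes z ~ q(z|x) from the encoder and y are samples
    from the prior. The single-bandwidth kernel is an illustrative choice.
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Toy usage: penalize encoder codes for drifting from a standard Gaussian prior.
codes = torch.randn(64, 16) * 0.1 + 2.0   # poorly matched aggregate posterior
prior = torch.randn(64, 16)
print(rbf_mmd(codes, prior).item())       # large value -> strong penalty
```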
Paragraph-based complex networks: application to document classification
and authenticity verification | With the increasing number of texts made available on the Internet, many
applications have relied on text mining tools to tackle a diversity of
problems. A relevant model to represent texts is the so-called word adjacency
(co-occurrence) representation, which is known to capture mainly syntactical
features of texts.In this study, we introduce a novel network representation
that considers the semantic similarity between paragraphs. Two main properties
of paragraph networks are considered: (i) their ability to incorporate
characteristics that can discriminate real from artificial, shuffled
manuscripts and (ii) their ability to capture syntactical and semantic textual
features. Our results revealed that real texts are organized into communities,
which turned out to be an important feature for discriminating them from
artificial texts. Interestingly, we have also found that, differently from
traditional co-occurrence networks, the adopted representation is able to
capture semantic features. Additionally, the proposed framework was employed to
analyze the Voynich manuscript, which was found to be compatible with texts
written in natural languages. Taken together, our findings suggest that the
proposed methodology can be combined with traditional network models to improve
text classification tasks.
| 2,019 | Computation and Language |
Jack the Reader - A Machine Reading Framework | Many Machine Reading and Natural Language Understanding tasks require reading
supporting text in order to answer questions. For example, in Question
Answering, the supporting text can be newswire or Wikipedia articles; in
Natural Language Inference, premises can be seen as the supporting text and
hypotheses as questions. Providing a set of useful primitives operating in a
single framework of related tasks would allow for expressive modelling, and
easier model comparison and replication. To that end, we present Jack the
Reader (Jack), a framework for Machine Reading that allows for quick model
prototyping by component reuse, evaluation of new models on existing datasets
as well as integrating new datasets and applying them on a growing set of
implemented baseline models. Jack is currently supporting (but not limited to)
three tasks: Question Answering, Natural Language Inference, and Link
Prediction. It is developed with the aim of increasing research efficiency and
code reuse.
| 2,018 | Computation and Language |
The Natural Language Decathlon: Multitask Learning as Question Answering | Deep learning has improved performance on many natural language processing
(NLP) tasks individually. However, general NLP models cannot emerge within a
paradigm that focuses on the particularities of a single metric, dataset, and
task. We introduce the Natural Language Decathlon (decaNLP), a challenge that
spans ten tasks: question answering, machine translation, summarization,
natural language inference, sentiment analysis, semantic role labeling,
zero-shot relation extraction, goal-oriented dialogue, semantic parsing, and
commonsense pronoun resolution. We cast all tasks as question answering over a
context. Furthermore, we present a new Multitask Question Answering Network
(MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or
parameters in the multitask setting. MQAN shows improvements in transfer
learning for machine translation and named entity recognition, domain
adaptation for sentiment analysis and natural language inference, and zero-shot
capabilities for text classification. We demonstrate that the MQAN's
multi-pointer-generator decoder is key to this success and performance further
improves with an anti-curriculum training strategy. Though designed for
decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic
parsing task in the single-task setting. We also release code for procuring and
processing data, training and evaluating models, and reproducing all
experiments for decaNLP.
| 2,018 | Computation and Language |
Persistent Hidden States and Nonlinear Transformation for Long
Short-Term Memory | Recurrent neural networks (RNNs) have been drawing much attention with great
success in many applications like speech recognition and neural machine
translation. Long short-term memory (LSTM) is one of the most popular RNN units
in deep learning applications. LSTM transforms the input and the previous
hidden states to the next states with the affine transformation, multiplication
operations and a nonlinear activation function, which makes a good data
representation for a given task. The affine transformation includes rotation
and reflection, which change the semantic or syntactic information of
dimensions in the hidden states. However, considering that a model interprets
the output sequence of LSTM over the whole input sequence, the dimensions of
the states need to keep the same type of semantic or syntactic information
regardless of the location in the sequence. In this paper, we propose a simple
variant of the LSTM unit, persistent recurrent unit (PRU), where each dimension
of hidden states keeps persistent information across time, so that the space
keeps the same meaning over the whole sequence. In addition, to improve the
nonlinear transformation power, we add a feedforward layer in the PRU
structure. In our experiments, we evaluate the proposed methods on three
different tasks, and the results confirm that our methods perform better than
the conventional LSTM.
| 2,018 | Computation and Language |
Combination of Domain Knowledge and Deep Learning for Sentiment Analysis | The emerging technique of deep learning has been widely applied in many
different areas. However, when adopted in a certain specific domain, this
technique should be combined with domain knowledge to improve efficiency and
accuracy. In particular, when analyzing the applications of deep learning in
sentiment analysis, we found that the current approaches are suffering from the
following drawbacks: (i) the existing works have not paid much attention to the
importance of different types of sentiment terms, which is an important concept
in this area; and (ii) the loss function currently employed does not well
reflect the degree of error of sentiment misclassification. To overcome these
problems, we propose to combine domain knowledge with deep learning. Our
proposal includes using sentiment scores, learned by quadratic programming, to
augment the training data, and introducing a penalty matrix to enhance the
cross-entropy loss function. In our experiments, we achieved a significant
improvement in classification results.
| 2,019 | Computation and Language |
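A minimal PyTorch sketch of a penalty-matrix-augmented cross-entropy loss in the spirit described above: standard cross entropy plus the expected misclassification cost under an assumed class-distance penalty matrix (the paper's actual matrix and weighting are not reproduced).

```python
import torch
import torch.nn.functional as F

def penalized_cross_entropy(logits, targets, penalty, lam=1.0):
    """Cross entropy plus expected misclassification cost.

    penalty[i, j] is the cost of placing probability mass on class j when the
    truth is i, so confusing 'negative' with 'positive' can cost more than
    confusing it with 'neutral'. The matrix below is an assumed example.
    """
    probs = F.softmax(logits, dim=1)
    ce = F.cross_entropy(logits, targets)
    expected_penalty = (penalty[targets] * probs).sum(dim=1).mean()
    return ce + lam * expected_penalty

# 3 sentiment classes: negative, neutral, positive; zero cost on the diagonal.
penalty = torch.tensor([[0.0, 1.0, 2.0],
                        [1.0, 0.0, 1.0],
                        [2.0, 1.0, 0.0]])
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 0])
print(penalized_cross_entropy(logits, targets, penalty).item())
```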
Emotion Representation Mapping for Automatic Lexicon Construction
(Mostly) Performs on Human Level | Emotion Representation Mapping (ERM) has the goal to convert existing emotion
ratings from one representation format into another one, e.g., mapping
Valence-Arousal-Dominance annotations for words or sentences into Ekman's Basic
Emotions and vice versa. ERM can thus not only be considered as an alternative
to Word Emotion Induction (WEI) techniques for automatic emotion lexicon
construction but may also help mitigate problems that come from the
proliferation of emotion representation formats in recent years. We propose a
new neural network approach to ERM that outperforms the previous state of the
art. Equally important, we present a refined evaluation
methodology and gather strong evidence that our model yields results which are
(almost) as reliable as human annotations, even in cross-lingual settings.
Based on these results we generate new emotion ratings for 13 typologically
diverse languages and claim that they have near-gold quality, at least.
| 2,018 | Computation and Language |
Improving Text-to-SQL Evaluation Methodology | To be informative, an evaluation must measure how well systems generalize to
realistic unseen data. We identify limitations of and propose improvements to
current evaluations of text-to-SQL systems. First, we compare human-generated
and automatically generated questions, characterizing properties of queries
necessary for real-world applications. To facilitate evaluation on multiple
datasets, we release standardized and improved versions of seven existing
datasets and one new text-to-SQL dataset. Second, we show that the current
division of data into training and test sets measures robustness to variations
in the way questions are asked, but only partially tests how well systems
generalize to new queries; therefore, we propose a complementary dataset split
for evaluation of future work. Finally, we demonstrate how the common practice
of anonymizing variables during evaluation removes an important challenge of
the task. Our observations highlight key difficulties, and our methodology
enables effective measurement of future development.
| 2,018 | Computation and Language |
On Adversarial Examples for Character-Level Neural Machine Translation | Evaluating on adversarial examples has become a standard procedure to measure
robustness of deep learning models. Due to the difficulty of creating white-box
adversarial examples for discrete text input, most analyses of the robustness
of NLP models have been done through black-box adversarial examples. We
investigate adversarial examples for character-level neural machine translation
(NMT), and contrast black-box adversaries with a novel white-box adversary,
which employs differentiable string-edit operations to rank adversarial
changes. We propose two novel types of attacks which aim to remove or change a
word in a translation, rather than simply breaking the NMT. We demonstrate that
white-box adversarial examples are significantly stronger than their black-box
counterparts in different attack scenarios, revealing more serious
vulnerabilities than previously known. In addition, after performing
adversarial training, which takes only 3 times longer than regular training, we
can improve the model's robustness significantly.
| 2,018 | Computation and Language |
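To illustrate the black-box end of the spectrum, here is a toy greedy character-swap attack; the `model_score` callable is a hypothetical stand-in for the sentence-level quality of the victim NMT system's output, and the white-box gradient-ranked variant is not shown.

```python
import random

def char_swap_attack(sentence, model_score, n_tries=50, seed=0):
    """Greedy black-box attack: try random adjacent-character swaps and keep
    the one that most degrades a scalar quality score (e.g., sentence BLEU of
    the model's translation). `model_score` is a hypothetical callable;
    white-box attacks instead rank edits by gradients of string-edit ops.
    """
    rng = random.Random(seed)
    best, best_score = sentence, model_score(sentence)
    for _ in range(n_tries):
        chars = list(best)
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        candidate = "".join(chars)
        s = model_score(candidate)
        if s < best_score:                 # keep the most damaging edit
            best, best_score = candidate, s
    return best

# Toy scorer: pretend translation quality drops with each corrupted word.
ref_words = set("the quick brown fox".split())
score = lambda s: sum(w in ref_words for w in s.split())
print(char_swap_attack("the quick brown fox", score))
```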