Titles | Abstracts | Years | Categories |
---|---|---|---|
Evaluating the Ability of LSTMs to Learn Context-Free Grammars | While long short-term memory (LSTM) neural net architectures are designed to
capture sequence information, human language is generally composed of
hierarchical structures. This raises the question as to whether LSTMs can learn
hierarchical structures. We explore this question with a well-formed bracket
prediction task using two types of brackets modeled by an LSTM. Demonstrating
that such a system is learnable by an LSTM is the first step in demonstrating
that the entire class of CFLs is also learnable. We observe that the model
requires exponential memory in terms of the number of characters and embedded
depth, whereas sub-linear memory should suffice. Still, the model does more
than memorize the training input: it learns to distinguish between relevant
and irrelevant information. On the other hand, we also observe that the model
does not generalize well. We conclude that LSTMs do not learn the relevant
underlying context-free rules, suggesting that the good overall performance is
instead attained by an efficient way of evaluating nuisance variables. LSTMs are
a way to quickly reach good results for many natural language tasks, but to
understand and generate natural language one has to investigate other concepts
that can make more direct use of natural language's structural nature.
| 2,018 | Computation and Language |
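The bracket-prediction task in the abstract above can be made concrete with a toy data generator. The sketch below is a minimal, hypothetical setup (the sampling scheme, sequence length, and the two bracket types are illustrative assumptions, not the paper's exact protocol): it samples well-formed strings over two bracket types and, for every prefix, records the closing bracket that a sequence model should be able to predict.

```python
import random

PAIRS = {"(": ")", "[": "]"}  # two bracket types, as in the abstract above

def sample_dyck2(max_len=20):
    """Sample a well-formed bracket string over two bracket types."""
    seq, stack = [], []
    while len(seq) < max_len:
        must_close = stack and len(seq) + len(stack) >= max_len
        if stack and (must_close or random.random() < 0.5):
            seq.append(PAIRS[stack.pop()])          # close the most recent open bracket
        else:
            opener = random.choice(list(PAIRS))
            stack.append(opener)
            seq.append(opener)
    seq.extend(PAIRS[b] for b in reversed(stack))   # close anything still open
    return "".join(seq)

def closing_targets(seq):
    """For each position, the next closing bracket implied by the open brackets so far."""
    stack, targets = [], []
    for ch in seq:
        if ch in PAIRS:
            stack.append(ch)
        else:
            stack.pop()
        targets.append(PAIRS[stack[-1]] if stack else None)
    return targets

s = sample_dyck2()
print(s)
print(closing_targets(s))
```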
Building Corpora for Single-Channel Speech Separation Across Multiple
Domains | To date, the bulk of research on single-channel speech separation has been
conducted using clean, near-field, read speech, which is not representative of
many modern applications. In this work, we develop a procedure for constructing
high-quality synthetic overlap datasets, necessary for most deep learning-based
separation frameworks. We produce datasets that are more representative of
realistic applications using the CHiME-5 and Mixer 6 corpora, and we evaluate
standard methods on this data to demonstrate the shortcomings of current
source-separation performance. We also demonstrate the value of a wide variety
of data in training robust models that generalize well to multiple conditions.
| 2,018 | Computation and Language |
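A basic building block for synthetic overlap corpora like the ones described above is mixing two utterances at a chosen signal-to-noise ratio. The NumPy sketch below shows only this mixing step under simple assumptions (random arrays stand in for real waveforms, and the peak normalization is one possible convention); the CHiME-5/Mixer 6 preparation in the paper involves much more than this.

```python
import numpy as np

def mix_at_snr(target, interferer, snr_db):
    """Scale `interferer` so it sits `snr_db` dB below `target`, then mix the two."""
    n = min(len(target), len(interferer))
    target = target[:n].astype(np.float64)
    interferer = interferer[:n].astype(np.float64)
    p_t = np.mean(target ** 2) + 1e-12
    p_i = np.mean(interferer ** 2) + 1e-12
    scale = np.sqrt(p_t / (p_i * 10.0 ** (snr_db / 10.0)))
    mixture = target + scale * interferer
    peak = np.max(np.abs(mixture))
    return mixture / peak if peak > 1.0 else mixture    # crude clipping guard

# Toy usage: random arrays stand in for 1-second, 16 kHz waveforms loaded from real corpora.
rng = np.random.default_rng(0)
mix = mix_at_snr(rng.standard_normal(16000), rng.standard_normal(16000), snr_db=5.0)
print(mix.shape)
```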
Proceedings of the 2018 Workshop on Compositional Approaches in Physics,
NLP, and Social Sciences | The ability to compose parts to form a more complex whole, and to analyze a
whole as a combination of elements, is desirable across disciplines. This
workshop brings together researchers applying compositional approaches to
physics, NLP, cognitive science, and game theory. Within NLP, a long-standing
aim is to represent how words can combine to form phrases and sentences. Within
the framework of distributional semantics, words are represented as vectors in
vector spaces. The categorical model of Coecke et al. [2010], inspired by
quantum protocols, has provided a convincing account of compositionality in
vector space models of NLP. There is furthermore a history of vector space
models in cognitive science. Theories of categorization such as those developed
by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between
feature vectors. More recently Gärdenfors [2004, 2014] has developed a model
of concepts in which conceptual spaces provide geometric structures, and
information is represented by points, vectors and regions in vector spaces. The
same compositional approach has been applied to this formalism, giving
conceptual spaces theory a richer model of compositionality than previously
[Bolt et al., 2018]. Compositional approaches have also been applied in the
study of strategic games and Nash equilibria. In contrast to classical game
theory, where games are studied monolithically as one global object,
compositional game theory works bottom-up by building large and complex games
from smaller components. Such an approach is inherently difficult since the
interaction between games has to be considered. Research into categorical
compositional methods for this field has recently begun [Ghani et al., 2018].
Moreover, the interaction between the three disciplines of cognitive science,
linguistics and game theory is a fertile ground for research. Game theory in
cognitive science is a well-established area [Camerer, 2011]. Similarly game
theoretic approaches have been applied in linguistics [Jäger, 2008]. Lastly,
the study of linguistics and cognitive science is intimately intertwined
[Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies
compositional approaches via vector spaces and categorical quantum theory,
allowing the interplay between the three disciplines to be examined.
| 2,018 | Computation and Language |
The RLLChatbot: a solution to the ConvAI challenge | Current conversational systems can follow simple commands and answer basic
questions, but they have difficulty maintaining coherent and open-ended
conversations about specific topics. Competitions like the Conversational
Intelligence (ConvAI) challenge are being organized to push the research
development towards that goal. This article presents in detail the RLLChatbot
that participated in the 2017 ConvAI challenge. The goal of this research is to
better understand how current deep learning and reinforcement learning tools
can be used to build a robust yet flexible open domain conversational agent. We
provide a thorough description of how a dialog system can be built and trained
from mostly public-domain datasets using an ensemble model. The first
contribution of this work is a detailed description and analysis of different
text generation models in addition to novel message ranking and selection
methods. Moreover, a new open-source conversational dataset is presented.
Training on this data significantly improves the Recall@k score of the ranking
and selection mechanisms compared to our baseline model responsible for
selecting the message returned at each interaction.
| 2,018 | Computation and Language |
The relationship between linguistic expression and symptoms of
depression, anxiety, and suicidal thoughts: A longitudinal study of blog
content | Due to its popularity and availability, social media data may present a new
way to identify individuals who are experiencing mental illness. By analysing
blog content, this study aimed to investigate the associations between
linguistic features and symptoms of depression, generalised anxiety, and
suicidal ideation. This study utilised a longitudinal study design. Individuals
who blogged were invited to participate in a study in which they completed
fortnightly mental health questionnaires including the PHQ9 and GAD7 for a
period of 36 weeks. Linguistic features were extracted from blog data using the
LIWC tool. Bivariate and multivariate analyses were performed to investigate
the correlations between the linguistic features and mental health scores
between subjects. We then used the multivariate regression model to predict
longitudinal changes in mood within subjects. A total of 153 participants
consented to taking part, with 38 participants completing the required number
of questionnaires and blog posts during the study period. Between-subject
analysis revealed that several linguistic features, including tentativeness and
non-fluencies, were significantly associated with depression and anxiety
symptoms, but not suicidal thoughts. Within-subject analysis showed no robust
correlations between linguistic features and changes in mental health score.
This study provides further support for the relationship between linguistic
features within social media data and symptoms of depression and anxiety. The
lack of robust within-subject correlations indicates that the relationship
observed at the group level may not generalise to individual changes over time.
| 2,021 | Computation and Language |
Learning to Compose Topic-Aware Mixture of Experts for Zero-Shot Video
Captioning | Although promising results have been achieved in video captioning, existing
models are limited to the fixed inventory of activities in the training corpus,
and do not generalize to open vocabulary scenarios. Here we introduce a novel
task, zero-shot video captioning, that aims at describing out-of-domain videos
of unseen activities. Videos of different activities usually require different
captioning strategies in many aspects, e.g., word selection, semantic
construction, and style expression, which poses a great challenge in depicting
novel activities without paired training data. Meanwhile, however, similar
activities share some of those aspects in common. We therefore propose a
principled Topic-Aware Mixture of Experts (TAMoE) model for zero-shot video
captioning, which learns to compose different experts based on different topic
embeddings, implicitly transferring the knowledge learned from seen activities
to unseen ones. Besides, we leverage external topic-related text corpus to
construct the topic embedding for each activity, which embodies the most
relevant semantic vectors within the topic. Empirical results not only validate
the effectiveness of our method in utilizing semantic knowledge for video
captioning, but also show its strong generalization ability when describing
novel activities.
| 2,018 | Computation and Language |
Improved Audio Embeddings by Adjacency-Based Clustering with
Applications in Spoken Term Detection | Embedding audio signal segments into vectors with fixed dimensionality is
attractive because all following processing will be easier and more efficient,
for example modeling, classifying or indexing. Audio Word2Vec previously
proposed was shown to be able to represent audio segments for spoken words as
such vectors carrying information about the phonetic structures of the signal
segments. However, each linguistic unit (word, syllable, phoneme in text form)
corresponds to an unlimited number of audio segments whose vector representations
are inevitably spread over the embedding space, which causes some confusion. It is
therefore desired to better cluster the audio embeddings such that those
corresponding to the same linguistic unit can be more compactly distributed. In
this paper, inspired by Siamese networks, we propose some approaches to achieve
the above goal. This includes identifying positive and negative pairs from
unlabeled data for Siamese style training, disentangling acoustic factors such
as speaker characteristics from the audio embedding, handling unbalanced data
distribution, and having the embedding processes learn from the adjacency
relationships among data points. All these can be done in an unsupervised way.
Improved performance was obtained in preliminary experiments on the LibriSpeech
data set, including analysis of clustering characteristics and applications to
spoken term detection.
| 2,018 | Computation and Language |
microNER: A Micro-Service for German Named Entity Recognition based on
BiLSTM-CRF | For named entity recognition (NER), bidirectional recurrent neural networks
became the state-of-the-art technology in recent years. Competing approaches
vary with respect to pre-trained word embeddings as well as models for
character embeddings to represent sequence information most effectively. For
NER in German language texts, these model variations have not been studied
extensively. We evaluate the performance of different word and character
embeddings on two standard German datasets and with a special focus on
out-of-vocabulary words. With F-Scores above 82% for the GermEval'14 dataset
and above 85% for the CoNLL'03 dataset, we achieve (near) state-of-the-art
performance for this task. We publish several pre-trained models wrapped into a
micro-service based on Docker to allow for easy integration of German NER into
other applications via a JSON API.
| 2,018 | Computation and Language |
Transfer Learning from LDA to BiLSTM-CNN for Offensive Language
Detection in Twitter | We investigate different strategies for automatic offensive language
classification on German Twitter data. For this, we employ a sequentially
combined BiLSTM-CNN neural network. Based on this model, three transfer
learning tasks to improve the classification performance with background
knowledge are tested. We compare 1. Supervised category transfer: social media
data annotated with near-offensive language categories, 2. Weakly-supervised
category transfer: tweets annotated with emojis they contain, 3. Unsupervised
category transfer: tweets annotated with topic clusters obtained by Latent
Dirichlet Allocation (LDA). Further, we investigate the effect of three
different strategies to mitigate negative effects of 'catastrophic forgetting'
during transfer learning. Our results indicate that transfer learning in
general improves offensive language detection. The best results are achieved by
pre-training our model on the unsupervised topic clustering of tweets in
combination with thematic user cluster information.
| 2,018 | Computation and Language |
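The "unsupervised category transfer" idea above, labeling tweets with LDA topic clusters and pre-training on those pseudo-labels, can be sketched with scikit-learn. The snippet below is a minimal illustration with placeholder tweets; the paper's corpus, topic count, and preprocessing differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "fussball heute abend im stadion",
    "neue regierung und wahlen im herbst",
    "tolles konzert gestern in berlin",
    "steuern und politik im bundestag",
]  # placeholder tweets; the paper uses a large German Twitter corpus

vectorizer = CountVectorizer(min_df=1)
counts = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# The dominant topic per tweet becomes the pseudo-label used for pre-training the classifier.
pseudo_labels = doc_topics.argmax(axis=1)
print(pseudo_labels)
```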
Compositional Language Understanding with Text-based Relational
Reasoning | Neural networks for natural language reasoning have largely focused on
extractive, fact-based question-answering (QA) and common-sense inference.
However, it is also crucial to understand the extent to which neural networks
can perform relational reasoning and combinatorial generalization from natural
language---abilities that are often obscured by annotation artifacts and the
dominance of language modeling in standard QA benchmarks. In this work, we
present a novel benchmark dataset for language understanding that isolates
performance on relational reasoning. We also present a neural message-passing
baseline and show that this model, which incorporates a relational inductive
bias, is superior at combinatorial generalization compared to a traditional
recurrent neural network approach.
| 2,018 | Computation and Language |
IMS at the PolEval 2018: A Bulky Ensemble Dependency Parser meets 12
Simple Rules for Predicting Enhanced Dependencies in Polish | This paper presents the IMS contribution to the PolEval 2018 Shared Task. We
submitted systems for both of the Subtasks of Task 1. In Subtask (A), which was
about dependency parsing, we used our ensemble system from the CoNLL 2017 UD
Shared Task. The system first preprocesses the sentences with a CRF
POS/morphological tagger and predicts supertags with a neural tagger. Then, it
employs multiple instances of three different parsers and merges their outputs
by applying blending. The system achieved the second place out of four
participating teams. In this paper we show which components of the system were
the most responsible for its final performance.
The goal of Subtask (B) was to predict enhanced graphs. Our approach
consisted of two steps: parsing the sentences with our ensemble system from
Subtask (A), and applying 12 simple rules to obtain the final dependency
graphs. The rules introduce additional enhanced arcs only for tokens with
"conj" heads (conjuncts). They do not predict semantic relations at all. The
system ranked first out of three participating teams. In this paper we show
examples of rules we designed and analyze the relation between the quality of
automatically parsed trees and the accuracy of the enhanced graphs.
| 2,018 | Computation and Language |
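The flavour of the rule-based step in Subtask (B) can be illustrated with a sketch of one plausible rule: a token attached via "conj" also receives an enhanced arc copied from its conjunction head's own incoming arc. The token structure and the rule below are hypothetical simplifications; the paper's 12 rules are more fine-grained.

```python
def propagate_conj_arcs(tokens):
    """Add enhanced arcs for conjuncts (sketch of one plausible rule).

    A token whose basic deprel is 'conj' also receives an enhanced arc from its
    conjunction head's own head, with that head's deprel.  `tokens` is a list of
    dicts with 'id', 'head', 'deprel'; an 'enhanced' list of (head, deprel) pairs
    is added in place.
    """
    by_id = {t["id"]: t for t in tokens}
    for t in tokens:
        t.setdefault("enhanced", [(t["head"], t["deprel"])])
        if t["deprel"] == "conj" and t["head"] in by_id:
            first_conjunct = by_id[t["head"]]
            t["enhanced"].append((first_conjunct["head"], first_conjunct["deprel"]))
    return tokens

# "Maria czyta i pisze": "pisze" is a conjunct of "czyta" and also inherits its root arc.
sent = [
    {"id": 1, "head": 2, "deprel": "nsubj"},
    {"id": 2, "head": 0, "deprel": "root"},
    {"id": 3, "head": 4, "deprel": "cc"},
    {"id": 4, "head": 2, "deprel": "conj"},
]
print(propagate_conj_arcs(sent)[3]["enhanced"])   # [(2, 'conj'), (0, 'root')]
```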
Data Selection with Feature Decay Algorithms Using an Approximated
Target Side | Data selection techniques applied to neural machine translation (NMT) aim to
increase the performance of a model by retrieving a subset of sentences for use
as training data.
One of the possible data selection techniques is transductive learning
methods, which select the data based on the test set, i.e. the document to be
translated. A limitation of these methods to date is that using the source-side
test set does not by itself guarantee that sentences are selected with correct
translations, or translations that are suitable given the test-set domain. Some
corpora, such as subtitle corpora, may contain parallel sentences with
inaccurate translations caused by localization or length restrictions.
In order to try to fix this problem, in this paper we propose to use an
approximated target-side in addition to the source-side when selecting suitable
sentence-pairs for training a model. This approximated target-side is built by
pre-translating the source-side.
In this work, we explore the performance of this general idea for one
specific data selection approach called Feature Decay Algorithms (FDA).
We train German-English NMT models on data selected by using the test set
(source), the approximated target side, and a mixture of both. Our findings
reveal that models built using a combination of outputs of FDA (using the test
set and an approximated target side) perform better than those solely using the
test set. We obtain a statistically significant improvement of more than 1.5
BLEU points over a model trained with all data, and more than 0.5 BLEU points
over a strong FDA baseline that uses source-side information only.
| 2,018 | Computation and Language |
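For readers unfamiliar with FDA, a minimal sketch of the feature-decay idea follows: candidate sentences are scored by n-gram overlap with the test side (the source or, as proposed above, an approximated target), and a feature's contribution decays every time it occurs in an already-selected sentence. The decay factor, n-gram order, and length normalization here are illustrative assumptions, not the exact FDA configuration used in the paper.

```python
from collections import Counter

def ngrams(tokens, n_max=3):
    return [tuple(tokens[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

def fda_select(candidates, test_side, k, decay=0.5):
    """Greedy feature-decay selection (sketch, not the exact FDA implementation).

    candidates: training sentences as token lists (source side or approximated target side)
    test_side:  the test document on the matching side, as token lists
    Returns the indices of the k selected training sentences.
    """
    test_feats = {f for sent in test_side for f in ngrams(sent)}
    used = Counter()                       # how often each feature was already covered
    remaining, selected = set(range(len(candidates))), []

    def score(i):
        feats = [f for f in ngrams(candidates[i]) if f in test_feats]
        return sum(decay ** used[f] for f in feats) / (len(candidates[i]) + 1e-9)

    for _ in range(min(k, len(candidates))):
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
        for f in ngrams(candidates[best]):
            if f in test_feats:
                used[f] += 1
    return selected

train = [["das", "haus", "ist", "rot"],
         ["die", "katze", "schläft"],
         ["das", "auto", "ist", "blau"]]
test = [["das", "haus", "ist", "blau"]]
print(fda_select(train, test, k=2))        # e.g. [0, 2]
```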
Attention Fusion Networks: Combining Behavior and E-mail Content to
Improve Customer Support | Customer support is a central objective at Square as it helps us build and
maintain great relationships with our sellers. In order to provide the best
experience, we strive to deliver the most accurate and quasi-instantaneous
responses to questions regarding our products.
In this work, we introduce the Attention Fusion Network model which combines
signals extracted from seller interactions on the Square product ecosystem,
along with submitted email questions, to predict the most relevant solution to
a seller's inquiry. We show that this innovative combination of two very
different data sources that are rarely used together, processed with
state-of-the-art deep learning systems, outperforms candidate models trained
only on a single source.
| 2,018 | Computation and Language |
Towards Fluent Translations from Disfluent Speech | When translating from speech, special consideration for conversational speech
phenomena such as disfluencies is necessary. Most machine translation training
data consists of well-formed written texts, causing issues when translating
spontaneous speech. Previous work has introduced an intermediate step between
speech recognition (ASR) and machine translation (MT) to remove disfluencies,
making the data better-matched to typical translation text and significantly
improving performance. However, with the rise of end-to-end speech translation
systems, this intermediate step must be incorporated into the
sequence-to-sequence architecture. Further, though translated speech datasets
exist, they are typically news or rehearsed speech without many disfluencies
(e.g. TED), or the disfluencies are translated into the references (e.g.
Fisher). To generate clean translations from disfluent speech, cleaned
references are necessary for evaluation. We introduce a corpus of cleaned
target data for the Fisher Spanish-English dataset for this task. We compare
how different architectures handle disfluencies and provide a baseline for
removing disfluencies in end-to-end translation.
| 2,018 | Computation and Language |
Confusion2Vec: Towards Enriching Vector Space Word Representations with
Representational Ambiguities | Word vector representations are a crucial part of Natural Language Processing
(NLP) and Human Computer Interaction. In this paper, we propose a novel word
vector representation, Confusion2Vec, motivated by human speech
production and perception that encodes representational ambiguity. Humans
employ both acoustic similarity cues and contextual cues to decode information
and we focus on a model that incorporates both sources of information. The
representational ambiguity of acoustics, which manifests itself in word
confusions, is often resolved by both humans and machines through contextual
cues. A range of representational ambiguities can emerge in various domains
further to acoustic perception, such as morphological transformations,
paraphrasing for NLP tasks like machine translation etc. In this work, we
present a case study in application to Automatic Speech Recognition (ASR),
where the word confusions are related to acoustic similarity. We present
several techniques to train a representation of acoustic perceptual similarity
and ambiguity. We term this Confusion2Vec and learn it on unsupervised-generated data
from ASR confusion networks or lattice-like structures. Appropriate evaluations
for the Confusion2Vec are formulated for gauging acoustic similarity in
addition to semantic-syntactic and word similarity evaluations. The
Confusion2Vec is able to model word confusions efficiently, without
compromising on the semantic-syntactic word relations, thus effectively
enriching the word vector space with extra task relevant ambiguity information.
We provide an intuitive exploration of the 2-dimensional Confusion2Vec space
using Principal Component Analysis of the embedding and relate to semantic,
syntactic and acoustic relationships. The potential of Confusion2Vec in the
utilization of uncertainty present in lattices is demonstrated through small
examples relating to ASR error correction.
| 2,019 | Computation and Language |
Evaluating the Complementarity of Taxonomic Relation Extraction Methods
Across Different Languages | Modern information systems are changing the idea of "data processing" to the
idea of "concept processing", meaning that instead of processing words, such
systems process semantic concepts which carry meaning and share contexts with
other concepts. Ontology is commonly used as a structure that captures the
knowledge about a certain area via providing concepts and relations between
them. Traditionally, concept hierarchies have been built manually by knowledge
engineers or domain experts. However, the manual construction of a concept
hierarchy suffers from several limitations such as its coverage and the
enormous costs of its extension and maintenance. Ontology learning, usually
referring to the (semi-)automatic support of ontology development, is typically
divided into steps, going from concept identification, through the detection of
hierarchical and non-hierarchical relations, to, more rarely, axiom extraction.
It is reasonable to say that among these steps the current frontier is in the
establishment of concept hierarchies, since this is the backbone of ontologies
and, therefore, a good concept hierarchy is already a valuable resource for
many ontology applications. The automatic construction of concept hierarchies
from texts is a complex task and much work has proposed approaches to
better extract relations between concepts. These different proposals have never
been contrasted against each other on the same set of data and across different
languages. Such comparison is important to see whether they are complementary
or incremental. Also, we can see whether they present different tendencies
towards recall and precision. This paper evaluates these different methods on
the basis of hierarchy metrics such as density and depth, and evaluation
metrics such as Recall and Precision. Results shed light on the comprehensive
set of methods according to the literature in the area.
| 2,018 | Computation and Language |
Information Flow in Pregroup Models of Natural Language | This paper is about pregroup models of natural languages, and how they relate
to the explicitly categorical use of pregroups in Compositional Distributional
Semantics and Natural Language Processing. These categorical interpretations
make certain assumptions about the nature of natural languages that, when
stated formally, may be seen to impose strong restrictions on pregroup grammars
for natural languages.
We formalize this as a hypothesis about the form that pregroup models of
natural languages must take, and demonstrate by an artificial language example
that these restrictions are not imposed by the pregroup axioms themselves. We
compare and contrast the artificial language examples with natural languages
(using Welsh, a language where the 'noun' type cannot be taken as primitive, as
an illustrative example).
The hypothesis is simply that there must exist a causal connection, or
information flow, between the words of a sentence in a language whose purpose
is to communicate information. This is not necessarily the case with formal
languages that are simply generated by a series of 'meaning-free' rules. This
imposes restrictions on the types of pregroup grammars that we expect to find
in natural languages; we formalize this in algebraic, categorical, and
graphical terms.
We take some preliminary steps in providing conditions that ensure pregroup
models satisfy these conjectured properties, and discuss the more general forms
this hypothesis may take.
| 2,018 | Computation and Language |
Applying Distributional Compositional Categorical Models of Meaning to
Language Translation | The aim of this paper is twofold: first we will use vector space
distributional compositional categorical models of meaning to compare the
meaning of sentences in Irish and in English (and thus ascertain when a
sentence is the translation of another sentence) using the cosine similarity
score. Then we shall outline a procedure which translates nouns by
understanding their context, using a conceptual space model of cognition. We
shall use metrics on the category ConvexRel to determine the distance between
concepts (and determine when a noun is the translation of another noun). This
paper will focus on applications to Irish, a member of the Gaelic family of
languages.
| 2,018 | Computation and Language |
Classical Copying versus Quantum Entanglement in Natural Language: The
Case of VP-ellipsis | This paper compares classical copying and quantum entanglement in natural
language by considering the case of verb phrase (VP) ellipsis. VP ellipsis is a
non-linear linguistic phenomenon that requires the reuse of resources, making
it the ideal test case for a comparative study of different copying behaviours
in compositional models of natural language. Following the line of research in
compositional distributional semantics set out by (Coecke et al., 2010) we
develop an extension of the Lambek calculus which admits a controlled form of
contraction to deal with the copying of linguistic resources. We then develop
two different compositional models of distributional meaning for this calculus.
In the first model, we follow the categorical approach of (Coecke et al., 2013)
in which a functorial passage sends the proofs of the grammar to linear maps on
vector spaces and we use Frobenius algebras to allow for copying. In the second
case, we follow the more traditional approach that one finds in categorial
grammars, whereby an intermediate step interprets proofs as non-linear lambda
terms, using multiple variable occurrences that model classical copying. As a
case study, we apply the models to derive different readings of ambiguous
elliptical phrases and compare the analyses that each model provides.
| 2,018 | Computation and Language |
Doc2Im: document to image conversion through self-attentive embedding | Text classification is a fundamental task in NLP applications. Recent
research in this field has largely been divided into two major sub-fields: one
focuses on learning representations, and the other on learning deeper models,
both sequential and convolutional, which in turn connects back to the
representation. We posit that the stronger the representation,
the simpler the classifier model needed to achieve high performance. In this
paper we propose a completely novel direction to text classification research,
wherein we convert text to a representation very similar to images, such that
any deep network able to handle images is equally able to handle text. We take
a deeper look at the representation of documents as an image and subsequently
utilize very simple convolution based models taken as is from computer vision
domain. This image can be cropped, re-scaled, re-sampled and augmented just
like any other image to work with most of the state-of-the-art large
convolution based models which have been designed to handle large image
datasets. We show impressive results with some of the latest benchmarks in the
related fields. We perform transfer learning experiments, both from text to
text domain and also from image to text domain. We believe this is a paradigm
shift from the way document understanding and text classification has been
traditionally done, and will drive numerous novel research ideas in the
community.
| 2,018 | Computation and Language |
Marshall-Olkin Power-Law Distributions in Length-Frequency of Entities | Entities involve important concepts with concrete meanings and play important
roles in numerous linguistic tasks. Entities have different forms in different
linguistic tasks and researchers treat those different forms as different
concepts. In this paper, we are curious to know whether there are some common
characteristics that connect those different forms of entities. Specifically,
we investigate the underlying distributions of entities from different types
and different languages, trying to figure out some common characteristics
behind those diverse entities. After analyzing twelve datasets about different
types of entities and eighteen datasets about entities in different languages,
we find that while these entities are dramatically diverse from each other in
many aspects, their length-frequencies can be well characterized by a family of
Marshall-Olkin power-law (MOPL) distributions. We conduct experiments on those
thirty datasets about entities in different types and different languages, and
experimental results demonstrate that MOPL models characterize the
length-frequencies of entities much better than two state-of-the-art power-law
models and an alternative log-normal model. Experimental results also
demonstrate that MOPL models are scalable to the length-frequency of entities
in large-scale real-world datasets.
| 2,023 | Computation and Language |
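For reference, the standard Marshall-Olkin construction applied to a power-law survival function gives a family of the following form; this is the textbook parameterization and is shown here only as background, since the paper's exact MOPL variant and parameter names may differ.

```latex
% Marshall-Olkin transform of a base survival function \bar{F}(x), with tilt parameter \alpha > 0:
%   \bar{G}(x) = \frac{\alpha\,\bar{F}(x)}{1 - (1-\alpha)\,\bar{F}(x)}.
% With a power-law base \bar{F}(x) = (x/x_{\min})^{-\gamma} for x \ge x_{\min}:
\bar{G}(x) = \frac{\alpha\,(x/x_{\min})^{-\gamma}}{1 - (1-\alpha)\,(x/x_{\min})^{-\gamma}},
\qquad x \ge x_{\min},\; \alpha > 0,\; \gamma > 0.
```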
Untangling the GDPR Using ConRelMiner | The General Data Protection Regulation (GDPR) poses enormous challenges for
companies and organizations with respect to understanding, implementing, and
maintaining the contained constraints. We report on how the ConRelMiner method
can be used for untangling the GDPR. For this, the GDPR is filtered and grouped
along the roles mentioned by the GDPR and the reduction of sentences to be read
by analysts is shown. Moreover, the output of the ConRelMiner - a cluster graph
with relations between the sentences - is displayed and interpreted. Overall,
the goal is to illustrate how the effort for implementing the GDPR can be
reduced and a structured and meaningful representation of the relevant GDPR
sentences can be found.
| 2,018 | Computation and Language |
Effective Representation for Easy-First Dependency Parsing | Easy-first parsing relies on subtree re-ranking to build the complete parse
tree. Since the intermediate state of the parsing process is represented by
various subtrees, whose internal structural information is the key cue for
later parsing action decisions, we explore a better representation for such
subtrees. In detail, this work introduces a bottom-up subtree encoding method
based on the child-sum tree-LSTM. Starting from an easy-first dependency parser
without other handcrafted features, we show that the effective subtree encoder
does promote the parsing process, and can make a greedy search easy-first
parser achieve promising results on benchmark treebanks compared to
state-of-the-art baselines. Furthermore, with the help of a current
pre-trained language model, we further improve the state-of-the-art results of
the easy-first approach.
| 2,019 | Computation and Language |
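The subtree encoder above builds on the child-sum tree-LSTM; for background, the standard child-sum tree-LSTM update of Tai et al. (2015) for a node $j$ with children $C(j)$ is reproduced below (the paper's encoder applies this bottom-up over partial parse subtrees).

```latex
\tilde{h}_j = \sum_{k \in C(j)} h_k, \qquad
i_j = \sigma\big(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)}\big), \qquad
f_{jk} = \sigma\big(W^{(f)} x_j + U^{(f)} h_k + b^{(f)}\big),
o_j = \sigma\big(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)}\big), \qquad
u_j = \tanh\big(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)}\big),
c_j = i_j \odot u_j + \sum_{k \in C(j)} f_{jk} \odot c_k, \qquad
h_j = o_j \odot \tanh(c_j).
```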
Few-shot learning with attention-based sequence-to-sequence models | End-to-end approaches have recently become popular as a means of simplifying
the training and deployment of speech recognition systems. However, they often
require large amounts of data to perform well on large vocabulary tasks. With
the aim of making end-to-end approaches usable by a broader range of
researchers, we explore the potential to use end-to-end methods in small
vocabulary contexts where smaller datasets may be used. A significant drawback
of small-vocabulary systems is the difficulty of expanding the vocabulary
beyond the original training samples -- therefore we also study strategies to
extend the vocabulary with only few examples per new class (few-shot learning).
Our results show that an attention-based encoder-decoder can be competitive
against a strong baseline on a small vocabulary keyword classification task,
reaching 97.5% accuracy on Tensorflow's Speech Commands dataset. It also
shows promising results on the few-shot learning problem, where a simple
strategy achieved 68.8% accuracy on new keywords with only 10 examples for
each new class. This score goes up to 88.4% with a larger set of 100 examples.
| 2,019 | Computation and Language |
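The abstract does not spell out its "simple strategy"; a common minimal baseline for this kind of few-shot keyword extension averages the encoder embeddings of the few examples per new class and classifies queries by nearest centroid, as in the hedged sketch below (random vectors stand in for real encoder outputs).

```python
import numpy as np

def build_centroids(support_embeddings):
    """support_embeddings: {keyword: array (n_examples, dim)} -> {keyword: centroid vector}."""
    return {kw: embs.mean(axis=0) for kw, embs in support_embeddings.items()}

def classify(query_embedding, centroids):
    """Nearest-centroid classification by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(centroids, key=lambda kw: cos(query_embedding, centroids[kw]))

# Toy usage: random vectors stand in for encoder outputs of the 10 audio examples per new keyword.
rng = np.random.default_rng(0)
support = {"yes": rng.standard_normal((10, 64)), "no": rng.standard_normal((10, 64))}
print(classify(rng.standard_normal(64), build_centroids(support)))
```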
Implicit Argument Prediction as Reading Comprehension | Implicit arguments, which cannot be detected solely through syntactic cues,
make it harder to extract predicate-argument tuples. We present a new model for
implicit argument prediction that draws on reading comprehension, casting the
predicate-argument tuple with the missing argument as a query. We also draw on
pointer networks and multi-hop computation. Our model shows good performance on
an argument cloze task as well as on a nominal implicit argument prediction
task.
| 2,018 | Computation and Language |
Federated Learning for Mobile Keyboard Prediction | We train a recurrent neural network language model using a distributed,
on-device learning framework called federated learning for the purpose of
next-word prediction in a virtual keyboard for smartphones. Server-based
training using stochastic gradient descent is compared with training on client
devices using the Federated Averaging algorithm. The federated algorithm, which
enables training on a higher-quality dataset for this use case, is shown to
achieve better prediction recall. This work demonstrates the feasibility and
benefit of training language models on client devices without exporting
sensitive user data to servers. The federated learning environment gives users
greater control over the use of their data and simplifies the task of
incorporating privacy by default with distributed training and aggregation
across a population of client devices.
| 2,019 | Computation and Language |
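The server-side aggregation step of Federated Averaging is simple to state: a weighted average of client parameters, with weights proportional to each client's number of local examples. A minimal NumPy sketch of just that step (local training, communication, and privacy machinery are omitted):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (the FedAvg aggregation step).

    client_weights: list over clients; each entry is a list of np.ndarray parameters.
    client_sizes:   number of local training examples per client (the averaging weights).
    """
    total = float(sum(client_sizes))
    num_params = len(client_weights[0])
    averaged = []
    for p in range(num_params):
        averaged.append(sum(w[p] * (n / total)
                            for w, n in zip(client_weights, client_sizes)))
    return averaged

# Toy round: three clients, each contributing two parameter tensors after local training.
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((4, 4)), rng.standard_normal(4)] for _ in range(3)]
sizes = [120, 80, 200]
new_global = federated_average(clients, sizes)
print([p.shape for p in new_global])
```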
Incorporating Relevant Knowledge in Context Modeling and Response
Generation | To sustain engaging conversation, it is critical for chatbots to make good
use of relevant knowledge. Equipped with a knowledge base, chatbots are able to
extract conversation-related attributes and entities to facilitate context
modeling and response generation. In this work, we distinguish the uses of
attribute and entity and incorporate them into the encoder-decoder architecture
in different manners. Based on the augmented architecture, our chatbot, namely
Mike, is able to generate responses by referring to proper entities from the
collected knowledge. To validate the proposed approach, we build a movie
conversation corpus on which the proposed approach significantly outperforms
four other knowledge-grounded models.
| 2,018 | Computation and Language |
Neural sequence labeling for Vietnamese POS Tagging and NER | This paper presents a neural architecture for Vietnamese sequence labeling
tasks including part-of-speech (POS) tagging and named entity recognition
(NER). We applied the model described in \cite{lample-EtAl:2016:N16-1} that is
a combination of bidirectional Long-Short Term Memory and Conditional Random
Fields, which rely on two sources of information about words: character-based
word representations learned from the supervised corpus and pre-trained word
embeddings learned from other unannotated corpora. Experiments on benchmark
datasets show that this work achieves state-of-the-art performance on both
tasks - 93.52% accuracy for POS tagging and 94.88% F1 for NER. Our source code
is available online.
| 2,018 | Computation and Language |
Encoding Implicit Relation Requirements for Relation Extraction: A Joint
Inference Approach | Relation extraction is the task of identifying predefined relationships
between entities, and plays an essential role in information extraction,
knowledge base construction, question answering and so on. Most existing
relation extractors make predictions for each entity pair locally and
individually, while ignoring implicit global clues available across different
entity pairs and in the knowledge base, which often leads to conflicts among
local predictions from different entity pairs. This paper proposes a joint
inference framework that employs such global clues to resolve disagreements
among local predictions. We exploit two kinds of clues to generate constraints
which can capture the implicit type and cardinality requirements of a relation.
Those constraints can be examined in either hard style or soft style, both of
which can be effectively explored in an integer linear program formulation.
Experimental results on both English and Chinese datasets show that our
proposed framework can effectively utilize those two categories of global clues
and resolve the disagreements among local predictions, thus improving various
relation extractors when such clues are applicable to the datasets. Our
experiments also indicate that the clues learnt automatically from existing
knowledge bases perform comparably to or better than those refined by humans.
| 2,018 | Computation and Language |
Multimodal Grounding for Sequence-to-Sequence Speech Recognition | Humans are capable of processing speech by making use of multiple sensory
modalities. For example, the environment where a conversation takes place
generally provides semantic and/or acoustic context that helps us to resolve
ambiguities or to recall named entities. Motivated by this, there have been
many works studying the integration of visual information into the speech
recognition pipeline. Specifically, in our previous work, we proposed a
multistep visual adaptive training approach which improves the accuracy of an
audio-based Automatic Speech Recognition (ASR) system. This approach, however,
is not end-to-end as it requires fine-tuning the whole model with an adaptation
layer. In this paper, we propose novel end-to-end multimodal ASR systems and
compare them to the adaptive approach by using a range of visual
representations obtained from state-of-the-art convolutional neural networks.
We show that adaptive training is effective for S2S models leading to an
absolute improvement of 1.4% in word error rate. As for the end-to-end systems,
although they perform better than the baseline, the improvements are slightly
smaller than with adaptive training: a 0.8% absolute WER reduction in single-best models. Using
ensemble decoding, end-to-end models reach a WER of 15% which is the lowest
score among all systems.
| 2,019 | Computation and Language |
Learning Semantic Representations for Novel Words: Leveraging Both Form
and Context | Word embeddings are a key component of high-performing natural language
processing (NLP) systems, but it remains a challenge to learn good
representations for novel words on the fly, i.e., for words that did not occur
in the training data. The general problem setting is that word embeddings are
induced on an unlabeled training corpus and then a model is trained that embeds
novel words into this induced embedding space. Currently, two approaches for
learning embeddings of novel words exist: (i) learning an embedding from the
novel word's surface-form (e.g., subword n-grams) and (ii) learning an
embedding from the context in which it occurs. In this paper, we propose an
architecture that leverages both sources of information - surface-form and
context - and show that it results in large increases in embedding quality. Our
architecture obtains state-of-the-art results on the Definitional Nonce and
Contextual Rare Words datasets. As input, we only require an embedding set and
an unlabeled corpus for training our architecture to produce embeddings
appropriate for the induced embedding space. Thus, our model can easily be
integrated into any existing NLP system and enhance its capability to handle
novel words.
| 2,018 | Computation and Language |
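The two information sources named above, surface form and context, can be illustrated with a tiny NumPy sketch that averages subword n-gram vectors and context word vectors and mixes them with a fixed gate. All vectors below are random placeholders and the gate is learned in the actual architecture, so this is only a schematic of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50
# Random placeholders: subword n-gram vectors and context word vectors from a pre-trained space.
subword_vecs = {ng: rng.standard_normal(DIM) for ng in ["<un", "unk", "nky", "ky>"]}
word_vecs = {w: rng.standard_normal(DIM) for w in ["the", "cat", "was", "completely"]}

def form_embedding(word_ngrams):
    return np.mean([subword_vecs[ng] for ng in word_ngrams if ng in subword_vecs], axis=0)

def context_embedding(context_words):
    return np.mean([word_vecs[w] for w in context_words if w in word_vecs], axis=0)

def novel_word_embedding(word_ngrams, context_words, gate=0.5):
    # The actual model learns how to weight form vs. context; a fixed gate stands in here.
    return gate * form_embedding(word_ngrams) + (1.0 - gate) * context_embedding(context_words)

emb = novel_word_embedding(["<un", "unk", "nky", "ky>"], ["the", "cat", "was", "completely"])
print(emb.shape)   # (50,)
```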
Long Short-Term Memory with Dynamic Skip Connections | In recent years, long short-term memory (LSTM) has been successfully used to
model sequential data of variable length. However, LSTM can still experience
difficulty in capturing long-term dependencies. In this work, we tried to
alleviate this problem by introducing a dynamic skip connection, which can
learn to directly connect two dependent words. Since there is no dependency
information in the training data, we propose a novel reinforcement
learning-based method to model the dependency relationship and connect
dependent words. The proposed model computes the recurrent transition functions
based on the skip connections, which provides a dynamic skipping advantage over
RNNs that always tackle entire sentences sequentially. Our experimental results
on three natural language processing tasks demonstrate that the proposed method
can achieve better performance than existing methods. In the number prediction
experiment, the proposed model outperformed LSTM with respect to accuracy by
nearly 20%.
| 2,018 | Computation and Language |
Multimodal One-Shot Learning of Speech and Images | Imagine a robot is shown new concepts visually together with spoken tags,
e.g. "milk", "eggs", "butter". After seeing one paired audio-visual example per
class, it is shown a new set of unseen instances of these objects, and asked to
pick the "milk". Without receiving any hard labels, could it learn to match the
new continuous speech input to the correct visual instance? Although unimodal
one-shot learning has been studied, where one labelled example in a single
modality is given per class, this example motivates multimodal one-shot
learning. Our main contribution is to formally define this task, and to propose
several baseline and advanced models. We use a dataset of paired spoken and
visual digits to specifically investigate recent advances in Siamese
convolutional neural networks. Our best Siamese model achieves twice the
accuracy of a nearest neighbour model using pixel-distance over images and
dynamic time warping over speech in 11-way cross-modal matching.
| 2,019 | Computation and Language |
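The nearest-neighbour baseline mentioned above can be written down directly: dynamic time warping (DTW) supplies a distance between two speech feature sequences, and a query is assigned the label of the closest support example. A self-contained sketch with random arrays standing in for real MFCC features:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two feature sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def nearest_support(query_speech, support):
    """support: list of (speech_features, label); return the label of the closest item."""
    return min(support, key=lambda item: dtw_distance(query_speech, item[0]))[1]

# Toy usage: random features stand in for MFCCs of the spoken one-shot examples.
rng = np.random.default_rng(0)
support = [(rng.standard_normal((30 + i, 13)), lab)
           for i, lab in enumerate(["milk", "eggs", "butter"])]
query = rng.standard_normal((28, 13))
print(nearest_support(query, support))
```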
A Hierarchical Framework for Relation Extraction with Reinforcement
Learning | Most existing methods determine relation types only after all the entities
have been recognized, thus the interaction between relation types and entity
mentions is not fully modeled. This paper presents a novel paradigm to deal
with relation extraction by regarding the related entities as the arguments of
a relation. We apply a hierarchical reinforcement learning (HRL) framework in
this paradigm to enhance the interaction between entity mentions and relation
types. The whole extraction process is decomposed into a hierarchy of two-level
RL policies for relation detection and entity extraction respectively, so that
it is more feasible and natural to deal with overlapping relations. Our model
was evaluated on public datasets collected via distant supervision, and results
show that it gains better performance than existing methods and is more
powerful for extracting overlapping relations.
| 2,018 | Computation and Language |
Zero-shot Neural Transfer for Cross-lingual Entity Linking | Cross-lingual entity linking maps an entity mention in a source language to
its corresponding entry in a structured knowledge base that is in a different
(target) language. While previous work relies heavily on bilingual lexical
resources to bridge the gap between the source and the target languages, these
resources are scarce or unavailable for many low-resource languages. To address
this problem, we investigate zero-shot cross-lingual entity linking, in which
we assume no bilingual lexical resources are available in the source
low-resource language. Specifically, we propose pivot-based entity linking,
which leverages information from a high-resource "pivot" language to train
character-level neural entity linking models that are transferred to the source
low-resource language in a zero-shot manner. With experiments on 9 low-resource
languages and transfer through a total of 54 languages, we show that our
proposed pivot-based framework improves entity linking accuracy 17% (absolute)
on average over the baseline systems, for the zero-shot scenario. Further, we
also investigate the use of language-universal phonological representations
which improves average accuracy (absolute) by 36% when transferring between
languages that use different scripts.
| 2,018 | Computation and Language |
Dual Latent Variable Model for Low-Resource Natural Language Generation
in Dialogue Systems | Recent deep learning models have shown improving results on natural language
generation (NLG), provided that sufficient annotated data is available. However,
modest training data may harm such models' performance. Thus, how to build a
generator that can utilize as much knowledge as possible from low-resource
data is a crucial issue in NLG. This paper presents a variational neural-based
generation model to tackle the NLG problem of having a limited labeled dataset,
in which we integrate variational inference into an encoder-decoder generator
and introduce a novel auxiliary autoencoding with an effective training
procedure. Experiments showed that the proposed methods not only outperform
previous models when a sufficient training dataset is available, but also work
acceptably well when the training data is scarce.
| 2,018 | Computation and Language |
Adversarially-Trained Normalized Noisy-Feature Auto-Encoder for Text
Generation | This article proposes Adversarially-Trained Normalized Noisy-Feature
Auto-Encoder (ATNNFAE) for byte-level text generation. An ATNNFAE consists of
an auto-encoder where the internal code is normalized on the unit sphere and
corrupted by additive noise. Simultaneously, a replica of the decoder (sharing
the same parameters as the AE decoder) is used as the generator and fed with
random latent vectors. An adversarial discriminator is trained to distinguish
training samples reconstructed from the AE from samples produced through the
random-input generator, making the entire generator-discriminator path
differentiable for discrete data like text. The combined effect of noise
injection in the code and shared weights between the decoder and the generator
can prevent the mode collapsing phenomenon commonly observed in GANs. Since
perplexity cannot be applied to non-sequential text generation, we propose a
new evaluation method using the total variation distance between frequencies of
hash-coded byte-level n-grams (NGTVD). NGTVD is a single benchmark that can
characterize both the quality and the diversity of the generated texts.
Experiments are offered on 6 large-scale datasets in Arabic, Chinese and
English, with comparisons against n-gram baselines and recurrent neural
networks (RNNs). An ablation study on both the noise level and the discriminator
is performed. We find that RNNs have trouble competing with the n-gram
baselines, and the ATNNFAE results are generally competitive.
| 2,018 | Computation and Language |
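Under one straightforward reading of the NGTVD description above, the metric hashes byte-level n-grams into a fixed number of buckets, normalizes the bucket counts into frequency distributions for generated and reference text, and takes their total variation distance. The sketch below follows that reading; the bucket count, n-gram order, and hash function are assumptions, not the paper's exact choices.

```python
import zlib
from collections import Counter

def hashed_ngram_freqs(texts, n=4, buckets=4096):
    """Frequency distribution over hash buckets of byte-level n-grams."""
    counts = Counter()
    for t in texts:
        data = t.encode("utf-8")
        for i in range(len(data) - n + 1):
            counts[zlib.crc32(data[i:i + n]) % buckets] += 1
    total = sum(counts.values()) or 1
    return {b: c / total for b, c in counts.items()}

def ngtvd(generated, reference, n=4, buckets=4096):
    """Total variation distance between hashed byte n-gram frequencies (one reading of NGTVD)."""
    p = hashed_ngram_freqs(generated, n, buckets)
    q = hashed_ngram_freqs(reference, n, buckets)
    return 0.5 * sum(abs(p.get(b, 0.0) - q.get(b, 0.0)) for b in set(p) | set(q))

reference = ["the cat sat on the mat", "a dog barked at the cat"]
generated = ["the cat sat on the mat", "the the the the"]
print(round(ngtvd(generated, reference), 3))   # lower means closer to the reference distribution
```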
Densely Connected Attention Propagation for Reading Comprehension | We propose DecaProp (Densely Connected Attention Propagation), a new densely
connected neural architecture for reading comprehension (RC). There are two
distinct characteristics of our model. Firstly, our model densely connects all
pairwise layers of the network, modeling relationships between passage and
query across all hierarchical levels. Secondly, the dense connectors in our
network are learned via attention instead of standard residual skip-connectors.
To this end, we propose novel Bidirectional Attention Connectors (BAC) for
efficiently forging connections throughout the network. We conduct extensive
experiments on four challenging RC benchmarks. Our proposed approach achieves
state-of-the-art results on all four, outperforming existing baselines by up to
$2.6\%-14.2\%$ in absolute F1 score.
| 2,019 | Computation and Language |
Speech Intention Understanding in a Head-final Language: A
Disambiguation Utilizing Intonation-dependency | For a large portion of real-life utterances, the intention cannot be solely
decided by either their semantic or syntactic characteristics. Although not all
the sociolinguistic and pragmatic information can be digitized, at least
phonetic features are indispensable in understanding the spoken language.
Especially in head-final languages such as Korean, sentence-final prosody has
great importance in identifying the speaker's intention. This paper suggests a
system which identifies the inherent intention of a spoken utterance given its
transcript, in some cases using auxiliary acoustic features. The main point
here is a separate distinction for cases where discrimination of intention
requires an acoustic cue. Thus, the proposed classification system decides
whether the given utterance is a fragment, statement, question, command, or a
rhetorical question/command, utilizing the intonation-dependency coming from
the head-finality. Based on an intuitive understanding of the Korean language
that is engaged in the data annotation, we construct a network which identifies
the intention of a speech, and validate its utility with the test sentences.
The system, if combined with up-to-date speech recognizers, is expected to be
flexibly inserted into various language understanding modules.
| 2,022 | Computation and Language |
Improving End-to-end Speech Recognition with Pronunciation-assisted
Sub-word Modeling | Most end-to-end speech recognition systems model text directly as a sequence
of characters or sub-words. Current approaches to sub-word extraction only
consider character sequence frequencies, which at times produce inferior
sub-word segmentation that might lead to erroneous speech recognition output.
We propose pronunciation-assisted sub-word modeling (PASM), a sub-word
extraction method that leverages the pronunciation information of a word.
Experiments show that the proposed method can greatly improve upon the
character-based baseline, and also outperform commonly used byte-pair encoding
methods.
| 2,019 | Computation and Language |
Multi-labeled Relation Extraction with Attentive Capsule Network | Disclosing multiple overlapping relations in a sentence remains
challenging. Most current neural models, inconveniently
assuming that each sentence is explicitly mapped to a single relation label, cannot
handle multiple relations properly, as the overlapping features of the relations
are either ignored or very difficult to identify. To tackle this issue,
we propose a novel approach for multi-labeled relation extraction with a capsule
network, which performs considerably better than current convolutional or recurrent
networks in identifying the highly overlapping relations within an individual
sentence. To better cluster the features and precisely extract the relations,
we further devise attention-based routing algorithm and sliding-margin loss
function, and embed them into our capsule network. The experimental results
show that the proposed approach can indeed extract the highly overlapped
features and achieve significant performance improvement for relation
extraction compared to state-of-the-art works.
| 2,018 | Computation and Language |
User Modeling for Task Oriented Dialogues | We introduce end-to-end neural network based models for simulating users of
task-oriented dialogue systems. User simulation in dialogue systems is crucial
from two different perspectives: (i) automatic evaluation of different dialogue
models, and (ii) training task-oriented dialogue systems. We design a
hierarchical sequence-to-sequence model that first encodes the initial user
goal and system turns into fixed length representations using Recurrent Neural
Networks (RNN). It then encodes the dialogue history using another RNN layer.
At each turn, user responses are decoded from the hidden representations of the
dialogue level RNN. This hierarchical user simulator (HUS) approach allows the
model to capture undiscovered parts of the user goal without the need for
explicit dialogue state tracking. We further develop several variants by
utilizing a latent variable model to inject random variations into user
responses to promote diversity in simulated user responses and a novel goal
regularization mechanism to penalize divergence of user responses from the
initial user goal. We evaluate the proposed models on movie ticket booking
domain by systematically interacting each user simulator with various dialogue
system policies trained with different objectives and users.
| 2,018 | Computation and Language |
ReDecode Framework for Iterative Improvement in Paraphrase Generation | Generating paraphrases, that is, different variations of a sentence conveying
the same meaning, is an important yet challenging task in NLP. Automatically
generating paraphrases has its utility in many NLP tasks like question
answering, information retrieval, conversational systems to name a few. In this
paper, we introduce iterative refinement of generated paraphrases within VAE
based generation framework. Current sequence generation models lack the
capability to (1) make improvements once the sentence is generated; (2) rectify
errors made while decoding. We propose a technique to iteratively refine the
output using multiple decoders, each one attending on the output sentence
generated by the previous decoder. We improve current state-of-the-art results
significantly, with over 9% and 28% absolute increases in METEOR scores on
Quora question pairs and MSCOCO datasets respectively. We also show
qualitatively through examples that our re-decoding approach generates better
paraphrases compared to a single decoder by rectifying errors and making
improvements in paraphrase structure, inducing variations and introducing new
but semantically coherent information.
| 2,018 | Computation and Language |
Product Title Refinement via Multi-Modal Generative Adversarial Learning | Nowadays, an increasing number of customers are in favor of using E-commerce
Apps to browse and purchase products. Since merchants are usually inclined to
employ redundant and over-informative product titles to attract customers'
attention, it is of great importance to concisely display short product titles
on the limited screens of cell phones. Previous research mainly considers the
textual information of long product titles and lacks a human-like view during
the training and evaluation procedure. In this paper, we propose a Multi-Modal Generative
Adversarial Network (MM-GAN) for short product title generation, which
innovatively incorporates image information, attribute tags from the product
and the textual information from original long titles. MM-GAN treats short
titles generation as a reinforcement learning process, where the generated
titles are evaluated by the discriminator in a human-like view.
| 2,018 | Computation and Language |
Sequence-Level Knowledge Distillation for Model Compression of
Attention-based Sequence-to-Sequence Speech Recognition | We investigate the feasibility of sequence-level knowledge distillation of
Sequence-to-Sequence (Seq2Seq) models for Large Vocabulary Continuous Speech
Recognition (LVCSR). We first use a pre-trained larger teacher model to
generate multiple hypotheses per utterance with beam search. With the same
input, we then train the student model using these hypotheses generated from
the teacher as pseudo labels in place of the original ground truth labels. We
evaluate our proposed method using the Wall Street Journal (WSJ) corpus. It
achieved up to $9.8\times$ parameter reduction with an accuracy loss of up to a
7.0% word-error rate (WER) increase.
| 2,018 | Computation and Language |
Forecasting People's Needs in Hurricane Events from Social Network | Social networks can serve as a valuable communication channel for calls for
help, offering assistance, and coordinating rescue activities in disasters.
Social networks such as Twitter allow users to continuously update relevant
information, which is especially useful during a crisis, where the rapidly
changing conditions make it crucial to be able to access accurate information
promptly. Social media helps those directly affected to inform others of
conditions on the ground in real time and thus enables rescue workers to
coordinate their efforts more effectively, better meeting the survivors' needs.
This paper presents a new sequence-to-sequence-based framework for forecasting
people's needs during disasters using social media and weather data. It
consists of two Long Short-Term Memory (LSTM) models, one of which encodes
input sequences of weather information and the other serves as a conditional
decoder that decodes the encoded vector and forecasts the survivors' needs.
Case studies utilizing data collected during Hurricane Sandy in 2012, Hurricane
Harvey and Hurricane Irma in 2017 were analyzed and the results compared with
those obtained using a statistical language model n-gram and an LSTM generative
model. Our proposed sequence-to-sequence method forecasts people's needs more
successfully than either of the other models. This new approach shows great
promise for enhancing disaster management activities such as evacuation
planning and commodity flow management.
| 2,020 | Computation and Language |
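The encoder/conditional-decoder pattern described in the abstract above (an LSTM encodes the weather sequence and a second LSTM decodes the forecast of needs) can be sketched as follows. This is only an illustrative sketch with assumed dimensions and a single-layer setup, not the authors' architecture.

    # Minimal sketch of the LSTM encoder / conditional decoder pattern described above.
    # Dimensions and layer counts are illustrative assumptions.
    import torch
    import torch.nn as nn

    class NeedForecaster(nn.Module):
        def __init__(self, weather_dim, vocab_size, hidden=128, emb=64):
            super().__init__()
            self.encoder = nn.LSTM(weather_dim, hidden, batch_first=True)
            self.embed = nn.Embedding(vocab_size, emb)
            self.decoder = nn.LSTM(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, weather_seq, need_tokens):
            # Encode the weather sequence; its final state conditions the decoder.
            _, state = self.encoder(weather_seq)      # (B, T, weather_dim) -> (h, c)
            dec_in = self.embed(need_tokens)          # teacher-forced need tokens
            dec_out, _ = self.decoder(dec_in, state)
            return self.out(dec_out)                  # logits over the "needs" vocabulary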
The Cinderella Complex: Word Embeddings Reveal Gender Stereotypes in
Movies and Books | Our analysis of thousands of movies and books reveals how these cultural
products weave stereotypical gender roles into morality tales and perpetuate
gender inequality through storytelling. Using word embedding techniques, we
reveal the constructed emotional dependency of female characters on male
characters in stories.
| 2,020 | Computation and Language |
Learning Personalized End-to-End Goal-Oriented Dialog | Most existing works on dialog systems only consider conversation content
while neglecting the personality of the user the bot is interacting with, which
begets several unsolved issues. In this paper, we present a personalized
end-to-end model in an attempt to leverage personalization in goal-oriented
dialogs. We first introduce a Profile Model which encodes user profiles into
distributed embeddings and refers to conversation history from other similar
users. Then a Preference Model captures user preferences over knowledge base
entities to handle the ambiguity in user requests. The two models are combined
into the Personalized MemN2N. Experiments show that the proposed model achieves
qualitative performance improvements over state-of-the-art methods. As for
human evaluation, it also outperforms other approaches in terms of task
completion rate and user satisfaction.
| 2,018 | Computation and Language |
Fine-tuning of Language Models with Discriminator | Cross-entropy loss is a common choice when it comes to multiclass
classification tasks and language modeling in particular. Minimizing this loss
results in language models of very good quality. We show that it is possible to
fine-tune these models and make them perform even better if they are fine-tuned
with the sum of the cross-entropy loss and the reverse Kullback-Leibler divergence. The
latter is estimated using a discriminator network that we train in advance.
During fine-tuning, the probabilities of rare words, which are usually underestimated
by language models, increase. The novel approach that we propose allows us
to reach state-of-the-art quality on Penn Treebank: perplexity decreases from
52.4 to 52.1. Our fine-tuning algorithm is rather fast, scales well to
different architectures and datasets and requires almost no hyperparameter
tuning: the only hyperparameter that needs to be tuned is learning rate.
| 2,019 | Computation and Language |
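The combined objective in the abstract above (cross-entropy plus a discriminator-estimated reverse KL term) can be sketched via the standard density-ratio trick. The exact estimator and weighting used in the paper may differ; `lm`, `discriminator`, `sample_from_lm`, and the weight `alpha` are assumptions for illustration.

    # Sketch of fine-tuning with cross-entropy plus a discriminator-estimated
    # reverse KL term (density-ratio trick). `lm`, `discriminator`, and
    # `sample_from_lm` are hypothetical; the paper's estimator may differ.
    import torch
    import torch.nn.functional as F

    def finetune_loss(lm, discriminator, batch_tokens, alpha=0.1):
        logits = lm(batch_tokens[:, :-1])                   # next-token prediction
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             batch_tokens[:, 1:].reshape(-1))
        # Reverse KL(model || data) estimated on model samples: for an optimal
        # discriminator D(x) = P(x is real), log(p_model/p_data) = log((1-D)/D).
        samples = sample_from_lm(lm, batch_tokens.size(0))  # hypothetical sampler
        d = discriminator(samples).clamp(1e-6, 1 - 1e-6)
        rev_kl = (torch.log(1 - d) - torch.log(d)).mean()
        return ce + alpha * rev_kl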
Not Just Depressed: Bipolar Disorder Prediction on Reddit | Bipolar disorder, an illness characterized by manic and depressive episodes,
affects more than 60 million people worldwide. We present a preliminary study
on bipolar disorder prediction from user-generated text on Reddit, which relies
on users' self-reported labels. Our benchmark classifiers for bipolar disorder
prediction outperform the baselines and reach accuracy and F1-scores of above
86%. Feature analysis shows interesting differences in language use between
users with bipolar disorders and the control group, including differences in
the use of emotion-expressive words.
| 2,018 | Computation and Language |
A Deep Ensemble Framework for Fake News Detection and Classification | The detection of fake news, rumors, incorrect information, and misinformation
is a crucial issue nowadays, as such content can have serious consequences for our social
fabric. The amount of such information is increasing rapidly due to the
availability of enormous web information sources, including social media feeds,
news blogs, and online newspapers.
In this paper, we develop various deep learning models for detecting fake
news and classifying them into the pre-defined fine-grained categories.
At first, we develop models based on Convolutional Neural Network (CNN) and
Bi-directional Long Short Term Memory (Bi-LSTM) networks. The representations
obtained from these two models are fed into a Multi-layer Perceptron Model
(MLP) for the final classification. Our experiments on a benchmark dataset show
promising results with an overall accuracy of 44.87\%, which outperforms the
current state of the art.
| 2,018 | Computation and Language |
Classifying Patent Applications with Ensemble Methods | We present methods for the automatic classification of patent applications
using an annotated dataset provided by the organizers of the ALTA 2018 shared
task - Classifying Patent Applications. The goal of the task is to use
computational methods to categorize patent applications according to a
coarse-grained taxonomy of eight classes based on the International Patent
Classification (IPC). We tested a variety of approaches for this task and the
best results, 0.778 micro-averaged F1-Score, were achieved by SVM ensembles
using a combination of words and characters as features. Our team, BMZ, was
ranked first among 14 teams in the competition.
| 2,018 | Computation and Language |
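The winning configuration described above (SVM ensembles over combined word and character features) maps naturally onto a scikit-learn pipeline. The sketch below is only illustrative: the n-gram ranges, the number of ensemble members, and the hard-voting scheme are assumptions, not the authors' exact settings.

    # Illustrative scikit-learn sketch of word+character TF-IDF features with an
    # ensemble of linear SVMs; hyperparameters are assumptions.
    from sklearn.pipeline import Pipeline, FeatureUnion
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.ensemble import VotingClassifier

    features = FeatureUnion([
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ])
    # A small hard-voting ensemble of linear SVMs with different regularization strengths.
    ensemble = VotingClassifier([
        (f"svm_C{c}", Pipeline([("feats", features), ("clf", LinearSVC(C=c))]))
        for c in (0.5, 1.0, 2.0)
    ], voting="hard")
    # ensemble.fit(train_texts, train_labels); preds = ensemble.predict(test_texts)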
CUNI System for the WMT18 Multimodal Translation Task | We present our submission to the WMT18 Multimodal Translation Task. The main
feature of our submission is applying a self-attentive network instead of a
recurrent neural network. We evaluate two methods of incorporating the visual
features in the model: first, we include the image representation as another
input to the network; second, we train the model to predict the visual features
and use this prediction as an auxiliary objective. For our submission, we acquired both
textual and multimodal additional data. Both of the proposed methods yield
significant improvements over recurrent networks and self-attentive textual
baselines.
| 2,018 | Computation and Language |
Analyzing deep CNN-based utterance embeddings for acoustic model
adaptation | We explore why deep convolutional neural networks (CNNs) with small
two-dimensional kernels, primarily used for modeling spatial relations in
images, are also effective in speech recognition. We analyze the
representations learned by deep CNNs and compare them with deep neural network
(DNN) representations and i-vectors, in the context of acoustic model
adaptation. To explore whether interpretable information can be decoded from
the learned representations we evaluate their ability to discriminate between
speakers, acoustic conditions, noise type, and gender using the Aurora-4
dataset. We extract both whole model embeddings (to capture the information
learned across the whole network) and layer-specific embeddings which enable
understanding of the flow of information across the network. We also use
learned representations as the additional input for a time-delay neural network
(TDNN) for the Aurora-4 and MGB-3 English datasets. We find that deep CNN
embeddings outperform DNN embeddings for acoustic model adaptation and
auxiliary features based on deep CNN embeddings result in similar word error
rates to i-vectors.
| 2,018 | Computation and Language |
Input Combination Strategies for Multi-Source Transformer Decoder | In multi-source sequence-to-sequence tasks, the attention mechanism can be
modeled in several ways. This topic has been thoroughly studied on recurrent
architectures. In this paper, we extend the previous work to the
encoder-decoder attention in the Transformer architecture. We propose four
different input combination strategies for the encoder-decoder attention:
serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of
multimodal translation and translation with multiple source languages. The
experiments show that the models are able to use multiple sources and improve
over single source baselines.
| 2,018 | Computation and Language |
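Of the four strategies named above, the "parallel" combination is the simplest to sketch: the decoder states attend to each encoder separately and the resulting context vectors are summed. The fragment below is a simplified illustration of that one strategy only; layer normalization, masking, and the feed-forward block are omitted, and the dimensions are assumptions.

    # Sketch of the "parallel" input combination strategy for a multi-source
    # Transformer decoder layer: attend to each source encoder separately, sum the contexts.
    import torch
    import torch.nn as nn

    class ParallelCrossAttention(nn.Module):
        def __init__(self, d_model=512, n_heads=8):
            super().__init__()
            self.attn_src1 = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.attn_src2 = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        def forward(self, dec_states, enc1, enc2):
            ctx1, _ = self.attn_src1(dec_states, enc1, enc1)   # attend to source 1
            ctx2, _ = self.attn_src2(dec_states, enc2, enc2)   # attend to source 2
            return dec_states + ctx1 + ctx2                    # parallel: sum the contexts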
End-to-End Non-Autoregressive Neural Machine Translation with
Connectionist Temporal Classification | Autoregressive decoding is the only part of sequence-to-sequence models that
prevents them from massive parallelization at inference time.
Non-autoregressive models enable the decoder to generate all output symbols
independently in parallel. We present a novel non-autoregressive architecture
based on connectionist temporal classification and evaluate it on the task of
neural machine translation. Unlike other non-autoregressive methods which
operate in several steps, our model can be trained end-to-end. We conduct
experiments on the WMT English-Romanian and English-German datasets. Our models
achieve a significant speedup over the autoregressive models, keeping the
translation quality comparable to other non-autoregressive models.
| 2,018 | Computation and Language |
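The training objective described above amounts to applying a CTC loss between the decoder's per-position output distributions and the target sentence, which marginalizes over all monotonic alignments. A hedged sketch of that loss computation follows; the `model` producing per-position logits (over a suitably expanded source length) is hypothetical.

    # Sketch of CTC training for non-autoregressive translation: the model emits a
    # distribution at every (expanded) source position in parallel, and CTC
    # marginalizes over alignments to the target sequence. `model` is hypothetical.
    import torch
    import torch.nn.functional as F

    ctc = torch.nn.CTCLoss(blank=0, zero_infinity=True)

    def ctc_nmt_loss(model, src_tokens, src_lengths, tgt_tokens, tgt_lengths):
        logits = model(src_tokens)                    # (batch, T_src, vocab), all positions at once
        log_probs = F.log_softmax(logits, dim=-1).transpose(0, 1)  # CTC expects (T, batch, vocab)
        return ctc(log_probs, tgt_tokens, src_lengths, tgt_lengths)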
Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a
Deep Neural Architecture for SRL? | Do unsupervised methods for learning rich, contextualized token
representations obviate the need for explicit modeling of linguistic structure
in neural network models for semantic role labeling (SRL)? We address this
question by incorporating the massively successful ELMo embeddings (Peters et
al., 2018) into LISA (Strubell et al., 2018), a strong, linguistically-informed
neural network architecture for SRL. In experiments on the CoNLL-2005 shared
task we find that though ELMo out-performs typical word embeddings, beginning
to close the gap in F1 between LISA with predicted and gold syntactic parses,
syntactically-informed models still out-perform syntax-free models when both
use ELMo, especially on out-of-domain data. Our results suggest that linguistic
structures are indeed still relevant in this golden age of deep learning for
NLP.
| 2,018 | Computation and Language |
CQASUMM: Building References for Community Question Answering
Summarization Corpora | Community Question Answering forums such as Quora, Stackoverflow are rich
knowledge resources, often catering to information on topics overlooked by
major search engines. Answers submitted to these forums are often elaborate,
contain spam, and are marred by slurs and business promotions. It is difficult for
a reader to go through numerous such answers to gauge community opinion. As a
result, summarization becomes a prioritized task for CQA forums. While a number
of efforts have been made to summarize factoid CQA, little work exists in
summarizing non-factoid CQA. We believe this is due to the lack of a
considerably large, annotated dataset for CQA summarization. We create CQASUMM,
the first large annotated CQA summarization dataset, by filtering the 4.4 million
Yahoo! Answers L6 dataset. We sample threads where the best answer can double
up as a reference summary and build hundred-word summaries from them. We treat
other answers as candidate documents for summarization. We provide a script to
generate the dataset and introduce the new task of Community Question Answering
Summarization. Multi document summarization has been widely studied with news
article datasets, especially in the DUC and TAC challenges using news corpora.
However, documents in CQA have higher variance, contradictory opinions, and less
overlap. We compare the popular multi-document summarization
techniques and evaluate their performance on our CQA corpora. We look into the
state of the art and understand the cases where existing multi-document
summarizers (MDS) fail. We find that most MDS workflows are built for
entirely factual news corpora, whereas our corpus has a fair share of
opinion-based instances as well. We therefore introduce OpinioSumm, a new MDS which
outperforms the best baseline by 4.6% w.r.t. the ROUGE-1 score.
| 2,018 | Computation and Language |
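The thread-filtering idea described above (keep threads whose best answer can double as a roughly hundred-word reference summary, treat the remaining answers as documents to summarize) can be sketched as below. The field names, length thresholds, and minimum answer count are purely hypothetical; the released script should be consulted for the actual criteria.

    # Sketch of the thread-filtering idea behind a CQASUMM-like corpus.
    # Field names and thresholds are hypothetical, not the authors' values.
    def build_cqasumm_like_corpus(threads, min_words=80, max_words=150, min_answers=4):
        corpus = []
        for thread in threads:
            best = thread["best_answer"]
            others = [a for a in thread["answers"] if a != best]
            n_words = len(best.split())
            if min_words <= n_words <= max_words and len(others) >= min_answers:
                reference = " ".join(best.split()[:100])   # truncate to a hundred-word summary
                corpus.append({"question": thread["question"],
                               "documents": others,        # candidate documents to summarize
                               "reference": reference})
        return corpus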
Multi-encoder multi-resolution framework for end-to-end speech
recognition | Attention-based methods and Connectionist Temporal Classification (CTC)
network have been promising research directions for end-to-end Automatic Speech
Recognition (ASR). The joint CTC/Attention model has achieved great success by
utilizing both architectures during multi-task training and joint decoding. In
this work, we present a novel Multi-Encoder Multi-Resolution (MEMR) framework
based on the joint CTC/Attention model. Two heterogeneous encoders with
different architectures, temporal resolutions and separate CTC networks work in
parallel to extract complementary acoustic information. A hierarchical
attention mechanism is then used to combine the encoder-level information. To
demonstrate the effectiveness of the proposed model, experiments are conducted
on Wall Street Journal (WSJ) and CHiME-4, resulting in relative Word Error Rate
(WER) reduction of 18.0-32.1%. Moreover, the proposed MEMR model achieves 3.6%
WER in the WSJ eval92 test set, which is the best WER reported for an
end-to-end system on this benchmark.
| 2,018 | Computation and Language |
Stream attention-based multi-array end-to-end speech recognition | Automatic Speech Recognition (ASR) using multiple microphone arrays has
achieved great success in far-field robustness. Taking advantage of all the
information that each array shares and contributes is crucial in this task.
Motivated by the advances of joint Connectionist Temporal Classification
(CTC)/attention mechanism in the End-to-End (E2E) ASR, a stream attention-based
multi-array framework is proposed in this work. Microphone arrays, acting as
information streams, are activated by separate encoders and decoded under the
instruction of both CTC and attention networks. In terms of attention, a
hierarchical structure is adopted. On top of the regular attention networks,
stream attention is introduced to steer the decoder toward the most informative
encoders. Experiments have been conducted on AMI and DIRHA multi-array corpora
using the encoder-decoder architecture. Compared with the best single-array
results, the proposed framework has achieved relative Word Error Rate (WER)
reductions of 3.7% and 9.7% on the two datasets, respectively, which is better
than conventional strategies as well.
| 2,019 | Computation and Language |
Unseen Word Representation by Aligning Heterogeneous Lexical Semantic
Spaces | Word embedding techniques heavily rely on the abundance of training data for
individual words. Given the Zipfian distribution of words in natural language
texts, a large number of words do not usually appear frequently or at all in
the training data. In this paper we put forward a technique that exploits the
knowledge encoded in lexical resources, such as WordNet, to induce embeddings
for unseen words. Our approach adapts graph embedding and cross-lingual vector
space transformation techniques in order to merge lexical knowledge encoded in
ontologies with that derived from corpus statistics. We show that the approach
can provide consistent performance improvements across multiple evaluation
benchmarks: in-vitro, on multiple rare word similarity datasets, and in-vivo,
in two downstream text classification tasks.
| 2,018 | Computation and Language |
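The "cross-lingual vector space transformation" step mentioned above can be illustrated with an orthogonal (Procrustes) mapping learned on words shared between the graph-based space and the corpus space, after which an unseen word's graph-based vector is projected into the corpus space. This is a sketch of the general technique; the paper's exact transformation may differ.

    # Sketch of aligning two lexical semantic spaces with an orthogonal mapping
    # learned on shared words (orthogonal Procrustes), then projecting an unseen
    # word's graph-based vector into the corpus space. Illustrative only.
    import numpy as np

    def learn_orthogonal_map(X_graph, Y_corpus):
        # X_graph, Y_corpus: (n_shared_words, dim) matrices, rows aligned by word.
        u, _, vt = np.linalg.svd(X_graph.T @ Y_corpus)
        return u @ vt                          # orthogonal W minimizing ||X W - Y||_F

    def embed_unseen(word_graph_vec, W):
        return word_graph_vec @ W              # induced corpus-space embedding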
Improved Dynamic Memory Network for Dialogue Act Classification with
Adversarial Training | Dialogue Act (DA) classification is a challenging problem in dialogue
interpretation, which aims to attach semantic labels to utterances and
characterize the speaker's intention. Currently, many existing approaches
formulate the DA classification problem ranging from multi-classification to
structured prediction, which suffer from two limitations: (a) these methods either
rely on handcrafted features or have limited memory; (b) adversarial
examples cannot be correctly classified by traditional training methods. To
address these issues, in this paper we first cast the problem as a question-answering
problem and propose an improved dynamic memory network with a
hierarchical pyramidal utterance encoder. Moreover, we apply adversarial
training to train our proposed model. We evaluate our model on two public
datasets, i.e., Switchboard dialogue act corpus and the MapTask corpus.
Extensive experiments show that our proposed model is not only robust, but also
achieves better performance when compared with some state-of-the-art baselines.
| 2,018 | Computation and Language |
A Unified Model for Opinion Target Extraction and Target Sentiment
Prediction | Target-based sentiment analysis involves opinion target extraction and target
sentiment classification. However, most existing works study only
one of these two sub-tasks, which hinders their practical use. This paper
aims to solve the complete task of target-based sentiment analysis in an
end-to-end fashion, and presents a novel unified model which applies a unified
tagging scheme. Our framework involves two stacked recurrent neural networks:
The upper one predicts the unified tags to produce the final output results of
the primary target-based sentiment analysis; The lower one performs an
auxiliary target boundary prediction aiming at guiding the upper network to
improve the performance of the primary task. To explore the inter-task
dependency, we propose to explicitly model the constrained transitions from
target boundaries to target sentiment polarities. We also propose to maintain
the sentiment consistency within an opinion target via a gate mechanism which
models the relation between the features for the current word and the previous
word. We conduct extensive experiments on three benchmark datasets and our
framework achieves consistently superior results.
| 2,019 | Computation and Language |
Domain Agnostic Real-Valued Specificity Prediction | Sentence specificity quantifies the level of detail in a sentence,
characterizing the organization of information in discourse. While this
information is useful for many downstream applications, specificity prediction
systems predict very coarse labels (binary or ternary) and are trained on and
tailored toward specific domains (e.g., news). The goal of this work is to
generalize specificity prediction to domains where no labeled data is available
and output more nuanced real-valued specificity ratings.
We present an unsupervised domain adaptation system for sentence specificity
prediction, specifically designed to output real-valued estimates from binary
training labels. To calibrate the values of these predictions appropriately, we
regularize the posterior distribution of the labels towards a reference
distribution. We show that our framework generalizes well to three different
domains, achieving a 50%-68% reduction in mean absolute error compared with the current
state-of-the-art system trained for news sentence specificity. We also
demonstrate the potential of our work in improving the quality and
informativeness of dialogue generation systems.
| 2,019 | Computation and Language |
Exploring RNN-Transducer for Chinese Speech Recognition | End-to-end approaches have drawn much attention recently for significantly
simplifying the construction of an automatic speech recognition (ASR) system.
RNN transducer (RNN-T) is one of the popular end-to-end methods. Previous
studies have shown that RNN-T is difficult to train and a very complex training
process is needed for a reasonable performance. In this paper, we explore RNN-T
for a Chinese large vocabulary continuous speech recognition (LVCSR) task and
aim to simplify the training process while maintaining performance. First, a
new strategy of learning rate decay is proposed to accelerate the model
convergence. Second, we find that adding convolutional layers at the beginning
of the network and using ordered data can discard the pre-training process of
the encoder without loss of performance. Besides, we design experiments to find
a balance among GPU memory usage, training cycle, and model performance.
Finally, we achieve a 16.9% character error rate (CER) on our test set, which is
a 2% absolute improvement over a strong BLSTM CE system with a language model
trained on the same text corpus.
| 2,019 | Computation and Language |
Modeling Local Dependence in Natural Language with Multi-channel
Recurrent Neural Networks | Recurrent Neural Networks (RNNs) have been widely used in processing natural
language tasks and achieve huge success. Traditional RNNs usually treat each
token in a sentence uniformly and equally. However, this may miss the rich
semantic structure information of a sentence, which is useful for understanding
natural languages. Since semantic structures such as word dependence patterns
are not parameterized, it is a challenge to capture and leverage structure
information. In this paper, we propose an improved variant of RNN,
Multi-Channel RNN (MC-RNN), to dynamically capture and leverage local semantic
structure information. Concretely, MC-RNN contains multiple channels, each of
which represents a local dependence pattern at a time. An attention mechanism
is introduced to combine these patterns at each step, according to the semantic
information. Then we parameterize structure information by adaptively selecting
the most appropriate connection structures among channels. In this way, diverse
local structures and dependence patterns in sentences can be well captured by
MC-RNN. To verify the effectiveness of MC-RNN, we conduct extensive experiments
on typical natural language processing tasks, including neural machine
translation, abstractive summarization, and language modeling. Experimental
results on these tasks all show significant improvements of MC-RNN over current
top systems.
| 2,018 | Computation and Language |
Hate Speech Detection from Code-mixed Hindi-English Tweets Using Deep
Learning Models | This paper reports an increment to the state-of-the-art in hate speech
detection for English-Hindi code-mixed tweets. We compare three typical deep
learning models using domain-specific embeddings. On experimenting with a
benchmark dataset of English-Hindi code-mixed tweets, we observe that using
domain-specific embeddings results in an improved representation of target
groups, and an improved F-score.
| 2,018 | Computation and Language |
A Multi-layer LSTM-based Approach for Robot Command Interaction Modeling | As the first robotic platforms slowly approach our everyday life, we can
imagine a near future where service robots will be easily accessible by
non-expert users through vocal interfaces. The capability of managing natural
language would indeed speed up the process of integrating such platforms into
everyday life. Semantic parsing is a fundamental task of the Natural Language
Understanding process, as it allows extracting the meaning of a user utterance
to be used by a machine. In this paper, we present a preliminary study to
semantically parse user vocal commands for a House Service robot, using a
multi-layer Long Short-Term Memory neural network with an attention mechanism. The
system is trained on the Human Robot Interaction Corpus, and it is
preliminarily compared with previous approaches.
| 2,018 | Computation and Language |
An Online Attention-based Model for Speech Recognition | Attention-based end-to-end models such as Listen, Attend and Spell (LAS),
simplify the whole pipeline of traditional automatic speech recognition (ASR)
systems and become popular in the field of speech recognition. In previous
work, researchers have shown that such architectures can acquire comparable
results to state-of-the-art ASR systems, especially when using a bidirectional
encoder and global soft attention (GSA) mechanism. However, bidirectional
encoder and GSA are two obstacles for real-time speech recognition. In this
work, we aim to stream the LAS baseline by removing the above two obstacles. On the
encoder side, we use a latency-controlled (LC) bidirectional structure to
reduce the delay of forward computation. Meanwhile, an adaptive monotonic
chunk-wise attention (AMoChA) mechanism is proposed to replace GSA for the
calculation of attention weight distribution. Furthermore, we propose two
methods to alleviate the huge performance degradation when combining LC and
AMoChA. Finally, we successfully acquire an online LAS model, LC-AMoChA, which
has only 3.5% relative performance reduction to LAS baseline on our internal
Mandarin corpus.
| 2,019 | Computation and Language |
Modality Attention for End-to-End Audio-visual Speech Recognition | Audio-visual speech recognition (AVSR) is thought to be one of the
most promising solutions for robust speech recognition, especially in noisy
environments. In this paper, we propose a novel multimodal attention-based
method for audio-visual speech recognition which could automatically learn the
fused representation from both modalities based on their importance. Our method
is realized using state-of-the-art sequence-to-sequence (Seq2seq)
architectures. Experimental results show that relative improvements from 2% up
to 36% over the auditory modality alone are obtained depending on the different
signal-to-noise-ratio (SNR). Compared to the traditional feature concatenation
methods, our proposed approach can achieve better recognition performance under
both clean and noisy conditions. We believe the modality-attention-based end-to-end
method can be easily generalized to other multimodal tasks with correlated
information.
| 2,019 | Computation and Language |
Predicting Distresses using Deep Learning of Text Segments in Annual
Reports | Corporate distress models typically only employ the numerical financial
variables in the firms' annual reports. We develop a model that employs the
unstructured textual data in the reports as well, namely the auditors' reports
and managements' statements. Our model consists of a convolutional recurrent
neural network which, when concatenated with the numerical financial variables,
learns a descriptive representation of the text that is suited for corporate
distress prediction. We find that the unstructured data provides a
statistically significant enhancement of the distress prediction performance,
in particular for large firms where accurate predictions are of the utmost
importance. Furthermore, we find that auditors' reports are more informative
than managements' statements and that a joint model including both managements'
statements and auditors' reports displays no enhancement relative to a model
including only auditors' reports. Our model demonstrates a direct improvement
over existing state-of-the-art models.
| 2,018 | Computation and Language |
Unsupervised Transfer Learning for Spoken Language Understanding in
Intelligent Agents | User interaction with voice-powered agents generates large amounts of
unlabeled utterances. In this paper, we explore techniques to efficiently
transfer the knowledge from these unlabeled utterances to improve model
performance on Spoken Language Understanding (SLU) tasks. We use Embeddings
from Language Model (ELMo) to take advantage of unlabeled data by learning
contextualized word representations. Additionally, we propose ELMo-Light
(ELMoL), a faster and simpler unsupervised pre-training method for SLU. Our
findings suggest that unsupervised pre-training on a large corpus of unlabeled
utterances leads to significantly better SLU performance compared to training
from scratch and it can even outperform conventional supervised transfer.
Additionally, we show that the gains from unsupervised transfer techniques can
be further improved by supervised transfer. The improvements are more
pronounced in low resource settings and when using only 1000 labeled in-domain
samples, our techniques match the performance of training from scratch on
10-15x more labeled in-domain data.
| 2,018 | Computation and Language |
Multi-task learning for Joint Language Understanding and Dialogue State
Tracking | This paper presents a novel approach for multi-task learning of language
understanding (LU) and dialogue state tracking (DST) in task-oriented dialogue
systems. Multi-task training enables the sharing of the neural network layers
responsible for encoding the user utterance for both LU and DST and improves
performance while reducing the number of network parameters. In our proposed
framework, DST operates on a set of candidate values for each slot that has
been mentioned so far. These candidate sets are generated using LU slot
annotations for the current user utterance, dialogue acts corresponding to the
preceding system utterance and the dialogue state estimated for the previous
turn, enabling DST to handle slots with a large or unbounded set of possible
values and deal with slot values not seen during training. Furthermore, to
bridge the gap between training and inference, we investigate the use of
scheduled sampling on LU output for the current user utterance as well as the
DST output for the preceding turn.
| 2,018 | Computation and Language |
Towards Neural Machine Translation for African Languages | Given that South African education is in crisis, strategies for improvement
and sustainability of high-quality, up-to-date education must be explored. In
the migration of education online, inclusion of machine translation for
low-resourced local languages becomes necessary. This paper aims to spur the
use of current neural machine translation (NMT) techniques for low-resourced
local languages. The paper demonstrates state-of-the-art performance on
English-to-Setswana translation using the Autshumato dataset. The use of the
Transformer architecture beats previous techniques by 5.33 BLEU points. This
demonstrates the promise of using current NMT techniques for African languages.
| 2,018 | Computation and Language |
Few-shot Learning for Named Entity Recognition in Medical Text | Deep neural network models have recently achieved state-of-the-art
performance gains in a variety of natural language processing (NLP) tasks
(Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the
availability of large amounts of annotated examples, without which
state-of-the-art performance is rarely achievable. This is especially
inconvenient for the many NLP fields where annotated examples are scarce, such
as medical text. To improve NLP models in this situation, we evaluate five
improvements on named entity recognition (NER) tasks when only ten annotated
examples are available: (1) layer-wise initialization with pre-trained weights,
(2) hyperparameter tuning, (3) combining pre-training data, (4) custom word
embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental
results show that the F1 score of 69.3% achievable by state-of-the-art models
can be improved to 78.87%.
| 2,018 | Computation and Language |
Native Language Identification using i-vector | The task of determining a speaker's native language based only on their
speech in a second language is known as Native Language Identification, or
NLI. Due to its increasing applications in various domains of speech signal
processing, this has emerged as an important research area in recent times. In
this paper we have proposed an i-vector based approach to develop an automatic
NLI system using MFCC and GFCC features. For evaluation of our approach, we
have tested our framework on the 2016 ComParE Native language sub-challenge
dataset which has English language speakers from 11 different native language
backgrounds. Our proposed method outperforms the baseline system with an
improvement in accuracy by 21.95% for the MFCC feature based i-vector framework
and 22.81% for the GFCC feature based i-vector framework.
| 2,018 | Computation and Language |
Extractive Summary as Discrete Latent Variables | In this paper, we compare various methods to compress a text using a neural
model. We find that extracting tokens as latent variables significantly
outperforms the state-of-the-art discrete latent variable models such as
VQ-VAE. Furthermore, we compare various extractive compression schemes. There
are two best-performing methods that perform equally. One method is to simply
choose the tokens with the highest tf-idf scores. Another is to train a
bidirectional language model similar to ELMo and choose the tokens with the
highest loss. If we consider any subsequence of a text to be a text in a
broader sense, we conclude that language is a strong compression code of
itself. Our finding justifies the high quality of generation achieved with
hierarchical methods, as their latent variables are nothing but natural language
summaries. We also conclude that there is a hierarchy in language such that an
entire text can be predicted much more easily based on a sequence of a small
number of keywords, which can be easily found by classical methods such as tf-idf.
We speculate that this extraction process may be useful for unsupervised
hierarchical text generation.
| 2,019 | Computation and Language |
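The simpler of the two best-performing extraction schemes above, selecting the tokens with the highest tf-idf scores, can be sketched with scikit-learn. The value of k and the vectorizer settings are illustrative assumptions.

    # Sketch of the tf-idf extraction scheme: the "latent summary" of a text is
    # simply its k tokens with the highest tf-idf scores. Parameters are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer

    def tfidf_keyword_summaries(texts, k=10):
        vec = TfidfVectorizer()
        tfidf = vec.fit_transform(texts)              # (n_docs, vocab) sparse matrix
        vocab = vec.get_feature_names_out()
        summaries = []
        for row in tfidf:                             # each row is one document
            dense = row.toarray().ravel()
            top = dense.argsort()[::-1][:k]           # indices of the k highest scores
            summaries.append([vocab[i] for i in top if dense[i] > 0])
        return summaries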
An Introductory Survey on Attention Mechanisms in NLP Problems | First derived from human intuition and later adapted to machine translation for
automatic token alignment, the attention mechanism, a simple method for encoding
sequence data based on the importance score assigned to each element, has been
widely applied to and has attained significant improvements in various natural
language processing tasks, including sentiment classification, text summarization,
question answering, and dependency parsing. In this paper, we survey recent works
and give an introductory
summary of the attention mechanism in different NLP problems, aiming to provide
our readers with basic knowledge on this widely used method, discuss its
different variants for different tasks, explore its association with other
techniques in machine learning, and examine methods for evaluating its
performance.
| 2,018 | Computation and Language |
Discourse in Multimedia: A Case Study in Information Extraction | To ensure readability, text is often written and presented with due
formatting. These text formatting devices help the writer to effectively convey
the narrative. At the same time, these help the readers pick up the structure
of the discourse and comprehend the conveyed information. There have been a
number of linguistic theories on discourse structure of text. However, these
theories only consider unformatted text. Multimedia text contains rich
formatting features which can be leveraged for various NLP tasks. In this
paper, we study some of these discourse features in multimedia text and what
communicative function they fulfil in the context. We examine how these
multimedia discourse features can be used to improve an information extraction
system. We show that the discourse and text layout features provide information
that is complementary to lexical semantic information commonly used for
information extraction. As a case study, we use these features to harvest
structured subject knowledge of geometry from textbooks. We show that the
harvested structured knowledge can be used to improve an existing solver for
geometry problems, making it more accurate as well as more explainable.
| 2,018 | Computation and Language |
An Analysis of the Semantic Annotation Task on the Linked Data Cloud | Semantic annotation, the process of identifying key-phrases in texts and
linking them to concepts in a knowledge base, is an important basis for
semantic information retrieval and the Semantic Web uptake. Despite the
emergence of semantic annotation systems, very few comparative studies have
been published on their performance. In this paper, we provide an evaluation of
the performance of existing systems over three tasks: full semantic annotation,
named entity recognition, and keyword detection. More specifically, the
spotting capability (recognition of relevant surface forms in text) is
evaluated for all three tasks, whereas the disambiguation (correctly
associating an entity from Wikipedia or DBpedia to the spotted surface forms)
is evaluated only for the first two tasks. Our evaluation is twofold: First, we
compute standard precision and recall on the output of semantic annotators on
diverse datasets, each best suited for one of the identified tasks. Second, we
build a statistical model using logistic regression to identify significant
performance differences. Our results show that systems that provide full
annotation perform better than named entities annotators and keyword
extractors, for all three tasks. However, there is still much room for
improvement for the identification of the most relevant entities described in a
text.
| 2,018 | Computation and Language |
Corpus Phonetics Tutorial | Corpus phonetics has become an increasingly popular method of research in
linguistic analysis. With advances in speech technology and computational
power, large scale processing of speech data has become a viable technique.
This tutorial introduces the speech scientist and engineer to various automatic
speech processing tools. These include acoustic model creation and forced
alignment using the Kaldi Automatic Speech Recognition Toolkit (Povey et al.,
2011), forced alignment using FAVE-align (Rosenfelder et al., 2014), the
Montreal Forced Aligner (McAuliffe et al., 2017), and the Penn Phonetics Lab
Forced Aligner (Yuan & Liberman, 2008), as well as stop consonant burst
alignment using AutoVOT (Keshet et al., 2014). The tutorial provides a general
overview of each program, step-by-step instructions for running the program, as
well as several tips and tricks.
| 2,018 | Computation and Language |
Text Assisted Insight Ranking Using Context-Aware Memory Network | Extracting valuable facts or informative summaries from multi-dimensional
tables, i.e. insight mining, is an important task in data analysis and business
intelligence. However, ranking the importance of insights remains a challenging
and unexplored task. The main challenge is that explicitly scoring an insight
or giving it a rank requires a thorough understanding of the tables and costs a
lot of manual effort, which leads to the lack of available training data for
the insight ranking problem. In this paper, we propose an insight ranking model
that consists of two parts: A neural ranking model explores the data
characteristics, such as the header semantics and the data statistical
features, and a memory network model introduces table structure and context
information into the ranking process. We also build a dataset with text
assistance. Experimental results show that our approach largely improves the
ranking precision across multiple evaluation metrics.
| 2,018 | Computation and Language |
Cross-lingual Short-text Matching with Deep Learning | The problem of short text matching is formulated as follows: given a pair of
sentences or questions, a matching model determines whether the input pair means
the same or not. Models that can automatically identify questions with the same
meaning have a wide range of applications in question answering sites and
modern chatbots. In this article, we describe the approach by team hahu to
solve this problem in the context of the "CIKM AnalytiCup 2018 - Cross-lingual
Short-text Matching of Question Pairs" that is sponsored by Alibaba. Our
solution is an end-to-end system based on current advances in deep learning
which avoids heavy feature-engineering and achieves improved performance over
traditional machine-learning approaches. The log-loss scores for the first and
second rounds of the contest are 0.35 and 0.39 respectively. The team was
ranked 7th out of 1027 teams in the organizers' overall ranking scheme, which
consisted of the two contest scores as well as innovation and system
integrity, understanding of the data, and the practicality of the solution for
business.
| 2,018 | Computation and Language |
Improving Distantly Supervised Relation Extraction with Neural Noise
Converter and Conditional Optimal Selector | Distant supervised relation extraction has been successfully applied to large
corpus with thousands of relations. However, the inevitable wrong labeling
problem by distant supervision will hurt the performance of relation
extraction. In this paper, we propose a method with neural noise converter to
alleviate the impact of noisy data, and a conditional optimal selector to make
proper prediction. Our noise converter learns the structured transition matrix
on logit level and captures the property of distant supervised relation
extraction dataset. The conditional optimal selector on the other hand helps to
make proper prediction decision of an entity pair even if the group of
sentences is overwhelmed by no-relation sentences. We conduct experiments on a
widely used dataset and the results show significant improvement over
competitive baseline methods.
| 2,018 | Computation and Language |
Translating a Math Word Problem to an Expression Tree | Sequence-to-sequence (SEQ2SEQ) models have been successfully applied to
automatic math word problem solving. Despite its simplicity, a drawback still
remains: a math word problem can be correctly solved by more than one
equation. This non-deterministic transduction harms the performance of maximum
likelihood estimation. In this paper, by considering the uniqueness of
expression tree, we propose an equation normalization method to normalize the
duplicated equations. Moreover, we analyze the performance of three popular
SEQ2SEQ models on math word problem solving. We find that each model has
its own specialty in solving problems; consequently, an ensemble model is then
proposed to combine their advantages. Experiments on dataset Math23K show that
the ensemble model with equation normalization significantly outperforms the
previous state-of-the-art methods.
| 2,018 | Computation and Language |
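The equation-normalization idea above, mapping duplicated but equivalent equations to a unique expression tree, can be illustrated by canonicalizing commutative operators, e.g., fixing an order for the operands of + and *. The sketch below is a deliberately simplified version; the paper's normalization rules are richer.

    # Simplified sketch of equation normalization: represent an equation as a nested
    # (operator, left, right) tree and sort the operands of commutative operators so
    # that equivalent duplicated equations map to one canonical tree.
    COMMUTATIVE = {"+", "*"}

    def normalize(tree):
        if not isinstance(tree, tuple):               # leaf: a number or a variable
            return tree
        op, left, right = tree
        left, right = normalize(left), normalize(right)
        if op in COMMUTATIVE and repr(left) > repr(right):
            left, right = right, left                 # fix a canonical operand order
        return (op, left, right)

    # normalize(("+", "3", ("*", "x", "2"))) == normalize(("+", ("*", "2", "x"), "3"))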
Modeling Coherence for Discourse Neural Machine Translation | Discourse coherence plays an important role in the translation of a text.
However, previously reported models mostly focus on improving performance over
individual sentences while ignoring cross-sentence links and dependencies, which
affects the coherence of the text. In this paper, we propose to use discourse
context and reward to refine the translation quality from the discourse
perspective. In particular, we first generate the translations of individual sentences.
Next, we deliberate over the preliminarily produced translations and train
the model to learn the policy that produces discourse coherent text by a reward
teacher. Practical results on multiple discourse test datasets indicate that
our model significantly improves the translation quality over the
state-of-the-art baseline system by +1.23 BLEU score. Moreover, our model
generates more discourse coherent text and obtains +2.2 BLEU improvements when
evaluated by discourse metrics.
| 2,019 | Computation and Language |
Leveraging Aspect Phrase Embeddings for Cross-Domain Review Rating
Prediction | Online review platforms are a popular way for users to post reviews by
expressing their opinions towards a product or service, and they are also
valuable for other users and companies seeking to find out the overall opinions of
customers. These reviews tend to be accompanied by a rating, where the star
rating has become the most common approach for users to give their feedback in
a quantitative way, generally as a Likert scale of 1-5 stars. In other social
media platforms like Facebook or Twitter, an automated review rating prediction
system can be useful to determine the rating that a user would have given to
the product or service. Existing work on review rating prediction focuses on
specific domains, such as restaurants or hotels. This, however, ignores the
fact that some review domains which are less frequently rated, such as
dentists, lack sufficient data to build a reliable prediction model. In this
paper, we experiment on 12 datasets pertaining to 12 different review domains
of varying level of popularity to assess the performance of predictions across
different domains. We introduce a model that leverages aspect phrase embeddings
extracted from the reviews, which enables the development of both in-domain and
cross-domain review rating prediction systems. Our experiments show that both
of our review rating prediction systems outperform all other baselines. The
cross-domain review rating prediction system is particularly significant for
the least popular review domains, where leveraging training data from other
domains leads to remarkable improvements in performance. The in-domain review
rating prediction system is instead more suitable for popular review domains,
since a model built from training data pertaining to the target domain
performs better when this data is abundant.
| 2,018 | Computation and Language |
Generating Multiple Diverse Responses for Short-Text Conversation | Neural generative models have become popular and achieved promising
performance on short-text conversation tasks. They are generally trained to
build a 1-to-1 mapping from the input post to its output response. However, a
given post is often associated with multiple replies simultaneously in real
applications. Previous research on this task mainly focuses on improving the
relevance and informativeness of the top one generated response for each post.
Very few works study generating multiple accurate and diverse responses for the
same post. In this paper, we propose a novel response generation model, which
considers a set of responses jointly and generates multiple diverse responses
simultaneously. A reinforcement learning algorithm is designed to solve our
model. Experiments on two short-text conversation tasks validate that the
multiple responses generated by our model obtain higher quality and larger
diversity compared with various state-of-the-art generative models.
| 2,019 | Computation and Language |
Plan-And-Write: Towards Better Automatic Storytelling | Automatic storytelling is challenging since it requires generating long,
coherent natural language to describe a sensible sequence of events. Despite
considerable efforts on automatic story generation in the past, prior work
either is restricted in plot planning, or can only generate stories in a narrow
domain. In this paper, we explore open-domain story generation that writes
stories given a title (topic) as input. We propose a plan-and-write
hierarchical generation framework that first plans a storyline, and then
generates a story based on the storyline. We compare two planning strategies.
The dynamic schema interweaves story planning and its surface realization in
text, while the static schema plans out the entire storyline before generating
stories. Experiments show that with explicit storyline planning, the generated
stories are more diverse, coherent, and on topic than those generated without
creating a full plan, according to both automatic and human evaluations.
| 2,019 | Computation and Language |
From Free Text to Clusters of Content in Health Records: An Unsupervised
Graph Partitioning Approach | Electronic Healthcare records contain large volumes of unstructured data in
different forms. Free text constitutes a large portion of such data, yet this
source of richly detailed information often remains under-used in practice
because of a lack of suitable methodologies to extract interpretable content in
a timely manner. Here we apply network-theoretical tools to the analysis of
free text in Hospital Patient Incident reports in the English National Health
Service, to find clusters of reports in an unsupervised manner and at different
levels of resolution based directly on the free text descriptions contained
within them. To do so, we combine recently developed deep neural network
text-embedding methodologies based on paragraph vectors with multi-scale Markov
Stability community detection applied to a similarity graph of documents
obtained from sparsified text vector similarities. We showcase the approach
with the analysis of incident reports submitted in Imperial College Healthcare
NHS Trust, London. The multiscale community structure reveals levels of meaning
with different resolution in the topics of the dataset, as shown by relevant
descriptive terms extracted from the groups of records, as well as by comparing
a posteriori against hand-coded categories assigned by healthcare personnel.
Our content communities exhibit good correspondence with well-defined
hand-coded categories, yet our results also provide further medical detail in
certain areas as well as revealing complementary descriptors of incidents
beyond the external classification. We also discuss how the method can be used
to monitor reports over time and across different healthcare providers, and to
detect emerging trends that fall outside of pre-existing categories.
| 2,019 | Computation and Language |
A Deterministic Algorithm for Bridging Anaphora Resolution | Previous work on bridging anaphora resolution (Poesio et al., 2004; Hou et
al., 2013b) use syntactic preposition patterns to calculate word relatedness.
However, such patterns only consider NPs' head nouns and hence do not fully
capture the semantics of NPs. Recently, Hou (2018) created word embeddings
(embeddings_PP) to capture associative similarity (i.e., relatedness) between
nouns by exploring the syntactic structure of noun phrases. But embeddings_PP
only contains word representations for nouns. In this paper, we create new word
vectors by combining embeddings_PP with GloVe. These new word embeddings
(embeddings_bridging) are a more general lexical knowledge resource for
bridging and allow us to easily represent the meaning of an NP beyond its head.
We therefore develop a deterministic approach for bridging anaphora resolution,
which represents the semantics of an NP based on its head noun and
modifications. We show that this simple approach achieves the competitive
results compared to the best system in Hou et al.(2013b) which explores Markov
Logic Networks to model the problem. Additionally, we further improve the
results for bridging anaphora resolution reported in Hou (2018) by combining
our simple deterministic approach with Hou et al.(2013b)'s best system MLN II.
| 2,018 | Computation and Language |
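The combination step described above, merging two embedding spaces per noun and then scoring antecedent candidates, can be sketched as below. The per-space normalization, the plain concatenation, and the dot-product scoring are assumptions for illustration; the paper's exact weighting and ranking criteria may differ.

    # Sketch of combining two embedding spaces per word (e.g. a prepositional-pattern
    # space and GloVe) by concatenation, and ranking candidate antecedents by the dot
    # product of the combined vectors (the sum of the two per-space cosine similarities).
    import numpy as np

    def combined_vector(word, emb_pp, emb_glove):
        v1, v2 = emb_pp[word], emb_glove[word]
        v1 = v1 / np.linalg.norm(v1)                  # normalize each space before concatenation
        v2 = v2 / np.linalg.norm(v2)
        return np.concatenate([v1, v2])

    def rank_antecedents(anaphor, candidates, emb_pp, emb_glove):
        a = combined_vector(anaphor, emb_pp, emb_glove)
        scores = {c: float(a @ combined_vector(c, emb_pp, emb_glove)) for c in candidates}
        return sorted(scores, key=scores.get, reverse=True)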
Neural Based Statement Classification for Biased Language | Biased language commonly occurs around topics which are of controversial
nature, thus, stirring disagreement between the different involved parties of a
discussion. This is because, for language and its use, specifically the
understanding and use of phrases, stances are cohesive within particular
groups. However, such cohesiveness does not hold across groups.
In collaborative environments or environments where impartial language is
desired (e.g. Wikipedia, news media), statements and the language therein
should represent equally the involved parties and be neutrally phrased. Biased
language is introduced through the presence of inflammatory words or phrases,
or statements that may be incorrect or one-sided, thus violating such
consensus.
In this work, we focus on the specific case of phrasing bias, which may be
introduced through specific inflammatory words or phrases in a statement. For
this purpose, we propose an approach that relies on recurrent neural networks
in order to capture the inter-dependencies between the words in a phrase that
introduce bias.
We perform a thorough experimental evaluation, where we show the advantages
of a neural based approach over competitors that rely on word lexicons and
other hand-crafted features in detecting biased language. We are able to
distinguish biased statements with a precision of P=0.92, thus significantly
outperforming baseline models with an improvement of over 30%. Finally, we
release the largest corpus of statements annotated for biased language.
| 2,018 | Computation and Language |
Parser Extraction of Triples in Unstructured Text | The web contains vast repositories of unstructured text. We investigate the
opportunity for building a knowledge graph from these text sources. We generate
a set of triples which can be used in knowledge gathering and integration. We
define the architecture of a language compiler for processing
subject-predicate-object triples using the OpenNLP parser. We implement a
depth-first search traversal on the POS tagged syntactic tree appending
predicate and object information. A parser enables higher precision and higher
recall extractions of syntactic relationships across conjunction boundaries. We
are able to extract 2-2.5 times the correct extractions of ReVerb. The
extractions are used in a variety of semantic web applications and question
answering. We verify extraction of 50,000 triples on the ClueWeb dataset.
| 2,017 | Computation and Language |
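The subject-predicate-object extraction described above can be illustrated with a short dependency-parse traversal. Note that the paper builds on the OpenNLP parser and a DFS over the POS-tagged syntactic tree; the sketch below uses spaCy purely for illustration and handles neither conjunctions nor prepositional attachments the way the paper does.

    # Illustrative sketch of subject-predicate-object triple extraction from a
    # dependency parse, using spaCy instead of the OpenNLP parser used in the paper.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def extract_triples(text):
        triples = []
        for sent in nlp(text).sents:
            for tok in sent:
                if tok.pos_ == "VERB":
                    subjects = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                    objects = [c for c in tok.children if c.dep_ in ("dobj", "obj", "attr")]
                    for s in subjects:
                        for o in objects:
                            triples.append((s.text, tok.lemma_, o.text))
        return triples

    # extract_triples("Marie Curie discovered polonium and radium.")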
Internal Wiring of Cartesian Verbs and Prepositions | Categorical compositional distributional semantics (CCDS) allows one to
compute the meaning of phrases and sentences from the meaning of their
constituent words. A type-structure carried over from the traditional
categorial model of grammar a la Lambek becomes a 'wire-structure' that
mediates the interaction of word meanings. However, CCDS has a much richer
logical structure than plain categorical semantics in that certain words can
also be given an 'internal wiring' that either provides their entire meaning or
reduces the size of their meaning space. Previous examples of internal wiring
include relative pronouns and intersective adjectives. Here we establish the
same for a large class of well-behaved transitive verbs to which we refer as
Cartesian verbs, and reduce the meaning space from a ternary tensor to a unary
one. Some experimental evidence is also provided.
| 2,018 | Computation and Language |
Fake Comment Detection Based on Sentiment Analysis | With the development of e-commerce and review websites, comment
information increasingly influences people's lives. More and more users share their
consumption experiences and evaluate the quality of commodities through comments, and
people refer to these comments when making decisions. This reliance on comments
has given rise to fake comments: for profit or other bad motivations, businesses
fabricate untrue consumption experiences to promote or slander certain products.
Fake comments easily mislead users' opinions and decisions, and humans identify
them with low accuracy. It is therefore meaningful to detect fake comments using
natural language processing technology so that people can obtain truthful comment
information. This paper uses sentiment analysis to detect fake comments.
| 2,018 | Computation and Language |
Char2char Generation with Reranking for the E2E NLG Challenge | This paper describes our submission to the E2E NLG Challenge. Recently,
neural seq2seq approaches have become mainstream in NLG, often resorting to
pre- (respectively post-) processing delexicalization (relexicalization) steps
at the word-level to handle rare words. By contrast, we train a simple
character level seq2seq model, which requires no pre/post-processing
(delexicalization, tokenization or even lowercasing), with surprisingly good
results. For further improvement, we explore two re-ranking approaches for
scoring candidates. We also introduce a synthetic dataset creation procedure,
which opens up a new way of creating artificial datasets for Natural Language
Generation.
| 2,018 | Computation and Language |
Jointly identifying opinion mining elements and fuzzy measurement of
opinion intensity to analyze product features | Opinion mining mainly involves three elements: feature and feature-of
relations, opinion expressions and the related opinion attributes (e.g.
Polarity), and feature-opinion relations. Although many works have emerged to
achieve its aim of gaining information, previous research typically handled
each of the three elements in isolation, which cannot give sufficient
information extraction results; hence, the complexity and running time of
information extraction are increased. In this paper, we propose an opinion
mining extraction algorithm to jointly discover the main opinion mining
elements. Specifically, the algorithm automatically builds kernels to combine
closely related words into new terms from word level to phrase level based on
dependency relations; and we ensure the accuracy of opinion expressions and
polarity based on: fuzzy measurements, opinion degree intensifiers, and opinion
patterns. The 3458 analyzed reviews show that the proposed algorithm can
effectively identify the main elements simultaneously and outperform the
baseline methods. The proposed algorithm is used to analyze the features among
heterogeneous products in the same category. The feature-by-feature comparison
can help to select the weaker features and recommend the correct specifications
from the beginning life of a product. From this comparison, some interesting
observations are revealed. For example, the negative polarity of video
dimension is higher than the product usability dimension for a product. Yet,
enhancing the dimension of product usability can more effectively improve the
product (C) 2015 Elsevier Ltd. All rights reserved.
| 2,016 | Computation and Language |
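The following toy sketch illustrates two of the ingredients named in the abstract
above: combining dependency-linked words into phrase-level terms, and grading
opinion intensity with intensifier weights. The relations, weights and
mini-lexicon are assumptions for the demo, not the paper's resources.

```python
# Merge dependency-linked words into phrase-level terms and grade opinion
# intensity with intensifier weights (illustrative sketch only).
MERGE_RELS = {"compound", "amod"}          # relations that glue words into one term
INTENSIFIERS = {"very": 1.5, "slightly": 0.5, "extremely": 2.0}
OPINION = {"good": 1.0, "poor": -1.0, "sharp": 0.8}

def merge_terms(deps):
    """deps: list of (head, dependent, relation) triples from a dependency parse."""
    terms = {}
    for head, dep, rel in deps:
        if rel in MERGE_RELS:
            terms[head] = f"{dep} {head}"  # e.g. 'battery life', 'picture quality'
    return terms

def opinion_intensity(opinion_word, modifier=None):
    base = OPINION.get(opinion_word, 0.0)
    return base * INTENSIFIERS.get(modifier, 1.0)

deps = [("life", "battery", "compound"), ("quality", "picture", "compound")]
print(merge_terms(deps))                   # {'life': 'battery life', 'quality': 'picture quality'}
print(opinion_intensity("poor", "very"))   # -1.5
```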
Dependency Grammar Induction with a Neural Variational Transition-based
Parser | Dependency grammar induction is the task of learning dependency syntax
without annotated training data. Traditional graph-based models with global
inference achieve state-of-the-art results on this task but they require
$O(n^3)$ run time. Transition-based models enable faster inference with $O(n)$
time complexity, but their performance still lags behind. In this work, we
propose a neural transition-based parser for dependency grammar induction,
whose inference procedure utilizes rich neural features with $O(n)$ time
complexity. We train the parser with an integration of variational inference,
posterior regularization and variance reduction techniques. The resulting
framework outperforms previous unsupervised transition-based dependency parsers
and achieves performance comparable to graph-based models, both on the English
Penn Treebank and on the Universal Dependency Treebank. In an empirical
comparison, we show that our approach substantially increases parsing speed
over graph-based models.
| 2,018 | Computation and Language |
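To illustrate the O(n) transition-based inference the abstract above refers to,
here is a minimal arc-standard transition system in plain Python; the neural
scoring model and the variational training are omitted, and the action sequence
is hand-picked for the toy sentence.

```python
# Arc-standard transitions: each of the 2n-1 actions runs in constant time.
def parse(words, actions):
    stack, buffer, arcs = [], list(range(len(words))), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":            # second-top becomes a dependent of the top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif act == "RIGHT-ARC":           # top becomes a dependent of the second-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs                            # list of (head index, dependent index)

words = ["she", "reads", "books"]
actions = ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"]
print(parse(words, actions))               # [(1, 0), (1, 2)] -> 'reads' heads both
```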
The ADAPT System Description for the IWSLT 2018 Basque to English
Translation Task | In this paper we present the ADAPT system built for the Basque to English Low
Resource MT Evaluation Campaign. Basque is a low-resourced,
morphologically-rich language. This poses a challenge for Neural Machine
Translation models which usually achieve better performance when trained with
large sets of data.
Accordingly, we used synthetic data to improve the translation quality
produced by a model built using only authentic data. Our proposal uses
back-translated data to: (a) create new sentences, so the system can be trained
with more data; and (b) translate sentences that are close to the test set, so
the model can be fine-tuned to the document to be translated.
| 2,018 | Computation and Language |
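Step (b) of the abstract above, selecting sentences close to the test set for
back-translation and fine-tuning, can be sketched with a simple word-overlap
similarity as below; the measure and the sentences are illustrative, and the
authors' actual selection criterion may differ.

```python
# Pick monolingual sentences most similar to the test set (Jaccard word overlap).
def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def select_for_backtranslation(monolingual, test_sentences, k=2):
    def closeness(sent):
        return max(jaccard(sent, t) for t in test_sentences)
    return sorted(monolingual, key=closeness, reverse=True)[:k]

test_set = ["the mayor opened the new bridge"]
monolingual = [
    "the mayor visited the old bridge",
    "stock prices fell sharply on monday",
    "a new bridge was opened by the council",
]
print(select_for_backtranslation(monolingual, test_set))
```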
Jointly Learning to Label Sentences and Tokens | Learning to construct text representations in end-to-end systems can be
difficult, as natural languages are highly compositional and task-specific
annotated datasets are often limited in size. Methods for directly supervising
language composition can allow us to guide the models based on existing
knowledge, regularizing them towards more robust and interpretable
representations. In this paper, we investigate how objectives at different
granularities can be used to learn better language representations and we
propose an architecture for jointly learning to label sentences and tokens. The
predictions at each level are combined using an attention mechanism,
with token-level labels also acting as explicit supervision for composing
sentence-level representations. Our experiments show that by learning to
perform these tasks jointly on multiple levels, the model achieves substantial
improvements for both sentence classification and sequence labeling.
| 2,018 | Computation and Language |
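The attention-based combination described in the abstract above can be sketched
in a few lines of numpy: token-level label scores are pooled with attention
weights to produce a sentence-level prediction. The dimensions, random weights
and softmax attention form are assumptions, not the paper's exact
parameterisation.

```python
# Pool token-level label scores into a sentence-level score via attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, H = 5, 8                            # number of tokens, hidden size
token_repr = rng.normal(size=(T, H))   # token representations from an encoder

w_tok = rng.normal(size=H)             # token-level scoring head
token_scores = token_repr @ w_tok      # per-token label scores (directly supervised)

w_att = rng.normal(size=H)             # attention head over tokens
attn = softmax(token_repr @ w_att)     # attention weights, sum to 1

sentence_score = attn @ token_scores   # sentence-level prediction from token scores
print(token_scores.round(2), attn.round(2), float(sentence_score))
```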