Titles | Abstracts | Years | Categories
---|---|---|---|
Lost in Space: Geolocation in Event Data | Extracting the "correct" location information from text data, i.e.,
determining the place of an event, has long been a goal for automated text
processing. To approximate a human-like coding schema, we introduce a supervised
machine learning algorithm that classifies each location word to be either
correct or incorrect. We use news articles collected from around the world
(Integrated Crisis Early Warning System [ICEWS] data and Open Event Data
Alliance [OEDA] data) to test our algorithm that consists of two stages. In the
feature selection stage, we extract contextual information from texts, namely,
the N-gram patterns for location words, the frequency of mention, and the
context of the sentences containing location words. In the classification
stage, we use three classifiers to estimate the model parameters in the
training set and then to predict whether a location word in the test set news
articles is the place of the event. The validation results show that our
algorithm improves the accuracy of current dictionary-based geolocation
methods by as much as 25%.
| 2019 | Computation and Language |
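As a purely illustrative sketch of the two-stage recipe described above (contextual feature extraction for each candidate location word, then binary classification), the snippet below uses scikit-learn with made-up features and labels; it is not the authors' implementation.

```python
# Illustrative only: classify each candidate location word as the event
# location or not, from simple contextual features such as mention frequency,
# as the abstract describes.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training examples: one feature dict per candidate location word.
train_feats = [
    {"mention_count": 3, "in_first_sentence": 1, "prev_word=in": 1},
    {"mention_count": 1, "in_first_sentence": 0, "prev_word=from": 1},
]
train_labels = [1, 0]  # 1 = this word is the place of the event

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_feats), train_labels)

test_feats = [{"mention_count": 2, "in_first_sentence": 1, "prev_word=in": 1}]
print(clf.predict(vec.transform(test_feats)))  # predicted label for the candidate
```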
Quantitative Entropy Study of Language Complexity | We study the entropy of Chinese and English texts, based on characters in
the case of Chinese texts and based on words for both languages. Significant
differences are found between the languages and between different personal
styles of debating partners. The entropy analysis points in the direction of
lower entropy, that is, of higher complexity. Such a text analysis could be
applied to individuals of different styles, a single individual at different
ages, as well as to different groups of the population.
| 2017 | Computation and Language |
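As a minimal illustration of the kind of measure used above, the sketch below estimates unigram (word-level) Shannon entropy; the whitespace tokenizer and the example sentence are placeholders, not taken from the paper.

```python
# Unigram Shannon entropy of a text, in bits per word.
from collections import Counter
from math import log2

def word_entropy(text):
    words = text.split()            # naive whitespace tokenization
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(word_entropy("the cat sat on the mat"))
```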
Interpreting the Syntactic and Social Elements of the Tweet
Representations via Elementary Property Prediction Tasks | Research in social media analysis is experiencing a recent surge with a large
number of works applying representation learning models to solve high-level
syntactico-semantic tasks such as sentiment analysis, semantic textual
similarity computation, hashtag prediction, and so on. Although the performance
of representation learning models is better than that of traditional baselines
on these tasks, little is known about the core properties of a tweet encoded
within the representations. Understanding these core properties would empower
us to make generalizable conclusions about the quality of the representations.
Our work presented here constitutes the first step in opening the black-box of
vector embedding for social media posts, with emphasis on tweets in particular.
In order to understand the core properties encoded in a tweet representation,
we evaluate the representations to estimate the extent to which they can model
each of these properties, such as tweet length, presence of words, hashtags,
mentions, capitalization, and so on. This is done with the help of multiple
classifiers which take the representation as input. Essentially, each
classifier evaluates one of the syntactic or social properties which are
arguably salient for a tweet. This is also the first holistic study on
extensively analysing the ability to encode these properties for a wide variety
of tweet representation models including the traditional unsupervised methods
(BOW, LDA), unsupervised representation learning methods (Siamese CBOW,
Tweet2Vec) as well as supervised methods (CNN, BLSTM).
| 2016 | Computation and Language |
Neural Machine Translation with Pivot Languages | While recent neural machine translation approaches have delivered
state-of-the-art performance for resource-rich language pairs, they suffer from
the data scarcity problem for resource-scarce language pairs. Although this
problem can be alleviated by exploiting a pivot language to bridge the source
and target languages, the source-to-pivot and pivot-to-target translation
models are usually independently trained. In this work, we introduce a joint
training algorithm for pivot-based neural machine translation. We propose three
methods to connect the two models and enable them to interact with each other
during training. Experiments on Europarl and WMT corpora show that joint
training of source-to-pivot and pivot-to-target models leads to significant
improvements over independent training across various languages.
| 2017 | Computation and Language |
End-to-End Neural Sentence Ordering Using Pointer Network | Sentence ordering is one of the important tasks in NLP. Previous works mainly
focused on improving its performance by using a pair-wise strategy. However, it
is nontrivial for pair-wise models to incorporate the contextual sentence
information. In addition, error propagation could be introduced by using the
pipeline strategy in pair-wise models. In this paper, we propose an end-to-end
neural approach to address the sentence ordering problem, which uses the
pointer network (Ptr-Net) to alleviate the error propagation problem and
utilize the whole contextual information. Experimental results show the
effectiveness of the proposed model. Source codes and dataset of this paper are
available.
| 2016 | Computation and Language |
Recurrent Neural Network based Part-of-Speech Tagger for Code-Mixed
Social Media Text | This paper describes Centre for Development of Advanced Computing's (CDACM)
submission to the shared task-'Tool Contest on POS tagging for Code-Mixed
Indian Social Media (Facebook, Twitter, and Whatsapp) Text', collocated with
ICON-2016. The shared task was to predict Part of Speech (POS) tag at word
level for a given text. The code-mixed text is generated mostly on social media
by multilingual users. The presence of multilingual words,
transliterations, and spelling variations makes such content linguistically
complex. In this paper, we propose an approach to POS tag code-mixed social
media text using Recurrent Neural Network Language Model (RNN-LM) architecture.
We submitted the results for Hindi-English (hi-en), Bengali-English (bn-en),
and Telugu-English (te-en) code-mixed data.
| 2016 | Computation and Language |
A Way out of the Odyssey: Analyzing and Combining Recent Insights for
LSTMs | LSTMs have become a basic building block for many deep NLP models. In recent
years, many improvements and variations have been proposed for deep sequence
models in general, and LSTMs in particular. We propose and analyze a series of
augmentations and modifications to LSTM networks, resulting in improved
performance on text classification datasets. We observe compounding
improvements on traditional LSTMs using Monte Carlo test-time model averaging,
average pooling, and residual connections, along with four other suggested
modifications. Our analysis provides a simple, reliable, and high quality
baseline model.
| 2016 | Computation and Language |
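One of the augmentations listed above, Monte Carlo test-time model averaging, can be sketched by keeping dropout stochastic at test time and averaging predicted probabilities over several forward passes. This PyTorch snippet is a generic sketch with placeholder `model` and `x`, not the paper's exact procedure.

```python
import torch

def mc_average_predict(model, x, num_samples=10):
    """Monte Carlo test-time averaging: keep dropout layers stochastic and
    average the predicted class probabilities over several forward passes.
    `model` is any classifier returning logits; `x` is a batch of inputs."""
    model.train()  # leaves dropout active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )
    return probs.mean(dim=0)
```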
How to do lexical quality estimation of a large OCRed historical Finnish
newspaper collection with scarce resources | The National Library of Finland has digitized the historical newspapers
published in Finland between 1771 and 1910. This collection contains
approximately 1.95 million pages in Finnish and Swedish. The Finnish part of the
collection consists of about 2.40 billion words. The National Library's Digital
Collections are offered via the digi.kansalliskirjasto.fi web service, also
known as Digi. Part of the newspaper material (from 1771 to 1874) is also
freely downloadable from the Language Bank of Finland provided by the
FINCLARIN consortium. The collection can also be accessed through the Korp
environment that has been developed by Språkbanken at the University of
Gothenburg and extended by FINCLARIN team at the University of Helsinki to
provide concordances of text resources. A Cranfield style information retrieval
test collection has also been produced out of a small part of the Digi
newspaper material at the University of Tampere.
Quality of OCRed collections is an important topic in digital humanities, as
it affects general usability and searchability of collections. There is no
single available method to assess quality of large collections, but different
methods can be used to approximate quality. This paper discusses different
corpus analysis style methods to approximate overall lexical quality of the
Finnish part of the Digi collection. Methods include usage of parallel samples
and word error rates, usage of morphological analyzers, frequency analysis of
words and comparisons to comparable edited lexical data. Our aim in the quality
analysis is twofold: firstly, to analyze the present state of the lexical data
and secondly, to establish a set of assessment methods that build up a compact
procedure for quality assessment after, e.g., new OCRing or post-correction of
the material. In the discussion part of the paper we shall synthesize the results
of our different analyses.
| 2019 | Computation and Language |
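One of the methods mentioned above, word error rate against parallel (manually corrected) samples, can be sketched with a word-level Levenshtein distance. The example strings are invented and this is not the project's actual tooling.

```python
def word_error_rate(reference, hypothesis):
    """WER between a ground-truth transcription and OCR output, computed with
    word-level edit distance (substitutions, insertions, deletions)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("talvi oli pitkä ja kylmä", "talvi oli pitka ja kylmii"))  # 0.4
```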
The Life of Lazarillo de Tormes and of His Machine Learning Adversities | A summit work of the Spanish Golden Age and forefather of the so-called
picaresque novel, The Life of Lazarillo de Tormes and of His Fortunes and
Adversities still remains an anonymous text. Although distinguished scholars
have tried to attribute it to different authors based on a variety of criteria,
a consensus has yet to be reached. The list of candidates is long and not all
of them enjoy the same support within the scholarly community. Analyzing their
works from a data-driven perspective and applying machine learning techniques
for style and text fingerprinting, we shed light on the authorship of the
Lazarillo. As in a state-of-the-art survey, we discuss the methods used and how
they perform in our specific case. According to our methodology, the most
likely author seems to be Juan Arce de Otálora, closely followed by Alfonso
de Valdés. The method also indicates that no certain attribution can be made
with the given corpus.
| 2016 | Computation and Language |
A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and
Part-of-Speech Tagging | Recently, neural network models for natural language processing tasks have
received increasing attention for their ability to alleviate the burden of
manual feature engineering. However, previous neural models cannot extract
complicated feature compositions as well as traditional methods with discrete
features can. In this work, we propose a feature-enriched neural model for joint
Chinese word segmentation and part-of-speech tagging task. Specifically, to
simulate the feature templates of traditional discrete feature based models, we
use different filters to model the complex compositional features with
convolutional and pooling layers, and then utilize long-distance dependency
information with a recurrent layer. Experimental results on five different
datasets show the effectiveness of our proposed model.
| 2017 | Computation and Language |
Automatic Node Selection for Deep Neural Networks using Group Lasso
Regularization | We examine the effect of the Group Lasso (gLasso) regularizer in selecting
the salient nodes of Deep Neural Network (DNN) hidden layers by applying a
DNN-HMM hybrid speech recognizer to TED Talks speech data. We test two types of
gLasso regularization, one for outgoing weight vectors and another for incoming
weight vectors, as well as two sizes of DNNs: 2048 hidden layer nodes and 4096
nodes. Furthermore, we compare gLasso and L2 regularizers. Our experiment
results demonstrate that our DNN training, in which the gLasso regularizer was
embedded, successfully selected the hidden layer nodes that are necessary and
sufficient for achieving high classification power.
| 2016 | Computation and Language |
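As a rough illustration of the regularizer discussed above, the sketch below computes a Group Lasso penalty over the incoming (or outgoing) weight groups of a single layer's weight matrix; the shapes and the regularization constant are arbitrary placeholders, not the paper's setup.

```python
import numpy as np

def group_lasso_penalty(W, groups="incoming"):
    """Group Lasso penalty for a weight matrix W of shape (n_in, n_hidden):
    the sum of L2 norms of the weight groups. Grouping by columns treats each
    hidden node's incoming weights as one group; grouping by rows treats each
    input node's outgoing weights as one group."""
    axis = 0 if groups == "incoming" else 1
    return np.sum(np.linalg.norm(W, axis=axis))

W = np.random.randn(512, 2048)                 # one hidden layer's weights
reg_term = 1e-4 * group_lasso_penalty(W, groups="incoming")
print(reg_term)                                # added to the training loss
```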
What Do Recurrent Neural Network Grammars Learn About Syntax? | Recurrent neural network grammars (RNNG) are a recently proposed
probabilistic generative modeling family for natural language. They show
state-of-the-art language modeling and parsing performance. We investigate what
information they learn, from a linguistic perspective, through various
ablations to the model and the data, and by augmenting the model with an
attention mechanism (GA-RNNG) to enable closer inspection. We find that
explicit modeling of composition is crucial for achieving the best performance.
Through the attention mechanism, we find that headedness plays a central role
in phrasal representation (with the model's latent attention largely agreeing
with predictions made by hand-crafted head rules, albeit with some important
differences). By training grammars without nonterminal labels, we find that
phrasal representations depend minimally on nonterminals, providing support for
the endocentricity hypothesis.
| 2017 | Computation and Language |
Word and Document Embeddings based on Neural Network Approaches | Data representation is a fundamental task in machine learning. The
representation of data affects the performance of the whole machine learning
system. For a long time, the representation of data was done by feature
engineering, with researchers aiming to design better features for specific
tasks. Recently, the rapid development of deep learning and representation
learning has brought new inspiration to various domains.
In natural language processing, the most widely used feature representation
is the Bag-of-Words model. This model has the data sparsity problem and cannot
keep the word order information. Other features such as part-of-speech tags
or more complex syntactic features only fit specific tasks in most cases.
This thesis focuses on word representation and document representation. We
compare the existing systems and present our new model.
First, for generating word embeddings, we make comprehensive comparisons
among existing word embedding models. In terms of theory, we figure out the
relationship between the two most important models, i.e., Skip-gram and GloVe.
In our experiments, we analyze three key points in generating word embeddings,
including the model construction, the training corpus and parameter design. We
evaluate word embeddings with three types of tasks, and we argue that they
cover the existing use of word embeddings. Through theory and practical
experiments, we present some guidelines for how to generate a good word
embedding.
Second, for Chinese character and word representation, we introduce the joint
training of Chinese characters and words. ...
Third, for document representation, we analyze the existing document
representation models, including recursive NNs, recurrent NNs and convolutional
NNs. We point out the drawbacks of these models and present our new model, the
recurrent convolutional neural networks. ...
| 2016 | Computation and Language |
Visualizing and Understanding Curriculum Learning for Long Short-Term
Memory Networks | Curriculum Learning emphasizes the order of training instances in a
computational learning setup. The core hypothesis is that simpler instances
should be learned early as building blocks to learn more complex ones. Despite
its usefulness, it is still unknown how exactly the internal representations of
models are affected by curriculum learning. In this paper, we study the effect
of curriculum learning on Long Short-Term Memory (LSTM) networks, which have
shown strong competency in many Natural Language Processing (NLP) problems. Our
experiments on a sentiment analysis task and a synthetic task similar to sequence
prediction tasks in NLP show that curriculum learning has a positive effect on
the LSTM's internal states by biasing the model towards building constructive
representations, i.e., the internal representations at previous timesteps are
used as building blocks for the final prediction. We also find that smaller
models improve significantly when they are trained with curriculum learning.
Lastly, we show that curriculum learning helps more when the amount of training
data is limited.
| 2016 | Computation and Language |
Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure.
| 2016 | Computation and Language |
Tracking Words in Chinese Poetry of Tang and Song Dynasties with the
China Biographical Database | Large-scale comparisons between the poetry of Tang and Song dynasties shed
light on how words, collocations, and expressions were used and shared among
the poets. That some words were used only in the Tang poetry and some only in
the Song poetry could lead to interesting research in linguistics. That the
most frequent colors are different in the Tang and Song poetry provides a trace
of the changing social circumstances in the dynasties. Results of the current
work link to research topics of lexicography, semantics, and social
transitions. We discuss our findings and present our algorithms for efficient
comparisons among the poems, which are crucial for completing billions of
comparisons within an acceptable time.
| 2017 | Computation and Language |
Incorporating Pass-Phrase Dependent Background Models for Text-Dependent
Speaker Verification | In this paper, we propose pass-phrase dependent background models (PBMs) for
text-dependent (TD) speaker verification (SV) to integrate the pass-phrase
identification process into the conventional TD-SV system, where a PBM is
derived from a text-independent background model through adaptation using the
utterances of a particular pass-phrase. During training, pass-phrase specific
target speaker models are derived from the particular PBM using the training
data for the respective target model. During testing, the best PBM is first
selected for the test utterance in the maximum likelihood (ML) sense, and the
selected PBM is then used for the log likelihood ratio (LLR) calculation with
respect to the claimant model. The proposed method incorporates the pass-phrase
identification step in the LLR calculation, which is not considered in
conventional standalone TD-SV systems. The performance of the proposed method
is compared to conventional text-independent background model based TD-SV
systems using either Gaussian mixture model (GMM)-universal background model
(UBM) or Hidden Markov model (HMM)-UBM or i-vector paradigms. In addition, we
consider two approaches to build PBMs: speaker-independent and
speaker-dependent. We show that the proposed method significantly reduces the
error rates of text-dependent speaker verification for the non-target types
(target-wrong and imposter-wrong), while maintaining TD-SV performance comparable
to that of the conventional system when imposters speak a correct utterance.
Experiments are conducted on the RedDots challenge and the RSR2015
databases that consist of short utterances.
| 2018 | Computation and Language |
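A toy sketch of the test-time procedure described above: pick the pass-phrase background model (PBM) with the highest likelihood for the test utterance, then score the log likelihood ratio against the claimant model. scikit-learn GMMs and synthetic features stand in for the actual GMM-UBM models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins: one GMM per pass-phrase (the PBMs) and one claimant model.
pbms = [GaussianMixture(n_components=2, random_state=0).fit(rng.normal(m, 1, (200, 5)))
        for m in (0.0, 2.0)]
claimant = GaussianMixture(n_components=2, random_state=0).fit(rng.normal(0.2, 1, (200, 5)))

test_utt = rng.normal(0.1, 1, (50, 5))      # feature frames of the test utterance

# 1) select the PBM with maximum likelihood for the test utterance
best_pbm = max(pbms, key=lambda m: m.score(test_utt))
# 2) LLR of claimant vs. selected PBM (score() is the average per-frame
#    log likelihood in scikit-learn)
llr = claimant.score(test_utt) - best_pbm.score(test_utt)
print(llr)  # compared against a decision threshold
```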
Visualizing Linguistic Shift | Neural network based models are a very powerful tool for creating word
embeddings; the objective of these models is to group similar words together.
These embeddings have been used as features to improve results in various
applications such as document classification, named entity recognition, etc.
Neural language models are able to learn word representations which have been
used to capture semantic shifts across time and geography. The objective of
this paper is to first identify and then visualize how words change meaning in
different text corpora. We will train a neural language model on texts from a
diverse set of disciplines: philosophy, religion, fiction, etc. Each text will
alter the embeddings of the words to represent the meaning of the word inside
that text. We will present a computational technique to detect words that
exhibit significant linguistic shift in meaning and usage. We then use enhanced
scatterplots and storyline visualization to visualize the linguistic shift.
| 2016 | Computation and Language |
Text Classification Improved by Integrating Bidirectional LSTM with
Two-dimensional Max Pooling | Recurrent Neural Network (RNN) is one of the most popular architectures used
in Natural Language Processing (NLP) tasks because its recurrent structure is
very suitable for processing variable-length text. An RNN can utilize distributed
representations of words by first converting the tokens comprising each text
into vectors, which form a matrix. This matrix includes two dimensions: the
time-step dimension and the feature vector dimension. Then most existing models
usually utilize one-dimensional (1D) max pooling operation or attention-based
operation only on the time-step dimension to obtain a fixed-length vector.
However, the features on the feature vector dimension are not mutually
independent, and simply applying 1D pooling operation over the time-step
dimension independently may destroy the structure of the feature
representation. On the other hand, applying two-dimensional (2D) pooling
operation over the two dimensions may sample more meaningful features for
sequence modeling tasks. To integrate the features on both dimensions of the
matrix, this paper explores applying 2D max pooling operation to obtain a
fixed-length representation of the text. This paper also utilizes 2D
convolution to sample more meaningful information of the matrix. Experiments
are conducted on six text classification tasks, including sentiment analysis,
question classification, subjectivity classification and newsgroup
classification. Compared with the state-of-the-art models, the proposed models
achieve excellent performance on 4 out of 6 tasks. Specifically, one of the
proposed models achieves the highest accuracy on the Stanford Sentiment Treebank
binary classification and fine-grained classification tasks.
| 2016 | Computation and Language |
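A minimal sketch of the 2D max pooling idea above: pool over both the time-step and feature-vector dimensions of the matrix produced by the recurrent layer. This PyTorch snippet uses arbitrary tensor and window sizes, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

# Hypothetical BLSTM output for one text: (time_steps x feature_dim) matrix.
H = torch.randn(40, 600)

# 2D max pooling slides a window over both dimensions, unlike 1D pooling,
# which reduces over the time-step dimension only.
pooled = F.max_pool2d(H.unsqueeze(0).unsqueeze(0), kernel_size=(2, 2))
features = pooled.flatten()        # vector handed to the classifier
print(pooled.shape, features.shape)
```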
Ontology Driven Disease Incidence Detection on Twitter | In this work we address the issue of generic automated disease incidence
monitoring on Twitter. We employ an ontology of disease-related concepts and
use it to obtain a conceptual representation of tweets. Unlike previous
keyword-based systems and topic modeling approaches, our ontological approach
allows us to apply more stringent criteria for determining which messages are
relevant, such as spatial and temporal characteristics, whilst giving a stronger
guarantee that the resulting models will perform well on new data that may be
lexically divergent. We achieve this by training learners on concepts rather
than individual words. For training we use a dataset containing mentions of
influenza and Listeria and use the learned models to classify datasets
containing mentions of an arbitrary selection of other diseases. We show that
our ontological approach achieves good performance on this task using a variety
of Natural Language Processing techniques. We also show that word vectors can
be learned directly from our concepts to achieve even better results.
| 2016 | Computation and Language |
False-Friend Detection and Entity Matching via Unsupervised
Transliteration | Transliterations play an important role in multilingual entity reference
resolution, because proper names increasingly travel between languages in news
and social media. Previous work associated with machine translation targets
transliteration only single between language pairs, focuses on specific classes
of entities (such as cities and celebrities) and relies on manual curation,
which limits the expression power of transliteration in multilingual
environment.
By contrast, we present an unsupervised transliteration model covering 69
major languages that can generate good transliterations for arbitrary strings
between any language pair. Our model yields top-(1, 20, 100) averages of
(32.85%, 60.44%, 83.20%) in matching gold standard transliteration compared to
results from a recently-published system of (26.71%, 50.27%, 72.79%). We also
show the quality of our model in detecting true and false friends from
Wikipedia high frequency lexicons. Our method indicates a strong signal of
pronunciation similarity and boosts the probability of finding true friends in
68 out of 69 languages.
| 2016 | Computation and Language |
Bidirectional Tree-Structured LSTM with Head Lexicalization | Sequential LSTM has been extended to model tree structures, giving
competitive results for a number of tasks. Existing methods model constituent
trees by bottom-up combinations of constituent nodes, making direct use of
input word information only for leaf nodes. This is different from sequential
LSTMs, which contain reference to input words for each node. In this paper, we
propose a method for automatic head-lexicalization for tree-structured LSTMs,
propagating head words from leaf nodes to every constituent node. In addition,
enabled by head lexicalization, we build a tree LSTM in the top-down direction,
which corresponds to bidirectional sequential LSTM structurally. Experiments
show that both extensions give better representations of tree structures. Our
final model gives the best results on the Stanford Sentiment Treebank and
highly competitive results on the TREC question type classification task.
| 2016 | Computation and Language |
Robust end-to-end deep audiovisual speech recognition | Speech is one of the most effective ways of communication among humans. Even
though audio is the most common way of transmitting speech, very important
information can be found in other modalities, such as vision. Vision is
particularly useful when the acoustic signal is corrupted. Multi-modal speech
recognition, however, has not yet found widespread use, mostly because the
temporal alignment and fusion of the different information sources is
challenging.
This paper presents an end-to-end audiovisual speech recognizer (AVSR), based
on recurrent neural networks (RNN) with a connectionist temporal classification
(CTC) loss function. CTC creates sparse "peaky" output activations, and we
analyze the differences in the alignments of output targets (phonemes or
visemes) between audio-only, video-only, and audio-visual feature
representations. We present the first such experiments on the large vocabulary
IBM ViaVoice database, which outperform previously published approaches on
phone accuracy in clean and noisy conditions.
| 2016 | Computation and Language |
Coherent Dialogue with Attention-based Language Models | We model coherent conversation continuation via RNN-based dialogue models
equipped with a dynamic attention mechanism. Our attention-RNN language model
dynamically increases the scope of attention on the history as the conversation
continues, as opposed to standard attention (or alignment) models with a fixed
input scope in a sequence-to-sequence model. This allows each generated word to
be associated with the most relevant words in its corresponding conversation
history. We evaluate the model on two popular dialogue datasets, the
open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot
dataset, and achieve significant improvements over the state-of-the-art and
baselines on several metrics, including complementary diversity-based metrics,
human evaluation, and qualitative visualizations. We also show that a vanilla
RNN with dynamic attention outperforms more complex memory models (e.g., LSTM
and GRU) by allowing for flexible, long-distance memory. We promote further
coherence via topic modeling-based reranking.
| 2016 | Computation and Language |
Deep Recurrent Convolutional Neural Network: Improving Performance For
Speech Recognition | A deep learning approach has been widely applied in sequence modeling
problems. In terms of automatic speech recognition (ASR), its performance has
been significantly improved by larger speech corpora and deeper neural
networks. In particular, recurrent neural networks and deep convolutional neural
networks have been applied in ASR successfully. Given the arising problem of
training speed, we build a novel deep recurrent convolutional network for
acoustic modeling and then apply deep residual learning to it. Our experiments
show that it has not only faster convergence speed but also better recognition
accuracy than the traditional deep convolutional recurrent network. In the
experiments, we compare the convergence speed of our novel deep recurrent
convolutional networks and traditional deep convolutional recurrent networks.
With faster convergence speed, our novel deep recurrent convolutional networks
can reach comparable performance. We further show that applying deep
residual learning can boost the convergence speed of our novel deep recurrent
convolutional networks. Finally, we evaluate all our experimental networks by
phoneme error rate (PER) with our proposed bidirectional statistical n-gram
language model. Our evaluation results show that our newly proposed deep
recurrent convolutional network applied with deep residual learning can reach
the best PER of 17.33% with the fastest convergence speed on the TIMIT database.
The outstanding performance of our novel deep recurrent convolutional neural
network with deep residual learning indicates that it can be potentially
adopted in other sequential problems.
| 2016 | Computation and Language |
Learning to Distill: The Essence Vector Modeling Framework | In the context of natural language processing, representation learning has
emerged as a newly active research subject because of its excellent performance
in many applications. Learning representations of words is a pioneering study
in this school of research. However, paragraph (or sentence and document)
embedding learning is more suitable/reasonable for some tasks, such as
sentiment classification and document summarization. Nevertheless, as far as we
are aware, there is relatively little work focusing on the development of
unsupervised paragraph embedding methods. Classic paragraph embedding methods
infer the representation of a given paragraph by considering all of the words
occurring in the paragraph. Consequently, those stop or function words that
occur frequently may mislead the embedding learning process to produce a misty
paragraph representation. Motivated by these observations, our major
contributions in this paper are twofold. First, we propose a novel unsupervised
paragraph embedding method, named the essence vector (EV) model, which aims at
not only distilling the most representative information from a paragraph but
also excluding the general background information to produce a more informative
low-dimensional vector representation for the paragraph. Second, in view of the
increasing importance of spoken content processing, an extension of the EV
model, named the denoising essence vector (D-EV) model, is proposed. The D-EV
model not only inherits the advantages of the EV model but also can infer a
more robust representation for a given spoken paragraph against imperfect
speech recognition.
| 2016 | Computation and Language |
Compositional Learning of Relation Path Embedding for Knowledge Base
Completion | Large-scale knowledge bases have currently reached impressive sizes; however,
these knowledge bases are still far from complete. In addition, most of the
existing methods for knowledge base completion only consider the direct links
between entities, ignoring the vital impact of the consistent semantics of
relation paths. In this paper, we study the problem of how to better embed
entities and relations of knowledge bases into different low-dimensional spaces
by taking full advantage of the additional semantics of relation paths, and we
propose a compositional learning model of relation path embedding (RPE).
Specifically, with the corresponding relation and path projections, RPE can
simultaneously embed each entity into two types of latent spaces. It is also
proposed that type constraints could be extended from traditional
relation-specific constraints to the newly proposed path-specific constraints.
The results of experiments show that the proposed model achieves significant
and consistent improvements compared with the state-of-the-art algorithms.
| 2017 | Computation and Language |
ATR4S: Toolkit with State-of-the-art Automatic Terms Recognition Methods
in Scala | Automatically recognized terminology is widely used for various
domain-specific texts processing tasks, such as machine translation,
information retrieval or sentiment analysis. However, there is still no
agreement on which methods are best suited for particular settings and,
moreover, there is no reliable comparison of already developed methods. We
believe that one of the main reasons is the lack of implementations of
state-of-the-art methods, which are usually non-trivial to recreate. In order to address
these issues, we present ATR4S, an open-source software written in Scala that
comprises more than 15 methods for automatic terminology recognition (ATR) and
implements the whole pipeline from text document preprocessing, to term
candidates collection, term candidates scoring, and finally, term candidates
ranking. It is a highly scalable, modular, and configurable tool with support for
automatic caching. We also compare 10 state-of-the-art methods on 7 open
datasets by average precision and processing time. Experimental comparison
reveals that no single method demonstrates best average precision for all
datasets and that other available tools for ATR do not contain the best
methods.
| 2016 | Computation and Language |
Learning Generic Sentence Representations Using Convolutional Neural
Networks | We propose a new encoder-decoder approach to learn distributed sentence
representations that are applicable to multiple purposes. The model is learned
by using a convolutional neural network as an encoder to map an input sentence
into a continuous vector, and using a long short-term memory recurrent neural
network as a decoder. Several tasks are considered, including sentence
reconstruction and future sentence prediction. Further, a hierarchical
encoder-decoder model is proposed to encode a sentence to predict multiple
future sentences. By training our models on a large collection of novels, we
obtain a highly generic convolutional sentence encoder that performs well in
practice. Experimental results on several benchmark datasets, and across a
broad range of applications, demonstrate the superiority of the proposed model
over competing methods.
| 2017 | Computation and Language |
Emergent Predication Structure in Hidden State Vectors of Neural Readers | A significant number of neural architectures for reading comprehension have
recently been developed and evaluated on large cloze-style datasets. We present
experiments supporting the emergence of "predication structure" in the hidden
state vectors of these readers. More specifically, we provide evidence that the
hidden state vectors represent atomic formulas $\Phi[c]$ where $\Phi$ is a
semantic property (predicate) and $c$ is a constant symbol entity identifier.
| 2017 | Computation and Language |
Scalable Bayesian Learning of Recurrent Neural Networks for Language
Modeling | Recurrent neural networks (RNNs) have shown promising performance for
language modeling. However, traditional training of RNNs using back-propagation
through time often suffers from overfitting. One reason for this is that
stochastic optimization (used for large training sets) does not provide good
estimates of model uncertainty. This paper leverages recent advances in
stochastic gradient Markov Chain Monte Carlo (also appropriate for large
training sets) to learn weight uncertainty in RNNs. It yields a principled
Bayesian learning algorithm, adding gradient noise during training (enhancing
exploration of the model-parameter space) and model averaging when testing.
Extensive experiments on various RNN models and across a broad range of
applications demonstrate the superiority of the proposed approach over
stochastic optimization.
| 2017 | Computation and Language |
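The "gradient noise during training plus model averaging when testing" recipe above can be illustrated with a stochastic gradient Langevin dynamics style update. This toy example samples from a standard normal posterior and is not the paper's exact sampler.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: a gradient step on the (minibatch-estimated) log
    posterior plus Gaussian noise whose variance matches the step size."""
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise

rng = np.random.default_rng(0)
theta = np.zeros(3)
grad_log_post = lambda t: -t            # toy log posterior: standard normal
samples = []
for _ in range(1000):
    theta = sgld_step(theta, grad_log_post, step_size=0.01, rng=rng)
    samples.append(theta.copy())        # retained samples are averaged at test time
print(np.mean(samples, axis=0))
```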
Kannada Spell Checker with Sandhi Splitter | Spelling errors are introduced in text either during typing, or when the user
does not know the correct phoneme or grapheme. If a language contains complex
words like sandhi where two or more morphemes join based on some rules, spell
checking becomes very tedious. In such situations, having a spell checker with
sandhi splitter which alerts the user by flagging the errors and providing
suggestions is very useful. A novel algorithm of sandhi splitting is proposed
in this paper. The sandhi splitter can split about 7000 of the most common sandhi
words in the Kannada language, used as test samples. The sandhi splitter was
integrated with a Kannada spell checker and a mechanism for generating
suggestions was added. A comprehensive, platform independent, standalone spell
checker with sandhi splitter application software was thus developed and tested
extensively for its efficiency and correctness. A comparative analysis of this
spell checker with sandhi splitter was made, and the results showed that the
Kannada spell checker with sandhi splitter has improved performance. It is
twice as fast, 200 times more space efficient, and 90% accurate in the case
of complex nouns and 50% accurate for complex verbs. Such a spell checker with
sandhi splitter will be of foremost significance in machine translation
systems, voice processing, etc. This is the first sandhi splitter in Kannada,
and the advantage of the novel algorithm is that it can be extended to all
Indian languages.
| 2016 | Computation and Language |
Neural Machine Translation with Latent Semantic of Image and Text | Although attention-based Neural Machine Translation has achieved great
success, the attention mechanism cannot capture the entire meaning of the source
sentence because it generates a target word depending
heavily on the relevant parts of the source sentence. Earlier
studies have introduced a latent variable to capture the entire meaning of the
sentence and achieved improvement on attention-based Neural Machine
Translation. We follow this approach, and we believe that capturing the meaning
of a sentence benefits from image information because human beings understand the
meaning of language not only from textual information but also from perceptual
information such as that gained from vision. As described herein, we propose a
neural machine translation model that introduces a continuous latent variable
containing an underlying semantic extracted from texts and images. Our model,
which can be trained end-to-end, requires image information only when training.
Experiments conducted with an English--German translation task show that our
model outperforms the baseline.
| 2016 | Computation and Language |
A Simple, Fast Diverse Decoding Algorithm for Neural Generation | In this paper, we propose a simple, fast decoding algorithm that fosters
diversity in neural generation. The algorithm modifies the standard beam search
algorithm by adding an inter-sibling ranking penalty, favoring
hypotheses from diverse parents. We evaluate the proposed model on the tasks of
dialogue response generation, abstractive summarization and machine
translation. We find that diverse decoding helps across all tasks, especially
those for which reranking is needed.
We further propose a variation that is capable of automatically adjusting its
diversity decoding rates for different inputs using reinforcement learning
(RL). We observe a further performance boost from this RL technique. This paper
includes material from the unpublished script "Mutual Information and Diverse
Decoding Improve Neural Machine Translation" (Li and Jurafsky, 2016).
| 2016 | Computation and Language |
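The inter-sibling ranking penalty described above can be sketched as follows: within each parent hypothesis, rank the candidate continuations by score and subtract a penalty proportional to the rank before the global beam selection, so the surviving beam keeps children of diverse parents. The hypotheses and scores below are made up.

```python
def beam_step_with_sibling_penalty(expansions, gamma=0.5, beam_size=3):
    """expansions maps each parent hypothesis to a list of (token, log_prob)
    continuations. Continuations are ranked within their parent and penalized
    by gamma * rank before the global top-k selection (sketch of the idea)."""
    scored = []
    for parent, cands in expansions.items():
        cands = sorted(cands, key=lambda c: c[1], reverse=True)
        for rank, (token, log_prob) in enumerate(cands):
            scored.append((log_prob - gamma * rank, parent, token))
    scored.sort(reverse=True)
    return scored[:beam_size]

beams = {"the cat": [("sat", -0.1), ("ran", -0.3)],
         "a dog":  [("barked", -0.2), ("slept", -0.4)]}
print(beam_step_with_sibling_penalty(beams))
```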
Attention-based Memory Selection Recurrent Network for Language Modeling | Recurrent neural networks (RNNs) have achieved great success in language
modeling. However, since RNNs have a fixed memory size, their memory
cannot store all the information about the words they have seen before in the
sentence, and thus useful long-term information may be ignored when
predicting the next words. In this paper, we propose the Attention-based Memory
Selection Recurrent Network (AMSRN), in which the model can review the
information stored in the memory at each previous time step and select the
relevant information to help generate the outputs. In AMSRN, the attention
mechanism finds the time steps storing the relevant information in the memory,
and memory selection determines which dimensions of the memory are involved in
computing the attention weights and from which the information is extracted. In
the experiments, AMSRN outperformed long short-term memory (LSTM) based
language models on both English and Chinese corpora. Moreover, we investigate
using entropy as a regularizer for attention weights and visualize how the
attention mechanism helps language modeling.
| 2016 | Computation and Language |
Knowledge Graph Representation with Jointly Structural and Textual
Encoding | The objective of knowledge graph embedding is to encode both entities and
relations of knowledge graphs into continuous low-dimensional vector spaces.
Previously, most works focused on symbolic representations of knowledge graphs
with structure information, which cannot handle new entities or entities with
few facts well. In this paper, we propose a novel deep architecture to utilize
both structural and textual information of entities. Specifically, we introduce
three neural models to encode the valuable information from text description of
entity, among which an attentive model can select related information as
needed. Then, a gating mechanism is applied to integrate representations of
structure and text into a unified architecture. Experiments show that our
models outperform the baseline by a clear margin on link prediction and triplet
classification tasks. The source code of this paper will be available on GitHub.
| 2016 | Computation and Language |
Fill it up: Exploiting partial dependency annotations in a minimum
spanning tree parser | Unsupervised models of dependency parsing typically require large amounts of
clean, unlabeled data plus gold-standard part-of-speech tags. Adding indirect
supervision (e.g. language universals and rules) can help, but we show that
obtaining small amounts of direct supervision - here, partial dependency
annotations - provides a strong balance between zero and full supervision. We
adapt the unsupervised ConvexMST dependency parser to learn from partial
dependencies expressed in the Graph Fragment Language. With less than 24 hours
of total annotation, we obtain 7% and 17% absolute improvement in unlabeled
dependency scores for English and Spanish, respectively, compared to the same
parser using only universal grammar constraints.
| 2016 | Computation and Language |
The polysemy of the words that children learn over time | Here we study polysemy as a potential learning bias in vocabulary learning in
children. Words of low polysemy could be preferred as they reduce the
disambiguation effort for the listener. However, such preference could be a
side-effect of another bias: the preference of children for nouns in
combination with the lower polysemy of nouns with respect to other
part-of-speech categories. Our results show that mean polysemy in children
increases over time in two phases, i.e. a fast growth till the 31st month
followed by a slower tendency towards adult speech. In contrast, this evolution
is not found in adults interacting with children. This suggests that children
have a preference for non-polysemous words in their early stages of vocabulary
acquisition. Interestingly, the evolutionary pattern described above weakens
when controlling for syntactic category (noun, verb, adjective or adverb) but
it does not disappear completely, suggesting that it could result from
a combination of a standalone bias for low polysemy and a preference for nouns.
| 2018 | Computation and Language |
Semi Supervised Preposition-Sense Disambiguation using Multilingual Data | Prepositions are very common and very ambiguous, and understanding their
sense is critical for understanding the meaning of the sentence. Supervised
corpora for the preposition-sense disambiguation task are small, suggesting a
semi-supervised approach to the task. We show that signals from unannotated
multilingual data can be used to improve supervised preposition-sense
disambiguation. Our approach pre-trains an LSTM encoder for predicting the
translation of a preposition, and then incorporates the pre-trained encoder as
a component in a supervised classification system, and fine-tunes it for the
task. The multilingual signals consistently improve results on two
preposition-sense datasets.
| 2016 | Computation and Language |
Learning a Natural Language Interface with Neural Programmer | Learning a natural language interface for database tables is a challenging
task that involves deep language understanding and multi-step reasoning. The
task is often approached by mapping natural language queries to logical forms
or programs that provide the desired response when executed on the database. To
our knowledge, this paper presents the first weakly supervised, end-to-end
neural network model to induce such programs on a real-world dataset. We
enhance the objective function of Neural Programmer, a neural network with
built-in discrete operations, and apply it on WikiTableQuestions, a natural
language question-answering dataset. The model is trained end-to-end with weak
supervision of question-answer pairs, and does not require domain-specific
grammars, rules, or annotations that are key elements in previous approaches to
program induction. The main experimental result in this paper is that a single
Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with
weak supervision. An ensemble of 15 models, with a trivial combination
technique, achieves 37.7% accuracy, which is competitive with the current
state-of-the-art accuracy of 37.1% obtained by a traditional natural language
semantic parser.
| 2017 | Computation and Language |
Exploiting Unlabeled Data for Neural Grammatical Error Detection | Identifying and correcting grammatical errors in the text written by
non-native writers has received increasing attention in recent years. Although
a number of annotated corpora have been established to facilitate data-driven
grammatical error detection and correction approaches, they are still limited
in terms of quantity and coverage because human annotation is labor-intensive,
time-consuming, and expensive. In this work, we propose to utilize unlabeled
data to train neural network based grammatical error detection models. The
basic idea is to cast error detection as a binary classification problem and
derive positive and negative training examples from unlabeled data. We
introduce an attention-based neural network to capture long-distance
dependencies that influence the word being detected. Experiments show that the
proposed approach significantly outperforms SVMs and convolutional networks
with fixed-size context window.
| 2016 | Computation and Language |
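The step of deriving positive and negative training examples from unlabeled data could, for instance, look like the toy construction below: words in presumed-clean sentences serve as negative (correct) examples, and artificially injected confusions serve as positive (erroneous) examples. The confusion set is a placeholder; the paper's actual construction may differ.

```python
import random

CONFUSIONS = {"their": "there", "then": "than", "a": "an"}  # toy confusion set

def make_detection_examples(sentence, rng):
    """From one presumed-clean sentence, emit (context, word, label) triples:
    label 0 for the original word, label 1 for an injected confusion error."""
    words = sentence.split()
    examples = []
    for i, w in enumerate(words):
        context = words[:i] + words[i + 1:]
        examples.append((context, w, 0))
        if w in CONFUSIONS and rng.random() < 0.5:
            examples.append((context, CONFUSIONS[w], 1))
    return examples

rng = random.Random(0)
print(make_detection_examples("then they went to their house", rng))
```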
Developing a cardiovascular disease risk factor annotated corpus of
Chinese electronic medical records | Cardiovascular disease (CVD) has become the leading cause of death in China,
and most of the cases can be prevented by controlling risk factors. The goal of
this study was to build a corpus of CVD risk factor annotations based on
Chinese electronic medical records (CEMRs). This corpus is intended to be used
to develop a risk factor information extraction system that, in turn, can be
applied as a foundation for the further study of the progress of risk factors
and CVD. We designed a light annotation task to capture CVD risk factors with
indicators, temporal attributes and assertions that were explicitly or
implicitly displayed in the records. The task included: 1) preparing data; 2)
creating guidelines for capturing annotations (these were created with the help
of clinicians); 3) proposing an annotation method including building the
guidelines draft, training the annotators and updating the guidelines, and
corpus construction. Then, a risk factor annotated corpus based on
de-identified discharge summaries and progress notes from 600 patients was
developed. Built with the help of clinicians, this corpus has an
inter-annotator agreement (IAA) F1-measure of 0.968, indicating high
reliability. To the best of our knowledge, this is the first annotated corpus
concerning CVD risk factors in CEMRs, and guidelines for capturing CVD risk
factor annotations from CEMRs were proposed. The obtained document-level
annotations can be applied in future studies to monitor risk factors and CVD
over the long term.
| 2017 | Computation and Language |
Learning to Compose Words into Sentences with Reinforcement Learning | We use reinforcement learning to learn tree-structured neural networks for
computing representations of natural language sentences. In contrast with prior
work on tree-structured models in which the trees are either provided as input
or predicted using supervision from explicit treebank annotations, the tree
structures in this work are optimized to improve performance on a downstream
task. Experiments demonstrate the benefit of learning task-specific composition
orders, outperforming both sequential encoders and recursive encoders based on
treebank annotations. We analyze the induced trees and show that while they
discover some linguistically intuitive structures (e.g., noun phrases, simple
verb phrases), they are different from conventional English syntactic
structures.
| 2016 | Computation and Language |
AutoMOS: Learning a non-intrusive assessor of naturalness-of-speech | Developers of text-to-speech synthesizers (TTS) often make use of human
raters to assess the quality of synthesized speech. We demonstrate that we can
model human raters' mean opinion scores (MOS) of synthesized speech using a
deep recurrent neural network whose inputs consist solely of a raw waveform.
Our best models provide utterance-level estimates of MOS only moderately
inferior to sampled human ratings, as shown by Pearson and Spearman
correlations. When multiple utterances are scored and averaged, a scenario
common in synthesizer quality assessment, AutoMOS achieves correlations
approaching those of human raters. The AutoMOS model has a number of
applications, such as the ability to explore the parameter space of a speech
synthesizer without requiring a human-in-the-loop.
| 2016 | Computation and Language |
Joint Copying and Restricted Generation for Paraphrase | Many natural language generation tasks, such as abstractive summarization and
text simplification, are paraphrase-oriented. In these tasks, copying and
rewriting are two main writing modes. Most previous sequence-to-sequence
(Seq2Seq) models use a single decoder and neglect this fact. In this paper, we
develop a novel Seq2Seq model to fuse a copying decoder and a restricted
generative decoder. The copying decoder finds the position to be copied based
on a typical attention model. The generative decoder produces words limited in
the source-specific vocabulary. To combine the two decoders and determine the
final output, we develop a predictor to predict the mode of copying or
rewriting. This predictor can be guided by the actual writing mode in the
training data. We conduct extensive experiments on two different paraphrase
datasets. The result shows that our model outperforms the state-of-the-art
approaches in terms of both informativeness and language quality.
| 2016 | Computation and Language |
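The predictor that arbitrates between the copying decoder and the restricted generative decoder can be viewed as a soft mixture of their word distributions, as in the small NumPy sketch below; the distributions and mode probability are invented numbers, not outputs of the paper's network.

```python
import numpy as np

def fuse_decoders(p_copy_mode, copy_dist, gen_dist):
    """Mix the copying decoder's attention mass (mapped onto the vocabulary)
    with the restricted generative decoder's vocabulary distribution,
    weighted by the predicted probability of the copy mode."""
    return p_copy_mode * copy_dist + (1.0 - p_copy_mode) * gen_dist

vocab = ["the", "report", "was", "good", "<unk>"]
copy_dist = np.array([0.0, 0.9, 0.0, 0.1, 0.0])   # copied from the source sentence
gen_dist  = np.array([0.3, 0.1, 0.4, 0.1, 0.1])   # restricted generative decoder
p_word = fuse_decoders(p_copy_mode=0.7, copy_dist=copy_dist, gen_dist=gen_dist)
print(vocab[int(np.argmax(p_word))])               # most probable next word
```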
Improving Multi-Document Summarization via Text Classification | Multi-document summarization has so far reached a bottleneck due
to the lack of sufficient training data and diverse categories of documents.
Text classification just makes up for these deficiencies. In this paper, we
propose a novel summarization system called TCSum, which leverages plentiful
text classification data to improve the performance of multi-document
summarization. TCSum projects documents onto distributed representations which
act as a bridge between text classification and summarization. It also utilizes
the classification results to produce summaries of different styles. Extensive
experiments on DUC generic multi-document summarization datasets show that,
TCSum can achieve the state-of-the-art performance without using any
hand-crafted features and has the capability to catch the variations of summary
styles with respect to different text categories.
| 2016 | Computation and Language |
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | We introduce a large scale MAchine Reading COmprehension dataset, which we
name MS MARCO. The dataset comprises 1,010,916 anonymized
questions---sampled from Bing's search query logs---each with a human-generated
answer, plus 182,669 answers that were completely rewritten by humans. In addition,
the dataset contains 8,841,823 passages---extracted from 3,563,535 web
documents retrieved by Bing---that provide the information necessary for
curating the natural language answers. A question in the MS MARCO dataset may
have multiple answers or no answers at all. Using this dataset, we propose
three different tasks with varying levels of difficulty: (i) predict if a
question is answerable given a set of context passages, and extract and
synthesize the answer as a human would; (ii) generate a well-formed answer (if
possible) based on the context passages that can be understood with the
question and passage context; and finally (iii) rank a set of retrieved
passages given a question. The size of the dataset and the fact that the
questions are derived from real user search queries distinguishes MS MARCO from
other well-known publicly available datasets for machine reading comprehension
and question-answering. We believe that the scale and the real-world nature of
this dataset makes it attractive for benchmarking machine reading comprehension
and question-answering models.
| 2018 | Computation and Language |
Dense Prediction on Sequences with Time-Dilated Convolutions for Speech
Recognition | In computer vision, pixelwise dense prediction is the task of predicting a
label for each pixel in the image. Convolutional neural networks achieve good
performance on this task, while being computationally efficient. In this paper
we carry these ideas over to the problem of assigning a sequence of labels to a
set of speech frames, a task commonly known as framewise classification. We
show that the dense prediction view of framewise classification offers several
advantages and insights, including computational efficiency and the ability to
apply batch normalization. When doing dense prediction we pay specific
attention to strided pooling in time and introduce an asymmetric dilated
convolution, called time-dilated convolution, that allows for efficient and
elegant implementation of pooling in time. We show results using time-dilated
convolutions in a very deep VGG-style CNN with batch normalization on the Hub5
Switchboard-2000 benchmark task. With a big n-gram language model, we achieve
7.7% WER which is the best single model single-pass performance reported so
far.
| 2016 | Computation and Language |
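An asymmetric dilated convolution that dilates only along the time axis, as described above, can be written directly in PyTorch; the channel counts, dilation rate, and input sizes are arbitrary placeholders, not the paper's VGG-style configuration.

```python
import torch
import torch.nn as nn

# "Time-dilated" convolution: dilation applies only along the time axis,
# while the frequency axis keeps a dense 3x3 receptive field.
conv = nn.Conv2d(in_channels=64, out_channels=64,
                 kernel_size=(3, 3), dilation=(2, 1), padding=(2, 1))

x = torch.randn(8, 64, 100, 40)   # (batch, channels, time, frequency)
y = conv(x)
print(y.shape)                    # spatial sizes preserved by the padding
```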
An End-to-End Architecture for Keyword Spotting and Voice Activity
Detection | We propose a single neural network architecture for two tasks: on-line
keyword spotting and voice activity detection. We develop novel inference
algorithms for an end-to-end Recurrent Neural Network trained with the
Connectionist Temporal Classification loss function which allow our model to
achieve high accuracy on both keyword spotting and voice activity detection
without retraining. In contrast to prior voice activity detection models, our
architecture does not require aligned training data and uses the same
parameters as the keyword spotting model. This allows us to deploy a high
quality voice activity detector with no additional memory or maintenance
requirements.
| 2,016 | Computation and Language |
Sentiment Analysis for Twitter: Going Beyond Tweet Text | Analysing the sentiment of tweets is important as it helps to determine
users' opinions. Knowing people's opinions is crucial for several purposes,
ranging from gathering knowledge about a customer base to e-governance and
campaigning. In this report, we aim to develop a system to detect
the sentiment from tweets. We employ several linguistic features along with
some other external sources of information to detect the sentiment of a tweet.
We show that augmenting the 140 character-long tweet with information harvested
from external urls shared in the tweet as well as Social Media features
enhances the sentiment prediction accuracy significantly.
| 2,016 | Computation and Language |
Semantic Parsing of Mathematics by Context-based Learning from Aligned
Corpora and Theorem Proving | We study methods for automated parsing of informal mathematical expressions
into formal ones, a main prerequisite for deep computer understanding of
informal mathematical texts. We propose a context-based parsing approach that
combines efficient statistical learning of deep parse trees with their semantic
pruning by type checking and large-theory automated theorem proving. We show
that the methods very significantly improve on previous results in parsing
theorems from the Flyspeck corpus.
| 2,016 | Computation and Language |
Geometry of Compositionality | This paper proposes a simple test for compositionality (i.e., literal usage)
of a word or phrase in a context-specific way. The test is computationally
simple, relying on no external resources and using only a set of trained word
vectors. Experiments show that the proposed method is competitive with the state of
the art and displays high accuracy in context-specific compositionality
detection of a variety of natural language phenomena (idiomaticity, sarcasm,
metaphor) for different datasets in multiple languages. The key insight is to
connect compositionality to a curious geometric property of word embeddings,
which is of independent interest.
| 2,016 | Computation and Language |
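The abstract does not spell out the test itself, so the following is only one plausible reading of the geometric idea: measure how much of the target word's vector lies in a low-rank subspace spanned by its context word vectors, and treat a small projection as a sign of non-literal usage. The vectors, rank and toy data below are illustrative assumptions.

```python
import numpy as np

def compositionality_score(target_vec, context_vecs, rank=2):
    """Score how literally a word/phrase is used in its context.

    The score is the fraction of the target vector's norm lying inside the
    low-rank subspace of the context word vectors; a low score suggests
    non-literal (idiomatic, sarcastic, metaphorical) usage.
    """
    X = np.asarray(context_vecs, dtype=float)
    X = X - X.mean(axis=0)                      # center the context vectors
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:rank]                           # top principal directions
    proj = basis.T @ (basis @ target_vec)       # projection onto the subspace
    return np.linalg.norm(proj) / (np.linalg.norm(target_vec) + 1e-12)

# toy usage: context vectors lie near a 2-D subspace plus noise
rng = np.random.default_rng(1)
basis_true = rng.standard_normal((2, 50))
coeffs = rng.standard_normal((10, 2))
context = coeffs @ basis_true + 0.05 * rng.standard_normal((10, 50))
literal = np.array([1.0, -0.5]) @ basis_true    # lies in the context subspace
odd = rng.standard_normal(50)                   # generic unrelated vector
print(compositionality_score(literal, context))  # close to 1.0
print(compositionality_score(odd, context))      # much smaller
```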
NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA.
| 2,017 | Computation and Language |
Identity-sensitive Word Embedding through Heterogeneous Networks | Most existing word embedding approaches do not distinguish the same words in
different contexts, therefore ignoring their contextual meanings. As a result,
the learned embeddings of these words are usually a mixture of multiple
meanings. In this paper, we acknowledge multiple identities of the same word in
different contexts and learn the \textbf{identity-sensitive} word embeddings.
Based on an identity-labeled text corpus, a heterogeneous network of words and
word identities is constructed to model different levels of word
co-occurrences. The heterogeneous network is further embedded into a
low-dimensional space through a principled network embedding approach, through
which we are able to obtain the embeddings of words and the embeddings of word
identities. We study three different types of word identities including topics,
sentiments and categories. Experimental results on real-world data sets show
that the identity-sensitive word embeddings learned by our approach indeed
capture different meanings of words and outperform competitive methods on
tasks including text classification and word similarity computation.
| 2,016 | Computation and Language |
Context-aware Natural Language Generation with Recurrent Neural Networks | This paper studies generating natural language in particular contexts or
situations. We propose two novel approaches which encode the context into a
continuous semantic representation and then decode the semantic representation
into text sequences with recurrent neural networks. During decoding, the
context information is attended to through a gating mechanism, addressing the
problem of long-range dependency caused by lengthy sequences. We evaluate the
effectiveness of the proposed approaches on user review data, in which rich
contexts are available and two informative contexts, sentiments and products,
are selected for evaluation. Experiments show that the fake reviews generated
by our approaches are very natural. Results of fake review detection with human
judges show that more than 50\% of the fake reviews are misclassified as
real reviews, and more than 90\% are misclassified by an existing state-of-the-art
fake review detection algorithm.
| 2,016 | Computation and Language |
Towards Accurate Word Segmentation for Chinese Patents | A patent is a property right for an invention granted by the government to
the inventor. An invention is a solution to a specific technological problem.
So patents often have a high concentration of scientific and technical terms
that are rare in everyday language. The Chinese word segmentation model trained
on currently available everyday language data sets performs poorly because it
cannot effectively recognize these scientific and technical terms. In this
paper we describe a pragmatic approach to Chinese word segmentation on patents
where we train a character-based semi-supervised sequence labeling model by
extracting features from a manually segmented corpus of 142 patents, enhanced
with information extracted from the Chinese TreeBank. Experiments show that the
accuracy of our model reached 95.08% (F1 score) on a held-out test set and
96.59% on the development set, compared with an F1 score of 91.48% on the development
set if the model is trained on the Chinese TreeBank. We also experimented with
some existing domain adaptation techniques; the results show that the amount of
target domain data and the selected features impact the performance of the
domain adaptation techniques.
| 2,016 | Computation and Language |
Deep encoding of etymological information in TEI | This paper aims to provide a comprehensive modeling and representation of
etymological data in digital dictionaries. The purpose is to integrate in one
coherent framework both digital representations of legacy dictionaries, and
also born-digital lexical databases that are constructed manually or
semi-automatically. We want to propose a systematic and coherent set of
modeling principles for a variety of etymological phenomena that may contribute
to the creation of a continuum between existing and future lexical constructs,
where anyone interested in tracing the history of words and their meanings will
be able to seamlessly query lexical resources. Instead of designing an ad hoc
model and representation language for digital etymological data, we will focus
on identifying all the possibilities offered by the TEI guidelines for the
representation of lexical information.
| 2,016 | Computation and Language |
Anchored Correlation Explanation: Topic Modeling with Minimal Domain
Knowledge | While generative models such as Latent Dirichlet Allocation (LDA) have proven
fruitful in topic modeling, they often require detailed assumptions and careful
specification of hyperparameters. Such model complexity issues only compound
when trying to generalize generative models to incorporate human input. We
introduce Correlation Explanation (CorEx), an alternative approach to topic
modeling that does not assume an underlying generative model, and instead
learns maximally informative topics through an information-theoretic framework.
This framework naturally generalizes to hierarchical and semi-supervised
extensions with no additional modeling assumptions. In particular, word-level
domain knowledge can be flexibly incorporated within CorEx through anchor
words, allowing topic separability and representation to be promoted with
minimal human intervention. Across a variety of datasets, metrics, and
experiments, we demonstrate that CorEx produces topics that are comparable in
quality to those produced by unsupervised and semi-supervised variants of LDA.
| 2,017 | Computation and Language |
Domain Adaptation for Named Entity Recognition in Online Media with Word
Embeddings | Content on the Internet is heterogeneous and arises from various domains like
News, Entertainment, Finance and Technology. Understanding such content
requires identifying named entities (persons, places and organizations) as one
of the key steps. Traditionally Named Entity Recognition (NER) systems have
been built using available annotated datasets (like CoNLL, MUC) and demonstrate
excellent performance. However, these models fail to generalize onto other
domains like Sports and Finance where conventions and language use can differ
significantly. Furthermore, several domains do not have large amounts of
labeled data for training robust Named Entity Recognition models. A
key step towards addressing this challenge is to adapt models learned on domains where
large amounts of annotated training data are available to domains with scarce
annotated data.
In this paper, we propose methods to effectively adapt models learned on one
domain onto other domains using distributed word representations. First we
analyze the linguistic variation present across domains to identify key
linguistic insights that can boost performance across domains. We propose
methods to capture domain specific semantics of word usage in addition to
global semantics. We then demonstrate how to effectively use such domain
specific knowledge to learn NER models that outperform previous baselines in
the domain adaptation setting.
| 2,016 | Computation and Language |
Multilingual Multiword Expressions | The project aims to provide a semi-supervised approach to identify Multiword
Expressions in a multilingual context consisting of English and most of the
major Indian languages. Multiword expressions are groups of words that refer
to some conventional or regional way of saying things. If they are literally
translated from one language to another, the expressions lose their inherent
meaning.
To automatically extract multiword expressions from a corpus, an extraction
pipeline has been constructed which consists of a combination of rule-based and
statistical approaches. There are several types of multiword expressions which
differ from each other widely by construction. We employ different methods to
detect different types of multiword expressions. Given a POS tagged corpus in
English or any Indian language the system initially applies some regular
expression filters to narrow down the search space to certain patterns (such as
reduplication, partial reduplication, compound nouns, compound verbs, conjunct
verbs etc.). The word sequences matching the required pattern are subjected to
a series of linguistic tests which include verb filtering, named entity
filtering and hyphenation filtering test to exclude false positives. The
candidates are then checked for semantic relationships among themselves (using
Wordnet). In order to detect partial reduplication we make use of Wordnet as a
lexical database as well as a tool for lemmatising. We detect complex
predicates by investigating the features of the constituent words. Statistical
methods are applied to detect collocations. Finally, lexicographers examine the
list of automatically extracted candidates to validate whether they are true
multiword expressions or not and add them to the multiword dictionary
accordingly.
| 2,016 | Computation and Language |
Bootstrapping incremental dialogue systems: using linguistic knowledge
to learn from minimal data | We present a method for inducing new dialogue systems from very small amounts
of unannotated dialogue data, showing how word-level exploration using
Reinforcement Learning (RL), combined with an incremental and semantic grammar
- Dynamic Syntax (DS) - allows systems to discover, generate, and understand
many new dialogue variants. The method avoids the use of expensive and
time-consuming dialogue act annotations, and supports more natural
(incremental) dialogues than turn-based systems. Here, language generation and
dialogue management are treated as a joint decision/optimisation problem, and
the MDP model for RL is constructed automatically. With an implemented system,
we show that this method enables a wide range of dialogue variations to be
automatically captured, even when the system is trained from only a single
dialogue. The variants include question-answer pairs, over- and
under-answering, self- and other-corrections, clarification interaction,
split-utterances, and ellipsis. This generalisation property results from the
structural knowledge and constraints present within the DS grammar, and
highlights some limitations of recent systems built using machine learning
techniques only.
| 2,016 | Computation and Language |
Piecewise Latent Variables for Neural Variational Text Processing | Advances in neural variational inference have facilitated the learning of
powerful directed graphical models with continuous latent variables, such as
variational autoencoders. The hope is that such models will learn to represent
rich, multi-modal latent factors in real-world data, such as natural language
text. However, current models often assume simplistic priors on the latent
variables - such as the uni-modal Gaussian distribution - which are incapable
of representing complex latent factors efficiently. To overcome this
restriction, we propose the simple, but highly flexible, piecewise constant
distribution. This distribution has the capacity to represent an exponential
number of modes of a latent target distribution, while remaining mathematically
tractable. Our results demonstrate that incorporating this new latent
distribution into different models yields substantial improvements in natural
language processing tasks such as document modeling and natural language
generation for dialogue.
| 2,017 | Computation and Language |
Definition Modeling: Learning to define word embeddings in natural
language | Distributed representations of words have been shown to capture lexical
semantics, as demonstrated by their effectiveness in word similarity and
analogical relation tasks. But, these tasks only evaluate lexical semantics
indirectly. In this paper, we study whether it is possible to utilize
distributed representations to generate dictionary definitions of words, as a
more direct and transparent representation of the embeddings' semantics. We
introduce definition modeling, the task of generating a definition for a given
word and its embedding. We present several definition model architectures based
on recurrent neural networks, and experiment with the models over multiple data
sets. Our results show that a model that controls dependencies between the word
being defined and the definition words performs significantly better, and that
a character-level convolution layer designed to leverage morphology can
complement word-level embeddings. Finally, an error analysis suggests that the
errors made by a definition model may provide insight into the shortcomings of
word embeddings.
| 2,016 | Computation and Language |
Neural Document Embeddings for Intensive Care Patient Mortality
Prediction | We present an automatic mortality prediction scheme based on the unstructured
textual content of clinical notes. Proposing a convolutional document embedding
approach, our empirical investigation using the MIMIC-III intensive care
database shows significant performance gains compared to previously employed
methods such as latent topic distributions or generic doc2vec embeddings. These
improvements are especially pronounced for the difficult problem of
post-discharge mortality prediction.
| 2,016 | Computation and Language |
Shift-Reduce Constituent Parsing with Neural Lookahead Features | Transition-based models can be fast and accurate for constituent parsing.
Compared with chart-based models, they leverage richer features by extracting
history information from a parser stack, which spans over non-local
constituents. On the other hand, during incremental parsing, constituent
information on the right hand side of the current word is not utilized, which
is a relative weakness of shift-reduce parsing. To address this limitation, we
leverage a fast neural model to extract lookahead features. In particular, we
build a bidirectional LSTM model, which leverages the full sentence information
to predict the hierarchy of constituents that each word starts and ends. The
results are then passed to a strong transition-based constituent parser as
lookahead features. The resulting parser gives a 1.3% absolute improvement on WSJ
and 2.3% on CTB compared to the baseline, giving the highest reported accuracies
for fully-supervised parsing.
| 2,016 | Computation and Language |
Alleviating Overfitting for Polysemous Words for Word Representation
Estimation Using Lexicons | Though there are some works on improving distributed word representations
using lexicons, the overfitting of words that have multiple
meanings remains an issue that deteriorates learning when lexicons are
used and needs to be solved. An alternative method is to allocate a vector
per sense instead of a vector per word. However, the word representations
estimated in the former way are not as easy to use as the latter one. Our
previous work uses a probabilistic method to alleviate the overfitting, but it
is not robust with a small corpus. In this paper, we propose a new neural
network to estimate distributed word representations using a lexicon and a
corpus. We add a lexicon layer in the continuous bag-of-words model and a
threshold node after the output of the lexicon layer. The threshold rejects the
unreliable outputs of the lexicon layer that are less likely to be the same
with their inputs. In this way, it alleviates the overfitting of the polysemous
words. The proposed neural network can be trained using negative sampling,
which maximizes the log probabilities of target words given the context words
by distinguishing the target words from random noise. We compare the proposed
neural network with the continuous bag-of-words model, other works that
improve it, and previous works that estimate distributed word
representations using both a lexicon and a corpus. The experimental results
show that the proposed neural network is more efficient and balanced for both
semantic tasks and syntactic tasks than the previous works, and robust to the
size of the corpus.
| 2,017 | Computation and Language |
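The training objective mentioned above, negative sampling, maximizes the log probability of the target word given the averaged context vectors while pushing down randomly drawn noise words. Below is a minimal sketch of one CBOW negative-sampling update; the lexicon layer and threshold node of the proposed model are omitted, and all sizes are toy assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_neg_sampling_step(in_vecs, out_vecs, context_ids, target_id, noise_ids, lr=0.05):
    """One SGD step of CBOW with negative sampling.

    in_vecs / out_vecs: (V, D) input and output embedding matrices.
    Objective: log sigma(v'_target . h) + sum_k log sigma(-v'_noise_k . h),
    where h is the average of the context input vectors.
    """
    h = in_vecs[context_ids].mean(axis=0)                  # context representation
    grad_h = np.zeros_like(h)
    for wid, label in [(target_id, 1.0)] + [(nid, 0.0) for nid in noise_ids]:
        score = sigmoid(out_vecs[wid] @ h)
        err = score - label                                # gradient of the logistic loss
        grad_h += err * out_vecs[wid]
        out_vecs[wid] -= lr * err * h                      # update output vector
    in_vecs[context_ids] -= lr * grad_h / len(context_ids) # update context vectors
    return h

# toy usage: vocabulary of 1000 words, 50-dimensional embeddings
rng = np.random.default_rng(2)
W_in = 0.01 * rng.standard_normal((1000, 50))
W_out = np.zeros((1000, 50))
cbow_neg_sampling_step(W_in, W_out, context_ids=[3, 17, 42, 8], target_id=5,
                       noise_ids=rng.integers(0, 1000, size=5))
```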
ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA | Long Short-Term Memory (LSTM) is widely used in speech recognition. In order
to achieve higher prediction accuracy, machine learning scientists have built
larger and larger models. Such a large model is both computation intensive and
memory intensive. Deploying such a bulky model results in high power consumption
and leads to a high total cost of ownership (TCO) for a data center. In order to
speed up prediction and make it energy efficient, we first propose a
load-balance-aware pruning method that can compress the LSTM model size by 20x
(10x from pruning and 2x from quantization) with negligible loss of the
prediction accuracy. The pruned model is friendly for parallel processing.
Next, we propose a scheduler that encodes and partitions the compressed model to
each PE for parallelism, and schedule the complicated LSTM data flow. Finally,
we design the hardware architecture, named Efficient Speech Recognition Engine
(ESE) that works directly on the compressed model. Implemented on Xilinx
XCKU060 FPGA running at 200MHz, ESE has a performance of 282 GOPS working
directly on the compressed LSTM network, corresponding to 2.52 TOPS on the
uncompressed one, and processes a full LSTM for speech recognition with a power
dissipation of 41 Watts. Evaluated on the LSTM for speech recognition
benchmark, ESE is 43x and 3x faster than Core i7 5930k CPU and Pascal Titan X
GPU implementations. It achieves 40x and 11.5x higher energy efficiency
compared with the CPU and GPU respectively.
| 2,017 | Computation and Language |
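ESE's load-balance-aware pruning and quantization are hardware-specific, so the sketch below only illustrates the generic building blocks the abstract refers to: magnitude pruning to roughly 10x sparsity and uniform fixed-point quantization. It is not the authors' method; the sparsity level, bit width and matrix size are assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights (plain magnitude pruning;
    ESE's load-balance-aware variant additionally equalises work across PEs)."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def linear_quantize(weights, num_bits=12):
    """Uniformly quantise weights to signed fixed-point with `num_bits` bits."""
    scale = np.max(np.abs(weights)) / (2 ** (num_bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int32)
    return q, scale                      # dequantise with q * scale

rng = np.random.default_rng(3)
W = rng.standard_normal((512, 512))      # a toy LSTM weight matrix
W_pruned, mask = magnitude_prune(W, sparsity=0.9)       # ~10x fewer nonzeros
q, scale = linear_quantize(W_pruned, num_bits=12)        # 32-bit floats -> 12-bit ints
print(mask.mean(), q.dtype)              # ~0.1 of the weights remain
```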
Automated assessment of non-native learner essays: Investigating the
role of linguistic features | Automatic essay scoring (AES) refers to the process of scoring free text
responses to given prompts, considering human grader scores as the gold
standard. Writing such essays is an essential component of many language and
aptitude exams. Hence, AES became an active and established area of research,
and there are many proprietary systems used in real life applications today.
However, not much is known about which specific linguistic features are useful
for prediction and how much of this is consistent across datasets. This article
addresses that by exploring the role of various linguistic features in
automatic essay scoring using two publicly available datasets of non-native
English essays written in test taking scenarios. The linguistic properties are
modeled by encoding lexical, syntactic, discourse and error types of learner
language in the feature set. Predictive models are then developed using these
features on both datasets and the most predictive features are compared. While
the results show that the feature set used results in good predictive models
with both datasets, the question "what are the most predictive features?" has a
different answer for each dataset.
| 2,017 | Computation and Language |
Creating a Real-Time, Reproducible Event Dataset | The generation of political event data has remained much the same since the
mid-1990s, both in terms of data acquisition and the process of coding text
into data. Since the 1990s, however, there have been significant improvements
in open-source natural language processing software and in the availability of
digitized news content. This paper presents a new, next-generation event
dataset, named Phoenix, that builds from these and other advances. This dataset
includes improvements in the underlying news collection process and event
coding software, along with the creation of a general processing pipeline
necessary to produce daily-updated data. This paper provides a face validity
check by briefly examining the data for the conflict in Syria, and a
comparison between Phoenix and the Integrated Crisis Early Warning System data.
| 2,016 | Computation and Language |
End-to-End Joint Learning of Natural Language Understanding and Dialogue
Manager | Natural language understanding and dialogue policy learning are both
essential in conversational systems that predict the next system actions in
response to a current user utterance. Conventional approaches aggregate
separate models of natural language understanding (NLU) and system action
prediction (SAP) as a pipeline that is sensitive to noisy outputs of
error-prone NLU. To address the issues, we propose an end-to-end deep recurrent
neural network with limited contextual dialogue memory by jointly training NLU
and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our
proposed model significantly outperforms the state-of-the-art pipeline models
for both NLU and SAP, which indicates that our joint model is capable of
mitigating the effects of noisy NLU outputs, and that the NLU model can be refined by
error flows backpropagating from the extra supervised signals of system
actions.
| 2,017 | Computation and Language |
Unit Dependency Graph and its Application to Arithmetic Word Problem
Solving | Math word problems provide a natural abstraction to a range of natural
language understanding problems that involve reasoning about quantities, such
as interpreting election results, news about casualties, and the financial
section of a newspaper. Units associated with the quantities often provide
information that is essential to support this reasoning. This paper proposes a
principled way to capture and reason about units and shows how it can benefit
an arithmetic word problem solver. This paper presents the concept of Unit
Dependency Graphs (UDGs), which provides a compact representation of the
dependencies between units of numbers mentioned in a given problem. Inducing
the UDG alleviates the brittleness of the unit extraction system and allows for
a natural way to leverage domain knowledge about unit compatibility, for word
problem solving. We introduce a decomposed model for inducing UDGs with minimal
additional annotations, and use it to augment the expressions used in the
arithmetic word problem solver of (Roy and Roth 2015) via a constrained
inference framework. We show that the introduction of UDGs reduces the error of the
solver by over 10%, surpassing all existing systems for solving arithmetic
word problems. In addition, it also makes the system more robust to adaptation
to new vocabulary and equation forms.
| 2,016 | Computation and Language |
CER: Complementary Entity Recognition via Knowledge Expansion on Large
Unlabeled Product Reviews | Product reviews contain a lot of useful information about product features
and customer opinions. One important product feature is the complementary
entity (products) that may potentially work together with the reviewed product.
Knowing complementary entities of the reviewed product is very important
because customers want to buy compatible products and avoid incompatible ones.
In this paper, we address the problem of Complementary Entity Recognition
(CER). Since no existing method can solve this problem, we first propose a
novel unsupervised method to utilize syntactic dependency paths to recognize
complementary entities. Then we expand category-level domain knowledge about
complementary entities using only a few general seed verbs on a large amount of
unlabeled reviews. The domain knowledge helps the unsupervised method to adapt
to different products and greatly improves the precision of the CER task. The
advantage of the proposed method is that it does not require any labeled data
for training. We conducted experiments on 7 popular products with about 1200
reviews in total to demonstrate that the proposed approach is effective.
| 2,016 | Computation and Language |
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision (Short Version) | Extending the success of deep neural networks to natural language
understanding and symbolic reasoning requires complex operations and external
memory. Recent neural program induction approaches have attempted to address
this problem, but are typically limited to differentiable memory, and
consequently cannot scale beyond small synthetic tasks. In this work, we
propose the Manager-Programmer-Computer framework, which integrates neural
networks with non-differentiable memory to support abstract, scalable and
precise operations through a friendly neural computer interface. Specifically,
we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence
neural "programmer", and a non-differentiable "computer" that is a Lisp
interpreter with code assist. To successfully apply REINFORCE for training, we
augment it with approximate gold programs found by an iterative maximum
likelihood training process. NSM is able to learn a semantic parser from weak
supervision over a large knowledge base. It achieves new state-of-the-art
performance on WebQuestionsSP, a challenging semantic parsing dataset, with
weak supervision. Compared to previous approaches, NSM is end-to-end and therefore
does not rely on feature engineering or domain-specific knowledge.
| 2,016 | Computation and Language |
We used Neural Networks to Detect Clickbaits: You won't believe what
happened Next! | Online content publishers often use catchy headlines for their articles in
order to attract users to their websites. These headlines, popularly known as
clickbaits, exploit a user's curiosity gap and lure them to click on links that
often disappoint them. Existing methods for automatically detecting clickbaits
rely on heavy feature engineering and domain knowledge. Here, we introduce a
neural network architecture based on Recurrent Neural Networks for detecting
clickbaits. Our model relies on distributed word representations learned from
large unannotated corpora, and character embeddings learned via Convolutional
Neural Networks. Experimental results on a dataset of news headlines show that
our model outperforms existing techniques for clickbait detection with an
accuracy of 0.98, an F1-score of 0.98, and a ROC-AUC of 0.99.
| 2,019 | Computation and Language |
Mapping the Dialog Act Annotations of the LEGO Corpus into the
Communicative Functions of ISO 24617-2 | In this paper we present strategies for mapping the dialog act annotations of
the LEGO corpus into the communicative functions of the ISO 24617-2 standard.
Using these strategies, we obtained an additional 347 dialogs annotated
according to the standard. This is particularly important given the reduced
amount of existing data in those conditions due to the recency of the standard.
Furthermore, these are dialogs from a widely explored corpus for dialog related
tasks. However, its dialog annotations have been neglected due to their high
domain-dependency, which renders them of little use outside the context of the
corpus. Thus, through our mapping process, we both obtain more data annotated
according to a recent standard and provide useful dialog act annotations for a
widely explored corpus in the context of dialog research.
| 2,016 | Computation and Language |
The Evolution of Sentiment Analysis - A Review of Research Topics,
Venues, and Top Cited Papers | Sentiment analysis is one of the fastest growing research areas in computer
science, making it challenging to keep track of all the activities in the area.
We present a computer-assisted literature review, where we utilize both text
mining and qualitative coding, and analyze 6,996 papers from Scopus. We find
that the roots of sentiment analysis are in the studies on public opinion
analysis at the beginning of the 20th century and in the text subjectivity analysis
performed by the computational linguistics community in the 1990s. However, the
outbreak of computer-based sentiment analysis only occurred with the
availability of subjective texts on the Web. Consequently, 99% of the papers
have been published after 2004. Sentiment analysis papers are scattered to
multiple publication venues, and the combined number of papers in the top-15
venues only represents ca. 30% of the papers in total. We present the top-20
cited papers from Google Scholar and Scopus and a taxonomy of research topics.
In recent years, sentiment analysis has shifted from analyzing online product
reviews to social media texts from Twitter and Facebook. Many topics beyond
product reviews, such as stock markets, elections, disasters, medicine, software
engineering and cyberbullying, extend the utilization of sentiment analysis.
| 2,018 | Computation and Language |
Sequential Matching Network: A New Architecture for Multi-turn Response
Selection in Retrieval-based Chatbots | We study response selection for multi-turn conversation in retrieval-based
chatbots. Existing work either concatenates the utterances in the context or matches a
response only with a highly abstract context vector, which may lose
relationships among utterances or important contextual information. We propose
a sequential matching network (SMN) to address both problems. SMN first matches
a response with each utterance in the context on multiple levels of
granularity, and distills important matching information from each pair as a
vector with convolution and pooling operations. The vectors are then
accumulated in chronological order through a recurrent neural network (RNN)
which models relationships among utterances. The final matching score is
calculated with the hidden states of the RNN. An empirical study on two public
data sets shows that SMN can significantly outperform state-of-the-art methods
for response selection in multi-turn conversation.
| 2,017 | Computation and Language |
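A rough sketch of the matching-then-accumulation idea: each (utterance, response) pair yields a word-word similarity matrix that is pooled into a fixed-size matching vector, and a recurrent network accumulates these vectors in chronological order. The convolution layers, learned parameters and final scoring layer of the actual SMN are omitted; all shapes below are illustrative assumptions.

```python
import numpy as np

def match_vector(utt_vecs, resp_vecs, pool=4):
    """Word-word dot-product similarity matrix, max-pooled into a fixed vector."""
    sim = utt_vecs @ resp_vecs.T                     # (len_u, len_r) similarity matrix
    # crude max-pooling into a pool x pool grid (the real SMN uses conv + pooling)
    rows = np.array_split(np.arange(sim.shape[0]), pool)
    cols = np.array_split(np.arange(sim.shape[1]), pool)
    return np.array([[sim[np.ix_(r, c)].max() for c in cols] for r in rows]).ravel()

def accumulate(match_vecs, W_h, W_x):
    """Accumulate per-utterance matching vectors in chronological order with a
    vanilla RNN; the final hidden state would be used to score the response."""
    h = np.zeros(W_h.shape[0])
    for v in match_vecs:
        h = np.tanh(W_h @ h + W_x @ v)
    return h

rng = np.random.default_rng(4)
context = [rng.standard_normal((int(rng.integers(5, 12)), 50)) for _ in range(3)]
response = rng.standard_normal((8, 50))
match_vecs = [match_vector(u, response) for u in context]    # one vector per utterance
W_h = 0.1 * rng.standard_normal((32, 32))
W_x = 0.1 * rng.standard_normal((32, 16))
score_state = accumulate(match_vecs, W_h, W_x)
print(score_state.shape)                                      # (32,)
```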
Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text
Translation | This paper proposes a first attempt to build an end-to-end speech-to-text
translation system, which does not use source language transcription during
learning or decoding. We propose a model for direct speech-to-text translation,
which gives promising results on a small French-English synthetic corpus.
Relaxing the need for source language transcription would drastically change
the data collection methodology in speech translation, especially in
under-resourced scenarios. For instance, in the former project DARPA TRANSTAC
(speech translation from spoken Arabic dialects), a large effort was devoted to
the collection of speech transcripts (and a prerequisite to obtain transcripts
was often a detailed transcription guide for languages with little standardized
spelling). Now, if end-to-end approaches for speech-to-text translation are
successful, one might consider collecting data by asking bilingual speakers to
directly utter speech in the source language from target language text
utterances. Such an approach has the advantage to be applicable to any
unwritten (source) language.
| 2,016 | Computation and Language |
Condensed Memory Networks for Clinical Diagnostic Inferencing | Diagnosis of a clinical condition is a challenging task, which often requires
significant medical investigation. Previous work related to diagnostic
inferencing problems mostly consider multivariate observational data (e.g.
physiological signals, lab tests etc.). In contrast, we explore the problem
using free-text medical notes recorded in an electronic health record (EHR).
Complex tasks like these can benefit from structured knowledge bases, but those
are not scalable. We instead exploit raw text from Wikipedia as a knowledge
source. Memory networks have been demonstrated to be effective in tasks which
require comprehension of free-form text. They use the final iteration of the
learned representation to predict probable classes. We introduce condensed
memory neural networks (C-MemNNs), a novel model with iterative condensation of
memory representations that preserves the hierarchy of features in the memory.
Experiments on the MIMIC-III dataset show that the proposed model outperforms
other variants of memory networks to predict the most probable diagnoses given
a complex clinical scenario.
| 2,017 | Computation and Language |
Invariant Representations for Noisy Speech Recognition | Modern automatic speech recognition (ASR) systems need to be robust under
acoustic variability arising from environmental, speaker, channel, and
recording conditions. Ensuring such robustness to variability is a challenge in
modern day neural network-based ASR systems, especially when all types of
variability are not seen during training. We attempt to address this problem by
encouraging the neural network acoustic model to learn invariant feature
representations. We use ideas from recent research on image generation using
Generative Adversarial Networks and domain adaptation ideas extending
adversarial gradient-based training. A recent work from Ganin et al. proposes
to use adversarial training for image domain adaptation by using an
intermediate representation from the main target classification network to
deteriorate the domain classifier performance through a separate neural
network. Our work focuses on investigating neural architectures which produce
representations invariant to noise conditions for ASR. We evaluate the proposed
architecture on the Aurora-4 task, a popular benchmark for noise robust ASR. We
show that our method generalizes better than the standard multi-condition
training especially when only a few noise categories are seen during training.
| 2,016 | Computation and Language |
When is multitask learning effective? Semantic sequence prediction under
varying data conditions | Multitask learning has been applied successfully to a range of tasks, mostly
morphosyntactic. However, little is known on when MTL works and whether there
are data characteristics that help to determine its success. In this paper we
evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine
different auxiliary tasks, among them a novel setup, and correlate their
impact with data-dependent conditions. Our results show that MTL is not always
effective; significant improvements are obtained only for 1 out of 5 tasks.
When successful, auxiliary tasks with compact and more uniform label
distributions are preferable.
| 2,017 | Computation and Language |
Improving the Performance of Neural Machine Translation Involving
Morphologically Rich Languages | The advent of the attention mechanism in neural machine translation models
has improved the performance of machine translation systems by enabling
selective lookup into the source sentence. In this paper, the efficiencies of
translation using bidirectional encoder attention decoder models were studied
with respect to translation involving morphologically rich languages. The
English - Tamil language pair was selected for this analysis. First, the use of
Word2Vec embedding for both the English and Tamil words improved the
translation results by 0.73 BLEU points over the baseline RNNSearch model, which
scored 4.84 BLEU. The use of morphological segmentation before word
vectorization to split the morphologically rich Tamil words into their
respective morphemes before translation caused a reduction in the target
vocabulary size by a factor of 8. Also, this model (RNNMorph) improved the
performance of neural machine translation by 7.05 BLEU points over the
RNNSearch model used over the same corpus. Since the BLEU evaluation of the
RNNMorph model might be unreliable due to an increase in the number of matching
tokens per sentence, the performances of the translations were also compared by
means of human evaluation metrics of adequacy, fluency and relative ranking.
Further, the use of morphological segmentation also improved the efficacy of
the attention mechanism.
| 2,017 | Computation and Language |
Embedding Words and Senses Together via Joint Knowledge-Enhanced
Training | Word embeddings are widely used in Natural Language Processing, mainly due to
their success in capturing semantic information from massive corpora. However,
their creation process does not allow the different meanings of a word to be
automatically separated, as it conflates them into a single vector. We address
this issue by proposing a new model which learns word and sense embeddings
jointly. Our model exploits large corpora and knowledge from semantic networks
in order to produce a unified vector space of word and sense embeddings. We
evaluate the main features of our approach both qualitatively and
quantitatively in a variety of tasks, highlighting the advantages of the
proposed method in comparison to state-of-the-art word- and sense-based models.
| 2,017 | Computation and Language |
Entity Identification as Multitasking | Standard approaches in entity identification hard-code boundary detection and
type prediction into labels (e.g., John/B-PER Smith/I-PER) and then perform
Viterbi. This has two disadvantages: 1. the runtime complexity grows
quadratically in the number of types, and 2. there is no natural segment-level
representation. In this paper, we propose a novel neural architecture that
addresses these disadvantages. We frame the problem as multitasking, separating
boundary detection and type prediction but optimizing them jointly. Despite its
simplicity, this architecture performs competitively with fully structured
models such as BiLSTM-CRFs while scaling linearly in the number of types.
Furthermore, by construction, the model induces type-disambiguating embeddings
of predicted mentions.
| 2,017 | Computation and Language |
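The decomposition can be pictured as a shared encoder feeding two independent softmax heads, one over {B, I, O} boundary labels and one over entity types, so the label space grows linearly rather than multiplicatively in the number of types. The forward pass below is a hedged toy illustration, not the paper's architecture; the tanh encoder and all dimensions are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(token_vecs, W_enc, W_boundary, W_type):
    """Shared encoder with two task heads instead of one B-PER/I-PER/... tagger.

    The boundary head predicts {B, I, O} per token (3 labels, independent of the
    number of entity types); the type head predicts an entity type per token.
    Joint training sums the two cross-entropy losses on the shared representation.
    """
    h = np.tanh(token_vecs @ W_enc)          # shared representation (T, H)
    p_boundary = softmax(h @ W_boundary)     # (T, 3)
    p_type = softmax(h @ W_type)             # (T, num_types)
    return p_boundary, p_type

rng = np.random.default_rng(5)
T, D, H, num_types = 6, 100, 64, 4           # e.g. PER, LOC, ORG, MISC
x = rng.standard_normal((T, D))              # stand-in for token embeddings
pb, pt = forward(x, 0.1 * rng.standard_normal((D, H)),
                 0.1 * rng.standard_normal((H, 3)),
                 0.1 * rng.standard_normal((H, num_types)))
print(pb.shape, pt.shape)                    # (6, 3) (6, 4)
```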
Discovering Conversational Dependencies between Messages in Dialogs | We investigate the task of inferring conversational dependencies between
messages in one-on-one online chat, which has become one of the most popular
forms of customer service. We propose a novel probabilistic classifier that
leverages conversational, lexical and semantic information. The approach is
evaluated empirically on a set of customer service chat logs from a Chinese
e-commerce website. It outperforms heuristic baselines.
| 2,016 | Computation and Language |
Evaluating Creative Language Generation: The Case of Rap Lyric
Ghostwriting | Language generation tasks that seek to mimic human ability to use language
creatively are difficult to evaluate, since one must consider creativity,
style, and other non-trivial aspects of the generated text. The goal of this
paper is to develop evaluation methods for one such task, ghostwriting of rap
lyrics, and to provide an explicit, quantifiable foundation for the goals and
future directions of this task. Ghostwriting must produce text that is similar
in style to the emulated artist, yet distinct in content. We develop a novel
evaluation methodology that addresses several complementary aspects of this
task, and illustrate how such evaluation can be used to meaningfully analyze
system performance. We provide a corpus of lyrics for 13 rap artists, annotated
for stylistic similarity, which allows us to assess the feasibility of manual
evaluation for generated verse.
| 2,016 | Computation and Language |
#HashtagWars: Learning a Sense of Humor | In this work, we present a new dataset for computational humor, specifically
comparative humor ranking, which attempts to eschew the ubiquitous binary
approach to humor detection. The dataset consists of tweets that are humorous
responses to a given hashtag. We describe the motivation for this new dataset,
as well as the collection process, which includes a description of our
semi-automated system for data collection. We also present initial experiments
for this dataset using both unsupervised and supervised approaches. Our best
supervised system achieved 63.7% accuracy, suggesting that this task is much
more difficult than comparable humor detection tasks. Initial experiments
indicate that a character-level model is more suitable for this task than a
token-level model, likely due to the large number of puns that can be captured by
a character-level model.
| 2,017 | Computation and Language |
Active Learning for Speech Recognition: the Power of Gradients | In training speech recognition systems, labeling audio clips can be
expensive, and not all data is equally valuable. Active learning aims to label
only the most informative samples to reduce cost. For speech recognition,
confidence scores and other likelihood-based active learning methods have been
shown to be effective. Gradient-based active learning methods, however, are
still not well-understood. This work investigates the Expected Gradient Length
(EGL) approach in active learning for end-to-end speech recognition. We justify
EGL from a variance reduction perspective, and observe that EGL's measure of
informativeness picks novel samples uncorrelated with confidence scores.
Experimentally, we show that EGL can reduce word errors by 11\%, or
alternatively, reduce the number of samples to label by 50\%, when compared to
random sampling.
| 2,016 | Computation and Language |
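For a softmax classifier the Expected Gradient Length of an unlabeled example can be computed exactly by enumerating the possible labels; the paper applies the idea to an end-to-end ASR model rather than the toy logistic-regression stand-in used below, and all sizes here are assumed.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def expected_gradient_length(x, W):
    """Expected Gradient Length for one unlabeled example under a softmax classifier.

    For each possible label y, the gradient of the cross-entropy loss w.r.t. W is
    (p - onehot(y)) outer x; EGL is the expectation of its norm under p(y|x).
    """
    p = softmax(W @ x)
    egl = 0.0
    for y, py in enumerate(p):
        err = p.copy()
        err[y] -= 1.0                              # dL/dlogits if the label were y
        grad = np.outer(err, x)                    # dL/dW
        egl += py * np.linalg.norm(grad)
    return egl

rng = np.random.default_rng(6)
W = 0.1 * rng.standard_normal((5, 20))             # 5 classes, 20 features (toy model)
pool = rng.standard_normal((100, 20))              # unlabeled pool
scores = np.array([expected_gradient_length(x, W) for x in pool])
to_label = np.argsort(-scores)[:10]                # query the 10 most informative samples
print(to_label)
```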
A Character-Word Compositional Neural Language Model for Finnish | Inspired by recent research, we explore ways to model the highly
morphological Finnish language at the level of characters while maintaining the
performance of word-level models. We propose a new
Character-to-Word-to-Character (C2W2C) compositional language model that uses
characters as input and output while still internally processing word level
embeddings. Our preliminary experiments, using the Finnish Europarl V7 corpus,
indicate that C2W2C can respond well to the challenges of morphologically rich
languages, such as high out-of-vocabulary rates, the prediction of novel words,
and growing vocabulary size. Notably, the model is able to correctly score
inflectional forms that are not present in the training data and sample
grammatically and semantically correct Finnish sentences character by
character.
| 2,016 | Computation and Language |
Reading Comprehension using Entity-based Memory Network | This paper introduces a novel neural network model for question answering,
the \emph{entity-based memory network}. It enhances neural networks' ability
to represent and compute over information across long spans of text by keeping records
of entities contained in text. The core component is a memory pool which
comprises entities' states. These entities' states are continuously updated
according to the input text. Questions with regard to the input text are used
to search the memory pool for related entities and answers are further
predicted based on the states of retrieved entities. Compared with previous
memory network models, the proposed model is capable of handling fine-grained
information and more sophisticated relations based on entities. We formulated
several different tasks as question answering problems and tested the proposed
model. Experiments report satisfying results.
| 2,024 | Computation and Language |
FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy.
| 2,016 | Computation and Language |
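Product quantization stores each embedding as a handful of codebook indices: the vector is split into sub-vectors, each sub-space gets its own k-means codebook, and only the small codebooks plus one byte per sub-vector per word are kept. The sketch below is a generic PQ illustration, not fastText's implementation, and it omits the retraining and hashing tricks the abstract alludes to.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    """A few Lloyd iterations; enough to illustrate codebook learning."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign

def pq_compress(emb, num_subvectors=4, k=64):
    """Product-quantise an embedding matrix into per-block codebooks + uint8 codes."""
    blocks = np.split(emb, num_subvectors, axis=1)        # D must divide evenly
    codebooks, codes = [], []
    for b in blocks:
        C, a = kmeans(b, k)
        codebooks.append(C)
        codes.append(a.astype(np.uint8))
    return codebooks, np.stack(codes, axis=1)             # (V, num_subvectors) codes

def pq_decompress(codebooks, codes):
    return np.hstack([C[codes[:, i]] for i, C in enumerate(codebooks)])

rng = np.random.default_rng(7)
E = rng.standard_normal((2000, 64)).astype(np.float32)    # toy word embedding table
books, codes = pq_compress(E, num_subvectors=4, k=64)
E_hat = pq_decompress(books, codes)
# storage drops from V*D floats to V*num_subvectors bytes plus small codebooks
print(E.nbytes, codes.nbytes, np.mean((E - E_hat) ** 2))
```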
Unraveling reported dreams with text analytics | We investigate what distinguishes reported dreams from other personal
narratives. The continuity hypothesis, stemming from psychological dream
analysis work, states that most dreams refer to a person's daily life and
personal concerns, similar to other personal narratives such as diary entries.
Differences between the two texts may reveal the linguistic markers of dream
text, which could be the basis for new dream analysis work and for the
automatic detection of dream descriptions. We used three text analytics
methods: text classification, topic modeling, and text coherence analysis, and
applied these methods to a balanced set of texts representing dreams, diary
entries, and other personal stories. We observed that dream texts could be
distinguished from other personal narratives nearly perfectly, mostly based on
the presence of uncertainty markers and descriptions of scenes. Important
markers for non-dream narratives are specific time expressions and
conversational expressions. Dream texts also exhibit a lower discourse
coherence than other personal narratives.
| 2,016 | Computation and Language |
From narrative descriptions to MedDRA: automagically encoding adverse
drug reactions | The collection of narrative spontaneous reports is an irreplaceable source
for the prompt detection of suspected adverse drug reactions (ADRs): qualified
domain experts manually revise a huge amount of narrative descriptions and then
encode texts according to MedDRA standard terminology. The manual annotation of
narrative documents with medical terminology is a subtle and expensive task,
since the number of reports is growing day by day. MagiCoder, a Natural
Language Processing algorithm, is proposed for the automatic encoding of
free-text descriptions into MedDRA terms. MagiCoder procedure is efficient in
terms of computational complexity (in particular, it is linear in the size of
the narrative input and the terminology). We tested it on a large dataset of
about 4500 manually revised reports, by performing an automated comparison
between human and MagiCoder revisions. For the current base version of
MagiCoder, we measured: on short descriptions, an average recall of $86\%$ and
an average precision of $88\%$; on medium-long descriptions (up to 255
characters), an average recall of $64\%$ and an average precision of $63\%$.
From a practical point of view, MagiCoder reduces the time required for
encoding ADR reports. Pharmacologists have simply to review and validate the
MagiCoder terms proposed by the application, instead of choosing the right
terms among the 70K low level terms of MedDRA. Such improvement in the
efficiency of pharmacologists' work has a relevant impact also on the quality
of the subsequent data analysis. We developed MagiCoder for the Italian
pharmacovigilance language. However, our proposal is based on a general
approach, not depending on the considered language nor the term dictionary.
| 2,016 | Computation and Language |
Context-aware Sentiment Word Identification: sentiword2vec | Traditional sentiment analysis often uses sentiment dictionary to extract
sentiment information in text and classify documents. However, emerging
informal words and phrases in user-generated content call for context-aware
analysis. Usually, they have special meanings in a particular context.
Because of their strong performance in representing inter-word relations, we use
sentiment word vectors to identify these special words. Based on the distributed
language model word2vec, in this paper we present a novel method for representing
the sentiment of a word in a particular context; specifically, we
identify words with abnormal sentiment polarity in long answers. Results
show that the improved model performs better in representing words
with special meanings, while continuing to do well on special idiomatic
patterns. Finally, we discuss what the vectors represent in the
field of sentiment, which may differ from general object-based
conditions.
| 2,016 | Computation and Language |
Neural Machine Translation by Minimising the Bayes-risk with Respect to
Syntactic Translation Lattices | We present a novel scheme to combine neural machine translation (NMT) with
traditional statistical machine translation (SMT). Our approach borrows ideas
from linearised lattice minimum Bayes-risk decoding for SMT. The NMT score is
combined with the Bayes-risk of the translation according to the SMT lattice. This
makes our approach much more flexible than $n$-best list or lattice rescoring
as the neural decoder is not restricted to the SMT search space. We show an
efficient and simple way to integrate risk estimation into the NMT decoder
which is suitable for word-level as well as subword-unit-level NMT. We test our
method on English-German and Japanese-English and report significant gains over
lattice rescoring on several data sets for both single and ensembled NMT. The
MBR decoder produces entirely new hypotheses far beyond simply rescoring the
SMT search space or fixing UNKs in the NMT output.
| 2,017 | Computation and Language |
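The combination can be thought of as adding the NMT log-probability to the expected gain (negative Bayes-risk) of the hypothesis under the SMT evidence space. The toy sketch below replaces the syntactic translation lattice with a three-entry n-best list and BLEU with a crude unigram-overlap gain; the weights, sentences and probabilities are invented for illustration.

```python
def unigram_overlap(hyp, ref):
    """Crude stand-in for the BLEU-based gain used in MBR decoding."""
    h, r = hyp.split(), ref.split()
    if not h:
        return 0.0
    return sum(min(h.count(w), r.count(w)) for w in set(h)) / len(h)

def combined_score(hyp, nmt_logprob, evidence, weight=0.5):
    """Combine the NMT model score with the expected gain of the hypothesis
    against an evidence space of SMT translations and their posteriors."""
    expected_gain = sum(p * unigram_overlap(hyp, e) for e, p in evidence)
    return weight * nmt_logprob + (1.0 - weight) * expected_gain

# toy evidence space (in the paper this is a syntactic SMT translation lattice)
evidence = [("the cat sat on the mat", 0.6),
            ("a cat sat on the mat", 0.3),
            ("the cat is on the mat", 0.1)]
candidates = {"the cat sat on the mat": -1.2,
              "the cat sat on a hat": -1.0}     # made-up NMT log-probabilities
best = max(candidates, key=lambda h: combined_score(h, candidates[h], evidence))
print(best)
```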
Deep Active Learning for Dialogue Generation | We propose an online, end-to-end, neural generative conversational model for
open-domain dialogue. It is trained using a unique combination of offline
two-phase supervised learning and online human-in-the-loop active learning.
While most existing research proposes offline supervision or hand-crafted
reward functions for online reinforcement, we devise a novel interactive
learning mechanism based on hamming-diverse beam search for response generation
and one-character user-feedback at each step. Experiments show that our model
inherently promotes the generation of semantically relevant and interesting
responses, and can be used to train agents with customized personas, moods and
conversational styles.
| 2,017 | Computation and Language |
Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass.
| 2,017 | Computation and Language |
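One reading of the EntNet memory update, sketched below with NumPy: each memory slot has a key and a value, a gate compares the input to both, a candidate state is computed, and the slot is updated and re-normalised; all slots can be processed in parallel. Parameter sharing, the input encoder and the output module are omitted, and the exact equations should be checked against the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entnet_step(s, keys, mem, U, V, W):
    """One update of all memory slots for input encoding s.

    Each slot j holds a key (which entity) and a value (its current state).
    The gate opens when the input relates to the slot's key or content; every
    slot depends only on its own previous state, so updates can run in parallel.
    """
    for j in range(mem.shape[0]):
        gate = sigmoid(s @ mem[j] + s @ keys[j])
        candidate = np.tanh(U @ mem[j] + V @ keys[j] + W @ s)
        mem[j] = mem[j] + gate * candidate
        mem[j] = mem[j] / (np.linalg.norm(mem[j]) + 1e-12)   # forget by normalisation
    return mem

rng = np.random.default_rng(8)
d, slots = 32, 5
keys = rng.standard_normal((slots, d))
mem = keys.copy()                                    # memories initialised to their keys
U = V = W = 0.1 * rng.standard_normal((d, d))        # shared toy parameters
for _ in range(3):                                   # read three sentence encodings
    mem = entnet_step(rng.standard_normal(d), keys, mem, U, V, W)
print(mem.shape)
```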
ConceptNet 5.5: An Open Multilingual Graph of General Knowledge | Machine learning about language can be improved by supplying it with specific
knowledge and sources of external information. We present here a new version of
the linked open data resource ConceptNet that is particularly well suited to be
used with modern NLP techniques such as word embeddings.
ConceptNet is a knowledge graph that connects words and phrases of natural
language with labeled edges. Its knowledge is collected from many sources that
include expert-created resources, crowd-sourcing, and games with a purpose. It
is designed to represent the general knowledge involved in understanding
language, improving natural language applications by allowing the application
to better understand the meanings behind the words people use.
When ConceptNet is combined with word embeddings acquired from distributional
semantics (such as word2vec), it provides applications with understanding that
they would not acquire from distributional semantics alone, nor from narrower
resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art
results on intrinsic evaluations of word relatedness that translate into
improvements on applications of word vectors, including solving SAT-style
analogies.
| 2,017 | Computation and Language |
Evaluating Automatic Speech Recognition Systems in Comparison With Human
Perception Results Using Distinctive Feature Measures | This paper describes methods for evaluating automatic speech recognition
(ASR) systems in comparison with human perception results, using measures
derived from linguistic distinctive features. Error patterns in terms of
manner, place and voicing are presented, along with an examination of confusion
matrices via a distinctive-feature-distance metric. These evaluation methods
contrast with conventional performance criteria that focus on the phone or word
level, and are intended to provide a more detailed profile of ASR system
performance, as well as a means for direct comparison with human perception
results at the sub-phonemic level.
| 2,017 | Computation and Language |
Performance Improvements of Probabilistic Transcript-adapted ASR with
Recurrent Neural Network and Language-specific Constraints | Mismatched transcriptions have been proposed as a means to acquire
probabilistic transcriptions from non-native speakers of a language. Prior work
has demonstrated the value of these transcriptions by successfully adapting
cross-lingual ASR systems for different target languages. In this work, we
describe two techniques to refine these probabilistic transcriptions: a
noisy-channel model of non-native phone misperception is trained using a
recurrent neural network, and decoded using minimally-resourced
language-dependent pronunciation constraints. Both innovations improve the quality
of the transcript, and both reduce the phone error rate of a
trained ASR, by 7% and 9% respectively.
| 2,017 | Computation and Language |
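A noisy-channel decoder of the kind this abstract describes picks the native phone sequence p maximising P(heard | p) * P(p). The sketch below uses a substitution-only confusion matrix as the channel, a phone bigram as the language-specific prior, and Viterbi decoding; the recurrent-network channel model and pronunciation constraints of the paper are not implemented, and the phone inventory and probabilities are invented for illustration.

```python
import numpy as np

PHONES = ["p", "b", "t", "d", "k", "g"]
P = {ph: i for i, ph in enumerate(PHONES)}

rng = np.random.default_rng(9)
# substitution-only channel model: P(heard phone | native phone), rows sum to 1
confusion = np.full((6, 6), 0.04) + np.eye(6) * 0.76
# native-language phone bigram prior P(phone_t | phone_{t-1}), rows sum to 1
bigram = rng.dirichlet(np.ones(6), size=6)

def decode(heard):
    """Viterbi decode the most probable native phone sequence given a
    mismatched (non-native) transcription, under the noisy-channel model."""
    obs = [P[ph] for ph in heard]
    T, N = len(obs), len(PHONES)
    logp = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    logp[0] = np.log(1.0 / N) + np.log(confusion[:, obs[0]])
    for t in range(1, T):
        for j in range(N):
            scores = logp[t - 1] + np.log(bigram[:, j])
            back[t, j] = scores.argmax()
            logp[t, j] = scores.max() + np.log(confusion[j, obs[t]])
    path = [int(logp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [PHONES[i] for i in reversed(path)]

print(decode(["b", "t", "g", "d"]))
```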