Titles | Abstracts | Years | Categories
---|---|---|---|
End-to-End Task-Completion Neural Dialogue Systems | One of the major drawbacks of modularized task-completion dialogue systems is
that each module is trained individually, which presents several challenges.
For example, downstream modules are affected by earlier modules, and the
performance of the entire system is not robust to the accumulated errors. This
paper presents a novel end-to-end learning framework for task-completion
dialogue systems to tackle such issues. Our neural dialogue system can directly
interact with a structured database to assist users in accessing information
and accomplishing certain tasks. The reinforcement-learning-based dialogue
manager offers robust capabilities to handle noise caused by other components
of the dialogue system. Our experiments in a movie-ticket booking domain show
that our end-to-end system not only outperforms modularized dialogue system
baselines in both objective and subjective evaluation, but is also robust to
noise, as demonstrated by several systematic experiments with different error
granularities and rates specific to the language understanding module.
| 2018 | Computation and Language |
Exponential Moving Average Model in Parallel Speech Recognition Training | With the rapid growth of training data, large-scale parallel training on
multi-GPU clusters is now widely applied to neural network model learning. We
present a new approach that applies the exponential moving average method to
large-scale parallel training of neural network models. It is a
non-interference strategy: the exponential moving average model is not
broadcast to distributed workers to update their local models after model
synchronization during training, and it serves as the final model of the
training system. Fully-connected feed-forward neural networks (DNNs) and deep
unidirectional long short-term memory (LSTM) recurrent neural networks (RNNs)
are successfully trained with the proposed method for large-vocabulary
continuous speech recognition on Shenma voice search data in Mandarin. The
character error rate (CER) of Mandarin speech recognition is further reduced
compared with state-of-the-art parallel training approaches.
| 2017 | Computation and Language |
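The abstract above does not spell out the update rule; a minimal sketch of the standard exponential-moving-average update it presumably builds on is given below. The decay value 0.999 and the NumPy setup are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ema_update(ema_params, worker_params, decay=0.999):
    """Exponential moving average of model parameters.

    Per the abstract, the EMA model is kept on the side: it is *not*
    broadcast back to the workers after synchronization, and is used
    only as the final model of the training system.
    """
    return [decay * e + (1.0 - decay) * w
            for e, w in zip(ema_params, worker_params)]

# Toy usage: track one parameter tensor across synchronization steps.
ema = [np.zeros(3)]
for step in range(1, 4):
    synced = [np.full(3, float(step))]  # stand-in for the synchronized model
    ema = ema_update(ema, synced)
print(ema[0])
```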
Lexical Resources for Hindi Marathi MT | In this paper we describe several ways to utilize various lexical resources
to improve the quality of a statistical machine translation system. We have
augmented the training corpus with lexical resources such as the IndoWordnet
semantic relation set, function words, kridanta pairs, and verb phrases. Our
research on the usage of lexical resources focused mainly on two directions:
augmenting the parallel corpus with more vocabulary and augmenting it with
various word forms. We describe case studies, evaluations, and detailed error
analysis for both Marathi-to-Hindi and Hindi-to-Marathi machine translation
systems. From the evaluations we observed that the quality of machine
translation grows incrementally as the usage of various lexical resources
increases. Moreover, the use of various lexical resources helps to improve the
coverage and quality of machine translation where only a limited parallel
corpus is available.
| 2017 | Computation and Language |
Neural Machine Translation and Sequence-to-sequence Models: A Tutorial | This tutorial introduces a new and powerful set of techniques variously
called "neural machine translation" or "neural sequence-to-sequence models".
These techniques have been used in a number of tasks regarding the handling of
human language, and can be a powerful tool in the toolbox of anyone who wants
to model sequential data of some sort. The tutorial assumes that the reader
knows the basics of math and programming, but does not assume any particular
experience with neural networks or natural language processing. It attempts to
explain the intuition behind the various methods covered, then delves into them
with enough mathematical detail to understand them concretely, and culminates
with a suggestion for an implementation exercise, where readers can test that
they understood the content in practice.
| 2017 | Computation and Language |
Word forms - not just their lengths - are optimized for efficient
communication | The inverse relationship between the length of a word and the frequency of
its use, first identified by G.K. Zipf in 1935, is a classic empirical law that
holds across a wide range of human languages. We demonstrate that length is one
aspect of a much more general property of words: how distinctive they are with
respect to other words in a language. Distinctiveness plays a critical role in
recognizing words in fluent speech, in that it reflects the strength of
potential competitors when selecting the best candidate for an ambiguous
signal. Phonological information content, a measure of a word's string
probability under a statistical model of a language's sound or character
sequences, concisely captures distinctiveness. Examining large-scale corpora
from 13 languages, we find that distinctiveness significantly outperforms word
length as a predictor of frequency. This finding provides evidence that
listeners' processing constraints shape fine-grained aspects of word forms
across languages.
| 2017 | Computation and Language |
Sound-Word2Vec: Learning Word Representations Grounded in Sounds | To be able to interact better with humans, it is crucial for machines to
understand sound - a primary modality of human perception. Previous works have
used sound to learn embeddings for improved generic textual similarity
assessment. In this work, we treat sound as a first-class citizen, studying
downstream textual tasks which require aural grounding. To this end, we propose
sound-word2vec - a new embedding scheme that learns specialized word embeddings
grounded in sounds. For example, we learn that two seemingly (semantically)
unrelated concepts, like leaves and paper, are similar due to the similar
rustling sounds they make. Our embeddings prove useful in textual tasks
requiring aural reasoning like text-based sound retrieval and discovering foley
sound effects (used in movies). Moreover, our embedding space captures
interesting dependencies between words and onomatopoeia and outperforms prior
work on aurally-relevant word relatedness datasets such as AMEN and ASLex.
| 2017 | Computation and Language |
A Novel Comprehensive Approach for Estimating Concept Semantic
Similarity in WordNet | Computation of semantic similarity between concepts is an important
foundation for many research works. This paper focuses on IC computing methods
and IC measures, which estimate the semantic similarities between concepts by
exploiting the topological parameters of the taxonomy. Based on analyzing
representative IC computing methods and typical semantic similarity measures,
we propose a new hybrid IC computing method. By adopting the parameters
dhyp and lch, we utilize the new IC computing method and propose a novel
comprehensive measure of semantic similarity between concepts. An experiment
based on the WordNet "is a" taxonomy has been designed to test representative
measures and our measure on the benchmark dataset R&G, and the results show
that our measure noticeably improves similarity accuracy. We evaluate the
proposed approach by comparing the correlation coefficients between five
measures and the artificial data. The results show that our proposal
outperforms the previous measures.
| 2017 | Computation and Language |
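For context on IC-based measures like the one proposed above: a standard formulation that such hybrid measures refine is Lin's similarity over the taxonomy, sketched here in its textbook form (the paper's own measure, built on the dhyp and lch parameters, is not reproduced here):

```latex
IC(c) = -\log p(c), \qquad
\mathrm{sim}_{\mathrm{Lin}}(c_1, c_2) =
  \frac{2 \, IC\!\big(\mathrm{lcs}(c_1, c_2)\big)}{IC(c_1) + IC(c_2)}
```

where p(c) is the probability of encountering concept c in a corpus and lcs(c1, c2) is the least common subsumer of the two concepts in the "is a" taxonomy.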
Performing Stance Detection on Twitter Data using Computational
Linguistics Techniques | As humans, we can often detect from a person's utterances whether he or she
is in favor of or against a given target entity (topic, product, another
person, etc.). But from the perspective of a computer, we need means to
automatically deduce the stance of the tweeter, given just the tweet text. In
this paper, we present our results of performing stance detection on Twitter
data using a supervised approach. We begin by extracting bag-of-words features
to perform classification using TIMBL, then try to optimize the features to
improve stance detection accuracy, followed by extending the dataset with two
sets of lexicons - arguing and MPQA subjectivity. Next, we explore the MALT
parser and construct features using its dependency triples; finally, we perform
analysis using the scikit-learn Random Forest implementation.
| 2017 | Computation and Language |
Random vector generation of a semantic space | We show how random vectors and random projection can be implemented in the
usual vector space model to construct a Euclidean semantic space from a French
synonym dictionary. We evaluate theoretically the resulting noise and show the
experimental distribution of the similarities of terms in a neighborhood
according to the choice of parameters. We also show that the Schmidt
orthogonalization process is applicable and can be used to separate homonyms
with distinct semantic meanings. Neighboring terms are easily arranged into
semantically significant clusters which are well suited to the generation of
realistic lists of synonyms and to such applications as word selection for
automatic text generation. This process, applicable to any language, can easily
be extended to collocations, is extremely fast, and can be updated in real
time whenever new synonyms are proposed.
| 2017 | Computation and Language |
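A minimal sketch of the construction described above, assuming a toy synonym dictionary; the dimensionality, the Gaussian index vectors, and the French entries are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # reduced dimensionality of the semantic space (illustrative)

# Toy stand-in for a French synonym dictionary.
synonyms = {
    "maison": ["demeure", "domicile"],
    "demeure": ["maison"],
    "domicile": ["maison"],
    "voiture": ["automobile"],
    "automobile": ["voiture"],
}

# Each term gets a fixed random index vector (the random projection).
index = {t: rng.standard_normal(dim) for t in synonyms}

# A term's semantic vector: its own index vector plus its synonyms'.
semantic = {t: index[t] + sum(index[s] for s in syns)
            for t, syns in synonyms.items()}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(semantic["maison"], semantic["demeure"]))  # high: shared synonyms
print(cosine(semantic["maison"], semantic["voiture"]))  # near zero: unrelated
```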
English Conversational Telephone Speech Recognition by Humans and
Machines | One of the most difficult speech recognition tasks is accurate recognition of
human to human communication. Advances in deep learning over the last few years
have produced major speech recognition improvements on the representative
Switchboard conversational corpus. Word error rates that just a few years ago
were 14% have dropped to 8.0%, then 6.6% and most recently 5.8%, and are now
believed to be within striking range of human performance. This then raises two
issues - what IS human performance, and how far down can we still drive speech
recognition error rates? A recent paper by Microsoft suggests that we have
already achieved human performance. In trying to verify this statement, we
performed an independent set of human performance measurements on two
conversational tasks and found that human performance may be considerably
better than what was earlier reported, giving the community a significantly
harder goal to achieve. We also report on our own efforts in this area,
presenting a set of acoustic and language modeling techniques that lowered the
word error rate of our own English conversational telephone LVCSR system to the
level of 5.5%/10.3% on the Switchboard/CallHome subsets of the Hub5 2000
evaluation, which - at least at the writing of this paper - is a new
performance milestone (albeit not at what we measure to be human performance!).
On the acoustic side, we use a score fusion of three models: one LSTM with
multiple feature inputs, a second LSTM trained with speaker-adversarial
multi-task learning and a third residual net (ResNet) with 25 convolutional
layers and time-dilated convolutions. On the language modeling side, we use
word and character LSTMs and convolutional WaveNet-style language models.
| 2017 | Computation and Language |
Building a Syllable Database to Solve the Problem of Khmer Word
Segmentation | Word segmentation is a basic problem in natural language processing. For
languages with a complex writing system, such as the Khmer language spoken in
southern Vietnam, this problem is truly intractable and poses significant
challenges. Although experts in Vietnam and internationally have researched
this problem in depth, no results yet meet the demand; in particular, the
ambiguity phenomenon in Khmer language processing has not been treated
thoroughly so far. This paper presents a solution based on dividing syllables
into component clusters using two proposed syllable models, thereby building a
Khmer syllable database, which is not yet actually available. The method uses a
lexical database updated from online Khmer dictionaries, together with some
supporting dictionaries serving as training data and complementary linguistic
characteristics. Each component cluster is labelled and located by its first
and last letters in order to identify a syllable in its entirety. This approach
is workable: the test results achieve high accuracy, eliminate ambiguity, and
contribute to solving the word segmentation problem and to efficient Khmer
language processing.
| 2017 | Computation and Language |
Leveraging Large Amounts of Weakly Supervised Data for Multi-Language
Sentiment Classification | This paper presents a novel approach for multi-lingual sentiment
classification in short texts. This is a challenging task as the amount of
training data in languages other than English is very limited. Previously
proposed multi-lingual approaches typically require establishing a
correspondence to English, for which powerful classifiers are already
available.
In contrast, our method does not require such supervision. We leverage large
amounts of weakly-supervised data in various languages to train a multi-layer
convolutional network and demonstrate the importance of pre-training such
networks. We thoroughly evaluate our approach on various multi-lingual
datasets, including the recent SemEval-2016 sentiment prediction benchmark
(Task 4), where we achieved state-of-the-art performance. We also compare the
performance of our model trained individually for each language to a variant
trained for all languages at once. We show that the latter model reaches
slightly worse - but still acceptable - performance when compared to the single
language model, while benefiting from better generalization properties across
languages.
| 2017 | Computation and Language |
Unsupervised Learning of Sentence Embeddings using Compositional n-Gram
Features | The recent tremendous success of unsupervised word embeddings in a multitude
of applications raises the obvious question of whether similar methods could
be derived to improve embeddings (i.e., semantic representations) of word
sequences as
well. We present a simple but efficient unsupervised objective to train
distributed representations of sentences. Our method outperforms the
state-of-the-art unsupervised models on most benchmark tasks, highlighting the
robustness of the produced general-purpose sentence embeddings.
| 2018 | Computation and Language |
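The abstract above keeps the objective abstract; in the published version of this method the sentence representation is, roughly, an average of learned word and n-gram ("compositional n-gram") vectors:

```latex
v_S = \frac{1}{|R(S)|} \sum_{g \in R(S)} v_g
```

where R(S) is the list of word and n-gram features of sentence S, trained with a word2vec-style objective. This summary is a hedged reading, not a substitute for the paper's exact formulation.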
Learning opacity in Stratal Maximum Entropy Grammar | Opaque phonological patterns are sometimes claimed to be difficult to learn;
specific hypotheses have been advanced about the relative difficulty of
particular kinds of opaque processes (Kiparsky 1971, 1973), and the kind of
data that will be helpful in learning an opaque pattern (Kiparsky 2000). In
this paper, we present a computationally implemented learning theory for one
grammatical theory of opacity: a Maximum Entropy version of Stratal OT
(Bermúdez-Otero 1999, Kiparsky 2000), and test it on simplified versions of
opaque French tense-lax vowel alternations and the opaque interaction of
diphthong raising and flapping in Canadian English. We find that the difficulty
of opacity can be influenced by evidence for stratal affiliation: the Canadian
English case is easier if the learner encounters application of raising outside
the flapping context, or non-application of raising between words (i.e., <life>
with a raised vowel; <lie for> with a non-raised vowel).
| 2017 | Computation and Language |
Linguistic Knowledge as Memory for Recurrent Neural Networks | Training recurrent neural networks to model long-term dependencies is
difficult. Hence, we propose to use external linguistic knowledge as an
explicit signal to inform the model which memories it should utilize.
Specifically, external knowledge is used to augment a sequence with typed edges
between arbitrarily distant elements, and the resulting graph is decomposed
into directed acyclic subgraphs. We introduce a model that encodes such graphs
as explicit memory in recurrent neural networks, and use it to model
coreference relations in text. We apply our model to several text comprehension
tasks and achieve new state-of-the-art results on all considered benchmarks,
including CNN, bAbI, and LAMBADA. On the bAbI QA tasks, our model solves 15 out
of the 20 tasks with only 1000 training examples per task. Analysis of the
learned representations further demonstrates the ability of our model to encode
fine-grained entity information across a document.
| 2017 | Computation and Language |
A World of Difference: Divergent Word Interpretations among People | Divergent word usages reflect differences among people. In this paper, we
present a novel angle for studying word usage divergence -- word
interpretations. We propose an approach that quantifies semantic differences in
interpretations among different groups of people. The effectiveness of our
approach is validated by quantitative evaluations. Experiment results indicate
that divergences in word interpretations exist. We further apply the approach
to two well-studied types of differences between people -- gender and region.
The detected words with divergent interpretations reveal the unique features of
specific groups of people. For gender, we discover that certain differences in
interests, social attitudes, and character between males and females are
reflected in their divergent interpretations of many words. For region, we find
that specific interpretations of certain words reveal the geographical and
cultural features of different regions.
| 2017 | Computation and Language |
Spice up Your Chat: The Intentions and Sentiment Effects of Using Emoji | Emojis, as a new way of conveying nonverbal cues, are widely adopted in
computer-mediated communications. In this paper, first from a message sender
perspective, we focus on people's motives in using four types of emojis --
positive, neutral, negative, and non-facial. We compare the willingness levels
of using these emoji types for seven typical intentions for which people
usually employ nonverbal cues in communication. The results of extensive
statistical
hypothesis tests not only report the popularities of the intentions, but also
uncover the subtle differences between emoji types in terms of intended uses.
Second, from a perspective of message recipients, we further study the
sentiment effects of emojis, as well as their duplications, on verbal messages.
Different from previous studies in emoji sentiment, we study the sentiments of
emojis and their contexts as a whole. The experimental results indicate that
the power to convey sentiment differs among the four emoji types, and that the
sentiment effects of emojis vary across contexts of different valences.
| 2017 | Computation and Language |
Deep Learning applied to NLP | Convolutional Neural Networks (CNNs) are typically associated with Computer
Vision. CNNs are responsible for major breakthroughs in Image Classification
and are the core of most Computer Vision systems today. More recently, CNNs
have been applied to problems in Natural Language Processing, with some
interesting results. In this paper, we will try to explain the basics of CNNs,
their different variations, and how they have been applied to NLP.
| 2017 | Computation and Language |
Information Extraction in Illicit Domains | Extracting useful entities and attribute values from illicit domains such as
human trafficking is a challenging problem with the potential for widespread
social impact. Such domains employ atypical language models, have `long tails'
and suffer from the problem of concept drift. In this paper, we propose a
lightweight, feature-agnostic Information Extraction (IE) paradigm specifically
designed for such domains. Our approach uses raw, unlabeled text from an
initial corpus, and a few (12-120) seed annotations per domain-specific
attribute, to learn robust IE models for unobserved pages and websites.
Empirically, we demonstrate that our approach can outperform feature-centric
Conditional Random Field baselines by over 18% F-Measure on five annotated
sets of real-world human trafficking datasets in both low-supervision and
high-supervision settings. We also show that our approach is demonstrably
robust to concept drift, and can be efficiently bootstrapped even in a serial
computing environment.
| 2017 | Computation and Language |
A Structured Self-attentive Sentence Embedding | This paper proposes a new model for extracting an interpretable sentence
embedding by introducing self-attention. Instead of using a vector, we use a
2-D matrix to represent the embedding, with each row of the matrix attending on
a different part of the sentence. We also propose a self-attention mechanism
and a special regularization term for the model. As a side effect, the
embedding comes with an easy way of visualizing what specific parts of the
sentence are encoded into the embedding. We evaluate our model on 3 different
tasks: author profiling, sentiment classification, and textual entailment.
Results show that our model yields a significant performance gain compared to
other sentence embedding methods in all of the 3 tasks.
| 2017 | Computation and Language |
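For reference, the attention matrix and regularizer sketched in the abstract above are commonly written as follows (notation follows the paper's published form; minor details may differ):

```latex
A = \mathrm{softmax}\!\big(W_{s2} \tanh(W_{s1} H^{\top})\big), \qquad
M = A H, \qquad
P = \big\|A A^{\top} - I\big\|_F^2
```

where H is the n x 2u matrix of BiLSTM hidden states, A is the r x n attention matrix whose r rows attend to different parts of the sentence, M = AH is the 2-D sentence embedding, and the penalty P encourages the rows of A to focus on distinct parts of the sentence.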
Detecting Sockpuppets in Deceptive Opinion Spam | This paper explores the problem of sockpuppet detection in deceptive opinion
spam using authorship attribution and verification approaches. Two methods are
explored. The first is a feature subsampling scheme that uses the KL-Divergence
on stylistic language models of an author to find discriminative features. The
second is a transduction scheme, spy induction, which leverages the diversity of
authors in the unlabeled test set by sending a set of spies (positive samples)
from the training set to retrieve hidden samples in the unlabeled test set
using nearest and farthest neighbors. Experiments using ground truth sockpuppet
data show the effectiveness of the proposed schemes.
| 2017 | Computation and Language |
Turkish PoS Tagging by Reducing Sparsity with Morpheme Tags in Small
Datasets | Sparsity is one of the major problems in natural language processing. The
problem becomes even more severe in agglutinative languages, which are highly
prone to inflection. We deal with sparsity in Turkish by adopting
morphological features for part-of-speech tagging. We learn inflectional and
derivational morpheme tags in Turkish by using conditional random fields (CRF)
and we employ the morpheme tags in part-of-speech (PoS) tagging by using hidden
Markov models (HMMs) to mitigate sparsity. Results show that using morpheme
tags in PoS tagging helps alleviate the sparsity in emission probabilities. Our
model outperforms other hidden Markov model based PoS tagging models for small
training datasets in Turkish. We obtain an accuracy of 94.1% in morpheme
tagging and 89.2% in PoS tagging on a 5K training dataset.
| 2017 | Computation and Language |
The cognitive roots of regularization in language | Regularization occurs when the output a learner produces is less variable
than the linguistic data they observed. In an artificial language learning
experiment, we show that there exist at least two independent sources of
regularization bias in cognition: a domain-general source based on cognitive
load and a domain-specific source triggered by linguistic stimuli. Both of
these factors modulate how frequency information is encoded and produced, but
only the production-side modulations result in regularization (i.e. cause
learners to eliminate variation from the observed input). We formalize the
definition of regularization as the reduction of entropy and find that entropy
measures are better at identifying regularization behavior than frequency-based
analyses. Using our experimental data and a model of cultural transmission, we
generate predictions for the amount of regularity that would develop in each
experimental condition if the artificial language were transmitted over several
generations of learners. Here we find that the effect of cognitive constraints
can become more complex when put into the context of cultural evolution:
although learning biases certainly carry information about the course of
language evolution, we should not expect a one-to-one correspondence between
the micro-level processes that regularize linguistic datasets and the
macro-level evolution of linguistic regularity.
| 2018 | Computation and Language |
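The entropy formalization mentioned in the abstract above is the standard Shannon one; a learner regularizes when the entropy of its productions falls below that of its input:

```latex
H(X) = -\sum_i p_i \log_2 p_i, \qquad
\text{regularization} \iff H_{\text{output}} < H_{\text{input}}
```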
A Study of Metrics of Distance and Correlation Between Ranked Lists for
Compositionality Detection | Compositionality in language refers to how much the meaning of some phrase
can be decomposed into the meaning of its constituents and the way these
constituents are combined. Based on the premise that substitution by synonyms
is meaning-preserving, compositionality can be approximated as the semantic
similarity between a phrase and a version of that phrase where words have been
replaced by their synonyms. Different ways of representing such phrases exist
(e.g., vectors [1] or language models [2]), and the choice of representation
affects the measurement of semantic similarity.
We propose a new compositionality detection method that represents phrases as
ranked lists of term weights. Our method approximates the semantic similarity
between two ranked list representations using a range of well-known distance
and correlation metrics. In contrast to most state-of-the-art approaches in
compositionality detection, our method is completely unsupervised. Experiments
with a publicly available dataset of 1048 human-annotated phrases show that,
compared to strong supervised baselines, our approach provides superior
measurement of compositionality using any of the distance and correlation
metrics considered.
| 2017 | Computation and Language |
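A minimal sketch of scoring compositionality as rank correlation between two term-weight representations, as described in the abstract above; the weights and vocabulary alignment are fabricated for illustration:

```python
from scipy.stats import spearmanr, kendalltau

# Toy term weights for a phrase and its synonym-substituted variant,
# aligned over a shared vocabulary (values fabricated for illustration).
phrase  = [0.91, 0.40, 0.25, 0.10, 0.05]
variant = [0.88, 0.35, 0.30, 0.12, 0.02]

rho, _ = spearmanr(phrase, variant)
tau, _ = kendalltau(phrase, variant)

# High rank correlation -> meaning is preserved under synonym substitution
# -> the phrase is judged compositional; low correlation would suggest a
# non-compositional phrase (e.g., an idiom).
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```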
Comparison of SMT and RBMT; The Requirement of Hybridization for
Marathi-Hindi MT | In this paper we present our work comparing Statistical Machine Translation
(SMT) and rule-based machine translation for translation from Marathi to
Hindi. Rule-based systems, although robust, take a long time to build. On the
other hand, statistical machine translation systems are easier to create,
maintain, and improve upon. We describe the development of a basic
Marathi-Hindi SMT system and evaluate its performance. Through a detailed error
analysis, we point out the relative strengths and weaknesses of both systems.
Effectively, we shall see that even with a small training corpus a statistical
machine translation system has many advantages for high-quality,
domain-specific machine translation over a rule-based counterpart.
| 2017 | Computation and Language |
Applying the Wizard-of-Oz Technique to Multimodal Human-Robot Dialogue | Our overall program objective is to provide more natural ways for soldiers to
interact and communicate with robots, much like how soldiers communicate with
other soldiers today. We describe how the Wizard-of-Oz (WOz) method can be
applied to multimodal human-robot dialogue in a collaborative exploration task.
While the WOz method can help design robot behaviors, traditional approaches
place the burden of decisions on a single wizard. In this work, we consider two
wizards to stand in for robot navigation and dialogue management software
components. The scenario used to elicit data is one in which a human-robot team
is tasked with exploring an unknown environment: a human gives verbal
instructions from a remote location and the robot follows them, clarifying
possible misunderstandings as needed via dialogue. We found the division of
labor between wizards to be workable, which holds promise for future software
development.
| 2017 | Computation and Language |
Coping with Construals in Broad-Coverage Semantic Annotation of
Adpositions | We consider the semantics of prepositions, revisiting a broad-coverage
annotation scheme used for annotating all 4,250 preposition tokens in a 55,000
word corpus of English. Attempts to apply the scheme to adpositions and case
markers in other languages, as well as some problematic cases in English, have
led us to reconsider the assumption that a preposition's lexical contribution
is equivalent to the role/relation that it mediates. Our proposal is to embrace
the potential for construal in adposition use, expressing such phenomena
directly at the token level to manage complexity and avoid sense proliferation.
We suggest a framework to represent both the scene role and the adposition's
lexical function so they can be annotated at scale -- supporting automatic,
statistical processing of domain-general language -- and sketch how this
representation would inform a constructional analysis.
| 2017 | Computation and Language |
Effects of Limiting Memory Capacity on the Behaviour of Exemplar
Dynamics | Exemplar models are a popular class of models used to describe language
change. Here we study how limiting the memory capacity of an individual in
these models affects the system's behaviour. In particular we demonstrate the
effect this change has on the extinction of categories. Previous work in
exemplar dynamics has not addressed this question. In order to investigate
this, we will inspect a simplified exemplar model. We will prove for the
simplified model that all the sound categories but one will always become
extinct, whether memory storage is limited or not. However, computer
simulations show that changing the number of stored memories alters how fast
categories become extinct.
| 2017 | Computation and Language |
Massive Exploration of Neural Machine Translation Architectures | Neural Machine Translation (NMT) has shown remarkable progress over the past
few years with production systems now being deployed to end-users. One major
drawback of current architectures is that they are expensive to train,
typically requiring days to weeks of GPU time to converge. This makes
exhaustive hyperparameter search, as is commonly done with other neural network
architectures, prohibitively expensive. In this work, we present the first
large-scale analysis of NMT architecture hyperparameters. We report empirical
results and variance numbers for several hundred experimental runs,
corresponding to over 250,000 GPU hours on the standard WMT English to German
translation task. Our experiments lead to novel insights and practical advice
for building and extending NMT architectures. As part of this contribution, we
release an open-source NMT framework that enables researchers to easily
experiment with novel techniques and reproduce state-of-the-art results.
| 2017 | Computation and Language |
Ask Me Even More: Dynamic Memory Tensor Networks (Extended Model) | We examine Memory Networks for the task of question answering (QA) under the
common real-world scenario where training examples are scarce, and under the
weakly supervised scenario where only extrinsic labels are available for
training. We propose extensions to the Dynamic Memory Network (DMN),
specifically within the attention mechanism; we call the resulting neural
architecture the Dynamic Memory Tensor Network (DMTN). Ultimately, we see that
our proposed extensions result in over 80% improvement in the number of tasks
passed against the baseline standard DMN, and 20% more tasks passed compared to
the state-of-the-art End-to-End Memory Network, on Facebook's single-task,
weakly trained 1K bAbI dataset.
| 2017 | Computation and Language |
Language Use Matters: Analysis of the Linguistic Structure of Question
Texts Can Characterize Answerability in Quora | Quora is one of the most popular community Q&A sites of recent times.
However, many question posts on this Q&A site often do not get answered. In
this paper, we quantify various linguistic activities that discriminate an
answered question from an unanswered one. Our central finding is that the way
users use language while writing the question text can be a very effective
means to characterize answerability. This characterization helps us to predict
early whether a question that has remained unanswered for a specific time
period t will eventually be answered, achieving an accuracy of 76.26% (t = 1
month) and 68.33% (t = 3 months). Notably, features representing the language use
patterns of the users are most discriminative and alone account for an accuracy
of 74.18%. We also compare our method with some of the similar works (Dror et
al., Yang et al.) achieving a maximum improvement of ~39% in terms of accuracy.
| 2017 | Computation and Language |
Automated Hate Speech Detection and the Problem of Offensive Language | A key challenge for automatic hate-speech detection on social media is the
separation of hate speech from other instances of offensive language. Lexical
detection methods tend to have low precision because they classify all messages
containing particular terms as hate speech and previous work using supervised
learning has failed to distinguish between the two categories. We use a
crowd-sourced hate speech lexicon to collect tweets containing hate speech
keywords, and we use crowd-sourcing to label a sample of these tweets into
three categories: those containing hate speech, those with only offensive
language, and those
with neither. We train a multi-class classifier to distinguish between these
different categories. Close analysis of the predictions and the errors shows
when we can reliably separate hate speech from other offensive language and
when this differentiation is more difficult. We find that racist and homophobic
tweets are more likely to be classified as hate speech but that sexist tweets
are generally classified as offensive. Tweets without explicit hate keywords
are also more difficult to classify.
| 2017 | Computation and Language |
Why we have switched from building full-fledged taxonomies to simply
detecting hypernymy relations | The study of taxonomies and hypernymy relations has been extensive in the
Natural Language Processing (NLP) literature. However, the evaluation of
taxonomy learning approaches has traditionally been troublesome, as it mainly
relies on ad-hoc experiments which are hardly reproducible and costly in manual
effort. Partly because of this, current research has lately been focusing on
the hypernymy detection task. In this paper we reflect on this trend, analyzing
issues related to current evaluation procedures. Finally, we propose three
potential avenues for future work so that is-a relations and resources based on
them play a more important role in downstream NLP applications.
| 2017 | Computation and Language |
MetaPAD: Meta Pattern Discovery from Massive Text Corpora | Mining textual patterns in news, tweets, papers, and many other kinds of text
corpora has been an active theme in text mining and NLP research. Previous
studies adopt a dependency parsing-based pattern discovery approach. However,
the parsing results lose rich context around entities in the patterns, and the
process is costly for a corpus of large scale. In this study, we propose a
novel typed textual pattern structure, called the meta pattern, which is
extended to a frequent, informative, and precise subsequence pattern in a
certain context.
We propose an efficient framework, called MetaPAD, which discovers meta
patterns from massive corpora with three techniques: (1) it develops a
context-aware segmentation method to carefully determine the boundaries of
patterns with a learnt pattern quality assessment function, which avoids costly
dependency parsing and generates high-quality patterns; (2) it identifies and
groups synonymous meta patterns from multiple facets---their types, contexts,
and extractions; and (3) it examines type distributions of entities in the
instances extracted by each group of patterns, and looks for appropriate type
levels to make discovered patterns precise. Experiments demonstrate that our
proposed framework discovers high-quality typed textual patterns efficiently
from different genres of massive corpora and facilitates information
extraction.
| 2017 | Computation and Language |
Story Cloze Ending Selection Baselines and Data Examination | This paper describes two supervised baseline systems for the Story Cloze Test
Shared Task (Mostafazadeh et al., 2016a). We first build a classifier using
features based on word embeddings and semantic similarity computation. We
further implement a neural LSTM system with different encoding strategies that
try to model the relation between the story and the provided endings. Our
experiments show that a model using representation features based on average
word embedding vectors over the given story words and the candidate ending
sentence words, combined with similarity features between the story and
candidate ending representations, performed better than the neural models. Our
best model achieves an accuracy of 72.42, ranking 3rd in the official
evaluation.
| 2017 | Computation and Language |
Nematus: a Toolkit for Neural Machine Translation | We present Nematus, a toolkit for Neural Machine Translation. The toolkit
prioritizes high translation accuracy, usability, and extensibility. Nematus
has been used to build top-performing submissions to shared translation tasks
at WMT and IWSLT, and has been used to train systems for production
environments.
| 2017 | Computation and Language |
El Lenguaje Natural como Lenguaje Formal | Formal language theory is useful for the study of natural language. In
particular, it is of interest to study the adequacy of grammatical formalisms
for expressing the syntactic phenomena present in natural language. First, it
helps to draw hypotheses about the nature and complexity of the
speaker-hearer's linguistic competence, a fundamental question in linguistics
and other cognitive sciences. Moreover, from an engineering point of view, it
reveals the practical limitations of applications based on those formalisms.
In this article I introduce the problem of the adequacy of grammatical
formalisms for natural language, also introducing some concepts of formal
language theory required for this discussion. Then, I review the formalisms
that have been proposed throughout history, and the arguments that have been
given to support or reject their adequacy.
| 2017 | Computation and Language |
DRAGNN: A Transition-based Framework for Dynamically Connected Neural
Networks | In this work, we present a compact, modular framework for constructing novel
recurrent neural architectures. Our basic module is a new generic unit, the
Transition Based Recurrent Unit (TBRU). In addition to hidden layer
activations, TBRUs have discrete state dynamics that allow network connections
to be built dynamically as a function of intermediate activations. By
connecting multiple TBRUs, we can extend and combine commonly used
architectures such as sequence-to-sequence, attention mechanisms, and
recursive tree-structured models. A TBRU can also serve as both an encoder for
downstream tasks and as a decoder for its own task simultaneously, resulting in
more accurate multi-task learning. We call our approach Dynamic Recurrent
Acyclic Graphical Neural Networks, or DRAGNN. We show that DRAGNN is
significantly more accurate and efficient than seq2seq with attention for
syntactic dependency parsing and yields more accurate multi-task learning for
extractive summarization tasks.
| 2017 | Computation and Language |
Geometrical morphology | We explore inflectional morphology as an example of the relationship of the
discrete and the continuous in linguistics. The grammar requests a form of a
lexeme by specifying a set of feature values, which corresponds to a corner M
of a hypercube in feature value space. The morphology responds to that request
by providing a morpheme, or a set of morphemes, whose vector sum is
geometrically closest to the corner M. In short, the chosen morpheme $\mu$ is
the morpheme (or set of morphemes) that maximizes the inner product of $\mu$
and M.
| 2017 | Computation and Language |
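A minimal sketch of the selection rule stated in the abstract above, over a fabricated feature space; the feature inventory and morpheme vectors are illustrative only:

```python
import numpy as np
from itertools import chain, combinations

# Feature-value space: [singular, plural, past, nonpast]. The grammar's
# request corresponds to a corner M of the hypercube (here: plural past).
M = np.array([0.0, 1.0, 1.0, 0.0])

# Fabricated morpheme vectors in the same space.
morphemes = {
    "-s":  np.array([0.0, 1.0, 0.0, 0.0]),  # plural
    "-ed": np.array([0.0, 0.0, 1.0, 0.0]),  # past
    "-0":  np.array([1.0, 0.0, 0.0, 1.0]),  # singular nonpast
}

# The morphology answers with the morpheme set whose vector sum
# maximizes the inner product with the requested corner M.
candidates = chain.from_iterable(
    combinations(morphemes, r) for r in range(1, len(morphemes) + 1))
best = max(candidates, key=lambda c: sum(morphemes[m] for m in c) @ M)
print(best)  # ('-s', '-ed'): realize plural and past together
```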
Reinforcement Learning for Transition-Based Mention Detection | This paper describes an application of reinforcement learning to the mention
detection task. We define a novel action-based formulation for the mention
detection task, in which a model can flexibly revise past labeling decisions by
grouping together tokens and assigning partial mention labels. We devise a
method to create mention-level episodes and we train a model by rewarding
correctly labeled complete mentions, irrespective of the inner structure
created. The model yields results which are on par with a competitive
supervised counterpart while being more flexible in terms of achieving targeted
behavior through reward modeling and generating internal mention structure,
especially on longer mentions.
| 2017 | Computation and Language |
Exploring Question Understanding and Adaptation in Neural-Network-Based
Question Answering | The last several years have seen intensive interest in exploring
neural-network-based models for machine comprehension (MC) and question
answering (QA). In this paper, we approach the problems by closely modelling
questions in a neural network framework. We first introduce syntactic
information to help encode questions. We then view and model different types of
questions and the information shared among them as an adaptation task and
propose adaptation models for them. On the Stanford Question Answering Dataset
(SQuAD), we show that these approaches can help attain better results over a
competitive baseline.
| 2017 | Computation and Language |
Joint Learning of Correlated Sequence Labelling Tasks Using
Bidirectional Recurrent Neural Networks | The stream of words produced by Automatic Speech Recognition (ASR) systems is
typically devoid of punctuation and formatting. Most natural language
processing applications expect segmented and well-formatted texts as input,
which is not available in ASR output. This paper proposes a novel technique of
jointly modeling multiple correlated tasks such as punctuation and
capitalization using bidirectional recurrent neural networks, which leads to
improved performance for each of these tasks. This method could be extended for
joint modeling of any other correlated sequence labeling tasks.
| 2017 | Computation and Language |
A computational investigation of sources of variability in sentence
comprehension difficulty in aphasia | We present a computational evaluation of three hypotheses about sources of
deficit in sentence comprehension in aphasia: slowed processing, intermittent
deficiency, and resource reduction. The ACT-R based Lewis and Vasishth (2005)
model is used to implement these three proposals. Slowed processing is
implemented as slowed default production-rule firing time; intermittent
deficiency as increased random noise in activation of chunks in memory; and
resource reduction as reduced goal activation. As data, we considered subject
vs. object relatives whose matrix clause contained either an NP or a
reflexive, presented in a self-paced listening modality to 56 individuals with
aphasia (IWA) and 46 matched controls. The participants heard the sentences and
carried out a picture verification task to decide on an interpretation of the
sentence. These response accuracies are used to identify the best parameters
(for each participant) that correspond to the three hypotheses mentioned above.
We show that controls have more tightly clustered (less variable) parameter
values than IWA; specifically, compared to controls, among IWA there are more
individuals with low goal activations, high noise, and slow default action
times. This suggests that (i) individual patients show differential amounts of
deficit along the three dimensions of slowed processing, intermittent
deficiency, and resource reduction, (ii) overall, there is evidence for all
three sources of deficit playing a role, and (iii) IWA have a more variable
range of parameter values than controls. In sum, this study contributes a proof
of concept of a quantitative implementation of, and evidence for, these three
accounts of comprehension deficits in aphasia.
| 2017 | Computation and Language |
Extending Automatic Discourse Segmentation for Texts in Spanish to
Catalan | At present, automatic discourse analysis is a relevant research topic in the
field of NLP. However, discourse is one of the phenomena most difficult to
process. Although discourse parsers have been already developed for several
languages, this tool does not exist for Catalan. In order to implement this
kind of parser, the first step is to develop a discourse segmenter. In this
article we present the first discourse segmenter for texts in Catalan. This
segmenter is based on Rhetorical Structure Theory (RST) for Spanish, and uses
lexical and syntactic information to translate rules valid for Spanish into
rules for Catalan. We have evaluated the system by using a gold standard corpus
including manually segmented texts and results are promising.
| 2016 | Computation and Language |
Making Neural QA as Simple as Possible but not Simpler | Recent development of large-scale question answering (QA) datasets triggered
a substantial amount of research into end-to-end neural architectures for QA.
Increasingly complex systems have been conceived without comparison to simpler
neural baseline systems that would justify their complexity. In this work, we
propose a simple heuristic that guides the development of neural baseline
systems for the extractive QA task. We find that there are two ingredients
necessary for building a high-performing neural QA system: first, the awareness
of question words while processing the context and second, a composition
function that goes beyond simple bag-of-words modeling, such as recurrent
neural networks. Our results show that FastQA, a system that meets these two
requirements, can achieve very competitive performance compared with existing
models. We argue that this surprising finding puts results of previous systems
and the complexity of recent QA datasets into perspective.
| 2017 | Computation and Language |
Encoding Sentences with Graph Convolutional Networks for Semantic Role
Labeling | Semantic role labeling (SRL) is the task of identifying the
predicate-argument structure of a sentence. It is typically regarded as an
important step in the standard NLP pipeline. As the semantic representations
are closely related to syntactic ones, we exploit syntactic information in our
model. We propose a version of graph convolutional networks (GCNs), a recent
class of neural networks operating on graphs, suited to model syntactic
dependency graphs. GCNs over syntactic dependency trees are used as sentence
encoders, producing latent feature representations of words in a sentence. We
observe that GCN layers are complementary to LSTM ones: when we stack both GCN
and LSTM layers, we obtain a substantial improvement over an already
state-of-the-art LSTM SRL model, resulting in the best reported scores on the
standard benchmark (CoNLL-2009) both for Chinese and English.
| 2017 | Computation and Language |
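A hedged sketch of a syntactic GCN layer of the kind the abstract above describes, with direction-specific weights over the dependency graph (this is a common form of such layers; the paper's gating and exact label treatment are omitted):

```latex
h_v^{(k+1)} = \mathrm{ReLU}\Big(
  \sum_{u \in \mathcal{N}(v)} W^{(k)}_{\mathrm{dir}(u,v)} h_u^{(k)}
  + b^{(k)}_{\mathrm{lab}(u,v)} \Big)
```

where N(v) are the syntactic neighbors of word v, dir(u, v) distinguishes incoming, outgoing, and self edges, and lab(u, v) is the dependency label.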
Sparse Named Entity Classification using Factorization Machines | Named entity classification is the task of classifying text-based elements
into various categories, including places, names, dates, times, and monetary
values. A bottleneck in named entity classification, however, is the data
problem of sparseness, because new named entities continually emerge, making it
rather difficult to maintain a dictionary for named entity classification.
Thus, in this paper, we address the problem of named entity classification
using matrix factorization to overcome the problem of feature sparsity.
Experimental results show that our proposed model, with fewer features and a
smaller size, achieves competitive accuracy to state-of-the-art models.
| 2017 | Computation and Language |
Improving Neural Machine Translation with Conditional Sequence
Generative Adversarial Nets | This paper proposes an approach for applying GANs to NMT. We build a
conditional sequence generative adversarial net which comprises two
adversarial sub-models, a generator and a discriminator. The generator aims to
generate sentences which are hard to discriminate from human-translated
sentences (i.e., the golden target sentences), and the discriminator strives
to discriminate the machine-generated sentences from human-translated ones.
The two sub-models play a minimax game and achieve a win-win situation
when they reach a Nash Equilibrium. Additionally, the static sentence-level
BLEU is utilized as the reinforced objective for the generator, which biases
the generation towards high BLEU points. During training, both the dynamic
discriminator and the static BLEU objective are employed to evaluate the
generated sentences and feedback the evaluations to guide the learning of the
generator. Experimental results show that the proposed model consistently
outperforms the traditional RNNSearch and the newly emerged state-of-the-art
Transformer on English-German and Chinese-English translation tasks.
| 2018 | Computation and Language |
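The abstract above does not give the training objective; a hedged sketch of the kind of policy-gradient update it describes, mixing the dynamic discriminator signal with the static sentence-level BLEU (the additive combination and the weight lambda are assumptions, not the paper's stated formula):

```latex
\nabla_{\theta} J \approx
\mathbb{E}_{y \sim G_{\theta}(\cdot \mid x)}\Big[
  \big( D(x, y) + \lambda \, \mathrm{BLEU}(y, y^{*}) \big)\,
  \nabla_{\theta} \log G_{\theta}(y \mid x) \Big]
```

where G is the generator (the translation model), D(x, y) is the discriminator's probability that y is human-translated, and y* is the reference translation.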
Ensemble of Neural Classifiers for Scoring Knowledge Base Triples | This paper describes our approach for the triple scoring task at the WSDM Cup
2017. The task required participants to assign a relevance score for each pair
of entities and their types in a knowledge base in order to enhance the ranking
results in entity retrieval tasks. We propose an approach wherein the outputs
of multiple neural network classifiers are combined using a supervised machine
learning model. The experimental results showed that our proposed method
achieved the best performance in one out of three measures (i.e., Kendall's
tau), and performed competitively in the other two measures (i.e., accuracy and
average score difference).
| 2017 | Computation and Language |
SyntaxNet Models for the CoNLL 2017 Shared Task | We describe a baseline dependency parsing system for the CoNLL2017 Shared
Task. This system, which we call "ParseySaurus," uses the DRAGNN framework
[Kong et al, 2017] to combine transition-based recurrent parsing and tagging
with character-based word representations. On the v1.3 Universal Dependencies
Treebanks, the new system outperforms the publicly available, state-of-the-art
"Parsey's Cousins" models by 3.47% absolute Labeled Accuracy Score (LAS) across
52 treebanks.
| 2017 | Computation and Language |
Is this word borrowed? An automatic approach to quantify the likeliness
of borrowing in social media | Code-mixing or code-switching are the effortless phenomena of natural
switching between two or more languages in a single conversation. Use of a
foreign word in a language, however, does not necessarily mean that the speaker
is code-switching because often languages borrow lexical items from other
languages. If a word is borrowed, it becomes a part of the lexicon of a
language; whereas, during code-switching, the speaker is aware that the
conversation involves foreign words or phrases. Identifying whether a foreign
word used by a bilingual speaker is due to borrowing or code-switching is a
fundamental importance to theories of multilingualism, and an essential
prerequisite towards the development of language and speech technologies for
multilingual communities. In this paper, we present a series of novel
computational methods to identify the borrowed likeliness of a word, based on
the social media signals. We first propose a context-based clustering method
to sample a set of candidate words from the social media data. Next, we propose
three novel and similar metrics based on the usage of these words by the users
in different tweets; these metrics were used to score and rank the candidate
words indicating their borrowed likeliness. We compare these rankings with a
ground truth ranking constructed through a human judgment experiment. The
Spearman's rank correlation between the two rankings (nearly 0.62 for all the
three metric variants) is more than double the value (0.26) of the most
competitive existing baseline reported in the literature. Some other striking
observations are: (i) the correlation is higher for the ground truth data
elicited from the younger participants (age less than 30) than that from the
older participants, and (ii) those participants who use mixed language for
tweeting the least provide the best signals of borrowing.
| 2017 | Computation and Language |
InScript: Narrative texts annotated with script information | This paper presents the InScript corpus (Narrative Texts Instantiating Script
structure). InScript is a corpus of 1,000 stories centered around 10 different
scenarios. Verbs and noun phrases are annotated with event and participant
types, respectively. Additionally, the text is annotated with coreference
information. The corpus shows rich lexical variation and will serve as a unique
resource for the study of the role of script knowledge in natural language
processing.
| 2016 | Computation and Language |
Legal Question Answering using Ranking SVM and Deep Convolutional Neural
Network | This paper presents a study of employing Ranking SVM and Convolutional Neural
Network for two tasks: legal information retrieval and question answering in
the Competition on Legal Information Extraction/Entailment. For the first task,
our proposed model used a triple of features (LSI, Manhattan, Jaccard), and is
based on paragraph level instead of article level as in previous studies. In
fact, each single-paragraph article corresponds to a particular paragraph in a
huge multiple-paragraph article. For the legal question answering task,
additional statistical features from information retrieval task integrated into
Convolutional Neural Network contribute to higher accuracy.
| 2017 | Computation and Language |
Convolutional Recurrent Neural Networks for Small-Footprint Keyword
Spotting | Keyword spotting (KWS) constitutes a major component of human-technology
interfaces. Maximizing the detection accuracy at a low false alarm (FA) rate,
while minimizing the footprint size, latency and complexity are the goals for
KWS. Towards achieving them, we study Convolutional Recurrent Neural Networks
(CRNNs). Inspired by large-scale state-of-the-art speech recognition systems,
we combine the strengths of convolutional layers and recurrent layers to
exploit local structure and long-range context. We analyze the effect of
architecture parameters, and propose training strategies to improve
performance. With only ~230k parameters, our CRNN model yields acceptably low
latency, and achieves 97.71% accuracy at 0.5 FA/hour for 5 dB signal-to-noise
ratio.
| 2017 | Computation and Language |
End-to-end optimization of goal-driven and visually grounded dialogue
systems | End-to-end design of dialogue systems has recently become a popular research
topic thanks to powerful tools such as encoder-decoder architectures for
sequence-to-sequence learning. Yet, most current approaches cast human-machine
dialogue management as a supervised learning problem, aiming at predicting the
next utterance of a participant given the full history of the dialogue. This
vision is too simplistic to capture the intrinsic planning problem inherent to
dialogue, as well as its grounded nature, which makes the context of a dialogue
larger than the dialogue history alone. This is why only chit-chat and question
answering
tasks have been addressed so far using end-to-end architectures. In this paper,
we introduce a Deep Reinforcement Learning method to optimize visually grounded
task-oriented dialogues, based on the policy gradient algorithm. This approach
is tested on a dataset of 120k dialogues collected through Mechanical Turk and
provides encouraging results at solving both the problem of generating natural
dialogues and the task of discovering a specific object in a complex picture.
| 2,017 | Computation and Language |
Neobility at SemEval-2017 Task 1: An Attention-based Sentence Similarity
Model | This paper describes a neural-network model which performed competitively
(top 6) at the SemEval 2017 cross-lingual Semantic Textual Similarity (STS)
task. Our system employs an attention-based recurrent neural network model that
optimizes the sentence similarity. In this paper, we describe our participation
in the multilingual STS task which measures similarity across English, Spanish,
and Arabic.
| 2,017 | Computation and Language |
Empirical Evaluation of Parallel Training Algorithms on Acoustic
Modeling | Deep learning models (DLMs) are state-of-the-art techniques in speech
recognition. However, training good DLMs can be time-consuming, especially for
production-size models and corpora. Although several parallel training
algorithms have been proposed to improve training efficiency, there is no clear
guidance on which one to choose for the task at hand due to the lack of systematic
and fair comparison among them. In this paper we aim at filling this gap by
comparing four popular parallel training algorithms in speech recognition,
namely asynchronous stochastic gradient descent (ASGD), blockwise model-update
filtering (BMUF), bulk synchronous parallel (BSP) and elastic averaging
stochastic gradient descent (EASGD), on the 1000-hour LibriSpeech corpus using
feed-forward deep neural networks (DNNs) and convolutional, long short-term
memory, DNNs (CLDNNs). Based on our experiments, we recommend using BMUF as the
top choice for training acoustic models since it is the most stable, scales
well with the number of GPUs, achieves reproducible results, and in many cases even
outperforms single-GPU SGD. ASGD can be used as a substitute in some cases.
| 2,018 | Computation and Language |
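
Since BMUF is the recommended algorithm, a sketch of its synchronization step
may help. Blockwise model-update filtering, as usually described, low-pass
filters the averaged model update with a block momentum; the NumPy code below
is a sketch under that reading, with illustrative hyperparameter values.

```python
import numpy as np

def bmuf_sync(w_prev, worker_weights, delta_prev, eta=0.9, zeta=1.0):
    """One BMUF synchronization step (a sketch). w_prev: global model before
    the block; worker_weights: per-worker models after local training on
    their data block; delta_prev: previous filtered update. eta is the block
    momentum and zeta the block learning rate (values are illustrative)."""
    w_avg = np.mean(worker_weights, axis=0)   # plain model averaging
    g = w_avg - w_prev                        # this block's model update
    delta = eta * delta_prev + zeta * g       # low-pass filter the update
    w_new = w_prev + delta                    # filtered global model
    # With Nesterov-style block momentum, workers would restart from
    # w_new + eta * delta rather than from w_new itself.
    return w_new, delta

w, delta = np.zeros(5), np.zeros(5)
workers = [w + 0.1 * np.random.randn(5) for _ in range(4)]
w, delta = bmuf_sync(w, workers, delta)
```
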
Construction of a Japanese Word Similarity Dataset | An evaluation of distributed word representations is generally conducted using
a word similarity task and/or a word analogy task. There are many datasets
readily available for these tasks in English. However, evaluating distributed
representation in languages that do not have such resources (e.g., Japanese) is
difficult. Therefore, as a first step toward evaluating distributed
representations in Japanese, we constructed a Japanese word similarity dataset.
To the best of our knowledge, our dataset is the first resource that can be
used to evaluate distributed representations in Japanese. Moreover, our dataset
contains various parts of speech and includes rare words in addition to common
words.
| 2,018 | Computation and Language |
Transfer Learning for Sequence Tagging with Hierarchical Recurrent
Networks | Recent papers have shown that neural networks obtain state-of-the-art
performance on several different sequence tagging tasks. One appealing property
of such systems is their generality, as excellent performance can be achieved
with a unified architecture and without task-specific feature engineering.
However, it is unclear if such systems can be used for tasks without large
amounts of training data. In this paper we explore the problem of transfer
learning for neural sequence taggers, where a source task with plentiful
annotations (e.g., POS tagging on Penn Treebank) is used to improve performance
on a target task with fewer available annotations (e.g., POS tagging for
microblogs). We examine the effects of transfer learning for deep hierarchical
recurrent networks across domains, applications, and languages, and show that
significant improvement can often be obtained. These gains lead to
improvements over the current state-of-the-art on several well-studied tasks.
| 2,017 | Computation and Language |
Combinatorial Optimization Methods Applied to the Multi-Sentence
Compression Problem | The Internet has led to a dramatic increase in the amount of available
information. In this context, reading and understanding this flow of
information have become costly tasks. In the last years, to assist people to
understand textual data, various Natural Language Processing (NLP) applications
based on Combinatorial Optimization have been devised. However, for
Multi-Sentence Compression (MSC), a method which reduces sentence length
without removing core information, the use of optimization methods requires
further study to improve MSC performance. This article describes a method for
MSC using Combinatorial Optimization and Graph Theory to generate more
informative sentences while maintaining their grammaticality. An experiment
conducted on a corpus of 40 clusters of sentences shows that our system
achieves very good quality and outperforms the state-of-the-art.
| 2,017 | Computation and Language |
Native Language Identification using Stacked Generalization | Ensemble methods using multiple classifiers have proven to be the most
successful approach for the task of Native Language Identification (NLI),
achieving the current state of the art. However, a systematic examination of
ensemble methods for NLI has yet to be conducted. Additionally, deeper ensemble
architectures such as classifier stacking have not been closely evaluated. We
present a set of experiments using three ensemble-based models, testing each
with multiple configurations and algorithms. This includes a rigorous
application of meta-classification models for NLI, achieving state-of-the-art
results on three datasets from different languages. We also present the first
use of statistical significance testing for comparing NLI systems, showing that
our results are significantly better than the previous state of the art. We
make available a collection of test set predictions to facilitate future
statistical tests.
| 2,017 | Computation and Language |
Investigation of Language Understanding Impact for Reinforcement
Learning Based Dialogue Systems | Language understanding is a key component in a spoken dialogue system. In
this paper, we investigate how the language understanding module influences the
dialogue system performance by conducting a series of systematic experiments on
a task-oriented neural dialogue system in a reinforcement learning based
setting. The empirical study shows that among different types of language
understanding errors, slot-level errors can have more impact on the overall
performance of a dialogue system compared to intent-level errors. In addition,
our experiments demonstrate that the reinforcement learning based dialogue
system is able to learn when and what to confirm in order to achieve better
performance and greater robustness.
| 2,017 | Computation and Language |
Deep LSTM for Large Vocabulary Continuous Speech Recognition | Recurrent neural networks (RNNs), especially long short-term memory (LSTM)
RNNs, are effective networks for sequential tasks like speech recognition.
Deeper LSTM models perform well on large vocabulary continuous speech
recognition because of their impressive learning ability. However, it is more
difficult to train a deeper network. We introduce a training framework with
layer-wise training and exponential moving average methods for deeper LSTM
models. Within this competitive framework, LSTM models of more than 7 layers
are successfully trained on Shenma voice search data in Mandarin, and they
outperform deep LSTM models trained by the conventional approach. Moreover,
for online streaming speech recognition applications, a shallow model with a
low real-time factor is distilled from the very deep model. Recognition
accuracy suffers little loss in the distillation process. Therefore, the model
trained with the proposed framework achieves a 14% relative reduction in
character error rate compared to the original model with similar real-time
capability. Furthermore, a novel transfer learning strategy with segmental
Minimum Bayes-Risk is also introduced in the framework. The strategy makes it
possible for training with only a small part of the dataset to outperform
training on the full dataset from the beginning.
| 2,017 | Computation and Language |
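
The exponential moving average mentioned above can be kept as a shadow copy of
the parameters that is updated after each training step and used as the final
model. A minimal sketch, with an illustrative decay value:

```python
import numpy as np

def ema_update(shadow, params, decay=0.999):
    """One exponential-moving-average step over model parameters (the decay
    value is illustrative). The shadow copy rides along with training, is
    never broadcast back to the workers, and is taken as the final model."""
    for name, value in params.items():
        shadow[name] = decay * shadow[name] + (1.0 - decay) * value
    return shadow

params = {"w": np.ones(3)}                          # current model parameters
shadow = {k: v.copy() for k, v in params.items()}   # EMA shadow copy
shadow = ema_update(shadow, params)
```
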
The NLTK FrameNet API: Designing for Discoverability with a Rich
Linguistic Resource | A new Python API, integrated within the NLTK suite, offers access to the
FrameNet 1.7 lexical database. The lexicon (structured in terms of frames) as
well as annotated sentences can be processed programmatically, or browsed with
human-readable displays via the interactive Python prompt.
| 2,017 | Computation and Language |
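
For readers who want to try it, typical usage of the API looks roughly like
this (the exact displays are best explored at the interactive prompt):

```python
import nltk
nltk.download('framenet_v17')            # fetch the FrameNet 1.7 data
from nltk.corpus import framenet as fn

print(len(fn.frames()))                  # all frames in the lexicon
f = fn.frame('Motion')                   # look up a frame by name
print(f.definition)                      # human-readable frame definition
print(sorted(f.FE))                      # its frame elements (roles)
print(fn.lus(r'(?i)run.v'))              # lexical units matching a pattern
```
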
Topic Identification for Speech without ASR | Modern topic identification (topic ID) systems for speech use automatic
speech recognition (ASR) to produce speech transcripts, and perform supervised
classification on such ASR outputs. However, under resource-limited conditions,
the manually transcribed speech required to develop standard ASR systems can be
severely limited or unavailable. In this paper, we investigate alternative
unsupervised solutions to obtaining tokenizations of speech in terms of a
vocabulary of automatically discovered word-like or phoneme-like units, without
depending on the supervised training of ASR systems. Moreover, using automatic
phoneme-like tokenizations, we demonstrate that a convolutional neural network
based framework for learning spoken document representations provides
competitive performance compared to a standard bag-of-words representation, as
evidenced by comprehensive topic ID evaluations on both single-label and
multi-label classification tasks.
| 2,017 | Computation and Language |
Hierarchical RNN with Static Sentence-Level Attention for Text-Based
Speaker Change Detection | Speaker change detection (SCD) is an important task in dialog modeling. Our
paper addresses the problem of text-based SCD, which differs from existing
audio-based studies and is useful in various scenarios, for example, processing
dialog transcripts where speaker identities are missing (e.g., OpenSubtitle),
and enhancing audio SCD with textual information. We formulate text-based SCD
as a matching problem of utterances before and after a certain decision point;
we propose a hierarchical recurrent neural network (RNN) with static
sentence-level attention. Experimental results show that neural networks
consistently achieve better performance than feature-based approaches, and that
our attention-based model significantly outperforms non-attention neural
networks.
| 2,018 | Computation and Language |
Direct Acoustics-to-Word Models for English Conversational Speech
Recognition | Recent work on end-to-end automatic speech recognition (ASR) has shown that
the connectionist temporal classification (CTC) loss can be used to convert
acoustics to phone or character sequences. Such systems are used with a
dictionary and separately-trained Language Model (LM) to produce word
sequences. However, they are not truly end-to-end in the sense of mapping
acoustics directly to words without an intermediate phone representation. In
this paper, we present the first results employing direct acoustics-to-word CTC
models on two well-known public benchmark tasks: Switchboard and CallHome.
These models do not require an LM or even a decoder at run-time and hence
recognize speech with minimal complexity. However, due to the large number of
word output units, CTC word models require orders of magnitude more data to
train reliably compared to traditional systems. We present some techniques to
mitigate this issue. Our CTC word model achieves a word error rate of
13.0%/18.8% on the Hub5-2000 Switchboard/CallHome test sets without any LM or
decoder compared with 9.6%/16.0% for phone-based CTC with a 4-gram LM. We also
present rescoring results on CTC word model lattices to quantify the
performance benefits of an LM, and contrast the performance of word and phone
CTC models.
| 2,017 | Computation and Language |
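
The CTC training objective used by these acoustics-to-word models is available
off the shelf, for example as PyTorch's nn.CTCLoss. A toy sketch with
illustrative dimensions (a 10k-word vocabulary plus a blank symbol):

```python
import torch
import torch.nn as nn

# Toy dimensions: 50 acoustic frames, batch of 4, a 10k-word vocabulary
# plus the mandatory CTC blank at index 0 (all sizes illustrative).
T, B, V = 50, 4, 10000 + 1
logits = torch.randn(T, B, V, requires_grad=True)     # network outputs
log_probs = logits.log_softmax(dim=-1)
targets = torch.randint(1, V, (B, 7))                 # word-id sequences
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 7, dtype=torch.long)

# CTC marginalizes over all alignments of the 7 words to the 50 frames,
# so no frame-level labels, dictionary, or decoder are needed.
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```
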
Supervised Typing of Big Graphs using Semantic Embeddings | We propose a supervised algorithm for generating type embeddings in the same
semantic vector space as a given set of entity embeddings. The algorithm is
agnostic to the derivation of the underlying entity embeddings. It does not
require any manual feature engineering, generalizes well to hundreds of types
and achieves near-linear scaling on Big Graphs containing many millions of
triples and instances by virtue of an incremental execution. We demonstrate the
utility of the embeddings on a type recommendation task, outperforming a
non-parametric feature-agnostic baseline while achieving 15x speedup and
near-constant memory usage on a full partition of DBpedia. Using
state-of-the-art visualization, we illustrate the agreement of our
extensionally derived DBpedia type embeddings with the manually curated domain
ontology. Finally, we use the embeddings to probabilistically cluster about 4
million DBpedia instances into 415 types in the DBpedia ontology.
| 2,017 | Computation and Language |
A network of deep neural networks for distant speech recognition | Despite the remarkable progress recently made in distant speech recognition,
state-of-the-art technology still suffers from a lack of robustness, especially
when adverse acoustic conditions characterized by non-stationary noises and
reverberation are met. A prominent limitation of current systems lies in the
lack of matching and communication between the various technologies involved in
the distant speech recognition process. The speech enhancement and speech
recognition modules are, for instance, often trained independently. Moreover,
the speech enhancement normally helps the speech recognizer, but the output of
the latter is not commonly used, in turn, to improve the speech enhancement. To
address both concerns, we propose a novel architecture based on a network of
deep neural networks, where all the components are jointly trained and better
cooperate with each other thanks to a full communication scheme between them.
Experiments, conducted using different datasets, tasks and acoustic conditions,
revealed that the proposed framework can outperform other competitive solutions,
including recent joint training approaches.
| 2,017 | Computation and Language |
Sequential Recurrent Neural Networks for Language Modeling | Feedforward Neural Network (FNN)-based language models estimate the
probability of the next word based on the history of the last N words, whereas
Recurrent Neural Networks (RNN) perform the same task based only on the last
word and some context information that cycles in the network. This paper
presents a novel approach, which bridges the gap between these two categories
of networks. In particular, we propose an architecture which takes advantage of
the explicit, sequential enumeration of the word history in FNN structure while
enhancing each word representation at the projection layer through recurrent
context information that evolves in the network. The context integration is
performed using an additional word-dependent weight matrix that is also learned
during the training. Extensive experiments conducted on the Penn Treebank (PTB)
and the Large Text Compression Benchmark (LTCB) corpus showed a significant
reduction of the perplexity when compared to state-of-the-art feedforward as
well as recurrent neural network architectures.
| 2,017 | Computation and Language |
Multimodal Compact Bilinear Pooling for Multimodal Neural Machine
Translation | In state-of-the-art Neural Machine Translation, an attention mechanism is
used during decoding to enhance the translation. At every step, the decoder
uses this mechanism to focus on different parts of the source sentence to
gather the most useful information before outputting its target word. Recently,
the effectiveness of the attention mechanism has also been explored for
multimodal tasks, where it becomes possible to focus both on sentence parts and
image regions. Approaches to pool two modalities usually include element-wise
product, sum or concatenation. In this paper, we evaluate the more advanced
Multimodal Compact Bilinear pooling method, which takes the outer product of
two vectors to combine the attention features for the two modalities. This has
been previously investigated for visual question answering. We try out this
approach for multimodal image caption translation and show improvements
compared to basic combination methods.
| 2,017 | Computation and Language |
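
The MCB operation itself is compact: project each modality with a fixed count
sketch, then convolve the sketches via an element-wise product in the FFT
domain. A PyTorch sketch with illustrative dimensions:

```python
import torch

def count_sketch(x, h, s, d):
    """Project x (batch, n) into d dims with fixed hash h and signs s."""
    out = torch.zeros(x.size(0), d)
    out.index_add_(1, h, x * s)   # scatter signed entries into d buckets
    return out

def mcb(x, y, d=1024, seed=0):
    """Multimodal compact bilinear pooling (a sketch): approximates the
    outer product of x and y by convolving their count sketches, computed
    as an element-wise product in the FFT domain. d is the output size."""
    g = torch.Generator().manual_seed(seed)
    hx = torch.randint(0, d, (x.size(1),), generator=g)
    hy = torch.randint(0, d, (y.size(1),), generator=g)
    sx = torch.randint(0, 2, (x.size(1),), generator=g).float() * 2 - 1
    sy = torch.randint(0, 2, (y.size(1),), generator=g).float() * 2 - 1
    fx = torch.fft.rfft(count_sketch(x, hx, sx, d))
    fy = torch.fft.rfft(count_sketch(y, hy, sy, d))
    return torch.fft.irfft(fx * fy, n=d)

# e.g. a 512-dim text attention feature fused with a 620-dim image feature
fused = mcb(torch.randn(8, 512), torch.randn(8, 620))  # -> (8, 1024)
```
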
Rapid-Rate: A Framework for Semi-supervised Real-time Sentiment Trend
Detection in Unstructured Big Data | Commercial establishments like restaurants, service centres and retailers
have several sources of customer feedback about products and services, most of
which need not be as structured as rated reviews provided by services like
Yelp, or Amazon, in terms of sentiment conveyed. For instance, Amazon provides
a fine-grained score on a numeric scale for product reviews. Some sources,
however, like social media (Twitter, Facebook), mailing lists (Google Groups)
and forums (Quora) contain text data that is much more voluminous, but
unstructured and unlabelled. It might be in the best interests of a business
establishment to assess the general sentiment towards their brand on these
platforms as well. This text could be pipelined into a system with a built-in
prediction model, with the objective of generating real-time graphs on opinion
and sentiment trends. Although tasks like the one described above have been
explored as document classification problems in the past, the implementation
described in this paper, by virtue of learning a continuous function rather
than a discrete one, offers much greater depth of insight than document
classification approaches. This study aims to explore the
validity of such a continuous function predicting model to quantify sentiment
about an entity, without the additional overhead of manual labelling, and
computational preprocessing & feature extraction. This research project also
aims to design and implement a re-usable document regression pipeline as a
framework, Rapid-Rate, that can be used to predict document scores in
real-time.
| 2,017 | Computation and Language |
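
The framework itself is not reproduced here, but the core idea, a reusable
document-regression pipeline that learns a continuous score from weakly
labelled text, can be sketched with scikit-learn; the data and model choices
below are placeholders:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Placeholder training data: texts paired with continuous sentiment scores,
# e.g. bootstrapped from rated reviews (Yelp/Amazon stars).
train_texts = ["great service, loved it", "slow and rude staff", "it was okay"]
train_scores = [4.8, 1.2, 3.0]

# A reusable document-regression pipeline: vectorize, then learn a
# continuous scoring function instead of a discrete class label.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("reg", Ridge(alpha=1.0)),
])
pipeline.fit(train_texts, train_scores)

# Unlabelled social-media text can now be scored in (near) real time.
print(pipeline.predict(["the new menu is fantastic"]))
```
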
A survey of embedding models of entities and relationships for knowledge
graph completion | Knowledge graphs (KGs) of real-world facts about entities and their
relationships are useful resources for a variety of natural language processing
tasks. However, because knowledge graphs are typically incomplete, it is useful
to perform knowledge graph completion or link prediction, i.e. predict whether
a relationship not in the knowledge graph is likely to be true. This paper
serves as a comprehensive survey of embedding models of entities and
relationships for knowledge graph completion, summarizing up-to-date
experimental results on standard benchmark datasets and pointing out potential
future research directions.
| 2,020 | Computation and Language |
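
As one concrete example of the models such a survey covers, TransE scores a
triple (h, r, t) by how close h + r lies to t in the embedding space. A NumPy
sketch with random (untrained) embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    """TransE plausibility: higher (less negative) means more likely true.
    Embeddings here are random; in practice they are trained with a
    margin-based ranking loss over corrupted triples."""
    return -np.linalg.norm(E[h] + R[r] - E[t], ord=1)

# Link prediction: rank all candidate tails for a query (h, r, ?).
h, r = 42, 3
ranking = np.argsort([-transe_score(h, r, t) for t in range(n_entities)])
print(ranking[:10])  # top-10 predicted tail entities
```
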
Recurrent and Contextual Models for Visual Question Answering | We propose a series of recurrent and contextual neural network models for
multiple choice visual question answering on the Visual7W dataset. Motivated by
divergent trends in model complexities in the literature, we explore the
balance between model expressiveness and simplicity by studying incrementally
more complex architectures. We start with LSTM-encoding of input questions and
answers; build on this with context generation by LSTM-encodings of neural
image and question representations and attention over images; and evaluate the
diversity and predictive power of our models and the ensemble thereof. All
models are evaluated against a simple baseline inspired by the current
state-of-the-art, consisting of simple concatenation of bag-of-words
and CNN representations for the text and images, respectively. Generally, we
observe marked variation in image-reasoning performance between our models not
obvious from their overall performance, as well as evidence of dataset bias.
Our standalone models achieve accuracies up to $64.6\%$, while the ensemble of
all models achieves the best accuracy of $66.67\%$, within $0.5\%$ of the
current state-of-the-art for Visual7W.
| 2,017 | Computation and Language |
An embedded segmental K-means model for unsupervised segmentation and
clustering of speech | Unsupervised segmentation and clustering of unlabelled speech are core
problems in zero-resource speech processing. Most approaches lie at
methodological extremes: some use probabilistic Bayesian models with
convergence guarantees, while others opt for more efficient heuristic
techniques. Despite competitive performance in previous work, the full Bayesian
approach is difficult to scale to large speech corpora. We introduce an
approximation to a recent Bayesian model that still has a clear objective
function but improves efficiency by using hard clustering and segmentation
rather than full Bayesian inference. Like its Bayesian counterpart, this
embedded segmental K-means model (ES-KMeans) represents arbitrary-length word
segments as fixed-dimensional acoustic word embeddings. We first compare
ES-KMeans to previous approaches on common English and Xitsonga data sets (5
and 2.5 hours of speech): ES-KMeans outperforms a leading heuristic method in
word segmentation, giving similar scores to the Bayesian model while being 5
times faster with fewer hyperparameters. However, its clusters are less pure
than those of the other models. We then show that ES-KMeans scales to larger
corpora by applying it to the 5 languages of the Zero Resource Speech Challenge
2017 (up to 45 hours), where it performs competitively compared to the
challenge baseline.
| 2,017 | Computation and Language |
Visually grounded learning of keyword prediction from untranscribed
speech | During language acquisition, infants have the benefit of visual cues to
ground spoken language. Robots similarly have access to audio and visual
sensors. Recent work has shown that images and spoken captions can be mapped
into a meaningful common space, allowing images to be retrieved using speech
and vice versa. In this setting of images paired with untranscribed spoken
captions, we consider whether computer vision systems can be used to obtain
textual labels for the speech. Concretely, we use an image-to-words multi-label
visual classifier to tag images with soft textual labels, and then train a
neural network to map from the speech to these soft targets. We show that the
resulting speech system is able to predict which words occur in an
utterance---acting as a spoken bag-of-words classifier---without seeing any
parallel speech and text. We find that the model often confuses semantically
related words, e.g. "man" and "person", making it even more effective as a
semantic keyword spotter.
| 2,017 | Computation and Language |
TokTrack: A Complete Token Provenance and Change Tracking Dataset for
the English Wikipedia | We present a dataset that contains every instance of all tokens (~ words)
ever written in undeleted, non-redirect English Wikipedia articles until
October 2016, in total 13,545,349,787 instances. Each token is annotated with
(i) the article revision it was originally created in, and (ii) lists with all
the revisions in which the token was ever deleted and (potentially) re-added
and re-deleted from its article, enabling a complete and straightforward
tracking of its history. This data would be exceedingly hard for an average
potential user to create, as (i) it is very expensive to compute and (ii)
accurately tracking the history of each token in revisioned documents is a
non-trivial task. Adapting a state-of-the-art algorithm, we have produced a
dataset that allows for a range of analyses and metrics, already popular in
research and going beyond, to be generated on complete-Wikipedia scale;
ensuring quality and allowing researchers to forego expensive text-comparison
computation, which so far has hindered scalable usage. We show how this data
enables, on token-level, computation of provenance, measuring survival of
content over time, very detailed conflict metrics, and fine-grained
interactions of editors like partial reverts, re-additions and other metrics,
in the process gaining several novel insights.
| 2,017 | Computation and Language |
Batch-normalized joint training for DNN-based distant speech recognition | Improving distant speech recognition is a crucial step towards flexible
human-machine interfaces. Current technology, however, still exhibits a lack of
robustness, especially when adverse acoustic conditions are met. Despite the
significant progress made in the last years on both speech enhancement and
speech recognition, one potential limitation of state-of-the-art technology
lies in composing modules that are not well matched because they are not
trained jointly. To address this concern, a promising approach consists in
concatenating a speech enhancement and a speech recognition deep neural network
and to jointly update their parameters as if they were within a single bigger
network. Unfortunately, joint training can be difficult because the output
distribution of the speech enhancement system may change substantially during
the optimization procedure. The speech recognition module would have to deal
with an input distribution that is non-stationary and unnormalized. To mitigate
this issue, we propose a joint training approach based on a fully
batch-normalized architecture. Experiments, conducted using different datasets,
tasks and acoustic conditions, revealed that the proposed framework
significantly outperforms other competitive solutions, especially in challenging
environments.
| 2,017 | Computation and Language |
Interactive Natural Language Acquisition in a Multi-modal Recurrent
Neural Architecture | For the complex human brain that enables us to communicate in natural
language, we gathered good understandings of principles underlying language
acquisition and processing, knowledge about socio-cultural conditions, and
insights about activity patterns in the brain. However, we were not yet able to
understand the behavioural and mechanistic characteristics for natural language
and how mechanisms in the brain allow to acquire and process language. In
bridging the insights from behavioural psychology and neuroscience, the goal of
this paper is to contribute a computational understanding of appropriate
characteristics that favour language acquisition. Accordingly, we provide
concepts and refinements in cognitive modelling regarding principles and
mechanisms in the brain and propose a neurocognitively plausible model for
embodied language acquisition from real world interaction of a humanoid robot
with its environment. In particular, the architecture consists of a continuous
time recurrent neural network, where parts have different leakage
characteristics and thus operate on multiple timescales for every modality and
the association of the higher level nodes of all modalities into cell
assemblies. The model is capable of learning language production grounded in
both, temporal dynamic somatosensation and vision, and features hierarchical
concept abstraction, concept decomposition, multi-modal integration, and
self-organisation of latent representations.
| 2,017 | Computation and Language |
Crowdsourcing Universal Part-Of-Speech Tags for Code-Switching | Code-switching is the phenomenon by which bilingual speakers switch between
multiple languages during communication. The importance of developing language
technologies for code-switching data is immense, given the large populations
that routinely code-switch. High-quality linguistic annotations are extremely
valuable for any NLP task, and performance is often limited by the amount of
high-quality labeled data. However, little such data exists for code-switching.
In this paper, we describe crowd-sourcing universal part-of-speech tags for the
Miami Bangor Corpus of Spanish-English code-switched speech. We split the
annotation task into three subtasks: one in which a subset of tokens are
labeled automatically, one in which questions are specifically designed to
disambiguate a subset of high frequency words, and a more general cascaded
approach for the remaining data in which questions are displayed to the worker
following a decision tree structure. Each subtask is extended and adapted for a
multilingual setting and the universal tagset. The quality of the annotation
process is measured using hidden check questions annotated with gold labels.
The overall agreement between gold standard labels and the majority vote is
between 0.95 and 0.96 for just three labels and the average recall across
part-of-speech tags is between 0.87 and 0.99, depending on the task.
| 2,017 | Computation and Language |
Sequence-to-Sequence Models Can Directly Translate Foreign Speech | We present a recurrent encoder-decoder deep neural network architecture that
directly translates speech in one language into text in another. The model does
not explicitly transcribe the speech into text in the source language, nor does
it require supervision from the ground truth source language transcription
during training. We apply a slightly modified sequence-to-sequence with
attention architecture that has previously been used for speech recognition and
show that it can be repurposed for this more complex task, illustrating the
power of attention-based models. A single model trained end-to-end obtains
state-of-the-art performance on the Fisher Callhome Spanish-English speech
translation task, outperforming a cascade of independently trained
sequence-to-sequence speech recognition and machine translation models by 1.8
BLEU points on the Fisher test set. In addition, we find that making use of the
training data in both languages by multi-task training sequence-to-sequence
speech translation and recognition models with a shared encoder network can
improve performance by a further 1.4 BLEU points.
| 2,017 | Computation and Language |
Simplifying the Bible and Wikipedia Using Statistical Machine
Translation | I started this work with the hope of generating a text synthesizer (like a
musical synthesizer) that can imitate certain linguistic styles. Most of the
report focuses on text simplification using statistical machine translation
(SMT) techniques. I applied MOSES to a parallel corpus of the Bible (King James
Version and Easy-to-Read Version) and that of Wikipedia articles (normal and
simplified). I report the importance of the three main components of
SMT---phrase translation, language model, and reordering---by changing their
weights and comparing the resulting quality of simplified text in terms of
METEOR and BLEU. Toward the end of the report will be presented some examples
of text "synthesized" into the King James style.
| 2,017 | Computation and Language |
Morphological Analysis for the Maltese Language: The Challenges of a
Hybrid System | Maltese is a morphologically rich language with a hybrid morphological system
which features both concatenative and non-concatenative processes. This paper
analyses the impact of this hybridity on the performance of machine learning
techniques for morphological labelling and clustering. In particular, we
analyse a dataset of morphologically related word clusters to evaluate the
difference in results for concatenative and non-concatenative clusters. We also
describe research carried out in morphological labelling, with a particular
focus on the verb category. Two evaluations were carried out, one using an
unseen dataset, and another one using a gold standard dataset which was
manually labelled. The gold standard dataset was split into concatenative and
non-concatenative to analyse the difference in results between the two
morphological systems.
| 2,017 | Computation and Language |
Comparing Rule-Based and Deep Learning Models for Patient Phenotyping | Objective: We investigate whether deep learning techniques for natural
language processing (NLP) can be used efficiently for patient phenotyping.
Patient phenotyping is a classification task for determining whether a patient
has a medical condition, and is a crucial part of secondary analysis of
healthcare data. We assess the performance of deep learning algorithms and
compare them with classical NLP approaches.
Materials and Methods: We compare convolutional neural networks (CNNs),
n-gram models, and approaches based on cTAKES that extract pre-defined medical
concepts from clinical notes and use them to predict patient phenotypes. The
performance is tested on 10 different phenotyping tasks using 1,610 discharge
summaries extracted from the MIMIC-III database.
Results: CNNs outperform other phenotyping algorithms in all 10 tasks. The
average F1-score of our model is 76 (PPV of 83, and sensitivity of 71) with our
model having an F1-score up to 37 points higher than alternative approaches. We
additionally assess the interpretability of our model by presenting a method
that extracts the most salient phrases for a particular prediction.
Conclusion: We show that NLP methods based on deep learning improve the
performance of patient phenotyping. Our CNN-based algorithm automatically
learns the phrases associated with each patient phenotype. As such, it reduces
the annotation complexity for clinical domain experts, who are normally
required to develop task-specific annotation rules and identify relevant
phrases. Our method fares well on both performance and interpretability,
which indicates that deep learning is an effective approach
to patient phenotyping based on clinicians' notes.
| 2,017 | Computation and Language |
LEPOR: An Augmented Machine Translation Evaluation Metric | Machine translation (MT) has developed into one of the hottest research
topics in the natural language processing (NLP) literature. One important
issue in MT is how to evaluate an MT system reasonably and tell whether the
translation system makes an improvement or not. Traditional manual judgment
methods are expensive, time-consuming, unrepeatable, and sometimes suffer from
low agreement. On the other hand, the popular automatic MT evaluation methods
have some weaknesses. Firstly, they tend to perform well on language pairs
with English as the target language, but poorly when English is used as the
source. Secondly, some methods rely on many additional linguistic features to
achieve good performance, which makes a metric hard to replicate and apply to
other language pairs. Thirdly, some popular metrics utilize an incomplete set
of factors, which results in low performance on some practical tasks. In this
thesis, to address the existing problems, we design novel MT evaluation methods
and investigate their performances on different languages. Firstly, we design
augmented factors to yield highly accurate evaluation. Secondly, we design a
tunable evaluation model where weighting of factors can be optimized according
to the characteristics of languages. Thirdly, in the enhanced version of our
methods, we design concise linguistic feature using part-of-speech (POS) to
show that our methods can yield even higher performance when using some
external linguistic resources. Finally, we introduce the practical performance
of our metrics in the ACL-WMT workshop shared tasks, which show that the
proposed methods are robust across different languages. In addition, we also
present some novel work on quality estimation of MT without using reference
translations, including the use of Naïve Bayes (NB) probability models,
support vector machine (SVM) classification algorithms, and CRFs.
| 2,014 | Computation and Language |
Learning Simpler Language Models with the Differential State Framework | Learning useful information across long time lags is a critical and difficult
problem for temporal neural models in tasks such as language modeling. Existing
architectures that address the issue are often complex and costly to train. The
Differential State Framework (DSF) is a simple and high-performing design that
unifies previously introduced gated neural models. DSF models maintain
longer-term memory by learning to interpolate between a fast-changing
data-driven representation and a slowly changing, implicitly stable state. This
requires hardly any more parameters than a classical, simple recurrent network.
Within the DSF framework, a new architecture is presented, the Delta-RNN. In
language modeling at the word and character levels, the Delta-RNN outperforms
popular complex architectures, such as the Long Short Term Memory (LSTM) and
the Gated Recurrent Unit (GRU), and, when regularized, performs comparably to
several state-of-the-art baselines. At the subword level, the Delta-RNN's
performance is comparable to that of complex gated architectures.
| 2,017 | Computation and Language |
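
The core DSF idea, interpolating between a fast data-driven proposal and a
slowly changing state, can be sketched in a few lines; this is the spirit of
the Delta-RNN, not its exact parameterization:

```python
import torch
import torch.nn as nn

class SimpleDSFCell(nn.Module):
    """A simplified sketch of the Differential State Framework: the next
    state interpolates between a fast, data-driven proposal and the slowly
    changing previous state, via a learned per-unit mixing rate."""
    def __init__(self, n_in, n_hid):
        super().__init__()
        self.W = nn.Linear(n_in, n_hid)    # data-driven term
        self.V = nn.Linear(n_hid, n_hid)   # recurrent term
        self.gate = nn.Parameter(torch.zeros(n_hid))  # per-unit mixing rate

    def forward(self, x, s_prev):
        proposal = torch.tanh(self.W(x) + self.V(s_prev))  # fast proposal
        r = torch.sigmoid(self.gate)                       # how much to update
        return (1 - r) * s_prev + r * proposal             # interpolated state

cell = SimpleDSFCell(10, 32)
s = torch.zeros(1, 32)
for x in torch.randn(5, 1, 10):   # a 5-step toy sequence
    s = cell(x, s)
```

Note how the gate adds hardly any parameters beyond a classical simple
recurrent network, which is the framework's selling point.
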
Question Answering from Unstructured Text by Retrieval and Comprehension | Open domain Question Answering (QA) systems must interact with external
knowledge sources, such as web pages, to find relevant information. Information
sources like Wikipedia, however, are not well structured and difficult to
utilize in comparison with Knowledge Bases (KBs). In this work we present a
two-step approach to question answering from unstructured text, consisting of a
retrieval step and a comprehension step. For comprehension, we present an RNN
based attention model with a novel mixture mechanism for selecting answers from
either retrieved articles or a fixed vocabulary. For retrieval we introduce a
hand-crafted model and a neural model for ranking relevant articles. We achieve
state-of-the-art performance on the WikiMovies dataset, reducing the error by
40%. Our experimental results further demonstrate the importance of each of the
introduced components.
| 2,017 | Computation and Language |
A Sentence Simplification System for Improving Relation Extraction | In this demo paper, we present a text simplification approach that is
directed at improving the performance of state-of-the-art Open Relation
Extraction (RE) systems. As syntactically complex sentences often pose a
challenge for current Open RE approaches, we have developed a simplification
framework that performs a pre-processing step by taking a single sentence as
input and using a set of syntactic-based transformation rules to create a
textual input that is easier to process for subsequently applied Open RE
systems.
| 2,017 | Computation and Language |
A practical approach to dialogue response generation in closed domains | We describe a prototype dialogue response generation model for the customer
service domain at Amazon. The model, which is trained in a weakly supervised
fashion, measures the similarity between customer questions and agent answers
using a dual encoder network, a Siamese-like neural network architecture.
Answer templates are extracted from embeddings derived from past agent answers,
without turn-by-turn annotations. Responses to customer inquiries are generated
by selecting the best template from the final set of templates. We show that,
in a closed domain like customer service, the selected templates cover $>$70\%
of past customer inquiries. Furthermore, the relevance of the model-selected
templates is significantly higher than templates selected by a standard tf-idf
baseline.
| 2,017 | Computation and Language |
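
A dual encoder of the kind described can be sketched as two (here, shared)
sequence encoders whose outputs are compared by cosine similarity; the sizes
and the weight-sharing choice are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Sketch of a Siamese-style dual encoder: questions and answers are
    embedded by a shared encoder, and relevance is the cosine similarity
    of the two embeddings. Sizes are illustrative only."""
    def __init__(self, vocab_size=20000, emb=128, hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.enc = nn.GRU(emb, hid, batch_first=True)

    def encode(self, tokens):            # tokens: (batch, seq_len)
        _, h = self.enc(self.emb(tokens))
        return h[-1]                     # final hidden state as embedding

    def forward(self, question, answer):
        q, a = self.encode(question), self.encode(answer)
        return nn.functional.cosine_similarity(q, a)  # relevance score

model = DualEncoder()
score = model(torch.randint(0, 20000, (4, 12)),   # 4 toy question/answer
              torch.randint(0, 20000, (4, 30)))   # pairs
```

At serving time, template selection would amount to scoring the incoming
question against each candidate template and taking the argmax.
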
Is This a Joke? Detecting Humor in Spanish Tweets | While humor has been historically studied from a psychological, cognitive and
linguistic standpoint, its study from a computational perspective is an area
yet to be explored in Computational Linguistics. There exist some previous
works, but a characterization of humor that allows its automatic recognition
and generation is far from being specified. In this work we build a
crowdsourced corpus of labeled tweets, annotated according to their humor value,
letting the annotators subjectively decide which are humorous. A humor
classifier for Spanish tweets is assembled based on supervised learning,
reaching a precision of 84% and a recall of 69%.
| 2,016 | Computation and Language |
A Tidy Data Model for Natural Language Processing using cleanNLP | The package cleanNLP provides a set of fast tools for converting a textual
corpus into a set of normalized tables. The underlying natural language
processing pipeline utilizes Stanford's CoreNLP library, exposing a number of
annotation tasks for text written in English, French, German, and Spanish.
Annotators include tokenization, part of speech tagging, named entity
recognition, entity linking, sentiment analysis, dependency parsing,
coreference resolution, and information extraction.
| 2,017 | Computation and Language |
Learning Similarity Functions for Pronunciation Variations | A significant source of errors in Automatic Speech Recognition (ASR) systems
is due to pronunciation variations which occur in spontaneous and
conversational speech. Usually ASR systems use a finite lexicon that provides
one or more pronunciations for each word. In this paper, we focus on learning a
similarity function between two pronunciations. The pronunciations can be the
canonical and the surface pronunciations of the same word or they can be two
surface pronunciations of different words. This task generalizes problems such
as lexical access (the problem of learning the mapping between words and their
possible pronunciations), and defining word neighborhoods. It can also be used
to dynamically increase the size of the pronunciation lexicon, or in predicting
ASR errors. We propose two methods, which are based on recurrent neural
networks, to learn the similarity function. The first is based on binary
classification, and the second is based on learning the ranking of the
pronunciations. We demonstrate the efficiency of our approach on the task of
lexical access using a subset of the Switchboard conversational speech corpus.
Results suggest that on this task our methods are superior to previous
approaches based on graphical Bayesian models.
| 2,017 | Computation and Language |
Semi-Supervised Affective Meaning Lexicon Expansion Using Semantic and
Distributed Word Representations | In this paper, we propose an extension to graph-based sentiment lexicon
induction methods by incorporating distributed and semantic word
representations in building the similarity graph to expand a three-dimensional
sentiment lexicon. We also implemented and evaluated the label propagation
using four different word representations and similarity metrics. Our
comprehensive evaluation of the four approaches was performed on a single data
set, demonstrating that all four methods can generate a significant number of
new sentiment assignments with high accuracy. The highest correlations
(tau=0.51) and the lowest error (mean absolute error < 1.1%), obtained by
combining both the semantic and the distributional features, outperformed the
distributional-based and semantic-based label-propagation models and approached
a supervised algorithm.
| 2,017 | Computation and Language |
A Deep Compositional Framework for Human-like Language Acquisition in
Virtual Environment | We tackle a task where an agent learns to navigate in a 2D maze-like
environment called XWORLD. In each session, the agent perceives a sequence of
raw-pixel frames, a natural language command issued by a teacher, and a set of
rewards. The agent learns the teacher's language from scratch in a grounded and
compositional manner, such that after training it is able to correctly execute
zero-shot commands: 1) the combination of words in the command never appeared
before, and/or 2) the command contains new object concepts that are learned
from another task but never learned from navigation. Our deep framework for the
agent is trained end to end: it learns simultaneously the visual
representations of the environment, the syntax and semantics of the language,
and the action module that outputs actions. The zero-shot learning capability
of our framework results from its compositionality and modularity with
parameter tying. We visualize the intermediate outputs of the framework,
demonstrating that the agent truly understands how to solve the problem. We
believe that our results provide some preliminary insights on how to train an
agent with similar abilities in a 3D environment.
| 2,017 | Computation and Language |
Survey of the State of the Art in Natural Language Generation: Core
tasks, applications and evaluation | This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
| 2,017 | Computation and Language |
Hierarchical Classification for Spoken Arabic Dialect Identification
using Prosody: Case of Algerian Dialects | In daily communications, Arabs use local dialects which are hard to identify
automatically using conventional classification methods. The dialect
identification task becomes even more challenging when dealing with
under-resourced dialects belonging to the same country/region. In this paper,
we start by statistically analyzing Algerian dialects in order to capture
their specificities related to prosody information, which are extracted at the
utterance level after a coarse-grained consonant/vowel segmentation. Based on
these findings, we propose a Hierarchical classification approach for spoken
Arabic Algerian Dialect IDentification (HADID). It takes advantage of the fact
that dialects are naturally structured into a hierarchy. Within HADID, a
top-down hierarchical classification is applied, in which we use Deep Neural
Networks (DNNs) to build a local classifier for every parent node in the
dialect hierarchy. Our framework is implemented and evaluated on an Algerian
Arabic dialects corpus, with the dialect hierarchy deduced from historical and
linguistic knowledge. The results reveal that within HADID, DNNs outperform
Support Vector Machines as local classifiers. In addition, compared with a
baseline flat classification system, our HADID gives an improvement of 63.5%
in terms of precision. Furthermore, the overall results evidence the
suitability of our prosody-based HADID for speaker-independent dialect
identification while requiring test utterances of less than 6 s.
| 2,017 | Computation and Language |
A Short Review of Ethical Challenges in Clinical Natural Language
Processing | Clinical NLP has an immense potential in contributing to how clinical
practice will be revolutionized by the advent of large scale processing of
clinical records. However, this potential has remained largely untapped due to
slow progress primarily caused by strict data access policies for researchers.
In this paper, we discuss the concern for privacy and the measures it entails.
We also suggest sources of less sensitive data. Finally, we draw attention to
biases that can compromise the validity of empirical research and lead to
socially harmful applications.
| 2,017 | Computation and Language |
Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods.
| 2,017 | Computation and Language |
Automatic Argumentative-Zoning Using Word2vec | In comparison with document summarization on articles from social media
and newswire, argumentative zoning (AZ) is an important task in scientific
paper analysis. The traditional methodology for this task relies on feature
engineering at different levels. In this paper, three models of generating
sentence vectors for the task of sentence classification were explored and
compared. The proposed approach builds sentence representations using learned
embeddings based on a neural network. The learned word embeddings form a
feature space to which the examined sentence is mapped. These features are
input into classifiers for supervised classification. Using a 10-fold
cross-validation scheme, evaluation was conducted on the Argumentative-Zoning
(AZ) annotated articles. The results showed that simply averaging the word
vectors in a sentence works better than the paragraph-to-vector algorithm, and
that integrating specific cue words into the loss function of the neural
network can improve classification performance. In comparison with the
hand-crafted features, the word2vec method won for most of the categories.
However, the hand-crafted features showed their strength in classifying some
of the categories.
| 2,017 | Computation and Language |
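
The winning baseline, averaging the word vectors of a sentence, is only a few
lines of code; the vocabulary below is a random stand-in for trained word2vec
embeddings:

```python
import numpy as np

def sentence_vector(tokens, word_vecs, dim=300):
    """Average the word vectors of a sentence (the simple baseline the
    paper found effective); out-of-vocabulary words are skipped.
    word_vecs is any token->vector mapping, e.g. trained word2vec."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy vocabulary standing in for trained word2vec embeddings.
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=300)
             for w in "we present a novel method".split()}
x = sentence_vector("we present a novel method".split(), word_vecs)
# x can now be fed to any supervised classifier over AZ categories.
```
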
Colors in Context: A Pragmatic Neural Model for Grounded Language
Understanding | We present a model of pragmatic referring expression interpretation in a
grounded communication task (identifying colors from descriptions) that draws
upon predictions from two recurrent neural network classifiers, a speaker and a
listener, unified by a recursive pragmatic reasoning framework. Experiments
show that this combined pragmatic model interprets color descriptions more
accurately than the classifiers from which it is built, and that much of this
improvement results from combining the speaker and listener perspectives. We
observe that pragmatic reasoning helps primarily in the hardest cases: when the
model must distinguish very similar colors, or when few utterances adequately
express the target color. Our findings make use of a newly-collected corpus of
human utterances in color reference games, which exhibit a variety of pragmatic
behaviors. We also show that the embedded speaker model reproduces many of
these pragmatic behaviors.
| 2,017 | Computation and Language |
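
The recursive pragmatic reasoning step can be illustrated with a toy
RSA-style computation: a pragmatic listener weighs each candidate color by how
likely the speaker model would have produced the observed utterance for it.
The values below are illustrative stand-ins for the neural classifiers'
outputs:

```python
import numpy as np

speaker = np.array([               # P(utterance | color), rows = colors
    [0.80, 0.15, 0.05],            # e.g. utterances "blue", "teal", "dark"
    [0.40, 0.50, 0.10],
    [0.30, 0.20, 0.50],
])
prior = np.array([1/3, 1/3, 1/3])  # uniform prior over the three colors

def pragmatic_listener(utt_idx):
    """Bayes rule over the speaker: P(c | u) proportional to P(u | c) P(c)."""
    scores = speaker[:, utt_idx] * prior
    return scores / scores.sum()   # normalize over colors

print(pragmatic_listener(1))       # belief over colors after hearing "teal"
```
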