Titles | Abstracts | Years | Categories
---|---|---|---|
Learning Semantically and Additively Compositional Distributional
Representations | This paper connects a vector-based composition model to a formal semantics,
the Dependency-based Compositional Semantics (DCS). We show theoretical
evidence that the vector compositions in our model conform to the logic of DCS.
Experimentally, we show that vector-based composition brings a strong ability
to map similar phrases to similar vectors, achieving near state-of-the-art
performance on a wide range of phrase similarity tasks and relation
classification; meanwhile, DCS can guide building vectors for structured
queries that can be directly executed. We evaluate this utility on a sentence
completion task and report a new state-of-the-art.
| 2016 | Computation and Language |
DefExt: A Semi Supervised Definition Extraction Tool | We present DefExt, an easy-to-use, semi-supervised Definition Extraction Tool.
DefExt is designed to extract from a target corpus those textual fragments
where a term is explicitly mentioned together with its core features, i.e. its
definition. It builds on a Conditional Random Fields-based
sequential labeling algorithm and a bootstrapping approach. Bootstrapping
enables the model to gradually become more aware of the idiosyncrasies of the
target corpus. In this paper we describe the main components of the toolkit as
well as experimental results stemming from both automatic and manual
evaluation. We release DefExt as open source along with the necessary files to
run it on any Unix machine. We also provide access to training and test data
for immediate use.
| 2016 | Computation and Language |
Coordination Annotation Extension in the Penn Tree Bank | Coordination is an important and common syntactic construction which is not
handled well by state-of-the-art parsers. Coordinations in the Penn Treebank
are missing internal structure in many cases, do not include explicit marking
of the conjuncts and contain various errors and inconsistencies. In this work,
we initiated a manual annotation process for solving these issues. We identify
the different elements in a coordination phrase and label each element with its
function. We add phrase boundaries when these are missing, unify
inconsistencies, and fix errors. The outcome is an extension of the PTB that
includes consistent and detailed structures for coordinations. We make the
coordination annotation publicly available, in the hope that it will facilitate
further research into coordination disambiguation.
| 2016 | Computation and Language |
Improving Recurrent Neural Networks For Sequence Labelling | In this paper we study different types of Recurrent Neural Networks (RNN) for
sequence labeling tasks. We propose two new variants of RNNs integrating
improvements for sequence labeling, and we compare them to the more traditional
Elman and Jordan RNNs. We compare all models, either traditional or new, on
four distinct tasks of sequence labeling: two on Spoken Language Understanding
(ATIS and MEDIA); and two on POS tagging, for the French Treebank (FTB) and the
Penn Treebank (PTB) corpora. The results show that our new variants of RNNs are
always more effective than the others.
| 2016 | Computation and Language |
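For context on the comparison above, here is a minimal numpy sketch (not from the paper) of the two traditional recurrences: an Elman network feeds the hidden state back, while a Jordan network feeds the previous output back. All sizes and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D_in, D_h, D_out, T = 8, 16, 5, 10             # hypothetical sizes

Wx = rng.normal(scale=0.1, size=(D_h, D_in))   # input -> hidden
Wh = rng.normal(scale=0.1, size=(D_h, D_h))    # hidden -> hidden (Elman)
Wy = rng.normal(scale=0.1, size=(D_h, D_out))  # prev. output -> hidden (Jordan)
Wo = rng.normal(scale=0.1, size=(D_out, D_h))  # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman(xs):
    """Elman RNN: the hidden state is fed back at each step."""
    h, outs = np.zeros(D_h), []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
        outs.append(softmax(Wo @ h))
    return outs

def jordan(xs):
    """Jordan RNN: the previous output (label distribution) is fed back."""
    y, outs = np.zeros(D_out), []
    for x in xs:
        h = np.tanh(Wx @ x + Wy @ y)
        y = softmax(Wo @ h)
        outs.append(y)
    return outs

xs = rng.normal(size=(T, D_in))
print(elman(xs)[-1], jordan(xs)[-1], sep="\n")
```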
A Joint Model for Word Embedding and Word Morphology | This paper presents a joint model for performing unsupervised morphological
analysis on words, and learning a character-level composition function from
morphemes to word embeddings. Our model splits individual words into segments,
and weights each segment according to its ability to predict context words. Our
morphological analysis is comparable to dedicated morphological analyzers at
the task of morpheme boundary recovery, and also performs better than
word-based embedding models at the task of syntactic analogy answering.
Finally, we show that incorporating morphology explicitly into character-level
models helps them produce embeddings for unseen words which correlate better
with human judgments.
| 2016 | Computation and Language |
Addressing Limited Data for Textual Entailment Across Domains | We seek to address the lack of labeled data (and high cost of annotation) for
textual entailment in some domains. To that end, we first create (for
experimental purposes) an entailment dataset for the clinical domain, and a
highly competitive supervised entailment system, ENT, that is effective (out of
the box) on two domains. We then explore self-training and active learning
strategies to address the lack of labeled data. With self-training, we
successfully exploit unlabeled data to improve over ENT by 15% F-score on the
newswire domain, and 13% F-score on clinical data. On the other hand, our
active learning experiments demonstrate that we can match (and even beat) ENT
using only 6.6% of the training data in the clinical domain, and only 5.8% of
the training data in the newswire domain.
| 2016 | Computation and Language |
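A minimal sketch of the self-training strategy described above, with a scikit-learn logistic regression standing in for the ENT system; the features, confidence threshold, and stopping rule are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Iteratively add confidently self-labeled examples to the training set."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold     # trust confident predictions only
        if not keep.any():
            break
        y_new = clf.classes_[proba[keep].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, y_new])
        X_unlab = X_unlab[~keep]                  # remove newly labeled points
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return clf

# toy usage: random features standing in for entailment-pair features
rng = np.random.default_rng(0)
X_l, y_l = rng.normal(size=(40, 10)), rng.integers(0, 2, 40)
model = self_train(X_l, y_l, rng.normal(size=(200, 10)), threshold=0.8)
```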
First Result on Arabic Neural Machine Translation | Neural machine translation has become a major alternative to widely used
phrase-based statistical machine translation. We note, however, that much of
the research on neural machine translation has focused on European languages
despite its language-agnostic nature. In this paper, we apply neural machine
translation to the task of Arabic translation (Ar<->En) and compare it against
a standard phrase-based translation system. We run an extensive comparison using
various configurations in preprocessing Arabic script and show that the
phrase-based and neural translation systems perform comparably to each other
and that proper preprocessing of Arabic script has a similar effect on both of
the systems. We observe, however, that the neural machine translation system
significantly outperforms the phrase-based system on an out-of-domain test set,
making it attractive for real-world deployment.
| 2016 | Computation and Language |
Continuously Learning Neural Dialogue Management | We describe a two-step approach for dialogue management in task-oriented
spoken dialogue systems. A unified neural network framework is proposed to
enable the system to first learn by supervision from a set of dialogue data and
then continuously improve its behaviour via reinforcement learning, all using
gradient-based algorithms on a single model. The experiments demonstrate the
supervised model's effectiveness in the corpus-based evaluation, with user
simulation, and with paid human subjects. The use of reinforcement learning
further improves the model's performance in both interactive settings,
especially under higher-noise conditions.
| 2016 | Computation and Language |
Neural Network-Based Abstract Generation for Opinions and Arguments | We study the problem of generating abstractive summaries for opinionated
text. We propose an attention-based neural network model that is able to absorb
information from multiple text units to construct informative, concise, and
fluent summaries. An importance-based sampling method is designed to allow the
encoder to integrate information from an important subset of the input. Automatic
evaluation indicates that our system outperforms state-of-the-art abstractive
and extractive summarization systems on two newly collected datasets of movie
reviews and arguments. Our system summaries are also rated as more informative
and grammatical in human evaluation.
| 2016 | Computation and Language |
Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora | A word's sentiment depends on the domain in which it is used. Computational
social science research thus requires sentiment lexicons that are specific to
the domains being studied. We combine domain-specific word embeddings with a
label propagation framework to induce accurate domain-specific sentiment
lexicons using small sets of seed words, achieving performance competitive with
state-of-the-art approaches that rely on hand-curated resources. Using our
framework we perform two large-scale empirical studies to quantify the extent
to which sentiment varies across time and between communities. We induce and
release historical sentiment lexicons for 150 years of English and
community-specific sentiment lexicons for 250 online communities from the
social media forum Reddit. The historical lexicons show that more than 5% of
sentiment-bearing (non-neutral) English words completely switched polarity
during the last 150 years, and the community-specific lexicons highlight how
sentiment varies drastically between different communities.
| 2016 | Computation and Language |
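A minimal sketch, in the spirit of the framework above, of inducing a sentiment lexicon by propagating seed polarity over a cosine nearest-neighbor graph built from domain-specific embeddings; the graph weighting and update rule are simplified assumptions.

```python
import numpy as np

def induce_lexicon(vectors, words, pos_seeds, neg_seeds, k=3, iters=50):
    """Propagate seed polarity over a cosine kNN graph of word embeddings."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = V @ V.T
    np.fill_diagonal(sim, -np.inf)                # no self-edges
    W = np.zeros_like(sim)
    for i in range(len(words)):                   # keep k strongest neighbors
        nn = np.argpartition(sim[i], -k)[-k:]
        W[i, nn] = np.clip(sim[i, nn], 0.0, None)
    W /= W.sum(axis=1, keepdims=True) + 1e-12     # row-normalize transitions

    idx = {w: i for i, w in enumerate(words)}
    seeds = np.zeros(len(words))
    for w in pos_seeds: seeds[idx[w]] = 1.0
    for w in neg_seeds: seeds[idx[w]] = -1.0

    score = seeds.copy()
    for _ in range(iters):                        # propagate, keep seeds anchored
        score = 0.5 * (W @ score) + 0.5 * seeds
    return dict(zip(words, score))

rng = np.random.default_rng(0)
words = ["good", "great", "bad", "awful", "table"]
print(induce_lexicon(rng.normal(size=(5, 20)), words, ["good"], ["bad"]))
```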
Cultural Shift or Linguistic Drift? Comparing Two Computational Measures
of Semantic Change | Words shift in meaning for many reasons, including cultural factors like new
technologies and regular linguistic processes like subjectification.
Understanding the evolution of language and culture requires disentangling
these underlying causes. Here we show how two different distributional measures
can be used to detect two different types of semantic change. The first
measure, which has been used in many previous works, analyzes global shifts in
a word's distributional semantics; it is sensitive to changes due to regular
processes of linguistic drift, such as the semantic generalization of promise
("I promise." -> "It promised to be exciting."). The second measure, which we
develop here, focuses on local changes to a word's nearest semantic neighbors;
it is more sensitive to cultural shifts, such as the change in the meaning of
cell ("prison cell" -> "cell phone"). Comparing measurements made by these two
methods allows researchers to determine whether changes are more cultural or
linguistic in nature, a distinction that is essential for work in the digital
humanities and historical linguistics.
| 2016 | Computation and Language |
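A minimal sketch of the two measures contrasted above, assuming word vectors from two time periods that are already aligned to a common space. The paper's local measure uses second-order similarity vectors; the neighbor-set overlap below is a simplification.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def global_shift(emb1, emb2, word):
    """Global measure: how far the word's own vector moved between periods."""
    return 1.0 - cosine(emb1[word], emb2[word])

def neighbors(emb, word, k):
    others = [w for w in emb if w != word]
    return set(sorted(others, key=lambda w: -cosine(emb[word], emb[w]))[:k])

def local_shift(emb1, emb2, word, k=3):
    """Local measure: turnover among the word's nearest semantic neighbors."""
    n1, n2 = neighbors(emb1, word, k), neighbors(emb2, word, k)
    return 1.0 - len(n1 & n2) / k

rng = np.random.default_rng(0)
vocab = ["cell", "prison", "phone", "biology", "promise", "future"]
emb1 = {w: rng.normal(size=8) for w in vocab}
emb2 = {w: v + rng.normal(scale=0.3, size=8) for w, v in emb1.items()}
print(global_shift(emb1, emb2, "cell"), local_shift(emb1, emb2, "cell"))
```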
A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task | Enabling a computer to understand a document so that it can answer
comprehension questions is a central, yet unsolved goal of NLP. A key factor
impeding its solution by machine-learned systems is the limited availability of
human-annotated data. Hermann et al. (2015) seek to solve this problem by
creating over a million training examples by pairing CNN and Daily Mail news
articles with their summarized bullet points, and show that a neural network
can then be trained to give good performance on this task. In this paper, we
conduct a thorough examination of this new reading comprehension task. Our
primary aim is to understand what depth of language understanding is required
to do well on this task. We approach this from one side by doing a careful
hand-analysis of a small subset of the problems and from the other by showing
that simple, carefully designed systems can obtain accuracies of 73.6% and
76.6% on these two datasets, exceeding current state-of-the-art results by
7-10% and approaching what we believe is the ceiling for performance on this
task.
| 2016 | Computation and Language |
Edinburgh Neural Machine Translation Systems for WMT 16 | We participated in the WMT 2016 shared news translation task by building
neural translation systems for four language pairs, each trained in both
directions: English<->Czech, English<->German, English<->Romanian and
English<->Russian. Our systems are based on an attentional encoder-decoder,
using BPE subword segmentation for open-vocabulary translation with a fixed
vocabulary. We experimented with using automatic back-translations of the
monolingual News corpus as additional training data, pervasive dropout, and
target-bidirectional models. All reported methods give substantial
improvements, and we see improvements of 4.3--11.2 BLEU over our baseline
systems. In the human evaluation, our systems were the (tied) best constrained
system for 7 out of 8 translation directions in which we participated.
| 2016 | Computation and Language |
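Since these systems rely on BPE subword segmentation, a minimal sketch of the core BPE merge-learning loop on a toy word-frequency table follows; real implementations add vocabulary thresholds and apply the learned merges at segmentation time.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Greedily merge the most frequent adjacent symbol pair (core of BPE)."""
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, f in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)          # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for sym, f in vocab.items():
            out, i = [], 0
            while i < len(sym):                   # replace occurrences of best
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1])
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] = new_vocab.get(tuple(out), 0) + f
        vocab = new_vocab
    return merges

print(learn_bpe({"lower": 5, "low": 7, "newest": 6, "widest": 3}, 8))
```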
Linguistic Input Features Improve Neural Machine Translation | Neural machine translation has recently achieved impressive results, while
using little in the way of external linguistic information. In this paper we
show that the strong learning capability of neural MT models does not make
linguistic features redundant; they can be easily incorporated to provide
further improvements in performance. We generalize the embedding layer of the
encoder in the attentional encoder--decoder architecture to support the
inclusion of arbitrary features, in addition to the baseline word feature. We
add morphological features, part-of-speech tags, and syntactic dependency
labels as input features to English<->German, and English->Romanian neural
machine translation systems. In experiments on WMT16 training and test sets, we
find that linguistic input features improve model quality according to three
metrics: perplexity, BLEU and CHRF3. An open-source implementation of our
neural MT system is available, as are sample files and configurations.
| 2016 | Computation and Language |
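A minimal numpy sketch of the generalized embedding layer described above: each input token carries several features, each feature type has its own embedding table, and the per-feature embeddings are concatenated. The table sizes and feature inventory are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical feature vocabularies and embedding sizes
tables = {
    "word":   rng.normal(size=(1000, 64)),
    "pos":    rng.normal(size=(50, 8)),
    "deplab": rng.normal(size=(40, 8)),
}

def embed_token(features):
    """Concatenate one embedding per input feature (word + linguistic features)."""
    return np.concatenate([tables[name][idx] for name, idx in features.items()])

token = {"word": 42, "pos": 7, "deplab": 3}    # indices into each vocabulary
vec = embed_token(token)
print(vec.shape)                                # (64 + 8 + 8,) = (80,)
```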
Sequence-to-Sequence Learning as Beam-Search Optimization | Sequence-to-Sequence (seq2seq) modeling has rapidly become an important
general-purpose NLP tool that has proven effective for many text-generation and
sequence-labeling tasks. Seq2seq builds on deep neural language modeling and
inherits its remarkable accuracy in estimating local, next-word distributions.
In this work, we introduce a model and beam-search training scheme, based on
the work of Daume III and Marcu (2005), that extends seq2seq to learn global
sequence scores. This structured approach avoids classical biases associated
with local training and unifies the training loss with the test-time usage,
while preserving the proven model architecture of seq2seq and its efficient
training approach. We show that our system outperforms a highly-optimized
attention-based seq2seq system and other baselines on three different
sequence-to-sequence tasks: word ordering, parsing, and machine translation.
| 2016 | Computation and Language |
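For orientation alongside the beam-search training scheme above, a minimal sketch of plain beam-search decoding over an arbitrary local scorer; the toy score_next function is a placeholder assumption, not the paper's model.

```python
import numpy as np

def beam_search(score_next, start, eos, beam_size=3, max_len=10):
    """Keep the beam_size best partial sequences by total log-probability."""
    beams = [([start], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            for tok, lp in score_next(seq):       # (token, log-prob) pairs
                candidates.append((seq + [tok], logp + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, logp in candidates[: beam_size * 2]:
            (finished if seq[-1] == eos else beams).append((seq, logp))
            if len(beams) == beam_size:
                break
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])

# toy next-token scorer: always prefers token 1, may emit EOS (token 0)
def score_next(seq):
    logits = np.array([0.5, 2.0, 1.0])
    logp = logits - np.log(np.exp(logits).sum())
    return list(enumerate(logp))

print(beam_search(score_next, start=-1, eos=0))
```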
Generative Topic Embedding: a Continuous Representation of Documents
(Extended Version with Proofs) | Word embedding maps words into a low-dimensional continuous embedding space
by exploiting the local word collocation patterns in a small context window. On
the other hand, topic modeling maps documents onto a low-dimensional topic
space, by utilizing the global word collocation patterns in the same document.
These two types of patterns are complementary. In this paper, we propose a
generative topic embedding model to combine the two types of patterns. In our
model, topics are represented by embedding vectors, and are shared across
documents. The probability of each word is influenced by both its local context
and its topic. A variational inference method yields the topic embeddings as
well as the topic mixing proportions for each document. Jointly they represent
the document in a low-dimensional continuous space. In two document
classification tasks, our method performs better than eight existing methods,
with fewer features. In addition, we illustrate with an example that our method
can generate coherent topics even based on only one document.
| 2016 | Computation and Language |
Key-Value Memory Networks for Directly Reading Documents | Directly reading documents and being able to answer questions from them is an
unsolved challenge. To avoid its inherent difficulty, question answering (QA)
has been directed towards using Knowledge Bases (KBs) instead, which has proven
effective. Unfortunately KBs often suffer from being too restrictive, as the
schema cannot support certain types of answers, and too sparse, e.g. Wikipedia
contains much more information than Freebase. In this work we introduce a new
method, Key-Value Memory Networks, that makes reading documents more viable by
utilizing different encodings in the addressing and output stages of the memory
read operation. To compare using KBs, information extraction or Wikipedia
documents directly in a single framework, we construct an analysis tool,
WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in
the domain of movies. Our method reduces the gap between all three settings. It
also achieves state-of-the-art results on the existing WikiQA benchmark.
| 2016 | Computation and Language |
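A minimal numpy sketch of a single key-value memory read as described above: the query addresses memory with a softmax over key encodings and returns an attention-weighted sum of value encodings. Dimensions are hypothetical; the real model adds learned projections and multiple hops.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kv_memory_read(query, keys, values):
    """Address with keys, read from values (one memory hop)."""
    attn = softmax(keys @ query)        # relevance of each memory slot
    return attn @ values                # weighted sum of value embeddings

rng = np.random.default_rng(0)
d, slots = 32, 100
q = rng.normal(size=d)                  # encoded question
K = rng.normal(size=(slots, d))         # e.g. encoded (subject, relation) keys
V = rng.normal(size=(slots, d))         # e.g. encoded object/answer values
o = kv_memory_read(q, K, V)
print(o.shape)
```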
PerSum: Novel Systems for Document Summarization in Persian | In this paper we explore the problem of document summarization in the Persian
language from two distinct angles. In our first approach, we modify a popular
and widely cited Persian document summarization framework to see how it works
on a realistic corpus of news articles. Human evaluation on generated summaries
shows that graph-based methods perform better than the modified systems. We
carry this intuition forward in our second approach, and probe deeper into the
nature of graph-based systems by designing several summarizers based on
centrality measures. Ad hoc evaluation using ROUGE score on these summarizers
suggests that there is a small class of centrality measures that perform better
than three strong unsupervised baselines.
| 2016 | Computation and Language |
Sentence Similarity Measures for Fine-Grained Estimation of Topical
Relevance in Learner Essays | We investigate the task of assessing sentence-level prompt relevance in
learner essays. Various systems using word overlap, neural embeddings and
neural compositional models are evaluated on two datasets of learner writing.
We propose a new method for sentence-level similarity calculation, which learns
to adjust the weights of pre-trained word embeddings for a specific task,
achieving substantially higher accuracy compared to other relevant baselines.
| 2017 | Computation and Language |
Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Process methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner can be considerably
bootstrapped using a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset.
| 2016 | Computation and Language |
Unsupervised Learning of Word-Sequence Representations from Scratch via
Convolutional Tensor Decomposition | Unsupervised text embedding extraction is crucial for text understanding in
machine learning. Word2Vec and its variants have received substantial success
in mapping words with similar syntactic or semantic meaning to vectors close to
each other. However, extracting context-aware word-sequence embedding remains a
challenging task. Training over a large corpus is difficult, as labels are
hard to obtain. More importantly, it is challenging for pre-trained models to
obtain word-sequence embeddings that are universally good for all downstream
tasks or for any new datasets. We propose a two-phased ConvDic+DeconvDec
framework to solve the problem by combining a word-sequence dictionary learning
model with a word-sequence embedding decode model. We propose a convolutional
tensor decomposition mechanism to learn a good word-sequence phrase dictionary in
the learning phase. It is proved to be more accurate and much more efficient
than the popular alternating minimization method. In the decode phase, we
introduce a deconvolution framework that is immune to the problem of varying
sentence lengths. The word-sequence embeddings we extracted using
ConvDic+DeconvDec are universally good for a few downstream tasks we test on.
The framework requires neither pre-training nor prior/outside information.
| 2018 | Computation and Language |
PSDVec: a Toolbox for Incremental and Scalable Word Embedding | PSDVec is a Python/Perl toolbox that learns word embeddings, i.e. the mapping
of words in a natural language to continuous vectors which encode the
semantic/syntactic regularities between the words. PSDVec implements a word
embedding learning method based on a weighted low-rank positive semidefinite
approximation. To scale up the learning process, we implement a blockwise
online learning algorithm to learn the embeddings incrementally. This strategy
greatly reduces the learning time of word embeddings on a large vocabulary, and
can learn the embeddings of new words without re-learning the whole vocabulary.
On 9 word similarity/analogy benchmark sets and 2 Natural Language Processing
(NLP) tasks, PSDVec produces embeddings that have the best average performance
among popular word embedding tools. PSDVec provides a new option for NLP
practitioners.
| 2016 | Computation and Language |
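A minimal sketch of deriving embeddings from a low-rank positive semidefinite approximation of a symmetric word-association matrix, in the spirit of PSDVec; the toy matrix and the unweighted eigendecomposition are assumptions, whereas PSDVec itself uses a weighted formulation with blockwise online updates.

```python
import numpy as np

def psd_embeddings(M, rank):
    """Factor a symmetric matrix as E @ E.T using its top eigenpairs."""
    S = (M + M.T) / 2                       # ensure symmetry
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1][:rank]   # largest eigenvalues first
    vals = np.clip(vals[order], 0, None)    # PSD: drop negative spectrum
    return vecs[:, order] * np.sqrt(vals)   # rows are word embeddings

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
M = A @ A.T / 50                            # toy stand-in for a PMI-like matrix
E = psd_embeddings(M, rank=10)
print(E.shape, np.linalg.norm(M - E @ E.T)) # reconstruction error
```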
Deep CNNs along the Time Axis with Intermap Pooling for Robustness to
Spectral Variations | Convolutional neural networks (CNNs) with convolutional and pooling
operations along the frequency axis have been proposed to attain invariance to
frequency shifts of features. However, this is at odds with the fact that
acoustic features vary in frequency. In this paper, we contend that
convolution along the time axis is more effective. We also propose the addition
of an intermap pooling (IMP) layer to deep CNNs. In this layer, filters in each
group extract common but spectrally variant features, then the layer pools the
feature maps of each group. As a result, the proposed IMP CNN can achieve
insensitivity to spectral variations characteristic of different speakers and
utterances. The effectiveness of the IMP CNN architecture is demonstrated on
several LVCSR tasks. Even without speaker adaptation techniques, the
architecture achieved a WER of 12.7% on the SWB part of the Hub5'2000
evaluation test set, which is competitive with other state-of-the-art methods.
| 2016 | Computation and Language |
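A minimal numpy sketch of the intermap pooling (IMP) idea above: feature maps are partitioned into groups and max-pooled across the maps within each group, so a group responds if any of its spectrally variant filters fires. Shapes are hypothetical.

```python
import numpy as np

def intermap_pool(feature_maps, group_size):
    """Max-pool across feature maps within each group of maps."""
    n_maps, t, f = feature_maps.shape
    assert n_maps % group_size == 0
    grouped = feature_maps.reshape(n_maps // group_size, group_size, t, f)
    return grouped.max(axis=1)              # one pooled map per group

rng = np.random.default_rng(0)
maps = rng.normal(size=(64, 20, 40))        # 64 maps over (time, frequency)
pooled = intermap_pool(maps, group_size=4)
print(pooled.shape)                         # (16, 20, 40)
```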
Natural Language Generation enhances human decision-making with
uncertain information | Decision-making is often dependent on uncertain data, e.g. data associated
with confidence scores or probabilities. We present a comparison of different
information presentations for uncertain data and, for the first time, measure
their effects on human decision-making. We show that the use of Natural
Language Generation (NLG) improves decision-making under uncertainty, compared
to state-of-the-art graphical-based representation methods. In a task-based
study with 442 adults, we found that presentations using NLG lead to 24% better
decision-making on average than the graphical presentations, and to 44% better
decision-making when NLG is combined with graphics. We also show that women
achieve significantly better results when presented with NLG output (an 87%
increase on average compared to graphical presentations).
| 2016 | Computation and Language |
WordNet2Vec: Corpora Agnostic Word Vectorization Method | The complex nature of big data resources demands new methods of structuring,
especially for textual content. WordNet is a good knowledge source for
comprehensive abstraction of natural language, as good implementations of it
exist for many languages. Since WordNet embeds natural language in the form of
a complex network, a transformation mechanism, WordNet2Vec, is proposed in this
paper. It creates a vector for each word in WordNet. These vectors encapsulate
the general position, or role, of a given word with respect to all other words
in the natural language. Any list or set of such vectors contains knowledge
about the context of its components within the whole language. Such a word
representation can be
easily applied to many analytic tasks like classification or clustering. The
usefulness of the WordNet2Vec method was demonstrated in sentiment analysis,
i.e. classification with transfer learning on a real Amazon opinion text
dataset.
| 2016 | Computation and Language |
Conditional Generation and Snapshot Learning in Neural Dialogue Systems | Recently a variety of LSTM-based conditional language models (LM) have been
applied across a range of language generation tasks. In this work we study
various model architectures and different ways to represent and aggregate the
source information in an end-to-end neural dialogue system framework. A method
called snapshot learning is also proposed to facilitate learning from
supervised sequential signals by applying a companion cross-entropy objective
function to the conditioning vector. The experimental and analytical results
demonstrate firstly that competition occurs between the conditioning vector and
the LM, and that the differing architectures provide different trade-offs between
the two. Secondly, the discriminative power and transparency of the
conditioning vector is key to providing both model interpretability and better
performance. Thirdly, snapshot learning leads to consistent performance
improvements independent of which architecture is used.
| 2016 | Computation and Language |
Simple Question Answering by Attentive Convolutional Neural Network | This work focuses on answering single-relation factoid questions over
Freebase. Each question can acquire the answer from a single fact of form
(subject, predicate, object) in Freebase. This task, simple question answering
(SimpleQA), can be addressed via a two-step pipeline: entity linking and fact
selection. In fact selection, we match the subject entity in a fact candidate
with the entity mention in the question by a character-level convolutional
neural network (char-CNN), and match the predicate in that fact with the
question by a word-level CNN (word-CNN). This work makes two main
contributions. (i) A simple and effective entity linker over Freebase is
proposed. Our entity linker outperforms the state-of-the-art entity linker on
the SimpleQA task. (ii) A novel attentive maxpooling is stacked over word-CNN, so
that the predicate representation can be matched with the predicate-focused
question representation more effectively. Experiments show that our system sets
a new state of the art on this task.
| 2016 | Computation and Language |
Bootstrapping Distantly Supervised IE using Joint Learning and Small
Well-structured Corpora | We propose a framework to improve performance of distantly-supervised
relation extraction, by jointly learning to solve two related tasks:
concept-instance extraction and relation extraction. We combine this with a
novel use of document structure: in some small, well-structured corpora,
sections can be identified that correspond to relation arguments, and
distantly-labeled examples from such sections tend to have good precision.
Using these as seeds we extract additional relation examples by applying label
propagation on a graph composed of noisy examples extracted from a large
unstructured testing corpus. Combined with the soft constraint that concept
examples should have the same type as the second argument of the relation, we
get significant improvements over several state-of-the-art approaches to
distantly-supervised relation extraction.
| 2016 | Computation and Language |
De-identification of Patient Notes with Recurrent Neural Networks | Objective: Patient notes in electronic health records (EHRs) may contain
critical information for medical investigations. However, the vast majority of
medical investigators can only access de-identified notes, in order to protect
the confidentiality of patients. In the United States, the Health Insurance
Portability and Accountability Act (HIPAA) defines 18 types of protected health
information (PHI) that need to be removed to de-identify patient notes. Manual
de-identification is impractical given the size of EHR databases, the limited
number of researchers with access to the non-de-identified notes, and the
frequent mistakes of human annotators. A reliable automated de-identification
system would consequently be of high value.
Materials and Methods: We introduce the first de-identification system based
on artificial neural networks (ANNs), which requires no handcrafted features or
rules, unlike existing systems. We compare the performance of the system with
state-of-the-art systems on two datasets: the i2b2 2014 de-identification
challenge dataset, which is the largest publicly available de-identification
dataset, and the MIMIC de-identification dataset, which we assembled and is
twice as large as the i2b2 2014 dataset.
Results: Our ANN model outperforms the state-of-the-art systems. It yields an
F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a
precision of 97.32, and an F1-score of 99.23 on the MIMIC de-identification
dataset, with a recall of 99.25 and a precision of 99.06.
Conclusion: Our findings support the use of ANNs for de-identification of
patient notes, as they show better performance than previously published
systems while requiring no feature engineering.
| 2016 | Computation and Language |
Word Sense Disambiguation using a Bidirectional LSTM | In this paper we present a clean, yet effective, model for word sense
disambiguation. Our approach leverages a bidirectional long short-term memory
network which is shared between all words. This enables the model to share
statistical strength and to scale well with vocabulary size. The model is
trained end-to-end, directly from the raw text to sense labels, and makes
effective use of word order. We evaluate our approach on two standard datasets,
using identical hyperparameter settings, which are in turn tuned on a third set
of held-out data. We employ no external resources (e.g. knowledge graphs,
part-of-speech tagging, etc.), language-specific features, or hand-crafted
rules, but still achieve results statistically equivalent to the best
state-of-the-art systems, which operate under no such limitations.
| 2016 | Computation and Language |
Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent neural network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision.
| 2016 | Computation and Language |
Natural Language Generation in Dialogue using Lexicalized and
Delexicalized Data | Natural language generation plays a critical role in spoken dialogue systems.
We present a new approach to natural language generation for task-oriented
dialogue using recurrent neural networks in an encoder-decoder framework. In
contrast to previous work, our model uses both lexicalized and delexicalized
components, i.e. slot-value pairs for dialogue acts, with slots and
corresponding values aligned together. This allows our model to learn from all
available data including the slot-value pairing, rather than being restricted
to delexicalized slots. We show that this helps our model generate more natural
sentences with better grammar. We further improve our model's performance by
transferring weights learnt from a pretrained sentence auto-encoder. Human
evaluation of our best-performing model indicates that it generates sentences
which users find more appealing.
| 2017 | Computation and Language |
Deep Reinforcement Learning with a Combinatorial Action Space for
Predicting Popular Reddit Threads | We introduce an online popularity prediction and tracking task as a benchmark
task for reinforcement learning with a combinatorial, natural language action
space. A specified number of discussion threads predicted to be popular are
recommended, chosen from a fixed window of recent comments to track. Novel deep
reinforcement learning architectures are studied for effective modeling of the
value function associated with actions comprised of interdependent sub-actions.
The proposed model, which represents dependence between sub-actions through a
bi-directional LSTM, gives the best performance across different experimental
configurations and domains, and it also generalizes well with varying numbers
of recommendation requests.
| 2016 | Computation and Language |
External Lexical Information for Multilingual Part-of-Speech Tagging | Morphosyntactic lexicons and word vector representations have both proven
useful for improving the accuracy of statistical part-of-speech taggers. Here
we compare the performances of four systems on datasets covering 16 languages,
two of these systems being feature-based (MEMMs and CRFs) and two of them being
neural-based (bi-LSTMs). We show that, on average, all four approaches perform
similarly and reach state-of-the-art results. Yet better performance is
obtained with our feature-based models on lexically richer datasets (e.g. for
morphologically rich languages), whereas neural-based results are higher on
datasets with less lexical variability (e.g. for English). These conclusions
hold in particular for the MEMM models relying on our system MElt, which
benefited from newly designed features. This shows that, under certain
conditions, feature-based approaches enriched with morphosyntactic lexicons are
competitive with respect to neural methods.
| 2016 | Computation and Language |
Neural Belief Tracker: Data-Driven Dialogue State Tracking | One of the core components of modern spoken dialogue systems is the belief
tracker, which estimates the user's goal at every step of the dialogue.
However, most current approaches have difficulty scaling to larger, more
complex dialogue domains. This is due to their dependency on either: a) Spoken
Language Understanding models that require large amounts of annotated training
data; or b) hand-crafted lexicons for capturing some of the linguistic
variation in users' language. We propose a novel Neural Belief Tracking (NBT)
framework which overcomes these problems by building on recent advances in
representation learning. NBT models reason over pre-trained word vectors,
learning to compose them into distributed representations of user utterances
and dialogue context. Our evaluation on two datasets shows that this approach
surpasses past limitations, matching the performance of state-of-the-art models
which rely on hand-crafted semantic lexicons and outperforming them when such
lexicons are not provided.
| 2017 | Computation and Language |
Learning to Generate Compositional Color Descriptions | The production of color language is essential for grounded language
generation. Color descriptions have many challenging properties: they can be
vague, compositionally complex, and denotationally rich. We present an
effective approach to generating color descriptions using recurrent neural
networks and a Fourier-transformed color representation. Our model outperforms
previous work on a conditional language modeling task over a large corpus of
naturalistic color descriptions. In addition, probing the model's output
reveals that it can accurately produce not only basic color terms but also
descriptors with non-convex denotations ("greenish"), bare modifiers ("bright",
"dull"), and compositional phrases ("faded teal") not seen in training.
| 2016 | Computation and Language |
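A minimal sketch of a Fourier-transformed color representation of the kind the model above conditions on: a color in a normalized 3-d space is mapped to sines and cosines of integer frequency combinations. The exact frequency set is an assumption.

```python
import numpy as np
from itertools import product

def fourier_color_features(hsv, max_freq=2):
    """Map a color in [0,1]^3 to a Fourier feature vector."""
    hsv = np.asarray(hsv)
    feats = []
    for fx, fy, fz in product(range(max_freq + 1), repeat=3):
        phase = 2 * np.pi * (fx * hsv[0] + fy * hsv[1] + fz * hsv[2])
        feats.extend([np.cos(phase), np.sin(phase)])
    return np.array(feats)

print(fourier_color_features([0.33, 0.8, 0.5]).shape)   # (2 * 27,) = (54,)
```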
Dialog state tracking, a machine reading approach using Memory Network | In an end-to-end dialog system, the aim of dialog state tracking is to
accurately estimate a compact representation of the current dialog status from
a sequence of noisy observations produced by the speech recognition and the
natural language understanding modules. This paper introduces a novel method of
dialog state tracking based on the general paradigm of machine reading and
proposes to solve it using an End-to-End Memory Network, MemN2N, a
memory-enhanced neural network architecture. We evaluate the proposed approach
on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has
been converted for this purpose in order to frame the hidden state variable
inference as a question-answering task based on a sequence of utterances
extracted from a dialog. We show that the proposed tracker gives encouraging
results. Then, we propose to extend the DSTC-2 dataset with requirements for
specific reasoning capabilities such as counting, list maintenance, yes-no
question answering and indefinite knowledge management. Finally, we present
encouraging results using our proposed MemN2N-based tracking model.
| 2017 | Computation and Language |
Graph-Community Detection for Cross-Document Topic Segment Relationship
Identification | In this paper we propose a graph-community detection approach to identify
cross-document relationships at the topic segment level. Given a set of related
documents, we automatically find these relationships by clustering segments
with similar content (topics). In this context, we study how different
weighting mechanisms influence the discovery of word communities that relate to
the different topics found in the documents. Finally, we test different mapping
functions to assign topic segments to word communities, determining which topic
segments are considered equivalent.
By performing this task it is possible to enable efficient multi-document
browsing, since when a user finds relevant content in one document we can
provide access to similar topics in other documents. We deploy our approach in
two different scenarios. One is an educational scenario where equivalence
relationships between learning materials need to be found. The other consists
of a series of dialogs in a social context where students discuss commonplace
topics. Results show that our proposed approach better discovered equivalence
relationships in learning material documents and obtained close results in the
social speech domain, where the best-performing approach was a clustering
technique.
| 2016 | Computation and Language |
Rationalizing Neural Predictions | Prediction without justification has limited applicability. As a remedy, we
learn to extract pieces of input text as justifications -- rationales -- that
are tailored to be short and coherent, yet sufficient for making the same
prediction. Our approach combines two modular components, generator and
encoder, which are trained to operate well together. The generator specifies a
distribution over text fragments as candidate rationales and these are passed
through the encoder for prediction. Rationales are never given during training.
Instead, the model is regularized by desiderata for rationales. We evaluate the
approach on multi-aspect sentiment analysis against manually annotated test
cases. Our approach outperforms an attention-based baseline by a significant
margin. We also successfully illustrate the method on the question retrieval
task.
| 2016 | Computation and Language |
Zero-Resource Translation with Multi-Lingual Neural Machine Translation | In this paper, we propose a novel finetuning algorithm for the recently
introduced multi-way, multilingual neural machine translation model that enables
zero-resource machine translation. When used together with novel many-to-one
translation strategies, we empirically show that this finetuning algorithm
allows the multi-way, multilingual model to translate a zero-resource language
pair (1) as well as a single-pair neural translation model trained with up to
1M direct parallel sentences of the same language pair and (2) better than
a pivot-based translation strategy, while keeping only one additional copy of
attention-related parameters.
| 2016 | Computation and Language |
Deep Recurrent Models with Fast-Forward Connections for Neural Machine
Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task.
| 2016 | Computation and Language |
Active Discriminative Text Representation Learning | We propose a new active learning (AL) method for text classification with
convolutional neural networks (CNNs). In AL, one selects the instances to be
manually labeled with the aim of maximizing model performance with minimal
effort. Neural models capitalize on word embeddings as representations
(features), tuning these to the task at hand. We argue that AL strategies for
multi-layered neural models should focus on selecting instances that most
affect the embedding space (i.e., induce discriminative word representations).
This is in contrast to traditional AL approaches (e.g., entropy-based
uncertainty sampling), which specify higher level objectives. We propose a
simple approach for sentence classification that selects instances containing
words whose embeddings are likely to be updated with the greatest magnitude,
thereby rapidly learning discriminative, task-specific embeddings. We extend
this approach to document classification by jointly considering: (1) the
expected changes to the constituent word representations; and (2) the model's
current overall uncertainty regarding the instance. The relative emphasis
placed on these criteria is governed by a stochastic process that favors
selecting instances likely to improve representations at the outset of
learning, and then shifts toward general uncertainty sampling as AL progresses.
Empirical results show that our method outperforms baseline AL approaches on
both sentence and document classification tasks. We also show that, as
expected, the method quickly learns discriminative word embeddings. To the best
of our knowledge, this is the first work on AL addressing neural models for
text classification.
| 2016 | Computation and Language |
Cross-Lingual Morphological Tagging for Low-Resource Languages | Morphologically rich languages often lack the annotated linguistic resources
required to develop accurate natural language processing tools. We propose
models suitable for training morphological taggers with rich tagsets for
low-resource languages without using direct supervision. Our approach extends
existing approaches of projecting part-of-speech tags across languages, using
bitext to infer constraints on the possible tags for a given word type or
token. We propose a tagging model using Wsabie, a discriminative
embedding-based model with rank-based learning. In our evaluation on 11
languages, on average this model performs on par with a baseline
weakly-supervised HMM, while being more scalable. Multilingual experiments show
that the method performs best when projecting between related language pairs.
Despite the inherently lossy projection, we show that the morphological tags
predicted by our models improve the downstream performance of a parser by +0.6
LAS on average.
| 2016 | Computation and Language |
Automatic Text Scoring Using Neural Networks | Automated Text Scoring (ATS) provides a cost-effective and consistent
alternative to human marking. However, in order to achieve good performance,
the predictive features of the system need to be manually engineered by human
experts. We introduce a model that forms word representations by learning the
extent to which specific words contribute to the text's score. Using Long
Short-Term Memory networks to represent the meaning of texts, we demonstrate that a
fully automated framework is able to achieve excellent results over similar
approaches. In an attempt to make our results more interpretable, and inspired
by recent advances in visualizing neural networks, we introduce a novel method
for identifying the regions of the text that the model has found more
discriminative.
| 2017 | Computation and Language |
Neural Word Segmentation Learning for Chinese | Most previous approaches to Chinese word segmentation formalize this problem
as a character-based sequence labeling task where only contextual information
within fixed sized local windows and simple interactions between adjacent tags
can be captured. In this paper, we propose a novel neural framework which
thoroughly eliminates context windows and can utilize complete segmentation
history. Our model employs a gated combination neural network over characters
to produce distributed representations of word candidates, which are then given
to a long short-term memory (LSTM) language scoring model. Experiments on the
benchmark datasets show that, without the feature engineering used by most
existing approaches, our models achieve performance competitive with or better
than previous state-of-the-art methods.
| 2016 | Computation and Language |
TwiSE at SemEval-2016 Task 4: Twitter Sentiment Classification | This paper describes the participation of the team "TwiSE" in the SemEval
2016 challenge. Specifically, we participated in Task 4, namely "Sentiment
Analysis in Twitter" for which we implemented sentiment classification systems
for subtasks A, B, C and D. Our approach consists of two steps. In the first
step, we generate and validate diverse feature sets for Twitter sentiment
evaluation, inspired by the work of participants of previous editions of such
challenges. In the second step, we focus on the optimization of the evaluation
measures of the different subtasks. To this end, we examine different learning
strategies by validating them on the data provided by the task organisers. For
our final submissions we used an ensemble learning approach (stacked
generalization) for Subtask A and single linear models for the rest of the
subtasks. In the official leaderboard we were ranked 9/35, 8/19, 1/11 and 2/14
for subtasks A, B, C and D respectively.\footnote{We make the code available
for research purposes at
\url{https://github.com/balikasg/SemEval2016-Twitter\_Sentiment\_Evaluation}.}
| 2016 | Computation and Language |
Shallow Discourse Parsing Using Distributed Argument Representations and
Bayesian Optimization | This paper describes the Georgia Tech team's approach to the CoNLL-2016
supplementary evaluation on discourse relation sense classification. We use
long short-term memories (LSTM) to induce distributed representations of each
argument, and then combine these representations with surface features in a
neural network. The architecture of the neural network is determined by
Bayesian hyperparameter search.
| 2016 | Computation and Language |
Query-Reduction Networks for Question Answering | In this paper, we study the problem of question answering when reasoning over
multiple facts is required. We propose Query-Reduction Network (QRN), a variant
of Recurrent Neural Network (RNN) that effectively handles both short-term
(local) and long-term (global) sequential dependencies to reason over multiple
facts. QRN considers the context sentences as a sequence of state-changing
triggers, and reduces the original query to a more informed query as it
observes each trigger (context sentence) through time. Our experiments show
that QRN produces state-of-the-art results on bAbI QA and dialog tasks, and
on a real goal-oriented dialog dataset. In addition, the QRN formulation allows
parallelization over the RNN's time axis, saving an order of magnitude in time
complexity for training and inference.
| 2017 | Computation and Language |
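A minimal numpy sketch of the query-reduction idea above: each context sentence acts as a trigger that gates an update of the query vector. The gate and candidate functions are simplified assumptions relative to the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
Wz = rng.normal(scale=0.3, size=(1, 2 * d))    # update-gate parameters
Wq = rng.normal(scale=0.3, size=(d, 2 * d))    # candidate-query parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reduce_query(query, sentences):
    """Pass the query through the sentence sequence, reducing it stepwise."""
    q = query
    for s in sentences:                        # each sentence is a trigger
        zs = np.concatenate([q, s])
        z = sigmoid(Wz @ zs)[0]                # how much to update
        q_cand = np.tanh(Wq @ zs)              # reduced (more informed) query
        q = z * q_cand + (1.0 - z) * q         # gated interpolation
    return q

query = rng.normal(size=d)
context = rng.normal(size=(5, d))              # 5 encoded context sentences
print(reduce_query(query, context)[:4])
```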
Semi-Supervised Learning for Neural Machine Translation | While end-to-end neural machine translation (NMT) has made remarkable
progress recently, NMT systems only rely on parallel corpora for parameter
estimation. Since parallel corpora are usually limited in quantity, quality,
and coverage, especially for low-resource languages, it is appealing to exploit
monolingual corpora to improve NMT. We propose a semi-supervised approach for
training NMT models on the concatenation of labeled (parallel corpora) and
unlabeled (monolingual corpora) data. The central idea is to reconstruct the
monolingual corpora using an autoencoder, in which the source-to-target and
target-to-source translation models serve as the encoder and decoder,
respectively. Our approach can not only exploit the monolingual corpora of the
target language, but also of the source language. Experiments on the
Chinese-English dataset show that our approach achieves significant
improvements over state-of-the-art SMT and NMT systems.
| 2016 | Computation and Language |
Agreement-based Learning of Parallel Lexicons and Phrases from
Non-Parallel Corpora | We introduce an agreement-based approach to learning parallel lexicons and
phrases from non-parallel corpora. The basic idea is to encourage two
asymmetric latent-variable translation models (i.e., source-to-target and
target-to-source) to agree on identifying latent phrase and word alignments.
The agreement is defined at both word and phrase levels. We develop a Viterbi
EM algorithm for jointly training the two unidirectional models efficiently.
Experiments on the Chinese-English dataset show that agreement-based learning
significantly improves both alignment and translation performance.
| 2016 | Computation and Language |
Siamese CBOW: Optimizing Word Embeddings for Sentence Representations | We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural
network for efficient estimation of high-quality sentence embeddings. Averaging
the embeddings of words in a sentence has proven to be a surprisingly
successful and efficient way of obtaining sentence embeddings. However, word
embeddings trained with the methods currently available are not optimized for
the task of sentence representation, and, thus, likely to be suboptimal.
Siamese CBOW handles this problem by training word embeddings directly for the
purpose of being averaged. The underlying neural network learns word embeddings
by predicting, from a sentence representation, its surrounding sentences. We
show the robustness of the Siamese CBOW model by evaluating it on 20 datasets
stemming from a wide variety of sources.
| 2016 | Computation and Language |
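A minimal sketch of the Siamese CBOW training signal: a sentence embedding is the average of its word embeddings, and a cosine softmax pushes a sentence toward its neighboring sentences and away from sampled negatives. The tokenization and negative sampling here are toy assumptions.

```python
import numpy as np

def sent_embed(emb, sentence):
    """Sentence embedding = average of its word embeddings."""
    return np.mean([emb[w] for w in sentence.split()], axis=0)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def siamese_cbow_loss(emb, target, positives, negatives):
    """Cross-entropy over cosine similarities: neighbors vs. random sentences."""
    t = sent_embed(emb, target)
    sims = np.array([cosine(t, sent_embed(emb, s)) for s in positives + negatives])
    p = np.exp(sims) / np.exp(sims).sum()
    return -np.log(p[: len(positives)]).sum()  # maximize mass on true neighbors

rng = np.random.default_rng(0)
vocab = "the cat sat on a mat dogs bark loudly".split()
emb = {w: rng.normal(size=8) for w in vocab}
loss = siamese_cbow_loss(emb, "the cat sat",
                         positives=["on a mat"], negatives=["dogs bark loudly"])
print(loss)
```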
Constitutional Precedent of Amicus Briefs | We investigate shared language between U.S. Supreme Court majority opinions
and interest groups' corresponding amicus briefs. Specifically, we evaluate
whether language that originated in an amicus brief acquired legal precedent
status by being cited in the Court's opinion. Using plagiarism detection
software, automated querying of a large legal database, and manual analysis, we
establish seven instances where interest group amici were able to formulate
constitutional case law, setting binding legal precedent. We discuss several
such instances for their implications in the Supreme Court's creation of case
law.
| 2016 | Computation and Language |
Natural Language Generation as Planning under Uncertainty Using
Reinforcement Learning | We present and evaluate a new model for Natural Language Generation (NLG) in
Spoken Dialogue Systems, based on statistical planning, given noisy feedback
from the current generation context (e.g. a user and a surface realiser). We
study its use in a standard NLG problem: how to present information (in this
case a set of search results) to users, given the complex trade-offs between
utterance length, amount of information conveyed, and cognitive load. We set
these trade-offs by analysing existing MATCH data. We then train an NLG policy
using Reinforcement Learning (RL), which adapts its behaviour to noisy feedback
from the current generation context. This policy is compared to several
baselines derived from previous work in this area. The learned policy
significantly outperforms all the prior approaches.
| 2016 | Computation and Language |
A Correlational Encoder Decoder Architecture for Pivot Based Sequence
Generation | Interlingua based Machine Translation (MT) aims to encode multiple languages
into a common linguistic representation and then decode sentences in multiple
target languages from this representation. In this work we explore this idea in
the context of neural encoder decoder architectures, albeit on a smaller scale
and without MT as the end goal. Specifically, we consider the case of three
languages or modalities X, Z and Y wherein we are interested in generating
sequences in Y starting from information available in X. However, there is no
parallel training data available between X and Y but, training data is
available between X & Z and Z & Y (as is often the case in many real world
applications). Z thus acts as a pivot/bridge. An obvious solution, which is
perhaps less elegant but works very well in practice, is to train a two-stage
model which first converts from X to Z and then from Z to Y. Instead we explore
an interlingua inspired solution which jointly learns to do the following (i)
encode X and Z to a common representation and (ii) decode Y from this common
representation. We evaluate our model on two tasks: (i) bridge transliteration
and (ii) bridge captioning. We report promising results in both these
applications and believe that this is a step in the right direction towards
truly interlingua-inspired encoder-decoder architectures.
| 2016 | Computation and Language |
Learning Word Sense Embeddings from Word Sense Definitions | Word embeddings play a significant role in many modern NLP systems. Since
learning one representation per word is problematic for polysemous words and
homonymous words, researchers propose to use one embedding per word sense.
Their approaches mainly train word sense embeddings on a corpus. In this paper,
we propose to use word sense definitions to learn one embedding per word sense.
Experimental results on word similarity tasks and a word sense disambiguation
task show that word sense embeddings produced by our approach are of high
quality.
| 2016 | Computation and Language |
Smart Reply: Automated Response Suggestion for Email | In this paper we propose and investigate a novel end-to-end method for
automatically generating short email responses, called Smart Reply. It
generates semantically diverse suggestions that can be used as complete email
responses with just one tap on mobile. The system is currently used in Inbox by
Gmail and is responsible for assisting with 10% of all mobile responses. It is
designed to work at very high throughput and process hundreds of millions of
messages daily. The system exploits state-of-the-art, large-scale deep
learning.
We describe the architecture of the system as well as the challenges that we
faced while building it, like response diversity and scalability. We also
introduce a new method for semantic clustering of user-generated content that
requires only a modest amount of explicitly labeled data.
| 2016 | Computation and Language |
The Edit Distance Transducer in Action: The University of Cambridge
English-German System at WMT16 | This paper presents the University of Cambridge submission to WMT16.
Motivated by the complementary nature of syntactical machine translation and
neural machine translation (NMT), we exploit the synergies of Hiero and NMT in
different combination schemes. Starting out with a simple neural lattice
rescoring approach, we show that the Hiero lattices are often too narrow for
NMT ensembles. Therefore, instead of a hard restriction of the NMT search space
to the lattice, we propose to loosely couple NMT and Hiero by composition with
a modified version of the edit distance transducer. The loose combination
outperforms lattice rescoring, especially when using multiple NMT systems in an
ensemble.
| 2,016 | Computation and Language |
Automatic Pronunciation Generation by Utilizing a Semi-supervised Deep
Neural Networks | Phonemic or phonetic sub-word units are the most commonly used atomic
elements used to represent speech signals in modern ASRs. However, they are
not the optimal choice for several reasons: the large amount of effort
required to handcraft a pronunciation dictionary, pronunciation variations,
human mistakes, and under-resourced dialects and languages. Here, we propose a
data-driven pronunciation estimation and acoustic modeling method which only
takes the orthographic transcription to jointly estimate a set of sub-word
units and a reliable dictionary. Experimental results show that the proposed
method which is based on semi-supervised training of a deep neural network
largely outperforms phoneme based continuous speech recognition on the TIMIT
dataset.
| 2,016 | Computation and Language |
No Need to Pay Attention: Simple Recurrent Neural Networks Work! (for
Answering "Simple" Questions) | First-order factoid question answering assumes that the question can be
answered by a single fact in a knowledge base (KB). While this does not seem
like a challenging task, many recent attempts that apply either complex
linguistic reasoning or deep neural networks achieve 65%-76% accuracy on
benchmark sets. Our approach formulates the task as two machine learning
problems: detecting the entities in the question, and classifying the question
as one of the relation types in the KB. We train a recurrent neural network to
solve each problem. On the SimpleQuestions dataset, our approach yields
substantial improvements over previously published results --- even neural
networks based on much more complex architectures. The simplicity of our
approach also has practical advantages, such as efficiency and modularity, that
are valuable especially in an industry setting. In fact, we present a
preliminary analysis of the performance of our model on real queries from
Comcast's X1 entertainment platform with millions of users every day.
| 2,017 | Computation and Language |
SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com
| 2,016 | Computation and Language |
Spectral decomposition method of dialog state tracking via collective
matrix factorization | The task of dialog management is commonly decomposed into two sequential
subtasks: dialog state tracking and dialog policy learning. In an end-to-end
dialog system, the aim of dialog state tracking is to accurately estimate the
true dialog state from noisy observations produced by the speech recognition
and the natural language understanding modules. The state tracking task is
primarily meant to support a dialog policy. From a probabilistic perspective,
this is achieved by maintaining a posterior distribution over hidden dialog
states composed of a set of context dependent variables. Once a dialog policy
is learned, it strives to select an optimal dialog act given the estimated
dialog state and a defined reward function. This paper introduces a novel
method of dialog state tracking based on a bilinear algebraic decomposition
model that provides an efficient inference schema through collective matrix
factorization. We evaluate the proposed approach on the second Dialog State
Tracking Challenge (DSTC-2) dataset and we show that the proposed tracker gives
encouraging results compared to the state-of-the-art trackers that participated
in this standard benchmark. Finally, we show that the prediction schema is
computationally efficient in comparison to the previous approaches.
| 2,016 | Computation and Language |
Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collect
three new context-dependent semantic parsing datasets and develop a new
left-to-right parser.
| 2,016 | Computation and Language |
Sense Embedding Learning for Word Sense Induction | Conventional word sense induction (WSI) methods usually represent each
instance with discrete linguistic features or cooccurrence features, and train
a model for each polysemous word individually. In this work, we propose to
learn sense embeddings for the WSI task. In the training stage, our method
induces several sense centroids (embedding) for each polysemous word. In the
testing stage, our method represents each instance as a contextual vector, and
induces its sense by finding the nearest sense centroid in the embedding space.
The advantages of our method are (1) distributed sense vectors are taken as the
knowledge representations which are trained discriminatively, and usually have
better performance than traditional count-based distributional models, and (2)
a general model for the whole vocabulary is jointly trained to induce sense
centroids under the multitask learning framework. Evaluated on SemEval-2010 WSI
dataset, our method outperforms all participants and most of the recent
state-of-the-art methods. We further verify the two advantages by comparing
with carefully designed baselines.
| 2,016 | Computation and Language |
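To make the train/test procedure above concrete, here is a minimal Python
sketch of the testing stage, assuming sense centroids have already been
induced; the array shapes and the cosine-similarity choice are illustrative,
not the authors' exact implementation.

```python
import numpy as np

def induce_sense(context_vector: np.ndarray, sense_centroids: np.ndarray) -> int:
    """Return the index of the nearest sense centroid by cosine similarity."""
    c = context_vector / np.linalg.norm(context_vector)
    s = sense_centroids / np.linalg.norm(sense_centroids, axis=1, keepdims=True)
    return int(np.argmax(s @ c))

# Toy example: a polysemous word with two induced senses.
centroids = np.array([[1.0, 0.0], [0.0, 1.0]])  # one row per sense centroid
context = np.array([0.9, 0.1])                  # contextual vector of an instance
print(induce_sense(context, centroids))         # -> 0
```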
Stance Detection with Bidirectional Conditional Encoding | Stance detection is the task of classifying the attitude expressed in a text
towards a target such as Hillary Clinton to be "positive", "negative" or
"neutral". Previous work has assumed that either the target is mentioned in the
text or that training data for every target is given. This paper considers the
more challenging version of this task, where targets are not always mentioned
and no training data is available for the test targets. We experiment with
conditional LSTM encoding, which builds a representation of the tweet that is
dependent on the target, and demonstrate that it outperforms encoding the tweet
and the target independently. Performance is improved further when the
conditional model is augmented with bidirectional encoding. We evaluate our
approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus achieving
performance second best only to a system trained on semi-automatically labelled
tweets for the test target. When such weak supervision is added, our approach
achieves state-of-the-art results.
| 2,016 | Computation and Language |
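A minimal PyTorch sketch of conditional encoding as the abstract describes it:
the target is encoded first, and its final LSTM state initialises the encoder
that reads the tweet. The class name, dimensions, and unidirectional setup are
assumptions for illustration; the paper's best model additionally encodes
bidirectionally.

```python
import torch
import torch.nn as nn

class ConditionalEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hid_dim=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.target_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.tweet_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_classes)  # positive / negative / neutral

    def forward(self, target_ids, tweet_ids):
        # Encode the target; (h, c) summarises it.
        _, state = self.target_lstm(self.emb(target_ids))
        # Encode the tweet conditioned on the target's final state.
        _, (h, _) = self.tweet_lstm(self.emb(tweet_ids), state)
        return self.out(h[-1])

model = ConditionalEncoder()
logits = model(torch.randint(0, 10000, (2, 4)),   # batch of targets
               torch.randint(0, 10000, (2, 20)))  # batch of tweets
print(logits.shape)  # torch.Size([2, 3])
```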
Gender Inference using Statistical Name Characteristics in Twitter | Much attention has been given to the task of gender inference of Twitter
users. Although names are strong gender indicators, the names of Twitter users
are rarely used as a feature; probably due to the high number of ill-formed
names, which cannot be found in any name dictionary. Instead of relying solely
on a name database, we propose a novel name classifier. Our approach extracts
characteristics from the user names and uses those in order to assign the names
to a gender. This enables us to classify international first names as well as
ill-formed names.
| 2,016 | Computation and Language |
Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax
Trees and Strings | We present a natural language generator based on the sequence-to-sequence
approach that can be trained to produce natural language strings as well as
deep syntax dependency trees from input dialogue acts, and we use it to
directly compare two-step generation with separate sentence planning and
surface realization stages to a joint, one-step approach. We were able to train
both setups successfully using very little training data. The joint setup
offers better performance, surpassing the state of the art with regard to
n-gram-based scores while providing more relevant outputs.
| 2,016 | Computation and Language |
Universal, Unsupervised (Rule-Based), Uncovered Sentiment Analysis | We present a novel unsupervised approach for multilingual sentiment analysis
driven by compositional syntax-based rules. On the one hand, we exploit some of
the main advantages of unsupervised algorithms: (1) the interpretability of
their output, in contrast with most supervised models, which behave as a black
box and (2) their robustness across different corpora and domains. On the other
hand, by introducing the concept of compositional operations and exploiting
syntactic information in the form of universal dependencies, we tackle one of
their main drawbacks: their rigidity on data that are structured differently
depending on the language concerned. Experiments show an improvement both over
existing unsupervised methods, and over state-of-the-art supervised models when
evaluating outside their corpus of origin. Experiments also show how the same
compositional operations can be shared across languages. The system is
available at http://www.grupolys.org/software/UUUSA/
| 2,017 | Computation and Language |
SMS Spam Filtering using Probabilistic Topic Modelling and Stacked
Denoising Autoencoder | In this paper we present a novel approach to spam filtering and demonstrate
its applicability with respect to SMS messages. Our approach requires minimal
feature engineering and a small set of labelled data samples. Features are
extracted using topic modelling based on latent Dirichlet allocation, and then
a comprehensive data model is created using a Stacked Denoising Autoencoder
(SDA). Topic modelling summarises the data providing ease of use and high
interpretability by visualising the topics using word clouds. Given that the
SMS messages can be regarded as either spam (unwanted) or ham (wanted), the SDA
is able to model the messages and accurately discriminate between the two
classes without the need for a pre-labelled training set. The results are
compared against the state-of-the-art spam detection algorithms with our
proposed approach achieving over 97% accuracy which compares favourably to the
best reported algorithms presented in the literature.
| 2,016 | Computation and Language |
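A hedged sketch of the feature-extraction stage described above: topic
proportions from latent Dirichlet allocation serve as low-dimensional message
features. The toy corpus, topic count, and use of scikit-learn are illustrative
stand-ins, and the SDA modelling stage is omitted.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

sms = ["win a free prize now", "are we still meeting for lunch",
       "free entry claim your prize", "see you at lunch then"]
counts = CountVectorizer().fit_transform(sms)          # bag-of-words counts
topics = LatentDirichletAllocation(n_components=2, random_state=0)
features = topics.fit_transform(counts)                # per-message topic mix
print(features.shape)  # (4, 2): inputs for the downstream data model
```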
Data-driven HR - R\'esum\'e Analysis Based on Natural Language
Processing and Machine Learning | Recruiters usually spend less than a minute looking at each r\'esum\'e when
deciding whether it's worth continuing the recruitment process with the
candidate. Recruiters focus on keywords, and it's almost impossible to
guarantee a fair process of candidate selection. The main scope of this paper
is to tackle this issue by introducing a data-driven approach that shows how to
process r\'esum\'es automatically and give recruiters more time to only examine
promising candidates. Furthermore, we show how to leverage Machine Learning and
Natural Language Processing in order to extract all required information from
the r\'esum\'es. Once the information is extracted, a ranking score is
calculated. The score describes how well the candidates fit based on their
education, work experience and skills. Later this paper illustrates a prototype
application that shows how this novel approach can increase the productivity of
recruiters. The application enables them to filter and rank candidates based on
predefined job descriptions. Guided by the ranking, recruiters can get deeper
insights from candidate profiles and validate why and how the application
ranked them. This application shows how to improve the hiring process by
providing unbiased hiring decision support.
| 2,016 | Computation and Language |
Two Discourse Driven Language Models for Semantics | Natural language understanding often requires deep semantic knowledge.
Expanding on previous proposals, we suggest that some important aspects of
semantic knowledge can be modeled as a language model if done at an appropriate
level of abstraction. We develop two distinct models that capture semantic
frame chains and discourse information while abstracting over the specific
mentions of predicates and entities. For each model, we investigate four
implementations: a "standard" N-gram language model and three discriminatively
trained "neural" language models that generate embeddings for semantic frames.
The quality of the semantic language models (SemLM) is evaluated both
intrinsically, using perplexity and a narrative cloze test, and extrinsically:
we show that our SemLM helps improve performance on semantic natural language
processing tasks such as co-reference resolution and discourse parsing.
| 2,016 | Computation and Language |
DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using
Character and Word-Level CNNs | This paper describes our approach for the Detecting Stance in Tweets task
(SemEval-2016 Task 6). We utilized recent advances in short text categorization
using deep learning to create word-level and character-level models. The choice
between word-level and character-level models in each particular case was
informed through validation performance. Our final system is a combination of
classifiers using word-level or character-level models. We also employed novel
data augmentation techniques to expand and diversify our training dataset, thus
making our system more robust. Our system achieved a macro-average precision,
recall and F1-scores of 0.67, 0.61 and 0.635 respectively.
| 2,016 | Computation and Language |
Socially-Informed Timeline Generation for Complex Events | Existing timeline generation systems for complex events consider only
information from traditional media, ignoring the rich social context provided
by user-generated content that reveals representative public interests or
insightful opinions. We instead aim to generate socially-informed timelines
that contain both news article summaries and selected user comments. We present
an optimization framework designed to balance topical cohesion between the
article and comment summaries along with their informativeness and coverage of
the event. Automatic evaluations on real-world datasets that cover four complex
events show that our system produces more informative timelines than
state-of-the-art systems. In human evaluation, the associated comment summaries
are furthermore rated more insightful than editor's picks and comments ranked
highly by users.
| 2,016 | Computation and Language |
Query-Focused Opinion Summarization for User-Generated Content | We present a submodular function-based framework for query-focused opinion
summarization. Within our framework, relevance ordering produced by a
statistical ranker, and information coverage with respect to topic distribution
and diverse viewpoints are both encoded as submodular functions. Dispersion
functions are utilized to minimize the redundancy. We are the first to evaluate
different metrics of text similarity for submodularity-based summarization
methods. By experimenting on community QA and blog summarization, we show that
our system outperforms state-of-the-art approaches in both automatic evaluation
and human evaluation. A human evaluation task is conducted on Amazon Mechanical
Turk at scale, and shows that our systems are able to generate summaries of
high overall quality and information diversity.
| 2,016 | Computation and Language |
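A minimal sketch of greedy selection for a coverage-style submodular objective
with a simple redundancy penalty standing in for the paper's dispersion
functions. The scoring functions and similarity matrix are toy assumptions,
not the system's actual components.

```python
import numpy as np

def coverage_gain(selected, cand, sim):
    """Marginal coverage gain: similarity of the candidate to all sentences,
    discounted by what the current summary already covers."""
    covered = sim[:, selected].max(axis=1) if selected else np.zeros(sim.shape[0])
    return float(np.maximum(sim[:, cand] - covered, 0).sum())

def greedy_summary(sim, budget, lam=0.5):
    selected = []
    while len(selected) < budget:
        gains = {c: coverage_gain(selected, c, sim)
                    - lam * max((sim[c, s] for s in selected), default=0.0)
                 for c in range(sim.shape[0]) if c not in selected}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:   # stop when nothing improves the objective
            break
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
S = rng.random((6, 6)); S = (S + S.T) / 2   # symmetric sentence similarities
print(greedy_summary(S, budget=2))
```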
A Piece of My Mind: A Sentiment Analysis Approach for Online Dispute
Detection | We investigate the novel task of online dispute detection and propose a
sentiment analysis solution to the problem: we aim to identify the sequence of
sentence-level sentiments expressed during a discussion and to use them as
features in a classifier that predicts the DISPUTE/NON-DISPUTE label for the
discussion as a whole. We evaluate dispute detection approaches on a newly
created corpus of Wikipedia Talk page disputes and find that classifiers that
rely on our sentiment tagging features outperform those that do not. The best
model achieves a very promising F1 score of 0.78 and an accuracy of 0.80.
| 2,016 | Computation and Language |
Improving Agreement and Disagreement Identification in Online
Discussions with A Socially-Tuned Sentiment Lexicon | We study the problem of agreement and disagreement detection in online
discussions. An isotonic Conditional Random Fields (isotonic CRF) based
sequential model is proposed to make predictions on sentence- or segment-level.
We automatically construct a socially-tuned lexicon that is bootstrapped from
existing general-purpose sentiment lexicons to further improve the performance.
We evaluate our agreement and disagreement tagging model on two disparate
online discussion corpora -- Wikipedia Talk pages and online debates. Our model
is shown to outperform the state-of-the-art approaches in both datasets. For
example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for
agreement and disagreement detection, whereas a linear chain CRF obtains 0.58
and 0.56 for the discussions on Wikipedia Talk pages.
| 2,016 | Computation and Language |
Egyptian Arabic to English Statistical Machine Translation System for
NIST OpenMT'2015 | The paper describes the Egyptian Arabic-to-English statistical machine
translation (SMT) system that the QCRI-Columbia-NYUAD (QCN) group submitted to
the NIST OpenMT'2015 competition. The competition focused on informal dialectal
Arabic, as used in SMS, chat, and speech. Thus, our efforts focused on
processing and standardizing Arabic, e.g., using tools such as 3arrib and
MADAMIRA. We further trained a phrase-based SMT system using state-of-the-art
features and components such as operation sequence model, class-based language
model, sparse features, neural network joint model, genre-based
hierarchically-interpolated language model, unsupervised transliteration
mining, phrase-table merging, and hypothesis combination. Our system ranked
second on all three genres.
| 2,016 | Computation and Language |
Generalizing to Unseen Entities and Entity Pairs with Row-less Universal
Schema | Universal schema predicts the types of entities and relations in a knowledge
base (KB) by jointly embedding the union of all available schema types---not
only types from multiple structured databases (such as Freebase or Wikipedia
infoboxes), but also types expressed as textual patterns from raw text. This
prediction is typically modeled as a matrix completion problem, with one type
per column, and either one or two entities per row (in the case of entity types
or binary relation types, respectively). Factorizing this sparsely observed
matrix yields a learned vector embedding for each row and each column. In this
paper we explore the problem of making predictions for entities or entity-pairs
unseen at training time (and hence without a pre-learned row embedding). We
propose an approach having no per-row parameters at all; rather we produce a
row vector on the fly using a learned aggregation function of the vectors of
the observed columns for that row. We experiment with various aggregation
functions, including neural network attention models. Our approach can be
understood as a natural language database, in that questions about KB entities
are answered by attending to textual or database evidence. In experiments
predicting both relations and entity types, we demonstrate that despite having
an order of magnitude fewer parameters than traditional universal schema, we
can match the accuracy of the traditional model, and more importantly, we can
now make predictions about unseen rows with nearly the same accuracy as rows
available at training time.
| 2,017 | Computation and Language |
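A minimal sketch of the row-less idea: the row (entity-pair) vector is built
on the fly from the embeddings of its observed columns, and an unseen column
is scored with a dot product. Mean pooling here is the simplest of the
aggregation functions the abstract mentions; the attention variants are
omitted.

```python
import numpy as np

def row_vector(observed_column_vecs: np.ndarray) -> np.ndarray:
    # Simplest aggregation function: average the observed column embeddings.
    return observed_column_vecs.mean(axis=0)

def score(row_vec: np.ndarray, col_vec: np.ndarray) -> float:
    # Matrix-completion-style score: dot product of row and column vectors.
    return float(row_vec @ col_vec)

# Toy: an entity pair observed with two textual-pattern columns, queried
# against a KB relation column.
cols_observed = np.array([[0.2, 0.9], [0.1, 0.8]])
kb_relation   = np.array([0.0, 1.0])
print(score(row_vector(cols_observed), kb_relation))
```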
Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test | Recent progress in neural learning demonstrated that machines can do well in
regularized tasks, e.g., the game of Go. However, artistic activities such as
poem generation are still widely regarded as a uniquely human capability. In
this paper, we demonstrate that a simple neural model can imitate humans in some
tasks of art generation. We particularly focus on traditional Chinese poetry,
and show that machines can do as well as many contemporary poets and weakly
pass the Feigenbaum Test, a variant of the Turing test in professional domains. Our
method is based on an attention-based recurrent neural network, which accepts a
set of keywords as the theme and generates poems by looking at each keyword
during the generation. A number of techniques are proposed to improve the
model, including character vector initialization, attention to input and
hybrid-style training. Compared to existing poetry generation methods, our
model can generate much more theme-consistent and semantic-rich poems.
| 2,016 | Computation and Language |
Full-Time Supervision based Bidirectional RNN for Factoid Question
Answering | Recently, bidirectional recurrent neural network (BRNN) has been widely used
for question answering (QA) tasks with promising performance. However, most
existing BRNN models extract the information of questions and answers by
directly using a pooling operation to generate the representation for loss or
similarity calculation. Hence, these existing models do not apply supervision
(loss or similarity calculation) at every time step, which loses some useful
information. In this paper, we propose a novel BRNN model called
full-time supervision based BRNN (FTS-BRNN), which can put supervision at every
time step. Experiments on the factoid QA task show that our FTS-BRNN can
outperform other baselines to achieve the state-of-the-art accuracy.
| 2,016 | Computation and Language |
A Nonparametric Bayesian Approach for Spoken Term detection by Example
Query | State of the art speech recognition systems use data-intensive
context-dependent phonemes as acoustic units. However, these approaches do not
translate well to low-resourced languages where large amounts of training data
are not available. For such languages, automatic discovery of acoustic units is
critical. In this paper, we demonstrate the application of nonparametric
Bayesian models to acoustic unit discovery. We show that the discovered units
are correlated with phonemes and therefore are linguistically meaningful. We
also present a spoken term detection (STD) by example query algorithm based on
these automatically learned units. We show that our proposed system produces a
P@N of 61.2% and an EER of 13.95% on the TIMIT dataset. The improvement in the
EER is 5% while P@N is only slightly lower than the best reported system in the
literature.
| 2,016 | Computation and Language |
The Role of CNL and AMR in Scalable Abstractive Summarization for
Multilingual Media Monitoring | In the era of Big Data and Deep Learning, there is a common view that machine
learning approaches are the only way to achieve robust and scalable
information extraction and summarization. It has recently been proposed that
the CNL approach could be scaled up, building on the concept of embedded CNL
and, thus, allowing for CNL-based information extraction from e.g. normative or
medical texts that are rather controlled by nature but still exceed the
boundaries of CNL. Although it is arguable whether CNL can be exploited to approach
the robust wide-coverage semantic parsing for use cases like media monitoring,
its potential becomes much more obvious in the opposite direction: generation
of story highlights from the summarized AMR graphs, which is in the focus of
this position paper.
| 2,016 | Computation and Language |
The LAMBADA dataset: Word prediction requiring a broad discourse context | We introduce LAMBADA, a dataset to evaluate the capabilities of computational
models for text understanding by means of a word prediction task. LAMBADA is a
collection of narrative passages sharing the characteristic that human subjects
are able to guess their last word if they are exposed to the whole passage, but
not if they only see the last sentence preceding the target word. To succeed on
LAMBADA, computational models cannot simply rely on local context, but must be
able to keep track of information in the broader discourse. We show that
LAMBADA exemplifies a wide range of linguistic phenomena, and that none of
several state-of-the-art language models reaches accuracy above 1% on this
novel benchmark. We thus propose LAMBADA as a challenging test set, meant to
encourage the development of new models capable of genuine understanding of
broad context in natural language text.
| 2,016 | Computation and Language |
Uncertainty in Neural Network Word Embedding: Exploration of Threshold
for Similarity | Word embedding, specially with its recent developments, promises a
quantification of the similarity between terms. However, it is not clear to
which extent this similarity value can be genuinely meaningful and useful for
subsequent tasks. We explore how the similarity score obtained from the models
is really indicative of term relatedness. We first observe and quantify the
uncertainty factor of the word embedding models with regard to the similarity
value. Based on this factor, we introduce a general threshold on various
dimensions which effectively filters the highly related terms. Our evaluation
on four information retrieval collections supports the effectiveness of our
approach as the results of the introduced threshold are significantly better
than the baseline while being equal to or statistically indistinguishable from
the optimal results.
| 2,018 | Computation and Language |
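A hedged sketch of thresholded similarity filtering over pre-trained vectors;
the 0.7 cut-off, the toy vocabulary, and the function names are illustrative,
not the threshold the paper derives from its uncertainty analysis.

```python
import numpy as np

def related_terms(query, vocab_vecs, vocab, threshold=0.7):
    """Keep only terms whose cosine similarity to the query clears the bar."""
    q = vocab_vecs[vocab[query]]
    sims = vocab_vecs @ q / (np.linalg.norm(vocab_vecs, axis=1) * np.linalg.norm(q))
    return [(w, float(sims[i])) for w, i in vocab.items()
            if sims[i] >= threshold and w != query]

vocab = {"car": 0, "automobile": 1, "banana": 2}
vecs = np.array([[1.0, 0.1], [0.9, 0.2], [-0.5, 1.0]])
print(related_terms("car", vecs, vocab))  # keeps only highly related terms
```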
Quantifying and Reducing Stereotypes in Word Embeddings | Machine learning algorithms are optimized to model statistical properties of
the training data. If the input data reflects stereotypes and biases of the
broader society, then the output of the learning algorithm also captures these
stereotypes. In this paper, we initiate the study of gender stereotypes in {\em
word embedding}, a popular framework to represent text data. As their use
becomes increasingly common, applications can inadvertently amplify unwanted
stereotypes. We show across multiple datasets that the embeddings contain
significant gender stereotypes, especially with regard to professions. We
created a novel gender analogy task and combined it with crowdsourcing to
systematically quantify the gender bias in a given embedding. We developed an
efficient algorithm that reduces gender stereotype using just a handful of
training examples while preserving the useful geometric properties of the
embedding. We evaluated our algorithm on several metrics. While we focus on
male/female stereotypes, our framework may be applicable to other types of
embedding biases.
| 2,016 | Computation and Language |
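A minimal sketch in the spirit of projection-based stereotype reduction:
estimate a gender direction from definitional pairs and remove that component
from a word vector. The paper's actual algorithm is learned from a handful of
examples and preserves the embedding's useful geometry; this shows only the
core geometric idea, with toy vectors.

```python
import numpy as np

def gender_direction(pairs, emb):
    """Average the difference vectors of definitional pairs, e.g. (he, she)."""
    diffs = np.array([emb[a] - emb[b] for a, b in pairs])
    d = diffs.mean(axis=0)
    return d / np.linalg.norm(d)

def neutralize(vec, direction):
    # Remove the component of the vector lying along the gender direction.
    return vec - (vec @ direction) * direction

emb = {"he": np.array([1.0, 0.2]), "she": np.array([-1.0, 0.2]),
       "doctor": np.array([0.4, 0.9])}
g = gender_direction([("he", "she")], emb)
print(neutralize(emb["doctor"], g))  # gender component removed
```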
Introducing a Calculus of Effects and Handlers for Natural Language
Semantics | In compositional model-theoretic semantics, researchers assemble
truth-conditions or other kinds of denotations using the lambda calculus. It
was previously observed that the lambda terms and/or the denotations studied
tend to follow the same pattern: they are instances of a monad. In this paper,
we present an extension of the simply-typed lambda calculus that exploits this
uniformity using the recently discovered technique of effect handlers. We prove
that our calculus exhibits some of the key formal properties of the lambda
calculus and we use it to construct a modular semantics for a small fragment
that involves multiple distinct semantic phenomena.
| 2,016 | Computation and Language |
Pragmatic factors in image description: the case of negations | We provide a qualitative analysis of the descriptions containing negations
(no, not, n't, nobody, etc) in the Flickr30K corpus, and a categorization of
negation uses. Based on this analysis, we provide a set of requirements that an
image description system should have in order to generate negation sentences.
As a pilot experiment, we used our categorization to manually annotate
sentences containing negations in the Flickr30K corpus, with an agreement score
of K=0.67. With this paper, we hope to open up a broader discussion of
subjective language in image descriptions.
| 2,016 | Computation and Language |
MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis
in Online Opinion Videos | People are sharing their opinions, stories and reviews through online video
sharing websites every day. Studying sentiment and subjectivity in these
opinion videos is attracting growing attention from academia and industry.
While sentiment analysis has been successful for text, it is an understudied
research question for videos and multimedia content. The biggest setbacks for
studies in this direction are the lack of a proper dataset, methodology,
baselines, and statistical analysis of how information from different modality
sources relates to each other. This paper introduces to the scientific community the
first opinion-level annotated corpus of sentiment and subjectivity analysis in
online videos called Multimodal Opinion-level Sentiment Intensity dataset
(MOSI). The dataset is rigorously annotated with labels for subjectivity,
sentiment intensity, per-frame and per-opinion annotated visual features, and
per-milliseconds annotated audio features. Furthermore, we present baselines
for future studies in this direction as well as a new multimodal fusion
approach that jointly models spoken words and visual gestures.
| 2,016 | Computation and Language |
A Data-Driven Approach for Semantic Role Labeling from Induced Grammar
Structures in Language | Semantic roles play an important role in extracting knowledge from text.
Current unsupervised approaches utilize features from grammar structures to
induce semantic roles. The dependence on these grammars, however, makes it
difficult to adapt to noisy and new languages. In this paper we develop a
data-driven approach to identifying semantic roles; the approach is entirely
unsupervised up to the point where rules need to be learned to identify the
position in which the semantic role occurs. Specifically, we develop a modified-ADIOS
algorithm based on ADIOS Solan et al. (2005) to learn grammar structures, and
use these grammar structures to learn the rules for identifying the semantic
roles based on the context in which the grammar structures appeared. The
results obtained are comparable with current state-of-the-art models that are
inherently dependent on human annotated data.
| 2,016 | Computation and Language |
A Probabilistic Generative Grammar for Semantic Parsing | Domain-general semantic parsing is a long-standing goal in natural language
processing, where the semantic parser is capable of robustly parsing sentences
from domains outside of which it was trained. Current approaches largely rely
on additional supervision from new domains in order to generalize to those
domains. We present a generative model of natural language utterances and
logical forms and demonstrate its application to semantic parsing. Our approach
relies on domain-independent supervision to generalize to new domains. We
derive and implement efficient algorithms for training, parsing, and sentence
generation. The work relies on a novel application of hierarchical Dirichlet
processes (HDPs) for structured prediction, which we also present in this
manuscript.
This manuscript is an excerpt of chapter 4 from the Ph.D. thesis of Saparov
(2022), where the model plays a central role in a larger natural language
understanding system.
This manuscript provides a new simplified and more complete presentation of
the work first introduced in Saparov, Saraswat, and Mitchell (2017). The
description and proofs of correctness of the training algorithm, parsing
algorithm, and sentence generation algorithm are much simplified in this new
presentation. We also describe the novel application of hierarchical Dirichlet
processes for structured prediction. In addition, we extend the earlier work
with a new model of word morphology, which utilizes the comprehensive
morphological data from Wiktionary.
| 2,022 | Computation and Language |
Incremental Parsing with Minimal Features Using Bi-Directional LSTM | Recently, neural network approaches for parsing have largely automated the
combination of individual features, but still rely on (often a large number
of) atomic features created from human linguistic intuition, while potentially
omitting important global context. To further reduce feature engineering to the
bare minimum, we use bi-directional LSTM sentence representations to model a
parser state with only three sentence positions, which automatically identifies
important aspects of the entire sentence. This model achieves state-of-the-art
results among greedy dependency parsers for English. We also introduce a novel
transition system for constituency parsing which does not require binarization,
and together with the above architecture, achieves state-of-the-art results
among greedy parsers for both English and Chinese.
| 2,016 | Computation and Language |
Neighborhood Mixture Model for Knowledge Base Completion | Knowledge bases are useful resources for many natural language processing
tasks; however, they are far from complete. In this paper, we define a novel
entity representation as a mixture of its neighborhood in the knowledge base
and apply this technique to TransE, a well-known embedding model for knowledge
base completion. Experimental results show that the neighborhood information
significantly helps to improve the results of the TransE model, leading to
better performance than obtained by other state-of-the-art embedding models on
three benchmark datasets for triple classification, entity prediction and
relation prediction tasks.
| 2,017 | Computation and Language |
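A minimal numpy sketch of the neighborhood-mixture idea on top of TransE
scoring. The uniform mixing weight alpha and the toy embeddings are
assumptions; the paper learns the mixture from data.

```python
import numpy as np

def mixed_entity(e, neighbors, emb, alpha=0.5):
    """Represent an entity as a mixture of its own vector and its neighbors'."""
    own = emb[e]
    if not neighbors:
        return own
    neigh = np.mean([emb[n] for n in neighbors], axis=0)
    return alpha * own + (1 - alpha) * neigh

def transe_score(h_vec, r_vec, t_vec):
    # TransE: a triple (h, r, t) is plausible when h + r is close to t.
    return -np.linalg.norm(h_vec + r_vec - t_vec)

emb = {"paris": np.array([0.0, 1.0]), "france": np.array([0.1, 0.0]),
       "capital_of": np.array([0.1, -1.0]), "eu": np.array([0.2, 0.1])}
h = mixed_entity("paris", ["eu"], emb)
print(transe_score(h, emb["capital_of"], emb["france"]))
```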
An empirical study on large scale text classification with skip-gram
embeddings | We investigate the integration of word embeddings as classification features
in the setting of large scale text classification. Such representations have
been used in a plethora of tasks, however their application in classification
scenarios with thousands of classes has not been extensively researched,
partially due to hardware limitations. In this work, we examine efficient
composition functions to obtain document-level embeddings from word-level ones, and
we subsequently investigate their combination with the traditional
one-hot-encoding representations. By presenting empirical evidence on large,
multi-class, multi-label classification problems, we demonstrate the efficiency
and the performance benefits of this combination.
| 2,016 | Computation and Language |
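A hedged sketch of one cheap composition function (averaging word vectors)
concatenated with a sparse bag-of-words representation, i.e. one instance of
the combination studied above. The random word vectors and vectorizer defaults
are placeholders for real skip-gram embeddings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["deep learning for text", "large scale text classification"]
rng = np.random.default_rng(0)
word_vecs = {w: rng.random(50) for w in set(" ".join(docs).split())}

def doc_embedding(doc):
    """Document-level embedding as the mean of its word-level embeddings."""
    vecs = [word_vecs[w] for w in doc.split() if w in word_vecs]
    return np.mean(vecs, axis=0)

bow = CountVectorizer().fit_transform(docs).toarray()   # one-hot-style counts
dense = np.stack([doc_embedding(d) for d in docs])
features = np.hstack([bow, dense])  # combined representation for a classifier
print(features.shape)
```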
Neural Morphological Tagging from Characters for Morphologically Rich
Languages | This paper investigates neural character-based morphological tagging for
languages with complex morphology and large tag sets. We systematically explore
a variety of neural architectures (DNN, CNN, CNNHighway, LSTM, BLSTM) to obtain
character-based word vectors combined with bidirectional LSTMs to model
across-word context in an end-to-end setting. We explore supplementary use of
word-based vectors trained on large amounts of unlabeled data. Our experiments
for morphological tagging suggest that for "simple" model configurations, the
choice of the network architecture (CNN vs. CNNHighway vs. LSTM vs. BLSTM) or
the augmentation with pre-trained word embeddings can be important and clearly
impact the accuracy. Increasing the model capacity by adding depth, for
example, and carefully optimizing the neural networks can lead to substantial
improvements, and the differences in accuracy (but not training time) become
much smaller or even negligible. Overall, our best morphological taggers for
German and Czech outperform the best results reported in the literature by a
large margin.
| 2,016 | Computation and Language |
Correlation-based Intrinsic Evaluation of Word Vector Representations | We introduce QVEC-CCA, an intrinsic evaluation metric for word vector
representations based on correlations of learned vectors with features
extracted from linguistic resources. We show that QVEC-CCA scores are an
effective proxy for a range of extrinsic semantic and syntactic tasks. We also
show that the proposed evaluation obtains higher and more consistent
correlations with downstream tasks, compared to existing approaches to
intrinsic evaluation of word vectors that are based on word similarity.
| 2,016 | Computation and Language |
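A minimal sketch of the CCA step: project word vectors and a matrix of
linguistic features onto canonical variates and report their correlation. The
random matrices stand in for real embeddings and resource-derived features;
this follows the general recipe, not the released QVEC-CCA code.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.random((200, 50))   # word embeddings: 200 words x 50 dimensions
Y = rng.random((200, 10))   # linguistic feature matrix for the same words

cca = CCA(n_components=1).fit(X, Y)
u, v = cca.transform(X, Y)                    # canonical variates
score = np.corrcoef(u[:, 0], v[:, 0])[0, 1]   # QVEC-CCA-style correlation
print(round(float(score), 3))
```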
Divergent discourse between protests and counter-protests:
#BlackLivesMatter and #AllLivesMatter | Since the shooting of Black teenager Michael Brown by White police officer
Darren Wilson in Ferguson, Missouri, the protest hashtag #BlackLivesMatter has
amplified critiques of extrajudicial killings of Black Americans. In response
to #BlackLivesMatter, other Twitter users have adopted #AllLivesMatter, a
counter-protest hashtag whose content argues that equal attention should be
given to all lives regardless of race. Through a multi-level analysis of over
860,000 tweets, we study how these protests and counter-protests diverge by
quantifying aspects of their discourse. We find that #AllLivesMatter
facilitates opposition between #BlackLivesMatter and hashtags such as
#PoliceLivesMatter and #BlueLivesMatter in such a way that historically echoes
the tension between Black protesters and law enforcement. In addition, we show
that a significant portion of #AllLivesMatter use stems from hijacking by
#BlackLivesMatter advocates. Beyond simply injecting #AllLivesMatter with
#BlackLivesMatter content, these hijackers use the hashtag to directly confront
the counter-protest notion of "All lives matter." Our findings suggest that
the Black Lives Matter movement was able to grow, exhibit diverse conversations,
and avoid derailment on social media by making discussion of counter-protest
opinions a central topic of #AllLivesMatter, rather than the movement itself.
| 2,018 | Computation and Language |
A Curriculum Learning Method for Improved Noise Robustness in Automatic
Speech Recognition | The performance of automatic speech recognition systems under noisy
environments still leaves room for improvement. Speech enhancement or feature
enhancement techniques for increasing noise robustness of these systems usually
add components to the recognition system that need careful optimization. In
this work, we propose the use of a relatively simple curriculum training
strategy called accordion annealing (ACCAN). It uses a multi-stage training
schedule where samples at signal-to-noise ratio (SNR) values as low as 0dB are
first added and samples at increasingly higher SNR values are gradually added up
to an SNR value of 50dB. We also use a method called per-epoch noise mixing
(PEM) that generates noisy training samples online during training and thus
enables dynamically changing the SNR of our training data. Both the ACCAN and
the PEM methods are evaluated on an end-to-end speech recognition pipeline on
the Wall Street Journal corpus. ACCAN decreases the average word error rate
(WER) on the 20dB to -10dB SNR range by up to 31.4% when compared to a
conventional multi-condition training method.
| 2,016 | Computation and Language |
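A hedged sketch of the two ingredients: mixing noise into speech at a
requested SNR (PEM-style, done online per epoch) and an accordion-style
schedule that starts at 0 dB and gradually admits higher SNR values. The 10 dB
step and the toy signals are assumptions; only the 0-50 dB range comes from
the abstract.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the mixture has the requested SNR."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise[: len(speech)]

def accan_snr_range(stage, low=0, high=50, step=10):
    """Stage 0 trains at [0 dB]; each later stage adds higher SNR values."""
    return list(range(low, min(low + stage * step, high) + 1, step))

rng = np.random.default_rng(0)
speech, noise = rng.standard_normal(16000), rng.standard_normal(16000)
for stage in range(3):
    snrs = accan_snr_range(stage)
    noisy = mix_at_snr(speech, noise, snr_db=rng.choice(snrs))
    print(stage, snrs, noisy.shape)
```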
Inferring Logical Forms From Denotations | A core problem in learning semantic parsers from denotations is picking out
consistent logical forms--those that yield the correct denotation--from a
combinatorially large space. To control the search space, previous work relied
on a restricted set of rules, which limits expressivity. In this paper, we
consider a much more expressive class of logical forms, and show how to use
dynamic programming to efficiently represent the complete set of consistent
logical forms. Expressivity also introduces many more spurious logical forms
which are consistent with the correct denotation but do not represent the
meaning of the utterance. To address this, we generate fictitious worlds and
use crowdsourced denotations on these worlds to filter out spurious logical
forms. On the WikiTableQuestions dataset, we increase the coverage of
answerable questions from 53.5% to 76%, and the additional crowdsourced
supervision lets us rule out 92.1% of spurious logical forms.
| 2,016 | Computation and Language |
Learning text representation using recurrent convolutional neural
network with highway layers | Recently, the rapid development of word embedding and neural networks has
brought new inspiration to various NLP and IR tasks. In this paper, we describe
a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN)
with highway layers. The highway network module, incorporated in the middle,
takes the output of the bi-directional Recurrent Neural Network (Bi-RNN)
module in the first stage and provides the input to the Convolutional Neural
Network (CNN) module in the last stage. The experiments show that our model
outperforms common neural network models (CNN, RNN, Bi-RNN) on a sentiment
analysis task. In addition, an analysis of how sequence length influences the
RCNN with highway layers shows that our model can learn good representations
for long texts.
| 2,016 | Computation and Language |
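A compact PyTorch sketch of the staged pipeline just described: Bi-RNN, then a
highway module in the middle, then a CNN with max-over-time pooling. Layer
sizes and the single highway layer are illustrative choices, not the paper's
configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Highway(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.h = nn.Linear(dim, dim)
        self.t = nn.Linear(dim, dim)  # transform gate

    def forward(self, x):
        t = torch.sigmoid(self.t(x))
        return t * F.relu(self.h(x)) + (1 - t) * x  # gated transform/carry mix

class RCNNHighway(nn.Module):
    def __init__(self, vocab=10000, emb=100, hid=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.birnn = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.highway = Highway(2 * hid)
        self.conv = nn.Conv1d(2 * hid, 100, kernel_size=3, padding=1)
        self.out = nn.Linear(100, n_classes)

    def forward(self, ids):
        h, _ = self.birnn(self.embed(ids))       # (B, T, 2*hid), first stage
        h = self.highway(h)                      # highway module in the middle
        c = F.relu(self.conv(h.transpose(1, 2))) # (B, 100, T), last stage
        return self.out(c.max(dim=2).values)     # max-over-time pooling

logits = RCNNHighway()(torch.randint(0, 10000, (4, 12)))
print(logits.shape)  # torch.Size([4, 2])
```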
A segmental framework for fully-unsupervised large-vocabulary speech
recognition | Zero-resource speech technology is a growing research area that aims to
develop methods for speech processing in the absence of transcriptions,
lexicons, or language modelling text. Early term discovery systems focused on
identifying isolated recurring patterns in a corpus, while more recent
full-coverage systems attempt to completely segment and cluster the audio into
word-like units---effectively performing unsupervised speech recognition. This
article presents the first attempt we are aware of to apply such a system to
large-vocabulary multi-speaker data. Our system uses a Bayesian modelling
framework with segmental word representations: each word segment is represented
as a fixed-dimensional acoustic embedding obtained by mapping the sequence of
feature frames to a single embedding vector. We compare our system on English
and Xitsonga datasets to state-of-the-art baselines, using a variety of
measures including word error rate (obtained by mapping the unsupervised output
to ground truth transcriptions). Very high word error rates are reported---in
the order of 70--80% for speaker-dependent and 80--95% for speaker-independent
systems---highlighting the difficulty of this task. Nevertheless, in terms of
cluster quality and word segmentation metrics, we show that by imposing a
consistent top-down segmentation while also using bottom-up knowledge from
detected syllable boundaries, both single-speaker and multi-speaker versions of
our system outperform a purely bottom-up single-speaker syllable-based
approach. We also show that the discovered clusters can be made less speaker-
and gender-specific by using an unsupervised autoencoder-like feature extractor
to learn better frame-level features (prior to embedding). Our system's
discovered clusters are still less pure than those of unsupervised term
discovery systems, but provide far greater coverage.
| 2,017 | Computation and Language |
The word entropy of natural languages | The average uncertainty associated with words is an information-theoretic
concept at the heart of quantitative and computational linguistics. The entropy
has been established as a measure of this average uncertainty, also called
average information content. We here use parallel texts of 21 languages to
establish the number of tokens at which word entropies converge to stable
values. These convergence points are then used to select texts from a massively
parallel corpus, and to estimate word entropies across more than 1000
languages. Our results help to establish quantitative language comparisons, to
understand the performance of multilingual translation systems, and to
normalize semantic similarity measures.
| 2,016 | Computation and Language |
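A minimal sketch of plug-in (maximum-likelihood) word entropy estimation from
token counts; the convergence analysis over growing text prefixes that the
paper performs is omitted.

```python
import math
from collections import Counter

def word_entropy(tokens):
    """Shannon entropy of the unigram distribution, in bits per word."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = "the cat sat on the mat the cat slept".split()
print(round(word_entropy(text), 3))  # average information content in bits/word
```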