Titles | Abstracts | Years | Categories
---|---|---|---
Building a Fine-Grained Entity Typing System Overnight for a New X (X =
Language, Domain, Genre) | Recent research has shown great progress on fine-grained entity typing. Most
existing methods require pre-defining a set of types and training a multi-class
classifier from a large labeled data set based on multi-level linguistic
features. They are thus limited to certain domains, genres and languages. In
this paper, we propose a novel unsupervised entity typing framework by
combining symbolic and distributional semantics. We start from learning general
embeddings for each entity mention, compose the embeddings of specific contexts
using linguistic structures, link the mention to knowledge bases and learn its
related knowledge representations. Then we develop a novel joint hierarchical
clustering and linking algorithm to type all mentions using these
representations. This framework does not rely on any annotated data, predefined
typing schema, or hand-crafted features; therefore, it can be quickly adapted to
a new domain, genre, and language. Furthermore, it has great flexibility in
incorporating linguistic structures (e.g., Abstract Meaning Representation
(AMR), dependency relations) to improve specific context representation.
Experiments on genres (news and discussion forum) show comparable performance
with state-of-the-art supervised typing systems trained from a large amount of
labeled data. Results on various languages (English, Chinese, Japanese, Hausa,
and Yoruba) and domains (general and biomedical) demonstrate the portability of
our framework.
| 2,016 | Computation and Language |
Part-of-Speech Tagging for Historical English | As more historical texts are digitized, there is interest in applying natural
language processing tools to these archives. However, the performance of these
tools is often unsatisfactory, due to language change and genre differences.
Spelling normalization heuristics are the dominant solution for dealing with
historical texts, but this approach fails to account for changes in usage and
vocabulary. In this empirical paper, we assess the capability of domain
adaptation techniques to cope with historical texts, focusing on the classic
benchmark task of part-of-speech tagging. We evaluate several domain adaptation
methods on the task of tagging Early Modern English and Modern British English
texts in the Penn Corpora of Historical English. We demonstrate that the
Feature Embedding method for unsupervised domain adaptation outperforms word
embeddings and Brown clusters, showing the importance of embedding the entire
feature space, rather than just individual words. Feature Embeddings also give
better performance than spelling normalization, but the combination of the two
methods is better still, yielding a 5% raw improvement in tagging accuracy on
Early Modern English texts.
| 2,016 | Computation and Language |
Personalized Speech recognition on mobile devices | We describe a large vocabulary speech recognition system that is accurate,
has low latency, and yet has a small enough memory and computational footprint
to run faster than real-time on a Nexus 5 Android smartphone. We employ a
quantized Long Short-Term Memory (LSTM) acoustic model trained with
connectionist temporal classification (CTC) to directly predict phoneme
targets, and further reduce its memory footprint using an SVD-based compression
scheme. Additionally, we minimize our memory footprint by using a single
language model for both dictation and voice command domains, constructed using
Bayesian interpolation. Finally, in order to properly handle device-specific
information, such as proper names and other context-dependent information, we
inject vocabulary items into the decoder graph and bias the language model
on-the-fly. Our system achieves 13.5% word error rate on an open-ended
dictation task, running with a median speed that is seven times faster than
real-time.
| 2,016 | Computation and Language |
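The SVD-based compression scheme mentioned in this abstract can be pictured with a short sketch: a weight matrix is replaced by two thin factors obtained from a truncated singular value decomposition. This is not the paper's code; the matrix size and the rank below are arbitrary values chosen only for the example.

```python
# Illustrative sketch (not the paper's implementation): low-rank SVD
# compression of a weight matrix. Sizes and rank are assumptions.
import numpy as np

def svd_compress(W: np.ndarray, rank: int):
    """Factor W (m x n) into thin matrices A (m x rank) and B (rank x n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]          # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))    # stand-in for e.g. an LSTM projection matrix
A, B = svd_compress(W, rank=64)

error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {W.size} -> {A.size + B.size}, relative error: {error:.3f}")
```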
Sieve-based Coreference Resolution in the Biomedical Domain | We describe challenges and advantages unique to coreference resolution in the
biomedical domain, and a sieve-based architecture that leverages domain
knowledge for both entity and event coreference resolution. Domain-general
coreference resolution algorithms perform poorly on biomedical documents,
because the cues they rely on such as gender are largely absent in this domain,
and because they do not encode domain-specific knowledge such as the number and
type of participants required in chemical reactions. Moreover, it is difficult
to directly encode this knowledge into most coreference resolution algorithms
because they are not rule-based. Our rule-based architecture uses sequentially
applied hand-designed "sieves", with the output of each sieve informing and
constraining subsequent sieves. This architecture provides a 3.2% increase in
throughput to our Reach event extraction system with precision parallel to that
of the stricter system that relies solely on syntactic patterns for extraction.
| 2,016 | Computation and Language |
Towards using social media to identify individuals at risk for
preventable chronic illness | We describe a strategy for the acquisition of training data necessary to
build a social-media-driven early detection system for individuals at risk for
(preventable) type 2 diabetes mellitus (T2DM). The strategy uses a game-like
quiz with data and questions acquired semi-automatically from Twitter. The
questions are designed to inspire participant engagement and collect relevant
data to train a public-health model applied to individuals. Prior systems
designed to use social media such as Twitter to predict obesity (a risk factor
for T2DM) operate on entire communities such as states, counties, or cities,
based on statistics gathered by government agencies. Because there is
considerable variation among individuals within these groups, training data on
the individual level would be more effective, but this data is difficult to
acquire. The approach proposed here aims to address this issue. Our strategy
has two steps. First, we trained a random forest classifier on data gathered
from (public) Twitter statuses and state-level statistics with state-of-the-art
accuracy. We then converted this classifier into a 20-questions-style quiz and
made it available online. In doing so, we achieved high engagement with
individuals that took the quiz, while also building a training set of
voluntarily supplied individual-level data for future classification.
| 2,016 | Computation and Language |
Training with Exploration Improves a Greedy Stack-LSTM Parser | We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to
support a training-with-exploration procedure using dynamic oracles (Goldberg
and Nivre, 2013) instead of cross-entropy minimization. This form of training,
which accounts for model predictions at training time rather than assuming an
error-free action history, improves parsing accuracies for both English and
Chinese, obtaining very strong results for both languages. We discuss some
modifications needed in order to get training with exploration to work well for
a probabilistic neural network.
| 2,016 | Computation and Language |
Sequential Short-Text Classification with Recurrent and Convolutional
Neural Networks | Recent approaches based on artificial neural networks (ANNs) have shown
promising results for short-text classification. However, many short texts
occur in sequences (e.g., sentences in a document or utterances in a dialog),
and most existing ANN-based systems do not leverage the preceding short texts
when classifying a subsequent one. In this work, we present a model based on
recurrent neural networks and convolutional neural networks that incorporates
the preceding short texts. Our model achieves state-of-the-art results on three
different datasets for dialog act prediction.
| 2,016 | Computation and Language |
Neural Discourse Relation Recognition with Semantic Memory | Humans comprehend the meanings and relations of discourses by relying heavily on
their semantic memory, which encodes general knowledge about concepts and facts.
Inspired by this, we propose a neural recognizer for implicit discourse
relation analysis, which builds upon a semantic memory that stores knowledge in
a distributed fashion. We refer to this recognizer as SeMDER. Starting from
word embeddings of discourse arguments, SeMDER employs a shallow encoder to
generate a distributed surface representation for a discourse. A semantic
encoder with attention to the semantic memory matrix is further established
over surface representations. It is able to retrieve a deep semantic meaning
representation for the discourse from the memory. Using the surface and
semantic representations as input, SeMDER finally predicts implicit discourse
relations via a neural recognizer. Experiments on the benchmark data set show
that SeMDER benefits from the semantic memory and achieves substantial
improvements of 2.56% on average over current state-of-the-art baselines in
terms of F1-score.
| 2,017 | Computation and Language |
Variational Neural Discourse Relation Recognizer | Implicit discourse relation recognition is a crucial component for automatic
discourse-level analysis and natural language understanding. Previous studies
exploit discriminative models that are built on either powerful manual features
or deep discourse representations. In this paper, instead, we explore
generative models and propose a variational neural discourse relation
recognizer. We refer to this model as VarNDRR. VarNDRR establishes a directed
probabilistic model with a latent continuous variable that generates both a
discourse and the relation between the two arguments of the discourse. In order
to perform efficient inference and learning, we introduce neural discourse
relation models to approximate the prior and posterior distributions of the
latent variable, and employ these approximated distributions to optimize a
reparameterized variational lower bound. This allows VarNDRR to be trained with
standard stochastic gradient methods. Experiments on the benchmark data set
show that VarNDRR can achieve comparable results against state-of-the-art
baselines without using any manual features.
| 2,016 | Computation and Language |
Interactive Tools and Tasks for the Hebrew Bible | This contribution to a special issue on "Computer-aided processing of
intertextuality" in ancient texts will illustrate how using digital tools to
interact with the Hebrew Bible offers new promising perspectives for
visualizing the texts and for performing tasks in education and research. This
contribution explores how the corpus of the Hebrew Bible created and maintained
by the Eep Talstra Centre for Bible and Computer can support new methods for
modern knowledge workers within the field of digital humanities and theology to
be applied to ancient texts, and how this can be envisioned as a new field of
digital intertextuality. The article first describes how the corpus was used to
develop the Bible Online Learner as a persuasive technology to enhance language
learning with, in, and around a database that acts as the engine driving
interactive tasks for learners. Intertextuality in this case is a matter of
active exploration and ongoing practice. Furthermore, interactive
corpus-technology has an important bearing on the task of textual criticism as
a specialized area of research that depends increasingly on the availability of
digital resources. Commercial solutions developed by software companies like
Logos and Accordance offer a market-based intertextuality defined by the
production of advanced digital resources for scholars and students as useful
alternatives to often inaccessible and expensive printed versions. It is
reasonable to expect that in the future interactive corpus technology will
allow scholars to do innovative academic tasks in textual criticism and
interpretation. We have already seen the emergence of promising tools for text
categorization, analysis of translation shifts, and interpretation. Broadly
speaking, interactive tools and tasks within the three areas of language
learning, textual criticism, and Biblical studies illustrate a new kind of
intertextuality emerging within digital humanities.
| 2,017 | Computation and Language |
Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature
Representations | We present a simple and effective scheme for dependency parsing which is
based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with
a BiLSTM vector representing the token in its sentential context, and feature
vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is
trained jointly with the parser objective, resulting in very effective feature
extractors for parsing. We demonstrate the effectiveness of the approach by
applying it to a greedy transition-based parser as well as to a globally
optimized graph-based parser. The resulting parsers have very simple
architectures, and match or surpass the state-of-the-art accuracies on English
and Chinese.
| 2,016 | Computation and Language |
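The feature-extraction idea this abstract describes, concatenating a few BiLSTM vectors into a feature vector, can be sketched briefly: each token gets a BiLSTM vector, and a candidate head-modifier arc is scored from the concatenation of the two tokens' vectors. The dimensions and the scoring MLP below are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch of BiLSTM feature extraction for arc scoring; sizes are assumed.
import torch
import torch.nn as nn

class ArcScorer(nn.Module):
    def __init__(self, emb_dim=100, hidden=125, mlp_dim=100):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(4 * hidden, mlp_dim), nn.Tanh(),
                                 nn.Linear(mlp_dim, 1))

    def forward(self, word_embs, head_idx, mod_idx):
        # word_embs: (1, sent_len, emb_dim)
        ctx, _ = self.bilstm(word_embs)                 # (1, sent_len, 2*hidden)
        feat = torch.cat([ctx[0, head_idx], ctx[0, mod_idx]], dim=-1)
        return self.mlp(feat)                           # scalar arc score

scorer = ArcScorer()
sentence = torch.randn(1, 6, 100)                       # 6 random "word embeddings"
print(scorer(sentence, head_idx=2, mod_idx=4).item())
```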
Multichannel Variable-Size Convolution for Sentence Classification | We propose MVCNN, a convolutional neural network (CNN) architecture for
sentence classification. It (i) combines diverse versions of pretrained word
embeddings and (ii) extracts features of multigranular phrases with
variable-size convolution filters. We also show that pretraining MVCNN is
critical for good performance. MVCNN achieves state-of-the-art performance on
four tasks: on small-scale binary, small-scale multi-class and large-scale
Twitter sentiment prediction and on subjectivity classification.
| 2,016 | Computation and Language |
Unsupervised Ranking Model for Entity Coreference Resolution | Coreference resolution is one of the first stages in deep language
understanding and its importance has been well recognized in the natural
language processing community. In this paper, we propose a generative,
unsupervised ranking model for entity coreference resolution by introducing
resolution mode variables. Our unsupervised system achieves a 58.44% F1 score
under the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan
et al., 2012), outperforming the Stanford deterministic system (Lee et al.,
2013) by 3.01%.
| 2,016 | Computation and Language |
Topic Modeling Using Distributed Word Embeddings | We propose a new algorithm for topic modeling, Vec2Topic, that identifies the
main topics in a corpus using semantic information captured via
high-dimensional distributed word embeddings. Our technique is unsupervised and
generates a list of topics ranked with respect to importance. We find that it
works better than existing topic modeling techniques such as Latent Dirichlet
Allocation for identifying key topics in user-generated content, such as
emails, chats, etc., where topics are diffused across the corpus. We also find
that Vec2Topic works equally well for non-user generated content, such as
papers, reports, etc., and for small corpora such as a single document.
| 2,016 | Computation and Language |
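The general recipe this abstract points at, reading topics off groups of semantically similar word embeddings, can be sketched as follows. This is only an illustration of the idea, not the Vec2Topic algorithm itself; the toy vectors and vocabulary are invented.

```python
# Illustrative-only sketch: cluster word embeddings and read "topics" off the clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Toy embeddings: two fuzzy clusters standing in for a billing topic and a
# meetings topic; real usage would load trained high-dimensional vectors.
topic_of = {"price": 0, "invoice": 0, "payment": 0, "refund": 0,
            "meeting": 1, "agenda": 1, "schedule": 1, "minutes": 1}
vocab = list(topic_of)
emb = np.stack([rng.normal(loc=3.0 * topic_of[w], scale=0.5, size=50) for w in vocab])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(emb)
for k in range(2):
    members = [w for w, label in zip(vocab, km.labels_) if label == k]
    print(f"topic {k}: {members}")
```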
Evaluating the word-expert approach for Named-Entity Disambiguation | Named Entity Disambiguation (NED) is the task of linking a named-entity
mention to an instance in a knowledge-base, typically Wikipedia. This task is
closely related to word-sense disambiguation (WSD), where the supervised
word-expert approach has prevailed. In this work we present the results of the
word-expert approach to NED, where one classifier is built for each target
entity mention string. The resources necessary to build the system, a
dictionary and a set of training instances, have been automatically derived
from Wikipedia. We provide empirical evidence of the value of this approach, as
well as a study of the differences between WSD and NED, including ambiguity and
synonymy statistics.
| 2,016 | Computation and Language |
Recurrent Dropout without Memory Loss | This paper presents a novel approach to recurrent neural network (RNN)
regularization. Differently from the widely adopted dropout method, which is
applied to \textit{forward} connections of feed-forward architectures or RNNs,
we propose to drop neurons directly in \textit{recurrent} connections in a way
that does not cause loss of long-term memory. Our approach is as easy to
implement and apply as the regular feed-forward dropout and we demonstrate its
effectiveness for Long Short-Term Memory networks, the most popular type of RNN
cell. Our experiments on NLP benchmarks show consistent improvements even when
combined with conventional feed-forward dropout.
| 2,016 | Computation and Language |
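One way to picture the idea above of dropping recurrent information without erasing long-term memory is to apply the dropout mask only to the candidate update of an LSTM cell while letting the previous cell state pass through untouched. The sketch below is a toy numpy illustration under that reading, not the paper's implementation; gate sizes are arbitrary.

```python
# Minimal sketch: dropout on the candidate update g_t only, never on c_{t-1}.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b, drop_p=0.25, rng=None):
    z = W @ x + U @ h_prev + b                     # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    if rng is not None:                            # training-time dropout on g only
        mask = (rng.random(g.shape) > drop_p) / (1.0 - drop_p)
        g = g * mask
    c = f * c_prev + i * g                         # c_prev is never masked
    h = o * np.tanh(c)
    return h, c

hid, inp = 8, 5
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * hid, inp))
U = rng.standard_normal((4 * hid, hid))
b = np.zeros(4 * hid)
h, c = lstm_step(rng.standard_normal(inp), np.zeros(hid), np.zeros(hid), W, U, b, rng=rng)
print(h.shape, c.shape)
```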
Comparing Convolutional Neural Networks to Traditional Models for Slot
Filling | We address relation classification in the context of slot filling, the task
of finding and evaluating fillers like "Steve Jobs" for the slot X in "X
founded Apple". We propose a convolutional neural network which splits the
input sentence into three parts according to the relation arguments and compare
it to state-of-the-art and traditional approaches of relation classification.
Finally, we combine different methods and show that the combination is better
than individual approaches. We also analyze the effect of genre differences on
performance.
| 2,016 | Computation and Language |
Self-organization of vocabularies under different interaction orders | Traditionally, the formation of vocabularies has been studied by agent-based
models (especially the Naming Game) in which random pairs of agents negotiate
word-meaning associations at each discrete time step. This paper proposes a
first approximation to a novel question: to what extent is the negotiation of
word-meaning associations influenced by the order in which the individuals
interact? Automata Networks provide an adequate mathematical framework to
explore this question. Computer simulations suggest that on two-dimensional
lattices the typical features of the formation of word-meaning associations are
recovered under random schemes that update small fractions of the population at
the same time.
| 2,016 | Computation and Language |
Modeling self-organization of vocabularies under phonological similarity
effects | This work develops a computational model (by Automata Networks) of
phonological similarity effects involved in the formation of word-meaning
associations on artificial populations of speakers. Classical studies show that
in recall experiments memory performance was impaired for phonologically
similar words versus dissimilar ones. Here, the individuals confound
phonologically similar words according to a predefined parameter. The main
hypothesis is that there is a critical range of the parameter, and with this,
of working-memory mechanisms, which implies drastic changes in the final
consensus of the entire population. Theoretical results present proofs of
convergence for a particular case of the model within a worst-case complexity
framework. Computer simulations describe the evolution of an energy function
that measures the amount of local agreement between individuals. The main
finding is the appearance of sudden changes in the energy function at critical
parameters.
| 2,016 | Computation and Language |
Predicate Gradual Logic and Linguistics | There are several major proposals for treating donkey anaphora such as
discourse representation theory and the like, or E-Type theories and the
like. Every one of them works well for a set of specific examples that they
use to demonstrate the validity of their approaches. As I show in this paper,
however, they are not very generalisable and do not account for essentially the
same problem that they remedy when it manifests in other examples. I propose
another logical approach. I develop a logic that extends a recent propositional
gradual logic, and show that it can treat donkey anaphora generally. I also
identify and address a problem around the modern convention on existential
import. Furthermore, I show that Aristotle's syllogisms and conversion are
realisable in this logic.
| 2,016 | Computation and Language |
Bank distress in the news: Describing events through deep learning | While many models are purposed for detecting the occurrence of significant
events in financial systems, the task of providing qualitative detail on the
developments is not usually as well automated. We present a deep learning
approach for detecting relevant discussion in text and extracting natural
language descriptions of events. Supervised by only a small set of event
information, comprising entity names and dates, the model is leveraged by
unsupervised learning of semantic vector representations on extensive text
data. We demonstrate applicability to the study of financial risk based on news
(6.6M articles), particularly bank distress and government interventions (243
events), where indices can signal the level of bank-stress-related reporting at
the entity level, or aggregated at national or European level, while being
coupled with explanations. Thus, we exemplify how text, as timely, widely
available and descriptive data, can serve as a useful complementary source of
information for financial and systemic risk analytics.
| 2,017 | Computation and Language |
Predicting health inspection results from online restaurant reviews | Informatics around public health is increasingly shifting from the
professional to the public spheres. In this work, we apply linguistic analytics
to restaurant reviews, from Yelp, in order to automatically predict official
health inspection reports. We consider two types of feature sets, i.e., keyword
detection and topic model features, and use these in several classification
methods. Our empirical analysis shows that these extracted features can predict
public health inspection reports with over 90% accuracy using simple support
vector machines.
| 2,016 | Computation and Language |
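Below is a toy-scale sketch of the kind of pipeline this abstract describes: text features extracted from reviews feeding a support vector machine classifier. The tiny inline reviews and labels are invented for illustration; the study itself uses Yelp reviews and official inspection outcomes.

```python
# Hedged toy sketch: review text features -> SVM classifier; data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "dirty tables and the bathroom smelled awful",
    "saw a cockroach near the kitchen, never again",
    "spotless dining room and very fresh food",
    "clean, quick service and great hygiene",
]
labels = [1, 1, 0, 0]   # 1 = likely inspection violation, 0 = likely pass (invented)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(reviews, labels)
print(clf.predict(["the kitchen looked dirty and smelled bad"]))
```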
A Readability Analysis of Campaign Speeches from the 2016 US
Presidential Campaign | Readability is defined as the reading level of the speech from grade 1 to
grade 12. It results from the use of the REAP readability analysis (vocabulary
- Collins-Thompson and Callan, 2004; syntax - Heilman et al., 2006, 2007), which
uses the lexical content and grammatical structure of the sentences in a
document to predict the reading level. After analysis, results were grouped
into the average readability of each candidate, the evolution of the
candidate's speeches' readability over time and the standard deviation, or how
much each candidate varied their speech from one venue to another. For
comparison, one speech from four past presidents and the Gettysburg Address
were also analyzed.
| 2,016 | Computation and Language |
Readability-based Sentence Ranking for Evaluating Text Simplification | We propose a new method for evaluating the readability of simplified
sentences through pair-wise ranking. The validity of the method is established
through in-corpus and cross-corpus evaluation experiments. The approach
correctly identifies the ranking of simplified and unsimplified sentences in
terms of their reading level with an accuracy of over 80%, significantly
outperforming previous results. To gain qualitative insights into the nature of
simplification at the sentence level, we studied the impact of specific
linguistic features. We empirically confirm that both word-level and syntactic
features play a role in comparing the degree of simplification of authentic
data. To carry out this research, we created a new sentence-aligned corpus from
professionally simplified news articles. The new corpus resource enriches the
empirical basis of sentence-level simplification research, which so far relied
on a single resource. Most importantly, it facilitates cross-corpus evaluation
for simplification, a key step towards generalizable results.
| 2,016 | Computation and Language |
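The pair-wise ranking setup can be sketched by training a classifier on feature differences between the two sentences of a pair and reading the predicted class as "which sentence is simpler". The features, example sentences, and labels below are simplistic placeholders, not the authors' feature set or corpus.

```python
# Sketch under stated assumptions: pairwise readability ranking via feature differences.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(sentence: str) -> np.ndarray:
    words = sentence.split()
    avg_len = sum(len(w) for w in words) / len(words)
    return np.array([len(words), avg_len])          # toy features only

pairs = [  # (harder sentence, simpler sentence), invented examples
    ("The committee postponed deliberations indefinitely.", "The group put off the talks."),
    ("He exhibited considerable reluctance to participate.", "He did not want to join."),
]
X = np.array([features(a) - features(b) for a, b in pairs] +
             [features(b) - features(a) for a, b in pairs])
y = np.array([0, 0, 1, 1])        # 1 = first sentence of the difference is simpler

ranker = LogisticRegression().fit(X, y)
a, b = "The proposal was unequivocally rejected.", "They said no."
pred = ranker.predict([features(a) - features(b)])[0]
print("simpler first" if pred == 1 else "harder first")
```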
A Fast Unified Model for Parsing and Sentence Understanding | Tree-structured neural networks exploit valuable syntactic parse information
as they interpret the meanings of sentences. However, they suffer from two key
technical problems that make them slow and unwieldy for large-scale NLP tasks:
they usually operate on parsed sentences and they do not directly support
batched computation. We address these issues by introducing the Stack-augmented
Parser-Interpreter Neural Network (SPINN), which combines parsing and
interpretation within a single tree-sequence hybrid model by integrating
tree-structured sentence interpretation into the linear sequential structure of
a shift-reduce parser. Our model supports batched computation for a speedup of
up to 25 times over other tree-structured models, and its integrated parser can
operate on unparsed data with little loss in accuracy. We evaluate it on the
Stanford NLI entailment task and show that it significantly outperforms other
sentence-encoding models.
| 2,016 | Computation and Language |
Globally Normalized Transition-Based Neural Networks | We introduce a globally normalized transition-based neural network model that
achieves state-of-the-art part-of-speech tagging, dependency parsing and
sentence compression results. Our model is a simple feed-forward neural network
that operates on a task-specific transition system, yet achieves comparable or
better accuracies than recurrent models. We discuss the importance of global as
opposed to local normalization: a key insight is that the label bias problem
implies that globally normalized models can be strictly more expressive than
locally normalized models.
| 2,016 | Computation and Language |
Generating Natural Questions About an Image | There has been an explosion of work in the vision & language community during
the past few years, from image captioning to video transcription to answering
questions about images. These tasks have focused on literal descriptions of the
image. To move beyond the literal, we choose to explore how questions about an
image are often directed at commonsense inference and the abstract events
evoked by objects in the image. In this paper, we introduce the novel task of
Visual Question Generation (VQG), where the system is tasked with asking a
natural and engaging question when shown an image. We provide three datasets
which cover a variety of images from object-centric to event-centric, with
considerably more abstract training data than provided to state-of-the-art
captioning systems thus far. We train and test several generative and retrieval
models to tackle the task of VQG. Evaluation results show that while such
models ask reasonable questions for a variety of images, there is still a wide
gap with human performance which motivates further work on connecting images
with commonsense knowledge and pragmatics. Our proposed task offers a new
challenge to the community which we hope furthers interest in exploring deeper
connections between vision & language.
| 2,016 | Computation and Language |
Adaptive Joint Learning of Compositional and Non-Compositional Phrase
Embeddings | We present a novel method for jointly learning compositional and
non-compositional phrase embeddings by adaptively weighting both types of
embeddings using a compositionality scoring function. The scoring function is
used to quantify the level of compositionality of each phrase, and the
parameters of the function are jointly optimized with the objective for
learning phrase embeddings. In experiments, we apply the adaptive joint
learning method to the task of learning embeddings of transitive verb phrases,
and show that the compositionality scores have strong correlation with human
ratings for verb-object compositionality, substantially outperforming the
previous state of the art. Moreover, our embeddings improve upon the previous
best model on a transitive verb disambiguation task. We also show that a simple
ensemble technique further improves the results for both tasks.
| 2,016 | Computation and Language |
Tree-to-Sequence Attentional Neural Machine Translation | Most of the existing Neural Machine Translation (NMT) models focus on the
conversion of sequential data and do not directly use syntactic information. We
propose a novel end-to-end syntactic NMT model, extending a
sequence-to-sequence model with the source-side phrase structure. Our model has
an attention mechanism that enables the decoder to generate a translated word
while softly aligning it with phrases as well as words of the source sentence.
Experimental results on the WAT'15 English-to-Japanese dataset demonstrate that
our proposed model considerably outperforms sequence-to-sequence attentional
NMT models and compares favorably with the state-of-the-art tree-to-string SMT
system.
| 2,016 | Computation and Language |
Improving Hypernymy Detection with an Integrated Path-based and
Distributional Method | Detecting hypernymy relations is a key task in NLP, which is addressed in the
literature using two complementary approaches: distributional methods, whose
supervised variants are the current best performers, and path-based methods,
which have received less research attention. We suggest an improved path-based
algorithm, in which the dependency paths are encoded using a recurrent neural
network, that achieves results comparable to distributional methods. We then
extend the approach to integrate both path-based and distributional signals,
significantly improving upon the state-of-the-art on this task.
| 2,016 | Computation and Language |
How Transferable are Neural Networks in NLP Applications? | Transfer learning aims to make use of valuable knowledge in a source
domain to help model performance in a target domain. It is particularly
important for neural networks, which are prone to overfitting. In some
fields like image processing, many studies have shown the effectiveness of
neural network-based transfer learning. For neural NLP, however, existing
studies have only casually applied transfer learning, and conclusions are
inconsistent. In this paper, we conduct systematic case studies and provide an
illuminating picture of the transferability of neural networks in NLP.
| 2,016 | Computation and Language |
Sentence Pair Scoring: Towards Unified Framework for Text Comprehension | We review the task of Sentence Pair Scoring, popular in the literature in
various forms - viewed as Answer Sentence Selection, Semantic Text Scoring,
Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a
component of Memory Networks.
We argue that all such tasks are similar from the model perspective and
propose new baselines by comparing the performance of common IR metrics and
popular convolutional, recurrent and attention-based neural models across many
Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating
randomized models, propose a statistically grounded methodology, and attempt to
improve comparisons by releasing new datasets that are much harder than some of
the currently used well explored benchmarks. We introduce a unified open source
software framework with easily pluggable models and tasks, which enables us to
experiment with multi-task reusability of a trained sentence model. We set a new
state of the art in performance on the Ubuntu Dialogue dataset.
| 2,016 | Computation and Language |
A Character-Level Decoder without Explicit Segmentation for Neural
Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs (En-Cs, En-De, En-Ru and En-Fi) using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru.
| 2,016 | Computation and Language |
A Persona-Based Neural Conversation Model | We present persona-based models for handling the issue of speaker consistency
in neural response generation. A speaker model encodes personas in distributed
embeddings that capture individual characteristics such as background
information and speaking style. A dyadic speaker-addressee model captures
properties of interactions between two interlocutors. Our models yield
qualitative performance improvements in both perplexity and BLEU scores over
baseline sequence-to-sequence models, with similar gains in speaker consistency
as measured by human judges.
| 2,016 | Computation and Language |
Multi-Task Cross-Lingual Sequence Tagging from Scratch | We present a deep hierarchical recurrent neural network for sequence tagging.
Given a sequence of words, our model employs deep gated recurrent units on both
character and word levels to encode morphology and context information, and
applies a conditional random field layer to predict the tags. Our model is task
independent, language independent, and feature engineering free. We further
extend our model to multi-task and cross-lingual joint training by sharing the
architecture and parameters. Our model achieves state-of-the-art results in
multiple languages on several benchmark tasks including POS tagging, chunking,
and NER. We also demonstrate that multi-task and cross-lingual joint training
can improve the performance in various cases.
| 2,016 | Computation and Language |
Incorporating Copying Mechanism in Sequence-to-Sequence Learning | We address an important problem in sequence-to-sequence (Seq2Seq) learning
referred to as copying, in which certain segments in the input sequence are
selectively replicated in the output sequence. A similar phenomenon is
observable in human language communication. For example, humans tend to repeat
entity names or even long phrases in conversation. The challenge with regard to
copying in Seq2Seq is that new machinery is needed to decide when to perform
the operation. In this paper, we incorporate copying into neural network-based
Seq2Seq learning and propose a new model called CopyNet with encoder-decoder
structure. CopyNet can nicely integrate the regular way of word generation in
the decoder with the new copying mechanism which can choose sub-sequences in
the input sequence and put them at proper places in the output sequence. Our
empirical study on both synthetic data sets and real world data sets
demonstrates the efficacy of CopyNet. For example, CopyNet can outperform
regular RNN-based models by remarkable margins on text summarization tasks.
| 2,016 | Computation and Language |
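The copying mechanism described above can be illustrated, in a heavily simplified form, by jointly normalizing scores over a fixed vocabulary and scores over source positions, so an out-of-vocabulary source word can still be emitted. The scores below are hard-coded purely for illustration; in the model they would come from the decoder state.

```python
# Toy sketch of the copying idea (not the CopyNet implementation).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = ["<unk>", "the", "met", "with", "investors"]
source = ["Tesla", "met", "with", "investors"]        # "Tesla" is out-of-vocabulary

gen_scores = np.array([0.1, 0.4, 0.2, 0.2, 0.3])      # invented scores over the vocabulary
copy_scores = np.array([2.0, 0.1, 0.1, 0.1])          # invented scores over source positions

# Joint normalization over "generate word v" and "copy source position j".
probs = softmax(np.concatenate([gen_scores, copy_scores]))
candidates = vocab + [f"copy:{w}" for w in source]
print(candidates[int(np.argmax(probs))])              # -> copy:Tesla
```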
Static and Dynamic Feature Selection in Morphosyntactic Analyzers | We study the use of greedy feature selection methods for morphosyntactic
tagging under a number of different conditions. We compare a static ordering of
features to a dynamic ordering based on mutual information statistics, and we
apply the techniques to standalone taggers as well as joint systems for tagging
and parsing. Experiments on five languages show that feature selection can
result in more compact models as well as higher accuracy under all conditions,
but also that a dynamic ordering works better than a static ordering and that
joint systems benefit more than standalone taggers. We also show that the same
techniques can be used to select which morphosyntactic categories to predict in
order to maximize syntactic accuracy in a joint system. Our final results
represent a substantial improvement of the state of the art for several
languages, while at the same time reducing both the number of features and the
running time by up to 80% in some cases.
| 2,016 | Computation and Language |
Bayesian Neural Word Embedding | Recently, several works in the domain of natural language processing
presented successful methods for word embedding. Among them, the Skip-Gram with
negative sampling, also known as word2vec, advanced the state of the art on
various linguistic tasks. In this paper, we propose a scalable Bayesian neural
word embedding algorithm. The algorithm relies on a Variational Bayes solution
for the Skip-Gram objective, and a detailed step-by-step description is
provided. We present experimental results that demonstrate the performance of
the proposed algorithm for word analogy and similarity tasks on six different
datasets and show it is competitive with the original Skip-Gram method.
| 2,017 | Computation and Language |
Stack-propagation: Improved Representation Learning for Syntax | Traditional syntax models typically leverage part-of-speech (POS) information
by constructing features from hand-tuned templates. We demonstrate that a
better approach is to utilize POS tags as a regularizer of learned
representations. We propose a simple method for learning a stacked pipeline of
models which we call "stack-propagation". We apply this to dependency parsing
and tagging, where we use the hidden layer of the tagger network as a
representation of the input tokens for the parser. At test time, our parser
does not require predicted POS tags. On 19 languages from the Universal
Dependencies, our method is 1.3% (absolute) more accurate than a
state-of-the-art graph-based approach and 2.7% more accurate than the most
comparable greedy model.
| 2,016 | Computation and Language |
Learning Executable Semantic Parsers for Natural Language Understanding | For building question answering systems and natural language interfaces,
semantic parsing has emerged as an important and powerful paradigm. Semantic
parsers map natural language into logical forms, the classic representation for
many important linguistic phenomena. The modern twist is that we are interested
in learning semantic parsers from data, which introduces a new layer of
statistical and computational issues. This article lays out the components of a
statistical semantic parser, highlighting the key challenges. We will see that
semantic parsing is a rich fusion of the logical and the statistical world, and
that this fusion will play an integral role in the future of natural language
understanding systems.
| 2,016 | Computation and Language |
Recursive Neural Conditional Random Fields for Aspect-based Sentiment
Analysis | In aspect-based sentiment analysis, extracting aspect terms along with the
opinions being expressed from user-generated content is one of the most
important subtasks. Previous studies have shown that exploiting connections
between aspect and opinion terms is promising for this task. In this paper, we
propose a novel joint model that integrates recursive neural networks and
conditional random fields into a unified framework for explicit aspect and
opinion terms co-extraction. The proposed model learns high-level
discriminative features and doubly propagates information between aspect and
opinion terms simultaneously. Moreover, it can flexibly incorporate
hand-crafted features to further boost its information
extraction performance. Experimental results on the SemEval Challenge 2014
dataset show the superiority of our proposed model over several baseline
methods as well as the winning systems of the challenge.
| 2,016 | Computation and Language |
Latent Predictor Networks for Code Generation | Many language generation tasks require the production of text conditioned on
both structured and unstructured inputs. We present a novel neural network
architecture which generates an output sequence conditioned on an arbitrary
number of input functions. Crucially, our approach allows both the choice of
conditioning context and the granularity of generation, for example characters
or tokens, to be marginalised, thus permitting scalable and effective training.
Using this framework, we address the problem of generating programming code
from a mixed natural language and structured specification. We create two new
data sets for this paradigm derived from the collectible trading card games
Magic the Gathering and Hearthstone. On these, and a third preexisting corpus,
we demonstrate that marginalising multiple predictors allows our model to
outperform strong benchmarks.
| 2,016 | Computation and Language |
Multi-domain machine translation enhancements by parallel data
extraction from comparable corpora | Parallel texts are a relatively rare language resource, however, they
constitute a very useful research material with a wide range of applications.
This study presents and analyses new methodologies we developed for obtaining
such data from previously built comparable corpora. The methodologies are
automatic and unsupervised, which makes them suitable for large-scale research. The
task is highly practical as non-parallel multilingual data occur much more
frequently than parallel corpora and accessing them is easy, although parallel
sentences are a considerably more useful resource. In this study, we propose a
method of automatic web crawling in order to build topic-aligned comparable
corpora, e.g. based on Wikipedia or Euronews.com. We also developed new
methods of obtaining parallel sentences from comparable data and proposed
methods of filtration of corpora capable of selecting inconsistent or only
partially equivalent translations. Our methods are easily scalable to other
languages. Evaluation of the quality of the created corpora was performed by
analysing the impact of their use on statistical machine translation systems.
Experiments were presented on the basis of the Polish-English language pair for
texts from different domains, i.e. lectures, phrasebooks, film dialogues,
European Parliament proceedings and texts contained in medicine leaflets. We also
tested a second method of creating parallel corpora based on data from
comparable corpora which allows for automatically expanding the existing corpus
of sentences about a given domain on the basis of analogies found between them.
It does not require, therefore, having past parallel resources in order to
train a classifier.
| 2,016 | Computation and Language |
Generating Factoid Questions With Recurrent Neural Networks: The 30M
Factoid Question-Answer Corpus | Over the past decade, large-scale supervised learning corpora have enabled
machine learning researchers to make substantial advances. However, to this
date, there are no large-scale question-answer corpora available. In this paper
we present the 30M Factoid Question-Answer Corpus, an enormous question answer
pair corpus produced by applying a novel neural network architecture on the
knowledge base Freebase to transduce facts into natural language questions. The
produced question answer pairs are evaluated both by human evaluators and using
automatic evaluation metrics, including well-established machine translation
and sentence similarity metrics. Across all evaluation criteria the
question-generation model outperforms the competing template-based baseline.
Furthermore, when presented to human evaluators, the generated questions appear
comparable in quality to real human-generated questions.
| 2,016 | Computation and Language |
Semi-supervised Word Sense Disambiguation with Neural Models | Determining the intended sense of words in text - word sense disambiguation
(WSD) - is a long standing problem in natural language processing. Recently,
researchers have shown promising results using word vectors extracted from a
neural network language model as features in WSD algorithms. However, a simple
average or concatenation of word vectors for each word in a text loses the
sequential and syntactic information of the text. In this paper, we study WSD
with a sequence learning neural net, LSTM, to better capture the sequential and
syntactic patterns of the text. To alleviate the lack of training data in
all-words WSD, we employ the same LSTM in a semi-supervised label propagation
classifier. We demonstrate state-of-the-art results, especially on verbs.
| 2,016 | Computation and Language |
Recurrent Neural Network Encoder with Attention for Community Question
Answering | We apply a general recurrent neural network (RNN) encoder framework to
community question answering (cQA) tasks. Our approach does not rely on any
linguistic processing, and can be applied to different languages or domains.
Further improvements are observed when we extend the RNN encoders with a neural
attention mechanism that encourages reasoning over entire sequences. To deal
with practical issues such as data sparsity and imbalanced labels, we apply
various techniques such as transfer learning and multitask learning. Our
experiments on the SemEval-2016 cQA task show 10% improvement on a MAP score
compared to an information retrieval-based approach, and achieve comparable
performance to a strong handcrafted feature-based method.
| 2,016 | Computation and Language |
Enabling Cognitive Intelligence Queries in Relational Databases using
Low-dimensional Word Embeddings | We apply distributed language embedding methods from Natural Language
Processing to assign a vector to each database entity associated token (for
example, a token may be a word occurring in a table row, or the name of a
column). These vectors, of typical dimension 200, capture the meaning of tokens
based on the contexts in which the tokens appear together. To form vectors, we
apply a learning method to a token sequence derived from the database. We
describe various techniques for extracting token sequences from a database. The
techniques differ in complexity, in the token sequences they output and in the
database information used (e.g., foreign keys). The vectors can be used to
algebraically quantify semantic relationships between the tokens such as
similarities and analogies. Vectors enable a dual view of the data: relational
and (meaningful rather than purely syntactical) text. We introduce and explore
a new class of queries called cognitive intelligence (CI) queries that extract
information from the database based, in part, on the relationships encoded by
vectors. We have implemented a prototype system on top of Spark to exhibit the
power of CI queries. Here, CI queries are realized via SQL UDFs. This power
goes far beyond text extensions to relational systems due to the information
encoded in vectors. We also consider various extensions to the basic scheme,
including using a collection of views derived from the database to focus on a
domain of interest, utilizing vectors and/or text from external sources,
maintaining vectors as the database evolves and exploring a database without
utilizing its schema. For the latter, we consider minimal extensions to SQL to
vastly improve query expressiveness.
| 2,016 | Computation and Language |
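The flavor of a CI query can be sketched with a user-defined function: SQL calls a similarity function computed over token vectors. The paper's prototype runs on Spark with SQL UDFs; the sketch below uses SQLite purely to stay self-contained, and the two-dimensional vectors are invented stand-ins for learned embeddings.

```python
# Self-contained sketch of a vector-similarity UDF callable from SQL.
import sqlite3
import numpy as np

vectors = {                       # invented stand-ins for learned token vectors
    "espresso": np.array([0.9, 0.1]),
    "latte":    np.array([0.8, 0.2]),
    "wrench":   np.array([0.1, 0.9]),
}

def cosine_sim(a: str, b: str) -> float:
    va, vb = vectors[a], vectors[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

conn = sqlite3.connect(":memory:")
conn.create_function("cosine_sim", 2, cosine_sim)
conn.execute("CREATE TABLE products(name TEXT)")
conn.executemany("INSERT INTO products VALUES (?)", [("latte",), ("wrench",)])

rows = conn.execute(
    "SELECT name, cosine_sim(name, 'espresso') AS sim FROM products ORDER BY sim DESC"
).fetchall()
print(rows)   # latte ranks above wrench
```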
Neural Summarization by Extracting Sentences and Words | Traditional approaches to extractive summarization rely heavily on
human-engineered features. In this work we propose a data-driven approach based
on neural networks and continuous sentence features. We develop a general
framework for single-document summarization composed of a hierarchical document
encoder and an attention-based extractor. This architecture allows us to
develop different classes of summarization models which can extract sentences
or words. We train our models on large scale corpora containing hundreds of
thousands of document-summary pairs. Experimental results on two summarization
datasets demonstrate that our models obtain results comparable to the state of
the art without any access to linguistic annotation.
| 2,016 | Computation and Language |
Evaluating semantic models with word-sentence relatedness | Semantic textual similarity (STS) systems are designed to encode and evaluate
the semantic similarity between words, phrases, sentences, and documents. One
method for assessing the quality or authenticity of semantic information
encoded in these systems is by comparison with human judgments. A data set for
evaluating semantic models was developed consisting of 775 English
word-sentence pairs, each annotated for semantic relatedness by human raters
engaged in a Maximum Difference Scaling (MDS) task, as well as a faster
alternative task. As a sample application of this relatedness data,
behavior-based relatedness was compared to the relatedness computed via four
off-the-shelf STS models: n-gram, Latent Semantic Analysis (LSA), Word2Vec, and
UMBC Ebiquity. Some STS models captured much of the variance in the human
judgments collected, but they were not sensitive to the implicatures and
entailments that were processed and considered by the participants. All text
stimuli and judgment data have been made freely available.
| 2,017 | Computation and Language |
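The evaluation recipe implied above, scoring each word-sentence pair with a model and correlating the scores with human relatedness ratings, can be sketched as follows; the vectors and ratings are fabricated solely to show the computation.

```python
# Hedged sketch: model relatedness vs. human ratings via Spearman correlation.
import numpy as np
from scipy.stats import spearmanr

emb = {"dog": np.array([1.0, 0.1]), "barked": np.array([0.9, 0.2]),
       "loudly": np.array([0.8, 0.3]), "tax": np.array([0.1, 1.0]),
       "forms": np.array([0.2, 0.9])}              # invented toy vectors

def relatedness(word, sentence):
    """Cosine similarity between a word vector and the averaged sentence vector."""
    sent_vec = np.mean([emb[w] for w in sentence.split()], axis=0)
    wv = emb[word]
    return float(wv @ sent_vec / (np.linalg.norm(wv) * np.linalg.norm(sent_vec)))

pairs = [("dog", "barked loudly"), ("dog", "tax forms"), ("tax", "barked loudly")]
human = [0.9, 0.1, 0.15]                           # fabricated human ratings
model = [relatedness(w, s) for w, s in pairs]
rho, _ = spearmanr(human, model)
print(f"Spearman correlation with human judgments: {rho:.2f}")
```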
Semantic Regularities in Document Representations | Recent work has shown that distributed word representations are good at
capturing linguistic regularities in language. This allows vector-oriented
reasoning based on simple linear algebra between words. Since many different
methods have been proposed for learning document representations, it is natural
to ask whether there is also linear structure in these learned representations
to allow similar reasoning at document level. To answer this question, we
design a new document analogy task for testing the semantic regularities in
document representations, and conduct empirical evaluations over several
state-of-the-art document representation models. The results reveal that neural
embedding based document representations work better on this analogy task than
conventional methods, and we provide some preliminary explanations over these
observations.
| 2,016 | Computation and Language |
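A document analogy query of the kind described above can be sketched with simple vector arithmetic: given documents A, B and C, find the document whose vector is closest to vec(B) - vec(A) + vec(C). The two-dimensional "document vectors" and names below are invented.

```python
# Toy sketch of document-level analogy reasoning with invented vectors.
import numpy as np

docs = {
    "paris_guide": np.array([1.0, 0.0]),
    "france_news": np.array([1.0, 1.0]),
    "tokyo_guide": np.array([0.0, 0.0]),
    "japan_news":  np.array([0.0, 1.0]),
    "cookbook":    np.array([2.0, -1.0]),
}

def analogy(a, b, c):
    """Return the document whose vector is closest to vec(b) - vec(a) + vec(c)."""
    target = docs[b] - docs[a] + docs[c]
    candidates = {k: v for k, v in docs.items() if k not in (a, b, c)}
    return min(candidates, key=lambda k: np.linalg.norm(candidates[k] - target))

print(analogy("paris_guide", "france_news", "tokyo_guide"))   # expected: japan_news
```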
Contrastive Analysis with Predictive Power: Typology Driven Estimation
of Grammatical Error Distributions in ESL | This work examines the impact of cross-linguistic transfer on grammatical
errors in English as Second Language (ESL) texts. Using a computational
framework that formalizes the theory of Contrastive Analysis (CA), we
demonstrate that language specific error distributions in ESL writing can be
predicted from the typological properties of the native language and their
relation to the typology of English. Our typology-driven model enables us to
obtain accurate estimates of such distributions without access to any ESL data
for the target languages. Furthermore, we present a strategy for adjusting our
method to low-resource languages that lack typological documentation using a
bootstrapping approach which approximates native language typology from ESL
texts. Finally, we show that our framework is instrumental for linguistic
inquiry seeking to identify first language factors that contribute to a wide
range of difficulties in second language acquisition.
| 2,015 | Computation and Language |
Semantic Properties of Customer Sentiment in Tweets | An increasing number of people are using online social networking services
(SNSs), and a significant amount of information related to experiences in
consumption is shared in this new media form. Text mining is an emerging
technique for mining useful information from the web. We aim to discover, in
particular in tweets, semantic patterns in consumers' discussions on social media.
Specifically, the purposes of this study are twofold: 1) finding similarity and
dissimilarity between two sets of textual documents that include consumers'
sentiment polarities, two forms of positive vs. negative opinions and 2)
deriving actual content from the textual data that has a semantic trend. The
considered tweets include consumers opinions on US retail companies (e.g.,
Amazon, Walmart). Cosine similarity and K-means clustering methods are used to
achieve the former goal, and Latent Dirichlet Allocation (LDA), a popular topic
modeling algorithm, is used for the latter purpose. This is the first study
to discover semantic properties of textual data in a consumption context
beyond sentiment analysis. In addition to the major findings, we apply LDA (Latent
Dirichlet Allocation) to the same data and draw latent topics that represent
consumers' positive and negative opinions on social media.
| 2,016 | Computation and Language |
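The toolchain named in the abstract (cosine similarity, K-means clustering, and LDA) can be sketched end-to-end on a handful of invented tweets; this is illustrative only, not the study's actual pipeline or data.

```python
# Rough sketch of the named toolchain on invented "tweets".
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

positive = ["love the fast delivery from amazon", "walmart prices are great"]
negative = ["amazon lost my package again", "terrible service at walmart today"]
tweets = positive + negative

tfidf = TfidfVectorizer().fit_transform(tweets)
print("pos-vs-neg similarity:", cosine_similarity(tfidf[:2], tfidf[2:]).mean())

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
print("k-means labels:", labels)

counts = CountVectorizer().fit(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts.transform(tweets))
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[-3:]])
```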
Part-of-Speech Relevance Weights for Learning Word Embeddings | This paper proposes a model to learn word embeddings with weighted contexts
based on part-of-speech (POS) relevance weights. POS is a fundamental element
in natural language. However, state-of-the-art word embedding models fail to
consider it. This paper proposes to use position-dependent POS relevance
weighting matrices to model the inherent syntactic relationship among words
within a context window. We utilize the POS relevance weights to model each
word-context pair during the word embedding training process. The model
proposed in this paper jointly optimizes word vectors and the POS
relevance matrices. Experiments conducted on popular word analogy and word
similarity tasks all demonstrated the effectiveness of the proposed method.
| 2,016 | Computation and Language |
Neural Text Generation from Structured Data with Application to the
Biography Domain | This paper introduces a neural model for concept-to-text generation that
scales to large, rich domains. We experiment with a new dataset of biographies
from Wikipedia that is an order of magnitude larger than existing resources
with over 700k samples. The dataset is also vastly more diverse with a 400k
vocabulary, compared to a few hundred words for Weathergov or Robocup. Our
model builds upon recent work on conditional neural language models for text
generation. To deal with the large vocabulary, we extend these models to mix a
fixed vocabulary with copy actions that transfer sample-specific words from the
input database to the generated output sentence. Our neural model significantly
outperforms a classical Kneser-Ney language model adapted to this task by
nearly 15 BLEU.
| 2,016 | Computation and Language |
Improving Information Extraction by Acquiring External Evidence with
Reinforcement Learning | Most successful information extraction systems operate with access to a large
collection of documents. In this work, we explore the task of acquiring and
incorporating external evidence to improve extraction accuracy in domains where
the amount of training data is scarce. This process entails issuing search
queries, extraction from new sources and reconciliation of extracted values,
which are repeated until sufficient evidence is collected. We approach the
problem using a reinforcement learning framework where our model learns to
select optimal actions based on contextual information. We employ a deep
Q-network, trained to optimize a reward function that reflects extraction
accuracy while penalizing extra effort. Our experiments on two databases -- of
shooting incidents, and food adulteration cases -- demonstrate that our system
significantly outperforms traditional extractors and a competitive
meta-classifier baseline.
| 2,016 | Computation and Language |
Classifying Syntactic Regularities for Hundreds of Languages | This paper presents a comparison of classification methods for linguistic
typology for the purpose of expanding an extensive, but sparse language
resource: the World Atlas of Language Structures (WALS) (Dryer and Haspelmath,
2013). We experimented with a variety of regression and nearest-neighbor
methods for use in classification over a set of 325 languages and six syntactic
rules drawn from WALS. To classify each rule, we consider the typological
features of the other five rules; linguistic features extracted from a
word-aligned Bible in each language; and genealogical features (genus and
family) of each language. In general, we find that propagating the majority
label among all languages of the same genus achieves the best accuracy in label
prediction. Following this, a logistic regression model that combines
typological and linguistic features offers the next best performance.
Interestingly, this model actually outperforms the majority labels among all
languages of the same family.
| 2,016 | Computation and Language |
How NOT To Evaluate Your Dialogue System: An Empirical Study of
Unsupervised Evaluation Metrics for Dialogue Response Generation | We investigate evaluation metrics for dialogue response generation systems
where supervised labels, such as task completion, are not available. Recent
works in response generation have adopted metrics from machine translation to
compare a model's generated response to a single target response. We show that
these metrics correlate very weakly with human judgements in the non-technical
Twitter domain, and not at all in the technical Ubuntu domain. We provide
quantitative and qualitative results highlighting specific weaknesses in
existing metrics, and provide recommendations for future development of better
automatic evaluation metrics for dialogue systems.
| 2,017 | Computation and Language |
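The core analysis above is a correlation between automatic metric scores and human judgements over the same generated responses. A minimal sketch of that computation, with invented numbers, might look as follows.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical scores: BLEU-like metric values and human ratings for the same
# generated responses (made-up numbers, only to show the shape of the analysis).
metric_scores = [0.12, 0.03, 0.40, 0.25, 0.08, 0.31]
human_scores  = [3.5,  2.0,  2.5,  4.0,  3.0,  1.5]

print("Pearson:",  pearsonr(metric_scores, human_scores))
print("Spearman:", spearmanr(metric_scores, human_scores))
```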
On the Compression of Recurrent Neural Networks with an Application to
LVCSR acoustic modeling for Embedded Speech Recognition | We study the problem of compressing recurrent neural networks (RNNs). In
particular, we focus on the compression of RNN acoustic models, which are
motivated by the goal of building compact and accurate speech recognition
systems which can be run efficiently on mobile devices. In this work, we
present a technique for general recurrent model compression that jointly
compresses both recurrent and non-recurrent inter-layer weight matrices. We
find that the proposed technique allows us to reduce the size of our Long
Short-Term Memory (LSTM) acoustic model to a third of its original size with
negligible loss in accuracy.
| 2,016 | Computation and Language |
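One standard way to realise the kind of compression described above is a truncated SVD of a weight matrix, replacing it with two smaller factors. The sketch below shows only that generic low-rank step under that assumption, not the paper's exact joint recurrent/non-recurrent scheme.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Replace W (m x n) by factors A (m x r) and B (r x n) via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # scale the kept left singular vectors
    B = Vt[:rank, :]
    return A, B

W = np.random.randn(1024, 1024)
A, B = low_rank_factorize(W, rank=256)
print(W.size, "->", A.size + B.size)   # roughly 2x fewer parameters here
```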
"Did I Say Something Wrong?" A Word-Level Analysis of Wikipedia Articles
for Deletion Discussions | This thesis focuses on gaining linguistic insights into textual discussions
on a word level. It was of special interest to distinguish messages that
constructively contribute to a discussion from those that are detrimental to
them. Thereby, we wanted to determine whether "I"- and "You"-messages are
indicators for either of the two discussion styles. These messages are nowadays
often used in guidelines for successful communication. Although their effects
have been successfully evaluated multiple times, a large-scale analysis has
never been conducted.
Thus, we used Wikipedia Articles for Deletion (short: AfD) discussions
together with the records of blocked users and developed a fully automated
creation of an annotated data set. In this data set, messages were labelled
either constructive or disruptive. We applied binary classifiers to the data to
determine characteristic words for both discussion styles. In doing so, we also
investigated whether function words like pronouns and conjunctions play an
important role in distinguishing the two.
We found that "You"-messages were a strong indicator for disruptive messages
which matches their attributed effects on communication. However, we found
"I"-messages to be indicative for disruptive messages as well which is contrary
to their attributed effects. The importance of function words could neither be
confirmed nor refuted. Other characteristic words for either communication
style were not found. Yet, the results suggest that a different model might
represent disruptive and constructive messages in textual discussions better.
| 2,016 | Computation and Language |
Pointing the Unknown Words | The problem of rare and unknown words is an important issue that can
potentially influence the performance of many NLP systems, including both the
traditional count-based and the deep learning models. We propose a novel way to
deal with the rare and unseen words for the neural network models using
attention. Our model uses two softmax layers in order to predict the next word
in conditional language models: one predicts the location of a word in the
source sentence, and the other predicts a word in the shortlist vocabulary. At
each time-step, the decision of which softmax layer to use is made adaptively
by an MLP which is conditioned on the context. We motivate our work with
psychological evidence that humans naturally have a tendency to point towards
objects in the context or the environment when the name of an object is not
known. We observe improvements on two tasks, neural machine translation on the
Europarl English to French parallel corpora and text summarization on the
Gigaword dataset using our proposed model.
| 2,016 | Computation and Language |
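The two-softmax construction described above can be summarised as a gated mixture: a switch splits probability mass between the shortlist vocabulary and source positions. The following sketch assumes that reading; the logits and the switch value are placeholders for what the network would actually produce.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_softmax(shortlist_logits, location_logits, switch_logit):
    """Combine a shortlist-vocabulary softmax with a source-location softmax.
    The scalar switch (here a given logit; in the paper an MLP over the context)
    decides how much probability mass goes to copying a source position."""
    p_copy = 1.0 / (1.0 + np.exp(-switch_logit))        # sigmoid gate
    p_vocab = (1.0 - p_copy) * softmax(shortlist_logits)
    p_loc = p_copy * softmax(location_logits)
    return p_vocab, p_loc

p_vocab, p_loc = pointer_softmax(np.array([2.0, 0.1, -1.0]),
                                 np.array([0.3, 1.5, 0.2, 0.0]),
                                 switch_logit=1.2)
print(p_vocab.sum() + p_loc.sum())   # the two parts form one distribution (~1.0)
```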
Longitudinal Analysis of Discussion Topics in an Online Breast Cancer
Community using Convolutional Neural Networks | Identifying topics of discussions in online health communities (OHC) is
critical to various applications, but can be difficult because topics of OHC
content are usually heterogeneous and domain-dependent. In this paper, we
provide a multi-class schema, an annotated dataset, and supervised classifiers
based on convolutional neural network (CNN) and other models for the task of
classifying discussion topics. We apply the CNN classifier to the most popular
breast cancer online community, and carry out a longitudinal analysis to show
topic distributions and topic changes throughout members' participation. Our
experimental results suggest that CNN outperforms other classifiers in the task
of topic classification, and that certain trajectories can be detected with
respect to topic changes.
| 2,016 | Computation and Language |
Deep Embedding for Spatial Role Labeling | This paper introduces the visually informed embedding of word (VIEW), a
continuous vector representation for a word extracted from a deep neural model
trained using the Microsoft COCO data set to forecast the spatial arrangements
between visual objects, given a textual description. The model is composed of a
deep multilayer perceptron (MLP) stacked on top of a Long Short-Term Memory
(LSTM) network, the latter being preceded by an embedding layer. The VIEW is
applied to transferring multimodal background knowledge to Spatial Role
Labeling (SpRL) algorithms, which recognize spatial relations between objects
mentioned in the text. This work also contributes with a new method to select
complementary features and a fine-tuning method for MLP that improves the $F1$
measure in classifying the words into spatial roles. The VIEW is evaluated on
Task 3 of the SemEval-2013 benchmark data set, SpaceEval.
| 2,016 | Computation and Language |
Prepositional Attachment Disambiguation Using Bilingual Parsing and
Alignments | In this paper, we attempt to solve the problem of Prepositional Phrase (PP)
attachments in English. The motivation for the work comes from NLP applications
like Machine Translation, for which, getting the correct attachment of
prepositions is very crucial. The idea is to correct the PP-attachments for a
sentence with the help of alignments from parallel data in another language.
The novelty of our work lies in the formulation of the problem as a dual
decomposition based algorithm that enforces agreement between the parse trees
from two languages as a constraint. Experiments were performed on the
English-Hindi language pair and the performance improved by 10% over the
baseline, where the baseline is the attachment predicted by the MSTParser model
trained for English.
| 2,016 | Computation and Language |
What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL
Datasets | In this paper, we claim that Vector Cosine, which is generally considered one
of the most efficient unsupervised measures for identifying word similarity in
Vector Space Models, can be outperformed by a completely unsupervised measure
that evaluates the extent of the intersection among the most associated
contexts of two target words, weighting such intersection according to the rank
of the shared contexts in the dependency ranked lists. This claim comes from
the hypothesis that similar words do not simply occur in similar contexts, but
they share a larger portion of their most relevant contexts compared to other
related words. To prove it, we describe and evaluate APSyn, a variant of
Average Precision that, independently of the adopted parameters, outperforms
the Vector Cosine and the co-occurrence on the ESL and TOEFL test sets. In the
best setting, APSyn reaches 0.73 accuracy on the ESL dataset and 0.70 accuracy
on the TOEFL dataset, thereby beating the non-English US college applicants
(whose average, as reported in the literature, is 64.50%) and several
state-of-the-art approaches.
| 2,016 | Computation and Language |
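APSyn, as described above, scores a word pair by the rank-weighted overlap of their most associated contexts. A minimal sketch consistent with that description (the exact weighting used in the paper may differ in detail) is:

```python
def apsyn(ranked_contexts_1, ranked_contexts_2, n=1000):
    """Rank-weighted intersection of the top-n associated contexts of two words."""
    top1 = {c: r for r, c in enumerate(ranked_contexts_1[:n], start=1)}
    top2 = {c: r for r, c in enumerate(ranked_contexts_2[:n], start=1)}
    shared = set(top1) & set(top2)
    # Each shared context contributes the inverse of its average rank.
    return sum(1.0 / ((top1[c] + top2[c]) / 2.0) for c in shared)

# Toy ranked context lists (most associated context first)
print(apsyn(["drink", "cup", "hot", "bean"],
            ["drink", "cup", "leaf", "hot"]))
```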
Nine Features in a Random Forest to Learn Taxonomical Semantic Relations | ROOT9 is a supervised system for the classification of hypernyms, co-hyponyms
and random words that is derived from the already introduced ROOT13 (Santus et
al., 2016). It relies on a Random Forest algorithm and nine unsupervised
corpus-based features. We evaluate it with a 10-fold cross validation on 9,600
pairs, equally distributed among the three classes and involving several
Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all the classes are
present, ROOT9 achieves an F1 score of 90.7%, against a baseline of 57.2%
(vector cosine). When the classification is binary, ROOT9 achieves the
following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%,
hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%. In
order to compare the performance with the state-of-the-art, we have also
evaluated ROOT9 in subsets of the Weeds et al. (2014) datasets, proving that it
is in fact competitive. Finally, we investigated whether the system learns the
semantic relation or whether it simply learns the prototypical hypernyms, as claimed by
Levy et al. (2015). The second possibility seems to be the most likely, even
though ROOT9 can be trained on negative examples (i.e., switched hypernyms) to
drastically reduce this bias.
| 2,016 | Computation and Language |
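The ROOT9 setup above is essentially a nine-feature Random Forest evaluated with 10-fold cross validation. The scikit-learn sketch below mirrors only that pipeline shape; the feature values are random stand-ins, not the paper's corpus-based statistics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the nine unsupervised corpus-based features: random
# numbers so the snippet runs; in the real system each column would be a
# distributional statistic computed for a word pair.
rng = np.random.default_rng(0)
X = rng.random((300, 9))                     # 300 word pairs, 9 features
y = rng.integers(0, 3, size=300)             # hypernym / co-hyponym / random

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross validation
print(scores.mean())
```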
ROOT13: Spotting Hypernyms, Co-Hyponyms and Randoms | In this paper, we describe ROOT13, a supervised system for the classification
of hypernyms, co-hyponyms and random words. The system relies on a Random
Forest algorithm and 13 unsupervised corpus-based features. We evaluate it with
a 10-fold cross validation on 9,600 pairs, equally distributed among the three
classes and involving several Parts-Of-Speech (i.e. adjectives, nouns and
verbs). When all the classes are present, ROOT13 achieves an F1 score of 88.3%,
against a baseline of 57.6% (vector cosine). When the classification is binary,
ROOT13 achieves the following results: hypernyms-co-hyponyms (93.4% vs. 60.2%),
hypernyms-random (92.3% vs. 65.5%) and co-hyponyms-random (97.3% vs. 81.5%). Our
results are competitive with state-of-the-art models.
| 2,016 | Computation and Language |
Shirtless and Dangerous: Quantifying Linguistic Signals of Gender Bias
in an Online Fiction Writing Community | Imagine a princess asleep in a castle, waiting for her prince to slay the
dragon and rescue her. Tales like the famous Sleeping Beauty clearly divide up
gender roles. But what about more modern stories, borne of a generation
increasingly aware of social constructs like sexism and racism? Do these
stories tend to reinforce gender stereotypes, or counter them? In this paper,
we present a technique that combines natural language processing with a
crowdsourced lexicon of stereotypes to capture gender biases in fiction. We
apply this technique across 1.8 billion words of fiction from the Wattpad
online writing community, investigating gender representation in stories, how
male and female characters behave and are described, and how authors' use of
gender stereotypes is associated with the community's ratings. We find that
male over-representation and traditional gender stereotypes (e.g., dominant men
and submissive women) are common throughout nearly every genre in our corpus.
However, only some of these stereotypes, like sexual or violent men, are
associated with highly rated stories. Finally, despite women often being the
target of negative stereotypes, female authors are equally likely to write such
stereotypes as men.
| 2,016 | Computation and Language |
Compilation as a Typed EDSL-to-EDSL Transformation | This article is about an implementation and compilation technique that is
used in RAW-Feldspar, which is a complete rewrite of the Feldspar embedded
domain-specific language (EDSL) (Axelsson et al. 2010). Feldspar is a high-level
functional language that generates efficient C code to run on embedded targets.
The gist of the technique presented in this article is the following: rather than
writing a back end that converts pure Feldspar expressions directly to C, we
translate them to a low-level monadic EDSL. From the low-level EDSL, C code is
then generated. This approach has several advantages:
1. The translation is simpler to write than a complete C back end.
2. The translation is between two typed EDSLs, which rules out many potential
errors.
3. The low-level EDSL is reusable and can be shared between several
high-level EDSLs.
Although the article contains a lot of code, most of it is in fact reusable.
As mentioned in the Discussion, we can write the same implementation in less than
50 lines of code using generic libraries that we have developed to support
Feldspar.
| 2,018 | Computation and Language |
A Readable Read: Automatic Assessment of Language Learning Materials
based on Linguistic Complexity | Corpora and web texts can become a rich language learning resource if we have
a means of assessing whether they are linguistically appropriate for learners
at a given proficiency level. In this paper, we aim at addressing this issue by
presenting the first approach for predicting linguistic complexity for Swedish
second language learning material on a 5-point scale. After showing that the
traditional Swedish readability measure, Läsbarhetsindex (LIX), is not
suitable for this task, we propose a supervised machine learning model, based
on a range of linguistic features, that can reliably classify texts according
to their difficulty level. Our model obtained an accuracy of 81.3% and an
F-score of 0.8, which is comparable to the state of the art in English and is
considerably higher than previously reported results for other languages. We
further studied the utility of our features with single sentences instead of
full texts since sentences are a common linguistic unit in language learning
exercises. We trained a separate model on sentence-level data with five
classes, which yielded 63.4% accuracy. Although this is lower than the document
level performance, we achieved an adjacent accuracy of 92%. Furthermore, we
found that using a combination of different features, compared to using lexical
features alone, resulted in 7% improvement in classification accuracy at the
sentence level, whereas at the document level, lexical features were more
dominant. Our models are intended for use in a freely accessible web-based
language learning platform for the automatic generation of exercises.
| 2,016 | Computation and Language |
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data | Understanding unstructured text is a major goal within natural language
processing. Comprehension tests pose questions based on short text passages to
evaluate such understanding. In this work, we investigate machine comprehension
on the challenging {\it MCTest} benchmark. Partly because of its limited size,
prior work on {\it MCTest} has focused mainly on engineering better features.
We tackle the dataset with a neural approach, harnessing simple neural networks
arranged in a parallel hierarchy. The parallel hierarchy enables our model to
compare the passage, question, and answer from a variety of trainable
perspectives, as opposed to using a manually designed, rigid feature set.
Perspectives range from the word level to sentence fragments to sequences of
sentences; the networks operate only on word-embedding representations of text.
When trained with a methodology designed to help cope with limited training
data, our Parallel-Hierarchical model sets a new state of the art for {\it
MCTest}, outperforming previous feature-engineered approaches slightly and
previous neural approaches by a significant margin (over 15\% absolute).
| 2,016 | Computation and Language |
Learning-Based Single-Document Summarization with Compression and
Anaphoricity Constraints | We present a discriminative model for single-document summarization that
integrally combines compression and anaphoricity constraints. Our model selects
textual units to include in the summary based on a rich set of sparse features
whose weights are learned on a large corpus. We allow for the deletion of
content within a sentence when that deletion is licensed by compression rules;
in our framework, these are implemented as dependencies between subsentential
units of text. Anaphoricity constraints then improve cross-sentence coherence
by guaranteeing that, for each pronoun included in the summary, the pronoun's
antecedent is included as well or the pronoun is rewritten as a full mention.
When trained end-to-end, our final system outperforms prior work on both ROUGE
as well as on human judgments of linguistic quality.
| 2,016 | Computation and Language |
Unsupervised Measure of Word Similarity: How to Outperform Co-occurrence
and Vector Cosine in VSMs | In this paper, we claim that vector cosine, which is generally considered
among the most efficient unsupervised measures for identifying word similarity
in Vector Space Models, can be outperformed by an unsupervised measure that
calculates the extent of the intersection among the most mutually dependent
contexts of the target words. To prove it, we describe and evaluate APSyn, a
variant of the Average Precision that, without any optimization, outperforms
the vector cosine and the co-occurrence on the standard ESL test set, with an
improvement ranging between +9.00% and +17.98%, depending on the number of
chosen top contexts.
| 2,016 | Computation and Language |
Bilingual Learning of Multi-sense Embeddings with Discrete Autoencoders | We present an approach to learning multi-sense word embeddings relying both
on monolingual and bilingual information. Our model consists of an encoder,
which uses monolingual and bilingual context (i.e. a parallel sentence) to
choose a sense for a given word, and a decoder which predicts context words
based on the chosen sense. The two components are estimated jointly. We observe
that the word representations induced from bilingual data outperform the
monolingual counterparts across a range of evaluation tasks, even though
crosslingual information is not available at test time.
| 2,016 | Computation and Language |
Model Interpolation with Trans-dimensional Random Field Language Models
for Speech Recognition | The dominant language models (LMs) such as n-gram and neural network (NN)
models represent sentence probabilities in terms of conditionals. In contrast,
a new trans-dimensional random field (TRF) LM has been recently introduced to
show superior performances, where the whole sentence is modeled as a random
field. In this paper, we examine how the TRF models can be interpolated with
the NN models, and obtain 12.1\% and 17.9\% relative error rate reductions over
6-gram LMs for English and Chinese speech recognition respectively through
log-linear combination.
| 2,016 | Computation and Language |
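The log-linear combination mentioned above amounts to a weighted sum of the two models' sentence log-scores when rescoring a fixed n-best list. A hypothetical sketch with made-up scores and an assumed interpolation weight:

```python
def log_linear_interpolate(logp_trf, logp_nn, lam):
    """Combine two sentence log-probabilities log-linearly; the normalisation
    constant cancels when rescoring a fixed n-best list."""
    return lam * logp_trf + (1.0 - lam) * logp_nn

# Rescoring a toy 2-best list: sentence -> (TRF log-score, NN log-score)
hyps = {"the cat sat": (-12.3, -11.8), "the cat sad": (-12.1, -13.0)}
best = max(hyps, key=lambda h: log_linear_interpolate(*hyps[h], lam=0.5))
print(best)
```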
Unsupervised Visual Sense Disambiguation for Verbs using Multimodal
Embeddings | We introduce a new task, visual sense disambiguation for verbs: given an
image and a verb, assign the correct sense of the verb, i.e., the one that
describes the action depicted in the image. Just as textual word sense
disambiguation is useful for a wide range of NLP tasks, visual sense
disambiguation can be useful for multimodal tasks such as image retrieval,
image description, and text illustration. We introduce VerSe, a new dataset
that augments existing multimodal datasets (COCO and TUHOI) with sense labels.
We propose an unsupervised algorithm based on Lesk which performs visual sense
disambiguation using textual, visual, or multimodal embeddings. We find that
textual embeddings perform well when gold-standard textual annotations (object
labels and image descriptions) are available, while multimodal embeddings
perform well on unannotated images. We also verify our findings by using the
textual and multimodal embeddings as features in a supervised setting and
analyse the performance on the visual sense disambiguation task. VerSe is made
publicly available and can be downloaded at:
https://github.com/spandanagella/verse.
| 2,016 | Computation and Language |
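The unsupervised algorithm above reduces, in its simplest form, to choosing the verb sense whose (textual or multimodal) embedding is most similar to the embedding of the image context. The sketch below illustrates that nearest-sense step with random vectors; it is not the released VerSe code.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def disambiguate(image_embedding, sense_embeddings):
    """Pick the verb sense whose embedding is closest to the image-context
    embedding -- a hypothetical Lesk-style sketch."""
    return max(sense_embeddings, key=lambda s: cosine(image_embedding, sense_embeddings[s]))

rng = np.random.default_rng(1)
senses = {"play_instrument": rng.random(50), "play_sport": rng.random(50)}
print(disambiguate(rng.random(50), senses))
```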
Enhancing Sentence Relation Modeling with Auxiliary Character-level
Embedding | Neural network based approaches for sentence relation modeling automatically
generate hidden matching features from raw sentence pairs. However, the quality
of the matching feature representation may not be satisfactory due to complex
semantic relations such as entailment or contradiction. To address this
challenge, we propose a new deep neural network architecture that jointly
leverages pre-trained word embeddings and auxiliary character embeddings to
learn sentence meanings. The two kinds of word sequence representations are fed
as inputs into a multi-layer bidirectional LSTM to learn enhanced sentence
representations. After
that, we construct matching features followed by another temporal CNN to learn
high-level hidden matching feature representations. Experimental results
demonstrate that our approach consistently outperforms the existing methods on
standard evaluation datasets.
| 2,016 | Computation and Language |
LSTM based Conversation Models | In this paper, we present a conversational model that incorporates both
context and participant role for two-party conversations. Different
architectures are explored for integrating participant role and context
information into a Long Short-term Memory (LSTM) language model. The
conversational model can function as a language model or a language generation
model. Experiments on the Ubuntu Dialog Corpus show that our model can capture
multiple turn interaction between participants. The proposed method outperforms
a traditional LSTM model as measured by language model perplexity and response
ranking. Generated responses show characteristic differences between the two
participant roles.
| 2,016 | Computation and Language |
System Combination for Short Utterance Speaker Recognition | For text-independent short-utterance speaker recognition (SUSR), the
performance often degrades dramatically. This paper presents a combination
approach to the SUSR tasks with two phonetic-aware systems: one is the
DNN-based i-vector system and the other is our recently proposed
subregion-based GMM-UBM system. The former employs phone posteriors to
construct an i-vector model in which the shared statistics offers stronger
robustness against limited test data, while the latter establishes a
phone-dependent GMM-UBM system which represents speaker characteristics with
more details. A score-level fusion is implemented to integrate the respective
advantages from the two systems. Experimental results show that for the
text-independent SUSR task, both the DNN-based i-vector system and the
subregion-based GMM-UBM system outperform their respective baselines, and the
score-level system combination delivers performance improvement.
| 2,016 | Computation and Language |
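The score-level fusion above is typically a weighted sum of per-system scores after normalisation. A minimal sketch under that assumption (the normalisation and fusion weights used in the paper may differ):

```python
import numpy as np

def fuse_scores(scores_a, scores_b, w=0.5):
    """Score-level fusion sketch: z-normalise each system's trial scores and
    take a weighted sum (weights would normally be tuned on a dev set)."""
    za = (scores_a - scores_a.mean()) / scores_a.std()
    zb = (scores_b - scores_b.mean()) / scores_b.std()
    return w * za + (1 - w) * zb

a = np.array([1.2, -0.3, 0.8, 2.1])   # hypothetical i-vector system scores
b = np.array([10.0, 4.0, 8.5, 12.0])  # hypothetical GMM-UBM scores, same trials
print(fuse_scores(a, b))
```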
Learning Multiscale Features Directly From Waveforms | Deep learning has dramatically improved the performance of speech recognition
systems through learning hierarchies of features optimized for the task at
hand. However, true end-to-end learning, where features are learned directly
from waveforms, has only recently reached the performance of hand-tailored
representations based on the Fourier transform. In this paper, we detail an
approach to use convolutional filters to push past the inherent tradeoff of
temporal and frequency resolution that exists for spectral representations. At
increased computational cost, we show that increasing temporal resolution via
reduced stride and increasing frequency resolution via additional filters
delivers significant performance improvements. Further, we find more efficient
representations by simultaneously learning at multiple scales, leading to an
overall decrease in word error rate on a difficult internal speech test set by
20.7% relative to networks with the same number of parameters trained on
spectrograms.
| 2,016 | Computation and Language |
Differentiable Pooling for Unsupervised Acoustic Model Adaptation | We present a deep neural network (DNN) acoustic model that includes
parametrised and differentiable pooling operators. Unsupervised acoustic model
adaptation is cast as the problem of updating the decision boundaries
implemented by each pooling operator. In particular, we experiment with two
types of pooling parametrisations: learned $L_p$-norm pooling and weighted
Gaussian pooling, in which the weights of both operators are treated as
speaker-dependent. We perform investigations using three different large
vocabulary speech recognition corpora: AMI meetings, TED talks and Switchboard
conversational telephone speech. We demonstrate that differentiable pooling
operators provide a robust and relatively low-dimensional way to adapt acoustic
models, with relative word error rate reductions ranging from 5--20% with
respect to unadapted systems, which themselves are better than the baseline
fully-connected DNN-based acoustic models. We also investigate how the proposed
techniques work under various adaptation conditions including the quality of
adaptation data and complementarity to other feature- and model-space
adaptation methods, as well as providing an analysis of the characteristics of
each of the proposed approaches.
| 2,016 | Computation and Language |
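The learned $L_p$-norm pooling operator above has a compact closed form; adapting it per speaker then means updating the pooling parameters only. The sketch below shows just the forward computation for a single pooling group under that reading; it omits the weighted Gaussian variant and all training machinery.

```python
import numpy as np

def lp_pool(activations, p):
    """L_p-norm pooling over a group of unit activations. p is a (possibly
    speaker-dependent) parameter; p=1 behaves like averaging magnitudes and
    large p approaches max pooling. A minimal sketch only."""
    return (np.mean(np.abs(activations) ** p)) ** (1.0 / p)

acts = np.array([0.2, 0.9, 0.1, 0.4])
for p in (1.0, 2.0, 10.0):
    print(p, lp_pool(acts, p))     # grows toward max(|acts|) as p increases
```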
Data Collection for Interactive Learning through the Dialog | This paper presents a dataset collected from natural dialogs which makes it
possible to test the ability of dialog systems to learn new facts from user
utterances throughout the dialog. This interactive learning will help with one
of the most prevailing problems of open-domain dialog systems, which is the
sparsity of facts a dialog system can reason about. The proposed dataset,
consisting of 1900 collected dialogs, allows simulation of the interactive
acquisition of denotations and question explanations from users, which can be
used for interactive learning.
| 2,016 | Computation and Language |
Multi-task Recurrent Model for Speech and Speaker Recognition | Although highly correlated, speech and speaker recognition have been regarded
as two independent tasks and studied by two communities. This is certainly not
the way that people behave: we decipher both speech content and speaker traits
at the same time. This paper presents a unified model to perform speech and
speaker recognition simultaneously and altogether. The model is based on a
unified neural network where the output of one task is fed to the input of the
other, leading to a multi-task recurrent network. Experiments show that the
joint model outperforms the task-specific models on both tasks.
| 2,016 | Computation and Language |
Neural Language Correction with Character-Based Attention | Natural language correction has the potential to help language learners
improve their writing skills. While approaches with separate classifiers for
different error types have high precision, they do not flexibly handle errors
such as redundancy or non-idiomatic phrasing. On the other hand, word and
phrase-based machine translation methods are not designed to cope with
orthographic errors, and have recently been outpaced by neural models.
Motivated by these issues, we present a neural network-based approach to
language correction. The core component of our method is an encoder-decoder
recurrent neural network with an attention mechanism. By operating at the
character level, the network avoids the problem of out-of-vocabulary words. We
illustrate the flexibility of our approach on a dataset of noisy, user-generated
text collected from an English learner forum. When combined with a language
model, our method achieves a state-of-the-art $F_{0.5}$-score on the CoNLL 2014
Shared Task. We further demonstrate that training the network on additional
data with synthesized errors can improve performance.
| 2,016 | Computation and Language |
Neural Attention Models for Sequence Classification: Analysis and
Application to Key Term Extraction and Dialogue Act Detection | Recurrent neural network architectures combining with attention mechanism, or
neural attention model, have shown promising performance recently for the tasks
including speech recognition, image caption generation, visual question
answering and machine translation. In this paper, neural attention model is
applied on two sequence classification tasks, dialogue act detection and key
term extraction. In the sequence labeling tasks, the model input is a sequence,
and the output is the label of the input sequence. The major difficulty of
sequence labeling is that when the input sequence is long, it can include many
noisy or irrelevant part. If the information in the whole sequence is treated
equally, the noisy or irrelevant part may degrade the classification
performance. The attention mechanism is helpful for sequence classification
task because it is capable of highlighting important part among the entire
sequence for the classification task. The experimental results show that with
the attention mechanism, discernible improvements were achieved in the sequence
labeling task considered here. The roles of the attention mechanism in the
tasks are further analyzed and visualized in this paper.
| 2,016 | Computation and Language |
A Compositional Approach to Language Modeling | Traditional language models treat language as a finite state automaton on a
probability space over words. This is a very strong assumption when modeling
something inherently complex such as language. In this paper, we challenge this
by showing how the linear chain assumption inherent in previous work can be
translated into a sequential composition tree. We then propose a new model that
marginalizes over all possible composition trees thereby removing any
underlying structural assumptions. As the partition function of this new model
is intractable, we use a recently proposed sentence level evaluation metric
Contrastive Entropy to evaluate our model. Given this new evaluation metric, we
report more than 100% improvement across distortion levels over current state
of the art recurrent neural network based language models.
| 2,016 | Computation and Language |
Domain Adaptation of Recurrent Neural Networks for Natural Language
Understanding | The goal of this paper is to use multi-task learning to efficiently scale
slot filling models for natural language understanding to handle multiple
target tasks or domains. The key to scalability is reducing the amount of
training data needed to learn a model for a new task. The proposed multi-task
model delivers better performance with less data by leveraging patterns that it
learns from the other tasks. The approach supports an open vocabulary, which
allows the models to generalize to unseen words, which is particularly
important when very little training data is used. A newly collected
crowd-sourced data set, covering four different domains, is used to demonstrate
the effectiveness of the domain adaptation and open vocabulary techniques.
| 2,016 | Computation and Language |
Semi-supervised and Unsupervised Methods for Categorizing Posts in Web
Discussion Forums | Web discussion forums are used by millions of people worldwide to share
information belonging to a variety of domains such as automotive vehicles,
pets, sports, etc. They typically contain posts that fall into different
categories such as problem, solution, feedback, spam, etc. Automatic
identification of these categories can aid information retrieval that is
tailored for specific user requirements. Previously, a number of supervised
methods have attempted to solve this problem; however, these depend on the
availability of abundant training data. A few existing unsupervised and
semi-supervised approaches are either focused on identifying a single category
or do not report category-specific performance. In contrast, this work proposes
unsupervised and semi-supervised methods that require no or minimal training
data to achieve this objective without compromising on performance. A
fine-grained analysis is also carried out to discuss their limitations. The
proposed methods are based on sequence models (specifically, Hidden Markov
Models) that can model language for each category using word and part-of-speech
probability distributions, and manually specified features. Empirical
evaluations across domains demonstrate that the proposed methods are better
suited for this task than existing ones.
| 2,016 | Computation and Language |
Nonparametric Spherical Topic Modeling with Word Embeddings | Traditional topic models do not account for semantic regularities in
language. Recent distributional representations of words exhibit semantic
consistency over directional metrics such as cosine similarity. However,
neither categorical nor Gaussian observational distributions used in existing
topic models are appropriate to leverage such correlations. In this paper, we
propose to use the von Mises-Fisher distribution to model the density of words
over a unit sphere. Such a representation is well-suited for directional data.
We use a Hierarchical Dirichlet Process for our base topic model and propose an
efficient inference algorithm based on Stochastic Variational Inference. This
model enables us to naturally exploit the semantic structures of word
embeddings while flexibly discovering the number of topics. Experiments
demonstrate that our method outperforms competitive approaches in terms of
topic coherence on two different text corpora while offering efficient
inference.
| 2,016 | Computation and Language |
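The von Mises-Fisher observation model above assigns each topic a mean direction and a concentration on the unit sphere. The snippet below computes the vMF log-density for a normalised word embedding, which is the per-word ingredient such a topic model needs; it is a numerical sketch only and ignores the large-kappa stability tricks a real implementation would use.

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind

def vmf_log_density(x, mu, kappa):
    """log f(x; mu, kappa) for the von Mises-Fisher distribution on the unit
    sphere, used as the per-topic observation density over normalised word
    embeddings (sketch; not numerically safe for very large kappa)."""
    d = len(x)
    log_c = ((d / 2 - 1) * np.log(kappa)
             - (d / 2) * np.log(2 * np.pi)
             - np.log(iv(d / 2 - 1, kappa)))
    return log_c + kappa * float(mu @ x)

mu = np.ones(10) / np.sqrt(10)   # topic direction on the unit sphere
x = mu                           # a word embedding aligned with the topic
print(vmf_log_density(x, mu, kappa=50.0))
```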
A Semisupervised Approach for Language Identification based on Ladder
Networks | In this study we address the problem of training a neural network for language
identification using both labeled and unlabeled speech samples in the form of
i-vectors. We propose a neural network architecture that can also handle
out-of-set languages. We utilize a modified version of the recently proposed
Ladder Network semisupervised training procedure that optimizes the
reconstruction costs of a stack of denoising autoencoders. We show that this
approach can be successfully applied to the case where the training dataset is
composed of both labeled and unlabeled acoustic data. The results show enhanced
language identification on the NIST 2015 language identification dataset.
| 2,016 | Computation and Language |
Revisiting Summarization Evaluation for Scientific Articles | Evaluation of text summarization approaches has mostly been based on metrics
that measure similarities of system generated summaries with a set of human
written gold-standard summaries. The most widely used metric in summarization
evaluation has been the ROUGE family. ROUGE solely relies on lexical overlaps
between the terms and phrases in the sentences; therefore, in cases of
terminology variations and paraphrasing, ROUGE is not as effective. Scientific
article summarization is one such case that is different from general domain
summarization (e.g. newswire data). We provide an extensive analysis of ROUGE's
effectiveness as an evaluation metric for scientific summarization; we show
that, contrary to common belief, ROUGE is not very reliable in evaluating
scientific summaries. We furthermore show how different variants of ROUGE
result in very different correlations with the manual Pyramid scores. Finally,
we propose an alternative metric for summarization evaluation which is based on
the content relevance between a system generated summary and the corresponding
human written summaries. We call our metric SERA (Summarization Evaluation by
Relevance Analysis). Unlike ROUGE, SERA consistently achieves high correlations
with manual scores which shows its effectiveness in evaluation of scientific
article summarization.
| 2,016 | Computation and Language |
Cross-lingual Models of Word Embeddings: An Empirical Comparison | Despite interest in using cross-lingual knowledge to learn word embeddings
for various tasks, a systematic comparison of the possible approaches is
lacking in the literature. We perform an extensive evaluation of four popular
approaches of inducing cross-lingual embeddings, each requiring a different
form of supervision, on four typologically different language pairs. Our
evaluation setup spans four different tasks, including intrinsic evaluation on
mono-lingual and cross-lingual similarity, and extrinsic evaluation on
downstream semantic and syntactic applications. We show that models which
require expensive cross-lingual knowledge almost always perform better, but
cheaply supervised models often prove competitive on certain tasks.
| 2,016 | Computation and Language |
Embedding Lexical Features via Low-Rank Tensors | Modern NLP models rely heavily on engineered features, which often combine
word and contextual information into complex lexical features. Such combination
results in large numbers of features, which can lead to over-fitting. We
present a new model that represents complex lexical features---comprised of
parts for words, contextual information and labels---in a tensor that captures
conjunction information among these parts. We apply low-rank tensor
approximations to the corresponding parameter tensors to reduce the parameter
space and improve prediction speed. Furthermore, we investigate two methods for
handling features that include $n$-grams of mixed lengths. Our model achieves
state-of-the-art results on tasks in relation extraction, PP-attachment, and
preposition disambiguation.
| 2,016 | Computation and Language |
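The low-rank tensor idea above replaces a full parameter tensor over (word, context, label) conjunctions with a sum of rank-one components. A hypothetical sketch of the resulting scoring function, with arbitrary dimensions and random parameters:

```python
import numpy as np

# Score the conjunction of a word feature vector w, a context feature vector c,
# and a label embedding y with a rank-r factorisation
#   score = sum_k (U_k . w) * (V_k . c) * (Y_k . y)
# instead of a full third-order parameter tensor.
rng = np.random.default_rng(0)
dw, dc, dy, r = 100, 50, 10, 8
U, V, Y = rng.normal(size=(r, dw)), rng.normal(size=(r, dc)), rng.normal(size=(r, dy))

def low_rank_score(w, c, y):
    return float(np.sum((U @ w) * (V @ c) * (Y @ y)))

print(low_rank_score(rng.random(dw), rng.random(dc), rng.random(dy)))
```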
Automatic Annotation of Structured Facts in Images | Motivated by the application of fact-level image understanding, we present an
automatic method for data collection of structured visual facts from images
with captions. Example structured facts include attributed objects (e.g.,
<flower, red>), actions (e.g., <baby, smile>), interactions (e.g., <man,
walking, dog>), and positional information (e.g., <vase, on, table>). The
collected annotations are in the form of fact-image pairs (e.g.,<man, walking,
dog> and an image region containing this fact). With a language approach, the
proposed method is able to collect hundreds of thousands of visual fact
annotations with accuracy of 83% according to human judgment. Our method
automatically collected more than 380,000 visual fact annotations and more than
110,000 unique visual facts from images with captions and localized them in
images in less than one day of processing time on standard CPU platforms.
| 2,016 | Computation and Language |
Online Updating of Word Representations for Part-of-Speech Tagging | We propose online unsupervised domain adaptation (DA), which is performed
incrementally as data comes in and is applicable when batch DA is not possible.
In a part-of-speech (POS) tagging evaluation, we find that online unsupervised
DA performs as well as batch DA.
| 2,016 | Computation and Language |
Discriminative Phrase Embedding for Paraphrase Identification | This work, concerning paraphrase identification task, on one hand contributes
to expanding deep learning embeddings to include continuous and discontinuous
linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN
to learn the discriminative weights of words and phrases specific to paraphrase
task, so that a weighted sum of embeddings can represent sentences more
effectively. Based on these two innovations we get competitive state-of-the-art
performance on paraphrase identification.
| 2,016 | Computation and Language |
Reasoning About Pragmatics with Neural Listeners and Speakers | We present a model for pragmatically describing scenes, in which contrastive
behavior results from a combination of inference-driven pragmatics and learned
semantics. Like previous learned approaches to language generation, our model
uses a simple feature-driven architecture (here a pair of neural "listener" and
"speaker" models) to ground language in the world. Like inference-driven
approaches to pragmatics, our model actively reasons about listener behavior
when selecting utterances. For training, our approach requires only ordinary
captions, annotated _without_ demonstration of the pragmatic behavior the model
ultimately exhibits. In human evaluations on a referring expression game, our
approach succeeds 81% of the time, compared to a 69% success rate using
existing techniques.
| 2,016 | Computation and Language |
Character-Level Question Answering with Attention | We show that a character-level encoder-decoder framework can be successfully
applied to question answering with a structured knowledge base. We use our
model for single-relation question answering and demonstrate the effectiveness
of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we
improve state-of-the-art accuracy from 63.9% to 70.9%, without use of
ensembles. Importantly, our character-level model has 16x fewer parameters than
an equivalent word-level model, can be learned with significantly less data
compared to previous work, which relies on data augmentation, and is robust to
new entities in testing.
| 2,016 | Computation and Language |
Capturing Semantic Similarity for Entity Linking with Convolutional
Neural Networks | A key challenge in entity linking is making effective use of contextual
information to disambiguate mentions that might refer to different entities in
different contexts. We present a model that uses convolutional neural networks
to capture semantic correspondence between a mention's context and a proposed
target entity. These convolutional networks operate at multiple granularities
to exploit various kinds of topic information, and their rich parameterization
gives them the capacity to learn which n-grams characterize different topics.
We combine these networks with a sparse linear model to achieve
state-of-the-art performance on multiple entity linking datasets, outperforming
the prior systems of Durrett and Klein (2014) and Nguyen et al. (2014).
| 2,016 | Computation and Language |
Achieving Open Vocabulary Neural Machine Translation with Hybrid
Word-Character Models | Nearly all previous work on neural machine translation (NMT) has used quite
restricted vocabularies, perhaps with a subsequent method to patch in unknown
words. This paper presents a novel word-character solution to achieving open
vocabulary NMT. We build hybrid systems that translate mostly at the word level
and consult the character components for rare words. Our character-level
recurrent neural networks compute source word representations and recover
unknown target words when needed. The twofold advantage of such a hybrid
approach is that it is much faster and easier to train than character-based
ones; at the same time, it never produces unknown words as in the case of
word-based models. On the WMT'15 English to Czech translation task, this hybrid
approach offers an additional boost of +2.1-11.4 BLEU points over models that
already handle unknown words. Our best system achieves a new state-of-the-art
result with 20.7 BLEU score. We demonstrate that our character models can
successfully learn to not only generate well-formed words for Czech, a
highly-inflected language with a very complex vocabulary, but also build
correct representations for English source words.
| 2,016 | Computation and Language |
In narrative texts punctuation marks obey the same statistics as words | From a grammar point of view, the role of punctuation marks in a sentence is
formally defined and well understood. In semantic analysis punctuation plays
also a crucial role as a method of avoiding ambiguity of the meaning. A
different situation can be observed in the statistical analyses of language
samples, where the decision on whether the punctuation marks should be
considered or should be neglected is seen rather as arbitrary and at present it
belongs to a researcher's preference. An objective of this work is to shed some
light on this problem by providing an answer to the question of whether
the punctuation marks may be treated as ordinary words and whether they should
be included in any analysis of word co-occurrences. We already know from our
previous study (S. Drożdż et al., Inf. Sci. 331 (2016) 32-44) that
full stops that determine the length of sentences are the main carrier of
long-range correlations. Now we extend that study and analyze statistical
properties of the most common punctuation marks in a few Indo-European
languages, investigate their frequencies, and locate them accordingly in the
Zipf rank-frequency plots as well as study their role in the word-adjacency
networks. We show that, from a statistical viewpoint, the punctuation marks
reveal properties that are qualitatively similar to the properties of the most
frequent words like articles, conjunctions, pronouns, and prepositions. This
refers to both the Zipfian analysis and the network analysis. By adding the
punctuation marks to the Zipf plots, we also show that these plots that are
normally described by the Zipf-Mandelbrot distribution largely restore the
power-law Zipfian behaviour for the most frequent items.
| 2,017 | Computation and Language |
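The Zipfian part of the analysis above treats punctuation marks as ordinary tokens when building the rank-frequency list and compares the result to a Zipf-Mandelbrot curve. The sketch below does this for a toy text with a crude grid-search fit; real analyses use full novels and maximum-likelihood estimation, and the toy numbers carry no empirical weight.

```python
import numpy as np
from collections import Counter

def zipf_mandelbrot(rank, C, q, s):
    """Zipf-Mandelbrot law: frequency ~ C / (rank + q)**s."""
    return C / (rank + q) ** s

# Treat punctuation marks as ordinary tokens when building the rank-frequency list.
text = "the cat sat , the dog sat . the cat ran . and the dog ran , too ."
freqs = np.array(sorted(Counter(text.split()).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

def sse(q, s):
    C = freqs[0] * (1 + q) ** s          # anchor the curve at rank 1
    return np.sum((np.log(freqs) - np.log(zipf_mandelbrot(ranks, C, q, s))) ** 2)

q_best, s_best = min(((q, s) for q in np.arange(0, 5.1, 0.5)
                              for s in np.arange(0.5, 2.01, 0.1)),
                     key=lambda p: sse(*p))
print("fitted q, s:", q_best, s_best)
```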