Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (1 class) |
---|---|---|---|
Natural Language Semantics and Computability | This paper is a reflection on the computability of natural language semantics.
It does not contain a new model or new results in the formal semantics of
natural language: it is rather a computational analysis of the logical models
and algorithms currently used in natural language semantics, defined as the
mapping of a statement to logical formulas - formulas, because a statement can
be ambiguous. We argue that as long as possible world semantics is left out,
one can compute the semantic representation(s) of a given statement, including
aspects of lexical meaning. We also discuss the algorithmic complexity of this
process.
| 2016 | Computation and Language |
Semantic Spaces | Any natural language can be considered as a tool for producing large
databases (consisting of texts, written or discursive). Describing this tool
in turn requires other large databases (dictionaries, grammars,
etc.). Nowadays, the notion of a database is associated with computer processing
and computer memory. However, a natural language also resides in human brains
and functions in human communication, from interpersonal to intergenerational
communication. We discuss in this survey/research paper mathematical, in particular
geometric, constructions, which help to bridge these two worlds. In particular,
in this paper we consider the Vector Space Model of semantics based on
frequency matrices, as used in Natural Language Processing. We investigate
underlying geometries, formulated in terms of Grassmannians, projective spaces,
and flag varieties. We formulate the relation between vector space models and
semantic spaces based on semic axes in terms of projectability of subvarieties
in Grassmannians and projective spaces. We interpret Latent Semantics as a
geometric flow on Grassmannians. We also discuss how to formulate G\"ardenfors'
notion of "meeting of minds" in our geometric setting.
| 2016 | Computation and Language |
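As a concrete anchor for the frequency-matrix Vector Space Model discussed in the abstract above, here is a minimal sketch (a generic illustration, not code from the paper; the toy corpus and the rank k are assumptions) of building a term-document count matrix and applying a truncated SVD, the usual computational form of Latent Semantic Analysis:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus standing in for a "large database of texts".
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]

# Term-document frequency matrix: the basic Vector Space Model.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)          # shape: (n_docs, n_terms)

# Latent Semantic Analysis: rank-k truncated SVD of the frequency matrix.
# Geometrically, each document is projected onto a k-dimensional subspace.
k = 2
svd = TruncatedSVD(n_components=k, random_state=0)
doc_vectors = svd.fit_transform(X)          # documents in the latent space

print(vectorizer.get_feature_names_out())
print(doc_vectors.round(3))
```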
Universal Dependencies for Learner English | We introduce the Treebank of Learner English (TLE), the first publicly
available syntactic treebank for English as a Second Language (ESL). The TLE
provides manually annotated POS tags and Universal Dependency (UD) trees for
5,124 sentences from the Cambridge First Certificate in English (FCE) corpus.
The UD annotations are tied to a pre-existing error annotation of the FCE,
whereby full syntactic analyses are provided for both the original and
error-corrected versions of each sentence. Furthermore, we delineate ESL annotation
guidelines that allow for consistent syntactic treatment of ungrammatical
English. Finally, we benchmark POS tagging and dependency parsing performance
on the TLE dataset and measure the effect of grammatical errors on parsing
accuracy. We envision the treebank to support a wide range of linguistic and
computational research on second language acquisition as well as automatic
processing of ungrammatical language. The treebank is available at
universaldependencies.org. The annotation manual used in this project and a
graphical query engine are available at esltreebank.org.
| 2016 | Computation and Language |
Occurrence Statistics of Entities, Relations and Types on the Web | The problem of collecting reliable estimates of occurrence of entities on the
open web forms the premise for this report. The models learned for tagging
entities cannot be expected to perform well when deployed on the web. This is
owing to the severe mismatch between the distributions of such entities on the web
and in the relatively small training data. In this report, we make the
case for using maximum mean discrepancy to estimate occurrence statistics of
entities on the web, reviewing named entity disambiguation techniques
and related concepts along the way.
| 2016 | Computation and Language |
Large-scale Analysis of Counseling Conversations: An Application of
Natural Language Processing to Mental Health | Mental illness is one of the most pressing public health issues of our time.
While counseling and psychotherapy can be effective treatments, our knowledge
about how to conduct successful counseling conversations has been limited due
to lack of large-scale data with labeled outcomes of the conversations. In this
paper, we present a large-scale, quantitative study on the discourse of
text-message-based counseling conversations. We develop a set of novel
computational discourse analysis methods to measure how various linguistic
aspects of conversations are correlated with conversation outcomes. Applying
techniques such as sequence-based conversation models, language model
comparisons, message clustering, and psycholinguistics-inspired word frequency
analyses, we discover actionable conversation strategies that are associated
with better conversation outcomes.
| 2016 | Computation and Language |
Rationale-Augmented Convolutional Neural Networks for Text
Classification | We present a new Convolutional Neural Network (CNN) model for text
classification that jointly exploits labels on documents and their component
sentences. Specifically, we consider scenarios in which annotators explicitly
mark sentences (or snippets) that support their overall document
categorization, i.e., they provide rationales. Our model exploits such
supervision via a hierarchical approach in which each document is represented
by a linear combination of the vector representations of its component
sentences. We propose a sentence-level convolutional model that estimates the
probability that a given sentence is a rationale, and we then scale the
contribution of each sentence to the aggregate document representation in
proportion to these estimates. Experiments on five classification datasets that
have document labels and associated rationales demonstrate that our approach
consistently outperforms strong baselines. Moreover, our model naturally
provides explanations for its predictions.
| 2016 | Computation and Language |
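A simplified sketch of the aggregation step described in the abstract above (random weights and assumed dimensions; not the authors' model): a sentence-level model scores each sentence's probability of being a rationale, and the document vector is the sum of sentence vectors scaled by those probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_sentences = 16, 4

# Sentence vectors as produced by a sentence-level CNN (random stand-ins here).
sent_vecs = rng.normal(size=(n_sentences, d))

# Sentence-level model estimating P(sentence is a rationale).
w_rat = rng.normal(size=d)
p_rationale = 1.0 / (1.0 + np.exp(-(sent_vecs @ w_rat)))   # sigmoid scores

# Document representation: sentence vectors weighted by rationale probability.
doc_vec = (p_rationale[:, None] * sent_vecs).sum(axis=0)

print("rationale probabilities:", p_rationale.round(2))
print("document vector shape:", doc_vec.shape)
```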
Capturing divergence in dependency trees to improve syntactic projection | Obtaining syntactic parses is a crucial part of many NLP pipelines. However,
most of the world's languages do not have large amounts of syntactically
annotated corpora available for building parsers. Syntactic projection
techniques attempt to address this issue by using parallel corpora consisting
of resource-poor and resource-rich language pairs, taking advantage of a parser
for the resource-rich language and word alignment between the languages to
project the parses onto the data for the resource-poor language. These
projection methods can suffer, however, when the two languages are divergent.
In this paper, we investigate the possibility of using small, parallel,
annotated corpora to automatically detect divergent structural patterns between
two languages. These patterns can then be used to improve structural projection
algorithms, allowing for better performing NLP tools for resource-poor
languages, in particular those that may not have large amounts of annotated
data necessary for traditional, fully-supervised methods. While this detection
process is not exhaustive, we demonstrate that common patterns of divergence
can be identified automatically without prior knowledge of a given language
pair, and the patterns can be used to improve performance of projection
algorithms.
| 2016 | Computation and Language |
Anchoring and Agreement in Syntactic Annotations | We present a study on two key characteristics of human syntactic annotations:
anchoring and agreement. Anchoring is a well-known cognitive bias in human
decision making, where judgments are drawn towards pre-existing values. We
study the influence of anchoring on a standard approach to creation of
syntactic resources where syntactic annotations are obtained via human editing
of tagger and parser output. Our experiments demonstrate a clear anchoring
effect and reveal unwanted consequences, including overestimation of parsing
performance and lower quality of annotations in comparison with human-based
annotations. Using sentences from the Penn Treebank WSJ, we also report
systematically obtained inter-annotator agreement estimates for English
dependency parsing. Our agreement results control for parser bias, and are
consequential in that they are on par with state-of-the-art parsing performance
for English newswire. We discuss the impact of our findings on strategies for
future annotation efforts and parser evaluations.
| 2016 | Computation and Language |
Machine Translation Evaluation Resources and Methods: A Survey | We present a survey of Machine Translation (MT) evaluation that covers
both manual and automatic evaluation methods. The traditional human evaluation
criteria mainly include intelligibility, fidelity, fluency, adequacy,
comprehension, and informativeness. More advanced human assessments include
task-oriented measures, post-editing, segment ranking, extended criteria,
etc. We classify the automatic evaluation methods into two categories:
lexical similarity and linguistic features. The
lexical similarity methods cover edit distance, precision, recall, F-measure,
and word order. The linguistic features can be divided into syntactic features
and semantic features. The syntactic features include part-of-speech
tags, phrase types, and sentence structures, and the semantic features
include named entities, synonyms, textual entailment, paraphrase, semantic roles,
and language models. Deep learning models for evaluation have been proposed
only recently. Subsequently, we also introduce methods for evaluating MT
evaluation itself, including different correlation scores, and the recent quality
estimation (QE) tasks for MT.
This paper differs from existing works
\cite{GALEprogram2009,EuroMatrixProject2007} in several respects: it
introduces recent developments in MT evaluation measures, the different
classifications from manual to automatic evaluation measures, the recent QE
tasks of MT, and a more concise organization of the content.
We hope this work will help MT researchers easily pick the
metrics best suited to their specific MT model development, and
help MT evaluation researchers gain a general picture of how MT evaluation
research has developed. Furthermore, we hope this work can also shed some light
on evaluation tasks in NLP fields other than translation.
| 2018 | Computation and Language |
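As a small illustration of the lexical-similarity family of metrics listed above (a generic sketch, not code from the survey; the whitespace tokenization and the toy sentences are assumptions), unigram precision, recall, and F-measure between a hypothesis and a reference translation can be computed as:

```python
from collections import Counter

def unigram_prf(hypothesis: str, reference: str):
    """Unigram precision, recall and F-measure between two token sequences."""
    hyp = Counter(hypothesis.split())
    ref = Counter(reference.split())
    overlap = sum((hyp & ref).values())          # clipped unigram matches
    precision = overlap / max(sum(hyp.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

print(unigram_prf("the cat sat on a mat", "the cat sat on the mat"))
```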
A Proposal for Linguistic Similarity Datasets Based on Commonality Lists | Similarity is a core notion that is used in psychology and two branches of
linguistics: theoretical and computational. The similarity datasets that come
from the two fields differ in design: psychological datasets are focused around
a certain topic such as fruit names, while linguistic datasets contain words
from various categories. The latter makes humans assign low similarity scores to
the words that have nothing in common and to the words that have contrast in
meaning, making similarity scores ambiguous. In this work we discuss the
similarity collection procedure for a multi-category dataset that avoids score
ambiguity and suggest changes to the evaluation procedure to reflect the
insights of psychological literature for word, phrase and sentence similarity.
We suggest asking humans to provide a list of commonalities and differences
instead of numerical similarity scores, and employing the structure of human
judgements beyond pairwise similarity for model evaluation. We believe that the
proposed approach will give rise to datasets that test meaning representation
models more thoroughly with respect to the human treatment of similarity.
| 2016 | Computation and Language |
Syntactically Guided Neural Machine Translation | We investigate the use of hierarchical phrase-based SMT lattices in
end-to-end neural machine translation (NMT). Weight pushing transforms the
Hiero scores for complete translation hypotheses, with the full translation
grammar score and full n-gram language model score, into posteriors compatible
with NMT predictive probabilities. With a slightly modified NMT beam-search
decoder we find gains over both Hiero and NMT decoding alone, with practical
advantages in extending NMT to very large input and output vocabularies.
| 2017 | Computation and Language |
Joint Learning of Sentence Embeddings for Relevance and Entailment | We consider the problem of Recognizing Textual Entailment within an
Information Retrieval context, where we must simultaneously determine the
relevance as well as the degree of entailment of individual pieces of evidence in
order to produce a yes/no answer to a binary natural language question.
We compare several variants of neural networks for sentence embeddings in a
setting of decision-making based on evidence of varying relevance. We propose a
basic model to integrate evidence for entailment, show that joint training of
the sentence embeddings to model relevance and entailment is feasible even with
no explicit per-evidence supervision, and show the importance of evaluating
strong baselines. We also demonstrate the benefit of carrying over a text
comprehension model trained on an unrelated task for our small datasets.
Our research is motivated primarily by a new open dataset we introduce,
consisting of binary questions and news-based evidence snippets. We also apply
the proposed relevance-entailment model on a similar task of ranking
multiple-choice test answers, evaluating it on a preliminary dataset of school
test questions as well as the standard MCTest dataset, where we improve the
state of the art for neural models.
| 2016 | Computation and Language |
Log-linear Combinations of Monolingual and Bilingual Neural Machine
Translation Models for Automatic Post-Editing | This paper describes the submission of the AMU (Adam Mickiewicz University)
team to the Automatic Post-Editing (APE) task of WMT 2016. We explore the
application of neural translation models to the APE problem and achieve good
results by treating different models as components in a log-linear model,
allowing for multiple inputs (the MT-output and the source) that are decoded to
the same target language (post-edited translations). A simple string-matching
penalty integrated within the log-linear model is used to control for higher
faithfulness with regard to the raw machine translation output. To overcome the
problem of too little training data, we generate large amounts of artificial
data. Our submission improves over the uncorrected baseline on the unseen test
set by -3.2\% TER and +5.5\% BLEU and outperforms any other system submitted to
the shared task by a large margin.
| 2016 | Computation and Language |
The AMU-UEDIN Submission to the WMT16 News Translation Task:
Attention-based NMT Models as Feature Functions in Phrase-based SMT | This paper describes the AMU-UEDIN submissions to the WMT 2016 shared task on
news translation. We explore methods of decode-time integration of
attention-based neural translation models with phrase-based statistical machine
translation. Efficient batch-algorithms for GPU-querying are proposed and
implemented. For English-Russian, our system stays behind the state-of-the-art
pure neural models in terms of BLEU. Among restricted systems, manual
evaluation places it in the first cluster tied with the pure neural model. For
the Russian-English task, our submission achieves the top BLEU result,
outperforming the best pure neural system by 1.1 BLEU points and our own
phrase-based baseline by 1.6 BLEU. After manual evaluation, this system is the
best restricted system in its own cluster. In follow-up experiments we improve
results by an additional 0.8 BLEU.
| 2016 | Computation and Language |
Recurrent Neural Network for Text Classification with Multi-Task
Learning | Neural network based methods have achieved great progress on a variety of
natural language processing tasks. However, in most previous works, the models
are learned based on single-task supervised objectives, which often suffer from
insufficient training data. In this paper, we use the multi-task learning
framework to jointly learn across multiple related tasks. Based on recurrent
neural networks, we propose three different mechanisms for sharing information to
model text with task-specific and shared layers. The entire network is trained
jointly on all these tasks. Experiments on four benchmark text classification
tasks show that our proposed models can improve the performance of a task with
the help of other related tasks.
| 2016 | Computation and Language |
Incorporating Loose-Structured Knowledge into Conversation Modeling via
Recall-Gate LSTM | Modeling human conversations is essential for building satisfying chat-bots
with multi-turn dialog ability. Conversation modeling will notably benefit from
domain knowledge since the relationships between sentences can be clarified due
to semantic hints introduced by knowledge. In this paper, a deep neural network
is proposed to incorporate background knowledge for conversation modeling.
Through a specially designed Recall gate, domain knowledge can be transformed
into the extra global memory of Long Short-Term Memory (LSTM), so as to enhance
LSTM by cooperating with its local memory to capture the implicit semantic
relevance between sentences within conversations. In addition, this paper
introduces a loosely structured domain knowledge base, which can be built with
a small amount of manual work and easily adopted by the Recall gate. Our model
is evaluated on the context-oriented response selection task, and experimental
results on two datasets have shown that our approach is promising for
modeling human conversations and building key components of automatic chatting
systems.
| 2017 | Computation and Language |
Automatic Detection and Categorization of Election-Related Tweets | With the rise in popularity of public social media and micro-blogging
services, most notably Twitter, people have found a venue to hear and be
heard by their peers without an intermediary. As a consequence, and aided by
the public nature of Twitter, political scientists now potentially have the
means to analyse and understand the narratives that organically form, spread
and decline among the public in a political campaign. However, the volume and
diversity of the conversation on Twitter, combined with its noisy and
idiosyncratic nature, make this a hard task. Thus, advanced data mining and
language processing techniques are required to process and analyse the data. In
this paper, we present and evaluate a technical framework, based on recent
advances in deep neural networks, for identifying and analysing
election-related conversation on Twitter on a continuous, longitudinal basis.
Our models can detect election-related tweets with an F-score of 0.92 and can
categorize these tweets into 22 topics with an F-score of 0.90.
| 2016 | Computation and Language |
Tweet Acts: A Speech Act Classifier for Twitter | Speech acts are a way to conceptualize speech as action. This holds true for
communication on any platform, including social media platforms such as
Twitter. In this paper, we explored speech act recognition on Twitter by
treating it as a multi-class classification problem. We created a taxonomy of
six speech acts for Twitter and proposed a set of semantic and syntactic
features. We trained and tested a logistic regression classifier using a data
set of manually labelled tweets. Our method achieved state-of-the-art
performance with an average F1 score of more than $0.70$. We also explored
classifiers with three different granularities (Twitter-wide, type-specific and
topic-specific) in order to find the right balance between generalization and
overfitting for our task.
| 2016 | Computation and Language |
Siamese convolutional networks based on phonetic features for cognate
identification | In this paper, we explore the use of convolutional networks (ConvNets) for
the purpose of cognate identification. We compare our architecture with binary
classifiers based on string similarity measures on different language families.
Our experiments show that convolutional networks achieve competitive results
across concepts and across language families at the task of cognate
identification.
| 2016 | Computation and Language |
Yelp Dataset Challenge: Review Rating Prediction | Review websites, such as TripAdvisor and Yelp, allow users to post online
reviews for various businesses, products and services, and have been recently
shown to have a significant influence on consumer shopping behaviour. An online
review typically consists of free-form text and a star rating out of 5. The
problem of predicting a user's star rating for a product, given the user's text
review for that product, is called Review Rating Prediction and has lately
become a popular, albeit hard, problem in machine learning. In this paper, we
treat Review Rating Prediction as a multi-class classification problem, and
build sixteen different prediction models by combining four feature extraction
methods, (i) unigrams, (ii) bigrams, (iii) trigrams and (iv) Latent Semantic
Indexing, with four machine learning algorithms, (i) logistic regression, (ii)
Naive Bayes classification, (iii) perceptrons, and (iv) linear Support Vector
Classification. We analyse the performance of each of these sixteen models to
come up with the best model for predicting the ratings from reviews. We use the
dataset provided by Yelp for training and testing the models.
| 2016 | Computation and Language |
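A minimal illustrative sketch (not the authors' code) of the 4 x 4 model grid the abstract describes, combining n-gram and Latent Semantic Indexing features with four linear classifiers in scikit-learn; the toy reviews, labels, and hyperparameters are assumptions.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Toy stand-in for Yelp review texts and their 1-5 star labels.
reviews = ["great food and service", "terrible, never again",
           "average place, decent prices", "loved it",
           "awful experience", "pretty good overall"]
stars = [5, 1, 3, 5, 1, 4]

featurizers = {
    "unigrams": CountVectorizer(ngram_range=(1, 1)),
    "bigrams": CountVectorizer(ngram_range=(1, 2)),
    "trigrams": CountVectorizer(ngram_range=(1, 3)),
    "lsi": Pipeline([("counts", CountVectorizer()),
                     ("svd", TruncatedSVD(n_components=2, random_state=0))]),
}
classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
    "perceptron": Perceptron(),
    "linear_svc": LinearSVC(),
}

for f_name, feats in featurizers.items():
    for c_name, clf in classifiers.items():
        if f_name == "lsi" and c_name == "naive_bayes":
            continue  # SVD features can be negative, which MultinomialNB rejects
        model = Pipeline([("features", feats), ("clf", clf)])
        model.fit(reviews, stars)
        print(f_name, c_name, model.score(reviews, stars))
```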
On the Evaluation of Dialogue Systems with Next Utterance Classification | An open challenge in constructing dialogue systems is developing methods for
automatically learning dialogue strategies from large amounts of unlabelled
data. Recent work has proposed Next-Utterance-Classification (NUC) as a
surrogate task for building dialogue systems from text data. In this paper we
investigate the performance of humans on this task to validate the relevance of
NUC as a method of evaluation. Our results show three main findings: (1) humans
are able to correctly classify responses at a rate much better than chance,
thus confirming that the task is feasible, (2) human performance levels vary
across task domains (we consider 3 datasets) and expertise levels (novice vs
experts), thus showing that a range of performance is possible on this type of
task, (3) automated dialogue systems built using state-of-the-art machine
learning methods have similar performance to the human novices, but worse than
the experts, thus confirming the utility of this class of tasks for driving
further research in automated dialogue systems.
| 2016 | Computation and Language |
Leveraging Lexical Resources for Learning Entity Embeddings in
Multi-Relational Data | Recent work in learning vector-space embeddings for multi-relational data has
focused on combining relational information derived from knowledge bases with
distributional information derived from large text corpora. We propose a simple
approach that leverages the descriptions of entities or phrases available in
lexical resources, in conjunction with distributional semantics, in order to
derive a better initialization for training relational models. Applying this
initialization to the TransE model results in significant new state-of-the-art
performances on the WordNet dataset, decreasing the mean rank from the previous
best of 212 to 51. It also results in faster convergence of the entity
representations. We find that there is a trade-off between improving the mean
rank and the hits@10 with this approach. This illustrates that much remains to
be understood regarding performance improvements in relational models.
| 2016 | Computation and Language |
Relations such as Hypernymy: Identifying and Exploiting Hearst Patterns
in Distributional Vectors for Lexical Entailment | We consider the task of predicting lexical entailment using distributional
vectors. We perform a novel qualitative analysis of one existing model which
was previously shown to only measure the prototypicality of word pairs. We find
that the model strongly learns to identify hypernyms using Hearst patterns,
which are well known to be predictive of lexical relations. We present a novel
model which exploits this behavior as a method of feature extraction in an
iterative procedure similar to Principal Component Analysis. Our model combines
the extracted features with the strengths of other proposed models in the
literature, and matches or outperforms prior work on multiple data sets.
| 2016 | Computation and Language |
Modelling Interaction of Sentence Pair with coupled-LSTMs | Recently, there has been rising interest in modelling the interactions of two
sentences with deep neural networks. However, most of the existing methods
encode two sequences with separate encoders, in which a sentence is encoded
with little or no information from the other sentence. In this paper, we
propose a deep architecture to model the strong interaction of a sentence pair
with two coupled-LSTMs. Specifically, we introduce two coupling mechanisms to model
the interdependence of the two LSTMs, coupling the local contextualized
interactions of two sentences. We then aggregate these interactions and use a
dynamic pooling to select the most informative features. Experiments on two
very large datasets demonstrate the efficacy of our proposed architecture and
its superiority to state-of-the-art methods.
| 2016 | Computation and Language |
Twitter as a Lifeline: Human-annotated Twitter Corpora for NLP of
Crisis-related Messages | Microblogging platforms such as Twitter provide active communication channels
during mass convergence and emergency events such as earthquakes and typhoons.
During the sudden onset of a crisis situation, affected people post useful
information on Twitter that can be used for situational awareness and other
humanitarian disaster response efforts, if processed in a timely and effective manner.
Processing social media information poses multiple challenges, such as parsing
noisy, brief and informal messages, learning information categories from the
incoming stream of messages, and classifying messages into different classes,
among others. One of the basic necessities of many of these tasks is the availability
of data, in particular human-annotated data. In this paper, we present
human-annotated Twitter corpora collected during 19 different crises that took
place between 2013 and 2015. To demonstrate the utility of the annotations, we
train machine learning classifiers. Moreover, we publish the first and largest word2vec
word embeddings trained on 52 million crisis-related tweets. To deal with
the language issues of tweets, we present human-annotated normalized lexical resources
for different lexical variations.
| 2016 | Computation and Language |
Automatic TM Cleaning through MT and POS Tagging: Autodesk's Submission
to the NLP4TM 2016 Shared Task | We describe a machine learning based method to identify incorrect entries in
translation memories. It extends previous work by Barbu (2015) through
incorporating recall-based machine translation and part-of-speech-tagging
features. Our system ranked first in the Binary Classification (II) task for
two out of three language pairs: English-Italian and English-Spanish.
| 2016 | Computation and Language |
A Hierarchical Latent Variable Encoder-Decoder Model for Generating
Dialogues | Sequential data often possesses a hierarchical structure with complex
dependencies between subsequences, such as those found between the utterances in a
dialogue. In an effort to model this kind of generative process, we propose a
neural network-based generative architecture, with latent stochastic variables
that span a variable number of time steps. We apply the proposed model to the
task of dialogue response generation and compare it with recent neural network
architectures. We evaluate the model performance through automatic evaluation
metrics and by carrying out a human evaluation. The experiments demonstrate
that our model improves upon recently proposed models and that the latent
variables facilitate the generation of long outputs and maintain the context.
| 2016 | Computation and Language |
Stereotyping and Bias in the Flickr30K Dataset | An untested assumption behind the crowdsourced descriptions of the images in
the Flickr30K dataset (Young et al., 2014) is that they "focus only on the
information that can be obtained from the image alone" (Hodosh et al., 2013, p.
859). This paper presents some evidence against this assumption, and provides a
list of biases and unwarranted inferences that can be found in the Flickr30K
dataset. Finally, it considers methods to find examples of these, and discusses
how we should deal with stereotype-driven descriptions in future applications.
| 2016 | Computation and Language |
As Cool as a Cucumber: Towards a Corpus of Contemporary Similes in
Serbian | Similes are natural language expressions used to compare unlikely things,
where the comparison is not taken literally. They are often used in everyday
communication and are an important part of cultural heritage. Having an
up-to-date corpus of similes is challenging, as they are constantly coined
and/or adapted to contemporary times. In this paper we present a
methodology for semi-automated collection of similes from the world wide web
using text mining techniques. We expanded an existing corpus of traditional
similes (containing 333 similes) by collecting 446 additional expressions. We
also explore how crowdsourcing can be used to extract and curate new similes.
| 2016 | Computation and Language |
Phrase-based Machine Translation is State-of-the-Art for Automatic
Grammatical Error Correction | In this work, we study parameter tuning towards the M^2 metric, the standard
metric for automatic grammatical error correction (GEC) tasks. After implementing
M^2 as a scorer in the Moses tuning framework, we investigate interactions of
dense and sparse features, different optimizers, and tuning strategies for the
CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse
feature weights with M^2 and offer partial solutions. We find that a bare-bones
phrase-based SMT setup with task-specific parameter-tuning outperforms all
previously published results for the CoNLL-2014 test set by a large margin
(46.37% M^2 over previously 41.75%, by an SMT system with neural features)
while being trained on the same, publicly available data. Our newly introduced
dense and sparse features widen that gap, and we improve the state-of-the-art
to 49.49% M^2.
| 2016 | Computation and Language |
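For reference on the M^2 figures quoted above: M^2 evaluates a correction system with an F_{0.5} measure computed over an optimal alignment of system edits against gold-standard edits. The generic F_beta definition (a standard formula, not specific to this paper) is

$$ F_{\beta} = (1+\beta^2)\,\frac{P \cdot R}{\beta^2 P + R}, \qquad \beta = 0.5, $$

where $P$ and $R$ are precision and recall over edits; $\beta = 0.5$ weights precision more heavily than recall.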
Latent Tree Models for Hierarchical Topic Detection | We present a novel method for hierarchical topic detection where topics are
obtained by clustering documents in multiple ways. Specifically, we model
document collections using a class of graphical models called hierarchical
latent tree models (HLTMs). The variables at the bottom level of an HLTM are
observed binary variables that represent the presence/absence of words in a
document. The variables at other levels are binary latent variables, with those
at the lowest latent level representing word co-occurrence patterns and those
at higher levels representing co-occurrence of patterns at the level below.
Each latent variable gives a soft partition of the documents, and document
clusters in the partitions are interpreted as topics. Latent variables at high
levels of the hierarchy capture long-range word co-occurrence patterns and
hence give thematically more general topics, while those at low levels of the
hierarchy capture short-range word co-occurrence patterns and give thematically
more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a
document generation process and use word variables instead of token variables.
They use a tree structure to model the relationships between topics and words,
which is conducive to the discovery of meaningful topics and topic hierarchies.
| 2016 | Computation and Language |
Automatic Construction of Discourse Corpora for Dialogue Translation | In this paper, a novel approach is proposed to automatically construct
a parallel discourse corpus for dialogue machine translation. First, the
parallel subtitle data and its corresponding monolingual movie script data are
crawled and collected from the Internet. Then tags such as speaker and discourse
boundary from the script data are projected onto the subtitle data via an
information retrieval approach in order to map monolingual discourse to
bilingual texts. We not only evaluate the mapping results, but also integrate
speaker information into the translation. Experiments show our proposed method
can achieve 81.79% and 98.64% accuracy on speaker and dialogue boundary
annotation, and speaker-based language model adaptation can obtain around 0.5
BLEU points of improvement in translation quality. Finally, we publicly release
around 100K parallel discourse data with manual speaker and dialogue boundary
annotation.
| 2016 | Computation and Language |
Textual Paralanguage and its Implications for Marketing Communications | Both face-to-face communication and communication in online environments
convey information beyond the actual verbal message. In a traditional
face-to-face conversation, paralanguage, or the ancillary meaning- and
emotion-laden aspects of speech that are not actual verbal prose, gives
contextual information that allows interactors to more appropriately understand
the message being conveyed. In this paper, we conceptualize textual
paralanguage (TPL), which we define as written manifestations of nonverbal
audible, tactile, and visual elements that supplement or replace written
language and that can be expressed through words, symbols, images, punctuation,
demarcations, or any combination of these elements. We develop a typology of
textual paralanguage using data from Twitter, Facebook, and Instagram. We
present a conceptual framework of antecedents and consequences of brands' use
of textual paralanguage. Implications for theory and practice are discussed.
| 2017 | Computation and Language |
Towards Multi-Agent Communication-Based Language Learning | We propose an interactive multimodal framework for language learning. Instead
of being passively exposed to large amounts of natural text, our learners
(implemented as feed-forward neural networks) engage in cooperative referential
games starting from a tabula rasa setup, and thus develop their own language
from the need to communicate in order to succeed at the game. Preliminary
experiments provide promising results, but also suggest that it is important to
ensure that agents trained in this way do not develop an ad-hoc communication
code that is only effective for the game they are playing.
| 2016 | Computation and Language |
Combining Recurrent and Convolutional Neural Networks for Relation
Classification | This paper investigates two different neural architectures for the task of
relation classification: convolutional neural networks and recurrent neural
networks. For both models, we demonstrate the effect of different architectural
choices. We present a new context representation for convolutional neural
networks for relation classification (extended middle context). Furthermore, we
propose connectionist bi-directional recurrent neural networks and introduce
ranking loss for their optimization. Finally, we show that combining
convolutional and recurrent neural networks using a simple voting scheme is
accurate enough to improve results. Our neural models achieve state-of-the-art
results on the SemEval 2010 relation classification task.
| 2016 | Computation and Language |
Multi-Level Analysis and Annotation of Arabic Corpora for Text-to-Sign
Language MT | In this paper, we present an ongoing effort in lexical semantic analysis and
annotation of Modern Standard Arabic (MSA) text, a semi-automatic annotation
tool concerned with the morphological, syntactic, and semantic levels of
description.
| 2016 | Computation and Language |
Experiments in Linear Template Combination using Genetic Algorithms | Natural Language Generation systems typically have two parts - strategic
('what to say') and tactical ('how to say'). We present our experiments in
building an unsupervised corpus-driven template based tactical NLG system. We
consider templates as a sequence of words containing gaps. Our idea is based on
the observation that templates are grammatical locally (within their textual
span). We posit the construction of a sentence as a highly restricted sequence
of such templates. This work is an attempt to explore the resulting search
space using Genetic Algorithms to arrive at acceptable solutions. We present a
baseline implementation of this approach which outputs gapped text.
| 2016 | Computation and Language |
Neural Semantic Role Labeling with Dependency Path Embeddings | This paper introduces a novel model for semantic role labeling that makes use
of neural sequence modeling techniques. Our approach is motivated by the
observation that complex syntactic structures and related phenomena, such as
nested subordinations and nominal predicates, are not handled well by existing
models. Our model treats such instances as sub-sequences of lexicalized
dependency paths and learns suitable embedding representations. We
experimentally demonstrate that such embeddings can improve results over
previous state-of-the-art semantic role labelers, and showcase qualitative
improvements obtained by our method.
| 2016 | Computation and Language |
On-line Active Reward Learning for Policy Optimisation in Spoken
Dialogue Systems | The ability to compute an accurate reward function is essential for
optimising a dialogue policy via reinforcement learning. In real-world
applications, using explicit user feedback as the reward signal is often
unreliable and costly to collect. This problem can be mitigated if the user's
intent is known in advance or data is available to pre-train a task success
predictor off-line. In practice, neither of these applies for most real-world
applications. Here we propose an on-line learning framework whereby the
dialogue policy is jointly trained alongside the reward model via active
learning with a Gaussian process model. This Gaussian process operates on a
continuous space dialogue representation generated in an unsupervised fashion
using a recurrent neural network encoder-decoder. The experimental results
demonstrate that the proposed framework is able to significantly reduce data
annotation costs and mitigate noisy user feedback in dialogue policy learning.
| 2016 | Computation and Language |
Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service.
| 2017 | Computation and Language |
Design and development of a children's speech database | The report presents the process of planning, designing and developing
a database of spoken speech from children whose native language is Bulgarian. The
proposed model is designed for children between the ages of 4 and 6 without
speech disorders, and reflects their specific capabilities. At this age most
children cannot read, have no sustained concentration, are emotional,
etc. The aim is to unite all the media information accompanying the recording
and processing of spoken speech, thereby facilitating the work of researchers
in the field of speech recognition. This database will be used for the
development of systems for children's speech recognition, children's speech
synthesis systems, games which allow voice control, etc. As a result of the
proposed model a prototype system for speech recognition is presented.
| 2011 | Computation and Language |
Integrating Distributional Lexical Contrast into Word Embeddings for
Antonym-Synonym Distinction | We propose a novel vector representation that integrates lexical contrast
into distributional vectors and strengthens the most salient features for
determining degrees of word similarity. The improved vectors significantly
outperform standard models and distinguish antonyms from synonyms with an
average precision of 0.66-0.76 across word classes (adjectives, nouns, verbs).
Moreover, we integrate the lexical contrast vectors into the objective function
of a skip-gram model. The novel embedding outperforms state-of-the-art models
on predicting word similarities in SimLex-999, and on distinguishing antonyms
from synonyms.
| 2016 | Computation and Language |
Unsupervised Word and Dependency Path Embeddings for Aspect Term
Extraction | In this paper, we develop a novel approach to aspect term extraction based on
unsupervised learning of distributed representations of words and dependency
paths. The basic idea is to connect two words (w1 and w2) with the dependency
path (r) between them in the embedding space. Specifically, our method
optimizes the objective w1 + r = w2 in the low-dimensional space, where the
multi-hop dependency paths are treated as a sequence of grammatical relations
and modeled by a recurrent neural network. Then, we design the embedding
features that consider linear context and dependency context information, for
the conditional random field (CRF) based aspect term extraction. Experimental
results on the SemEval datasets show that (1) with only embedding features, we
can achieve state-of-the-art results; and (2) our embedding method, which
incorporates syntactic information among words, yields better performance
than other representative ones in aspect term extraction.
| 2016 | Computation and Language |
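Written out as an equation, the objective sketched in the abstract above (the exact training loss, e.g. negative sampling or margin terms, is not given here and would be an assumption) asks the dependency-path embedding $\mathbf{r}$, produced by a recurrent network over the grammatical relations on the path, to act as a translation between the two word embeddings:

$$ \min_{\mathbf{w}_1,\, \mathbf{r},\, \mathbf{w}_2} \; \lVert \mathbf{w}_1 + \mathbf{r} - \mathbf{w}_2 \rVert_2^2 . $$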
Variational Neural Machine Translation | Models of neural machine translation are often from a discriminative family
of encoder-decoders that learn a conditional distribution of a target sentence
given a source sentence. In this paper, we propose a variational model to learn
this conditional distribution for neural machine translation: a variational
encoder-decoder model that can be trained end-to-end. Different from the vanilla
encoder-decoder model that generates target translations from hidden
representations of source sentences alone, the variational model introduces a
continuous latent variable to explicitly model underlying semantics of source
sentences and to guide the generation of target translations. In order to
perform efficient posterior inference and large-scale training, we build a
neural posterior approximator conditioned on both the source and the target
sides, and equip it with a reparameterization technique to estimate the
variational lower bound. Experiments on both Chinese-English and English-
German translation tasks show that the proposed variational neural machine
translation achieves significant improvements over the vanilla neural machine
translation baselines.
| 2016 | Computation and Language |
BattRAE: Bidimensional Attention-Based Recursive Autoencoders for
Learning Bilingual Phrase Embeddings | In this paper, we propose a bidimensional attention based recursive
autoencoder (BattRAE) to integrate clues and source-target interactions at
multiple levels of granularity into bilingual phrase representations. We employ
recursive autoencoders to generate tree structures of phrases with embeddings
at different levels of granularity (e.g., words, sub-phrases and phrases). Over
these embeddings on the source and target side, we introduce a bidimensional
attention network to learn their interactions encoded in a bidimensional
attention matrix, from which we extract two soft attention weight distributions
simultaneously. These weight distributions enable BattRAE to generate
composite phrase representations via convolution. Based on the learned phrase
representations, we further use a bilinear neural model, trained via a
max-margin method, to measure bilingual semantic similarity. To evaluate the
effectiveness of BattRAE, we incorporate this semantic similarity as an
additional feature into a state-of-the-art SMT system. Extensive experiments on
NIST Chinese-English test sets show that our model achieves a substantial
improvement of up to 1.63 BLEU points on average over the baseline.
| 2016 | Computation and Language |
Automatic Open Knowledge Acquisition via Long Short-Term Memory Networks
with Feedback Negative Sampling | Previous studies in Open Information Extraction (Open IE) are mainly based on
extraction patterns. They manually define patterns or automatically learn them
from a large corpus. However, these approaches are limited when grasping the
context of a sentence, and they fail to capture implicit relations. In this
paper, we address this problem with the following methods. First, we exploit
long short-term memory (LSTM) networks to extract higher-level features along
the shortest dependency paths, connecting headwords of relations and arguments.
The path-level features from LSTM networks provide useful clues regarding
contextual information and the validity of arguments. Second, we construct
samples to train LSTM networks without the need for manual labeling. In
particular, feedback negative sampling picks highly negative samples among
non-positive samples through a model trained with positive samples. The
experimental results show that our approach produces more precise and abundant
extractions than state-of-the-art open IE systems. To the best of our
knowledge, this is the first work to apply deep learning to Open IE.
| 2016 | Computation and Language |
Boosting Question Answering by Deep Entity Recognition | In this paper an open-domain factoid question answering system for Polish,
RAFAEL, is presented. The system goes beyond finding an answering sentence; it
also extracts a single string, corresponding to the required entity. Herein the
focus is placed on different approaches to entity recognition, essential for
retrieving information matching question constraints. Apart from the traditional
approach, including named entity recognition (NER) solutions, a novel
technique, called Deep Entity Recognition (DeepER), is introduced and
implemented. It allows a comprehensive search of all forms of entity references
matching a given WordNet synset (e.g. an impressionist), based on a previously
assembled entity library. It has been created by analysing the first sentences
of encyclopaedia entries and disambiguation and redirect pages. DeepER also
provides automatic evaluation, which makes numerous experiments possible,
including answering over a thousand questions from a quiz TV show on the basis
of Polish Wikipedia. The final results of a manual evaluation on a separate
question set show that the strength of the DeepER approach lies in its ability to
answer questions that demand answers beyond the traditional categories of named
entities.
| 2016 | Computation and Language |
Stacking With Auxiliary Features | Ensembling methods are well known for improving prediction accuracy. However,
they are limited in the sense that they cannot discriminate among component
models effectively. In this paper, we propose stacking with auxiliary features
that learns to fuse relevant information from multiple systems to improve
performance. Auxiliary features enable the stacker to rely not just on whether
systems agree on an output but also on the provenance of the output. We demonstrate
our approach on three very different and difficult problems -- the Cold Start
Slot Filling, the Tri-lingual Entity Discovery and Linking and the ImageNet
object detection tasks. We obtain new state-of-the-art results on the first two
tasks and substantial improvements on the detection task, thus verifying the
power and generality of our approach.
| 2016 | Computation and Language |
Building an Evaluation Scale using Item Response Theory | Evaluation of NLP methods requires testing against a previously vetted
gold-standard test set and reporting standard metrics
(accuracy/precision/recall/F1). The current assumption is that all items in a
given test set are equal with regards to difficulty and discriminating power.
We propose Item Response Theory (IRT) from psychometrics as an alternative
means for gold-standard test-set generation and NLP system evaluation. IRT is
able to describe characteristics of individual items - their difficulty and
discriminating power - and can account for these characteristics in its
estimation of human intelligence or ability for an NLP task. In this paper, we
demonstrate IRT by generating a gold-standard test set for Recognizing Textual
Entailment. By collecting a large number of human responses and fitting our IRT
model, we show that our IRT model compares NLP systems with the performance in
a human population and is able to provide more insight into system performance
than standard evaluation metrics. We show that a high accuracy score does not
always imply a high IRT score, which depends on the item characteristics and
the response pattern.
| 2016 | Computation and Language |
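For context, the two-parameter logistic (2PL) item response model is a standard way to formalize the item "difficulty" and "discriminating power" referred to above (the abstract does not state which IRT variant the authors fit, so this is illustrative): the probability that a subject of ability $\theta$ answers item $i$ correctly is

$$ P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, $$

where $b_i$ is the item's difficulty and $a_i$ its discrimination.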
Aspect Level Sentiment Classification with Deep Memory Network | We introduce a deep memory network for aspect level sentiment classification.
Unlike feature-based SVM and sequential neural models such as LSTM, this
approach explicitly captures the importance of each context word when inferring
the sentiment polarity of an aspect. Such importance degree and text
representation are calculated with multiple computational layers, each of which
is a neural attention model over an external memory. Experiments on laptop and
restaurant datasets demonstrate that our approach performs comparably to a
state-of-the-art feature-based SVM system, and substantially better than LSTM and
attention-based LSTM architectures. On both datasets we show that multiple
computational layers could improve the performance. Moreover, our approach is
also fast. The deep memory network with 9 layers is 15 times faster than LSTM
with a CPU implementation.
| 2016 | Computation and Language |
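A minimal numpy sketch (an assumption-laden simplification, not the authors' implementation) of one computational layer of attention over an external memory of context-word vectors, repeated for several hops as the abstract describes; the aspect vector, memory contents, and dimensions are made up.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_context, n_hops = 8, 5, 3             # embedding size, #context words, #layers

memory = rng.normal(size=(n_context, d))   # external memory: context word vectors
aspect = rng.normal(size=d)                # vector of the aspect being classified
W = rng.normal(size=(d, d))                # linear transform applied at each hop

query = aspect
for hop in range(n_hops):
    scores = memory @ query                # relevance of each context word
    alpha = softmax(scores)                # attention weights over the memory
    attended = alpha @ memory              # weighted sum of context vectors
    query = W @ query + attended           # layer output feeds the next hop

print("final representation:", query.round(3))
```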
Learning Natural Language Inference using Bidirectional LSTM model and
Inner-Attention | In this paper, we propose a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of a sentence is a two-stage
process. First, average pooling is used over a word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Second, an attention
mechanism is employed to replace average pooling on the same sentence for
better representations. Instead of using the target sentence to attend to words in
the source sentence, we utilize the sentence's first-stage representation to
attend to words appearing in the sentence itself, which we call "Inner-Attention"
in our paper. Experiments conducted on the Stanford Natural Language Inference (SNLI)
Corpus have demonstrated the effectiveness of the "Inner-Attention" mechanism. With fewer
parameters, our model outperforms the existing best sentence encoding-based
approach by a large margin.
| 2016 | Computation and Language |
Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change | Understanding how words change their meanings over time is key to models of
language and cultural evolution, but historical data on meaning is scarce,
making theories hard to develop and test. Word embeddings show promise as a
diachronic tool, but have not been carefully evaluated. We develop a robust
methodology for quantifying semantic change by evaluating word embeddings
(PPMI, SVD, word2vec) against known historical changes. We then use this
methodology to reveal statistical laws of semantic evolution. Using six
historical corpora spanning four languages and two centuries, we propose two
quantitative laws of semantic change: (i) the law of conformity---the rate of
semantic change scales with an inverse power-law of word frequency; (ii) the
law of innovation---independent of frequency, words that are more polysemous
have higher rates of semantic change.
| 2018 | Computation and Language |
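The law of conformity stated above can be written as a proportionality (the notation here is an assumed paraphrase, not copied from the paper): the rate of semantic change $\Delta(w)$ of a word $w$ scales as an inverse power of its frequency $f(w)$,

$$ \Delta(w) \propto f(w)^{-\beta}, \qquad \beta > 0 . $$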
Does Multimodality Help Human and Machine for Translation and Image
Captioning? | This paper presents the systems developed by LIUM and CVC for the WMT16
Multimodal Machine Translation challenge. We explored various comparative
methods, namely phrase-based systems and attentional recurrent neural networks
models trained using monomodal or multimodal data. We also performed a human
evaluation in order to estimate the usefulness of multimodal data for human
machine translation and image description generation. Our systems obtained the
best results for both tasks according to the automatic evaluation metrics BLEU
and METEOR.
| 2016 | Computation and Language |
Determining the Characteristic Vocabulary for a Specialized Dictionary
using Word2vec and a Directed Crawler | Specialized dictionaries are used to understand concepts in specific domains,
especially where those concepts are not part of the general vocabulary, or
have meanings that differ from those in ordinary language. The first step in creating
a specialized dictionary involves detecting the characteristic vocabulary of
the domain in question. Classical methods for detecting this vocabulary involve
gathering a domain corpus, calculating statistics on the terms found there, and
then comparing these statistics to a background or general language corpus.
Terms which are found significantly more often in the specialized corpus than
in the background corpus are candidates for the characteristic vocabulary of
the domain. Here we present two tools, a directed crawler, and a distributional
semantics package, which can be used together, circumventing the need for a
background corpus. Both tools are available on the web.
| 2016 | Computation and Language |
Implementing a Reverse Dictionary, based on word definitions, using a
Node-Graph Architecture | In this paper, we outline an approach to build graph-based reverse
dictionaries using word definitions. A reverse dictionary takes a phrase as an
input and outputs a list of words semantically similar to that phrase. It is a
solution to the Tip-of-the-Tongue problem. We use a distance-based similarity
measure, computed on a graph, to assess the similarity between a word and the
input phrase. We compare the performance of our approach with the Onelook
Reverse Dictionary and a distributional semantics method based on word2vec, and
show that our approach is much better than the distributional semantics method,
and as good as Onelook, on a 3k lexicon. This simple approach sets a new
performance baseline for reverse dictionaries.
| 2016 | Computation and Language |
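An illustrative sketch of the node-graph idea described above (assumptions throughout: a toy definition graph and inverse-distance scoring; this is not the paper's actual architecture or lexicon): build a graph linking headwords to the words in their definitions, then rank candidate headwords by graph distance to the words of the input phrase.

```python
import networkx as nx

# Toy dictionary: headword -> definition words.
definitions = {
    "oven": ["appliance", "bake", "heat", "food"],
    "kettle": ["container", "boil", "water"],
    "toaster": ["appliance", "brown", "bread", "heat"],
}

# Node-graph: connect each headword to the words of its definition.
G = nx.Graph()
for head, words in definitions.items():
    for w in words:
        G.add_edge(head, w)

def reverse_lookup(phrase, graph, k=3):
    """Rank headwords by inverse graph distance to the phrase's words."""
    query = [w for w in phrase.split() if w in graph]
    scores = {}
    for head in definitions:
        dists = [nx.shortest_path_length(graph, head, q)
                 for q in query if nx.has_path(graph, head, q)]
        if dists:
            scores[head] = sum(1.0 / (1 + d) for d in dists)
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(reverse_lookup("appliance that uses heat to bake food", G))
```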
Neural Network Translation Models for Grammatical Error Correction | Phrase-based statistical machine translation (SMT) systems have previously
been used for the task of grammatical error correction (GEC) to achieve
state-of-the-art accuracy. The superiority of SMT systems comes from their
ability to learn text transformations from erroneous to corrected text, without
explicitly modeling error types. However, phrase-based SMT systems suffer from
limitations of discrete word representation, linear mapping, and lack of global
context. In this paper, we address these limitations by using two different yet
complementary neural network models, namely a neural network global lexicon
model and a neural network joint model. These neural networks can generalize
better by using continuous space representation of words and learn non-linear
mappings. Moreover, they can leverage contextual information from the source
sentence more effectively. By adding these two components, we achieve
statistically significant improvement in accuracy for grammatical error
correction over a state-of-the-art GEC system.
| 2016 | Computation and Language |
Exploiting N-Best Hypotheses to Improve an SMT Approach to Grammatical
Error Correction | Grammatical error correction (GEC) is the task of detecting and correcting
grammatical errors in texts written by second language learners. The
statistical machine translation (SMT) approach to GEC, in which sentences
written by second language learners are translated to grammatically correct
sentences, has achieved state-of-the-art accuracy. However, the SMT approach is
unable to utilize global context. In this paper, we propose a novel approach to
improve the accuracy of GEC, by exploiting the n-best hypotheses generated by
an SMT approach. Specifically, we build a classifier to score the edits in the
n-best hypotheses. The classifier can be used to select appropriate edits or
re-rank the n-best hypotheses. We apply these methods to a state-of-the-art GEC
system that uses the SMT approach. Our experiments show that our methods
achieve statistically significant improvements in accuracy over the best
published results on a benchmark test dataset on GEC.
| 2,016 | Computation and Language |
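The re-ranking idea in the abstract above combines the SMT model score of each n-best hypothesis with classifier scores for the edits it contains. A minimal sketch under strong assumptions: the edit extraction via difflib, the placeholder edit scorer, and the interpolation weight are all hypothetical stand-ins for the trained components:

```python
# Re-rank n-best correction hypotheses by SMT score plus scored edits.
import difflib

def extract_edits(source, hypothesis):
    """Return (source_span, hypothesis_span) pairs that differ."""
    src, hyp = source.split(), hypothesis.split()
    matcher = difflib.SequenceMatcher(None, src, hyp)
    return [(" ".join(src[i1:i2]), " ".join(hyp[j1:j2]))
            for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag != "equal"]

def edit_score(edit):
    # Placeholder for a trained edit classifier; here we simply prefer short edits.
    s, h = edit
    return 1.0 / (1.0 + abs(len(s) - len(h)))

def rerank(source, nbest, weight=0.5):
    rescored = []
    for hypothesis, smt_score in nbest:
        edits = extract_edits(source, hypothesis)
        rescored.append((hypothesis, smt_score + weight * sum(edit_score(e) for e in edits)))
    return max(rescored, key=lambda kv: kv[1])

nbest = [("he go to school yesterday", -2.4), ("he went to school yesterday", -2.3)]
print(rerank("he go to school yesterday", nbest))
```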
On a Topic Model for Sentences | Probabilistic topic models are generative models that describe the content of
documents by discovering the latent topics underlying them. However, the
structure of the textual input, for instance the grouping of words into
coherent text spans such as sentences, contains much information which is
generally lost with these models. In this paper, we propose sentenceLDA, an
extension of LDA whose goal is to overcome this limitation by incorporating the
structure of the text in the generative and inference processes. We illustrate
the advantages of sentenceLDA by comparing it with LDA using both intrinsic
(perplexity) and extrinsic (text classification) evaluation tasks on different
text collections.
| 2,016 | Computation and Language |
Improved Parsing for Argument-Clusters Coordination | Syntactic parsers perform poorly in prediction of Argument-Cluster
Coordination (ACC). We change the PTB representation of ACC to be more suitable
for learning by a statistical PCFG parser, affecting 125 trees in the training
set. Training on the modified trees yields a slight improvement in EVALB scores
on sections 22 and 23. The main evaluation is on a corpus of 4th grade science
exams, in which ACC structures are prevalent. On this corpus, we obtain an
impressive 2.7x improvement in recovering ACC structures compared to a parser
trained on the original PTB trees.
| 2,016 | Computation and Language |
Conversational Contextual Cues: The Case of Personalization and History
for Response Ranking | We investigate the task of modeling open-domain, multi-turn, unstructured,
multi-participant, conversational dialogue. We specifically study the effect of
incorporating different elements of the conversation. Unlike previous efforts,
which focused on modeling messages and responses, we extend the modeling to
long context and participant's history. Our system does not rely on handwritten
rules or engineered features; instead, we train deep neural networks on a large
conversational dataset. In particular, we exploit the structure of Reddit
comments and posts to extract 2.1 billion messages and 133 million
conversations. We evaluate our models on the task of predicting the next
response in a conversation, and we find that modeling both context and
participants improves prediction accuracy.
| 2,016 | Computation and Language |
On a Possible Similarity between Gene and Semantic Networks | In several domains such as linguistics, molecular biology or social sciences,
holistic effects are hardly well-defined by modeling with single units, but
more and more studies tend to understand macro structures with the help of
meaningful and useful associations in fields such as social networks, systems
biology or semantic web. A stochastic multi-agent system offers both accurate
theoretical framework and operational computing implementations to model
large-scale associations, their dynamics and pattern extraction. We show that
clustering around a target object in a set of object associations reveals
similarities in specific data, and we present two case studies on gene-gene and
term-term relationships, leading to the idea of a common organizing principle
of cognition with random and deterministic effects.
| 2,016 | Computation and Language |
Generalizing and Hybridizing Count-based and Neural Language Models | Language models (LMs) are statistical models that calculate probabilities
over sequences of words or other discrete symbols. Currently two major
paradigms for language modeling exist: count-based n-gram models, which have
advantages of scalability and test-time speed, and neural LMs, which often
achieve superior modeling performance. We demonstrate how both varieties of
models can be unified in a single modeling framework that defines a set of
probability distributions over the vocabulary of words, and then dynamically
calculates mixture weights over these distributions. This formulation allows us
to create novel hybrid models that combine the desirable features of
count-based and neural LMs, and experiments demonstrate the advantages of these
approaches.
| 2,016 | Computation and Language |
Source-LDA: Enhancing probabilistic topic models using prior knowledge
sources | A popular approach to topic modeling involves extracting co-occurring n-grams
of a corpus into semantic themes. The set of n-grams in a theme represents an
underlying topic, but most topic modeling approaches are not able to label
these sets of words with a single n-gram. Such labels are useful for topic
identification in summarization systems. This paper introduces a novel approach
to labeling a group of n-grams comprising an individual topic. The approach
taken is to complement the existing topic distributions over words with a known
distribution based on a predefined set of topics. This is done by integrating
existing labeled knowledge sources representing known potential topics into the
probabilistic topic model. These knowledge sources are translated into a
distribution and used to set the hyperparameters of the Dirichlet generated
distribution over words. In the inference these modified distributions guide
the convergence of the latent topics to conform with the complementary
distributions. This approach ensures that the topic inference process is
consistent with existing knowledge. The label assignment from the complementary
knowledge sources is then transferred to the latent topics of the corpus. The
results show both accurate label assignment to topics and improved topic
generation compared to that obtained using various labeling approaches based on
Latent Dirichlet allocation (LDA).
| 2,017 | Computation and Language |
Single-Model Encoder-Decoder with Explicit Morphological Representation
for Reinflection | Morphological reinflection is the task of generating a target form given a
source form, a source tag and a target tag. We propose a new way of modeling
this task with neural encoder-decoder models. Our approach reduces the amount
of required training data for this architecture and achieves state-of-the-art
results, making encoder-decoder models applicable to morphological reinflection
even for low-resource languages. We further present a new automatic correction
method for the outputs based on edit trees.
| 2,016 | Computation and Language |
Stochastic Structured Prediction under Bandit Feedback | Stochastic structured prediction under bandit feedback follows a learning
protocol where on each of a sequence of iterations, the learner receives an
input, predicts an output structure, and receives partial feedback in form of a
task loss evaluation of the predicted structure. We present applications of
this learning scenario to convex and non-convex objectives for structured
prediction and analyze them as stochastic first-order methods. We present an
experimental evaluation on problems of natural language processing over
exponential output spaces, and compare convergence speed across different
objectives under the practical criterion of optimal task performance on
development data and the optimization-theoretic criterion of minimal squared
gradient norm. Best results under both criteria are obtained for a non-convex
objective for pairwise preference learning under bandit feedback.
| 2,017 | Computation and Language |
Multiresolution Recurrent Neural Networks: An Application to Dialogue
Response Generation | We introduce the multiresolution recurrent neural network, which extends the
sequence-to-sequence framework to model natural language generation as two
parallel discrete stochastic processes: a sequence of high-level coarse tokens,
and a sequence of natural language tokens. There are many ways to estimate or
learn the high-level coarse tokens, but we argue that a simple extraction
procedure is sufficient to capture a wealth of high-level discourse semantics.
Such a procedure allows training the multiresolution recurrent neural network by
maximizing the exact joint log-likelihood over both sequences. In contrast to
the standard log-likelihood objective w.r.t. natural language tokens (word
perplexity), optimizing the joint log-likelihood biases the model towards
modeling high-level abstractions. We apply the proposed model to the task of
dialogue response generation in two challenging domains: the Ubuntu technical
support domain, and Twitter conversations. On Ubuntu, the model outperforms
competing approaches by a substantial margin, achieving state-of-the-art
results according to both automatic evaluation metrics and a human evaluation
study. On Twitter, the model appears to generate more relevant and on-topic
responses according to automatic evaluation metrics. Finally, our experiments
demonstrate that the proposed model is more adept at overcoming the sparsity of
natural language and is better able to capture long-term structure.
| 2,016 | Computation and Language |
Matrix Factorization using Window Sampling and Negative Sampling for
Improved Word Representations | In this paper, we propose LexVec, a new method for generating distributed
word representations that uses low-rank, weighted factorization of the Positive
Point-wise Mutual Information matrix via stochastic gradient descent, employing
a weighting scheme that assigns heavier penalties for errors on frequent
co-occurrences while still accounting for negative co-occurrence. Evaluation on
word similarity and analogy tasks shows that LexVec matches and often
outperforms state-of-the-art methods on many of these tasks.
| 2,016 | Computation and Language |
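The abstract above describes factorizing a PPMI matrix with stochastic gradient descent. A minimal sketch of that general recipe on a toy corpus; the window size, dimensionality, learning rate, and the simplified sampling scheme (each observed cell plus one randomly sampled cell) are illustrative assumptions rather than the LexVec weighting itself:

```python
# Build PPMI statistics from co-occurrence counts and factorize them with SGD
# into word vectors W and context vectors C.
import numpy as np
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 2

counts = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[(idx[w], idx[corpus[j]])] += 1

total = sum(counts.values())
row, col = Counter(), Counter()
for (w, c), n in counts.items():
    row[w] += n
    col[c] += n

def ppmi(w, c):
    joint = counts[(w, c)]
    if joint == 0:
        return 0.0
    return max(0.0, np.log((joint / total) / ((row[w] / total) * (col[c] / total))))

dim, lr, epochs = 10, 0.05, 200
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(vocab), dim))   # word vectors
C = rng.normal(scale=0.1, size=(len(vocab), dim))   # context vectors

for _ in range(epochs):
    for (w, c) in counts:
        for ctx in (c, int(rng.integers(len(vocab)))):   # observed cell + one sampled cell
            err = W[w] @ C[ctx] - ppmi(w, ctx)
            grad_w, grad_c = err * C[ctx], err * W[w]
            W[w] -= lr * grad_w
            C[ctx] -= lr * grad_c

print("cat . sat =", W[idx["cat"]] @ C[idx["sat"]])
```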
Using Neural Generative Models to Release Synthetic Twitter Corpora with
Reduced Stylometric Identifiability of Users | We present a method for generating synthetic versions of Twitter data using
neural generative models. The goal is protecting individuals in the source data
from stylometric re-identification attacks while still releasing data that
carries research value. Specifically, we generate tweet corpora that maintain
user-level word distributions by augmenting the neural language models with
user-specific components. We compare our approach to two standard text data
protection methods: redaction and iterative translation. We evaluate the three
methods on measures of risk and utility. We define risk following the
stylometric models of re-identification, and we define utility based on two
general word distribution measures and two common text analysis research tasks.
We find that neural models are able to significantly lower risk over previous
methods with little cost to utility. We also demonstrate that the neural models
allow data providers to actively control the risk-utility trade-off through
model tuning parameters. This work presents promising results for a new tool
addressing the problem of privacy for free text and sharing social media data
in a way that respects privacy and is ethically responsible.
| 2,018 | Computation and Language |
Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task
Learning | Various treebanks have been released for dependency parsing. Despite that
treebanks may belong to different languages or have different annotation
schemes, they contain syntactic knowledge that is potential to benefit each
other. This paper presents an universal framework for exploiting these
multi-typed treebanks to improve parsing with deep multi-task learning. We
consider two kinds of treebanks as source: the multilingual universal treebanks
and the monolingual heterogeneous treebanks. Multiple treebanks are trained
jointly and interacted with multi-level parameter sharing. Experiments on
several benchmark datasets in various languages demonstrate that our approach
can make effective use of arbitrary source treebanks to improve target parsing
models.
| 2,016 | Computation and Language |
Learning Stylometric Representations for Authorship Analysis | Authorship analysis (AA) is the study of unveiling the hidden properties of
authors from a body of exponentially exploding textual data. It extracts an
author's identity and sociolinguistic characteristics based on the reflected
writing styles in the text. It is an essential process for various areas, such
as cybercrime investigation, psycholinguistics, political socialization, etc.
However, most of the previous techniques critically depend on the manual
feature engineering process. Consequently, the choice of feature set has been
shown to be scenario- or dataset-dependent. In this paper, to mimic the human
sentence composition process using a neural network approach, we propose to
incorporate different categories of linguistic features into distributed
representation of words in order to learn simultaneously the writing style
representations based on unlabeled texts for authorship analysis. In
particular, the proposed models allow topical, lexical, syntactical, and
character-level feature vectors of each document to be extracted as
stylometrics. We evaluate the performance of our approach on the problems of
authorship characterization and authorship verification with the Twitter,
novel, and essay datasets. The experiments suggest that our proposed text
representation outperforms the bag-of-lexical-n-grams, Latent Dirichlet
Allocation, Latent Semantic Analysis, PVDM, PVDBOW, and word2vec
representations.
| 2,016 | Computation and Language |
End-to-end LSTM-based dialog control optimized with supervised and
reinforcement learning | This paper presents a model for end-to-end learning of task-oriented dialog
systems. The main component of the model is a recurrent neural network (an
LSTM), which maps from raw dialog history directly to a distribution over
system actions. The LSTM automatically infers a representation of dialog
history, which relieves the system developer of much of the manual feature
engineering of dialog state. In addition, the developer can provide software
that expresses business rules and provides access to programmatic APIs,
enabling the LSTM to take actions in the real world on behalf of the user. The
LSTM can be optimized using supervised learning (SL), where a domain expert
provides example dialogs which the LSTM should imitate; or using reinforcement
learning (RL), where the system improves by interacting directly with end
users. Experiments show that SL and RL are complementary: SL alone can derive a
reasonable initial policy from a small number of training dialogs; and starting
RL optimization with a policy trained with SL substantially accelerates the
learning rate of RL.
| 2,016 | Computation and Language |
Dependency Parsing as Head Selection | Conventional graph-based dependency parsers guarantee a tree structure both
during training and inference. Instead, we formalize dependency parsing as the
problem of independently selecting the head of each word in a sentence. Our
model which we call \textsc{DeNSe} (as shorthand for {\bf De}pendency {\bf
N}eural {\bf Se}lection) produces a distribution over possible heads for each
word using features obtained from a bidirectional recurrent neural network.
Without enforcing structural constraints during training, \textsc{DeNSe}
generates (at inference time) trees for the overwhelming majority of sentences,
while non-tree outputs can be adjusted with a maximum spanning tree algorithm.
We evaluate \textsc{DeNSe} on four languages (English, Chinese, Czech, and
German) with varying degrees of non-projectivity. Despite the simplicity of the
approach, our parsers are on par with the state of the art.
| 2,016 | Computation and Language |
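Head selection, as described in the abstract above, independently picks a head for each word from a distribution over all candidate heads. A minimal numpy sketch of that selection step, with random vectors standing in for the bidirectional RNN states and a bilinear scorer as an illustrative assumption; the maximum-spanning-tree correction for non-tree outputs is omitted:

```python
# For each word, score every candidate head (including an artificial ROOT)
# and take the argmax.
import numpy as np

rng = np.random.default_rng(0)
sentence = ["ROOT", "she", "reads", "books"]
dim = 8
H = rng.normal(size=(len(sentence), dim))   # stand-ins for BiRNN states
U = rng.normal(size=(dim, dim))             # bilinear scoring matrix

def select_heads(H, U):
    heads = {}
    for dep in range(1, H.shape[0]):        # ROOT is never a dependent
        scores = np.array([H[head] @ U @ H[dep] if head != dep else -np.inf
                           for head in range(H.shape[0])])
        heads[dep] = int(np.argmax(scores))
    return heads

print({sentence[d]: sentence[h] for d, h in select_heads(H, U).items()})
```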
Enhancing the LexVec Distributed Word Representation Model Using
Positional Contexts and External Memory | In this paper we take a state-of-the-art model for distributed word
representation that explicitly factorizes the positive pointwise mutual
information (PPMI) matrix using window sampling and negative sampling and
address two of its shortcomings. We improve syntactic performance by using
positional contexts, and solve the need to store the PPMI matrix in memory by
working on aggregate data in external memory. The effectiveness of both
modifications is shown using word similarity and analogy tasks.
| 2,016 | Computation and Language |
An Attentional Neural Conversation Model with Improved Specificity | In this paper we propose a neural conversation model for conducting
dialogues. We demonstrate the use of this model to generate help desk
responses, where users are asking questions about PC applications. Our model is
distinguished by two characteristics. First, it models intention across turns
with a recurrent network, and incorporates an attention model that is
conditioned on the representation of intention. Secondly, it avoids generating
non-specific responses by incorporating an IDF term in the objective function.
The model is evaluated both as a pure generation model in which a help-desk
response is generated from scratch, and as a retrieval model with performance
measured using recall rates of the correct response. Experimental results
indicate that the model outperforms previously proposed neural conversation
architectures, and that using specificity in the objective function
significantly improves performance for both generation and retrieval.
| 2,016 | Computation and Language |
Improving Coreference Resolution by Learning Entity-Level Distributed
Representations | A long-standing challenge in coreference resolution has been the
incorporation of entity-level information - features defined over clusters of
mentions instead of mention pairs. We present a neural network based
coreference system that produces high-dimensional vector representations for
pairs of coreference clusters. Using these representations, our system learns
when combining clusters is desirable. We train the system with a
learning-to-search algorithm that teaches it which local decisions (cluster
merges) will lead to a high-scoring final coreference partition. The system
substantially outperforms the current state-of-the-art on the English and
Chinese portions of the CoNLL 2012 Shared Task dataset despite using few
hand-engineered features.
| 2,016 | Computation and Language |
Neural Architectures for Fine-grained Entity Type Classification | In this work, we investigate several neural network architectures for
fine-grained entity type classification. Particularly, we consider extensions
to a recently proposed attentive neural architecture and make three key
contributions. Previous work on attentive neural architectures does not consider
hand-crafted features; we combine learnt and hand-crafted features and observe
that they complement each other. Additionally, through quantitative analysis we
establish that the attention mechanism is capable of learning to attend over
syntactic heads and the phrase containing the mention, where both are known
strong hand-crafted features for our task. We enable parameter sharing through
a hierarchical label encoding method, that in low-dimensional projections show
clear clusters for each type hierarchy. Lastly, despite using the same
evaluation dataset, the literature frequently compares models trained using
different data. We establish that the choice of training data has a drastic
impact on performance, with decreases of as much as 9.85% in loose micro F1 score
for a previously proposed method. Despite this, our best model achieves
state-of-the-art results with 75.36% loose micro F1 score on the well-
established FIGER (GOLD) dataset.
| 2,017 | Computation and Language |
Generating Natural Language Inference Chains | The ability to reason with natural language is a fundamental prerequisite for
many NLP tasks such as information extraction, machine translation and question
answering. To quantify this ability, systems are commonly tested whether they
can recognize textual entailment, i.e., whether one sentence can be inferred
from another one. However, in most NLP applications only single source
sentences instead of sentence pairs are available. Hence, we propose a new task
that measures how well a model can generate an entailed sentence from a source
sentence. We take entailment-pairs of the Stanford Natural Language Inference
corpus and train an LSTM with attention. On a manually annotated test set we
found that 82% of generated sentences are correct, an improvement of 10.3% over
an LSTM baseline. A qualitative analysis shows that this model is not only
capable of shortening input sentences, but also inferring new statements via
paraphrasing and phrase entailment. We then apply this model recursively to
input-output pairs, thereby generating natural language inference chains that
can be used to automatically construct an entailment graph from source
sentences. Finally, by swapping source and target sentences we can also train a
model that given an input sentence invents additional information to generate a
new sentence.
| 2,016 | Computation and Language |
Brundlefly at SemEval-2016 Task 12: Recurrent Neural Networks vs. Joint
Inference for Clinical Temporal Information Extraction | We submitted two systems to the SemEval-2016 Task 12: Clinical TempEval
challenge, participating in Phase 1, where we identified text spans of time and
event expressions in clinical notes and Phase 2, where we predicted a relation
between an event and its parent document creation time.
For temporal entity extraction, we find that a joint inference-based approach
using structured prediction outperforms a vanilla recurrent neural network that
incorporates word embeddings trained on a variety of large clinical document
sets. For document creation time relations, we find that a combination of date
canonicalization and distant supervision rules for predicting relations on both
events and time expressions improves classification, though gains are limited,
likely due to the small scale of training data.
| 2,016 | Computation and Language |
Coordination in Categorical Compositional Distributional Semantics | An open problem with categorical compositional distributional semantics is
the representation of words that are considered semantically vacuous from a
distributional perspective, such as determiners, prepositions, relative
pronouns or coordinators. This paper deals with the topic of coordination
between identical syntactic types, which accounts for the majority of
coordination cases in language. By exploiting the compact closed structure of
the underlying category and Frobenius operators canonically induced over the
fixed basis of finite-dimensional vector spaces, we provide a morphism as
representation of a coordinator tensor, and we show how it lifts from atomic
types to compound types. Linguistic intuitions are provided, and the importance
of the Frobenius operators as an addition to the compact closed setting with
regard to language is discussed.
| 2,016 | Computation and Language |
Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues.
| 2,016 | Computation and Language |
Neural Net Models for Open-Domain Discourse Coherence | Discourse coherence is strongly associated with text quality, making it
important to natural language generation and understanding. Yet existing models
of coherence focus on measuring individual aspects of coherence (lexical
overlap, rhetorical structure, entity centering) in narrow domains.
In this paper, we describe domain-independent neural models of discourse
coherence that are capable of measuring multiple aspects of coherence in
existing sentences and can maintain coherence while generating new sentences.
We study both discriminative models that learn to distinguish coherent from
incoherent discourse, and generative models that produce coherent text,
including a novel neural latent-variable Markovian generative model that
captures the latent discourse dependencies between sentences in a text.
Our work achieves state-of-the-art performance on multiple coherence
evaluations, and marks an initial step in generating coherent texts given
discourse contexts.
| 2,017 | Computation and Language |
Gated-Attention Readers for Text Comprehension | In this paper we study the problem of answering cloze-style questions over
documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop
architecture with a novel attention mechanism, which is based on multiplicative
interactions between the query embedding and the intermediate states of a
recurrent neural network document reader. This enables the reader to build
query-specific representations of tokens in the document for accurate answer
selection. The GA Reader obtains state-of-the-art results on three benchmarks
for this task--the CNN \& Daily Mail news stories and the Who Did What dataset.
The effectiveness of multiplicative interaction is demonstrated by an ablation
study, and by comparing to alternative compositional operators for implementing
the gated-attention. The code is available at
https://github.com/bdhingra/ga-reader.
| 2,017 | Computation and Language |
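The key operation in the abstract above is a multiplicative interaction between query representations and the document reader's token states. A minimal numpy sketch of that gated-attention step, with random matrices standing in for the recurrent encoder outputs and all dimensions chosen arbitrarily:

```python
# Element-wise gate each document token state by an attention-weighted
# summary of the query states.
import numpy as np

rng = np.random.default_rng(0)
doc = rng.normal(size=(6, 16))     # document token states
query = rng.normal(size=(4, 16))   # query token states

def gated_attention(doc, query):
    scores = doc @ query.T                                    # doc/query token similarities
    scores -= scores.max(axis=1, keepdims=True)               # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    query_summary = alpha @ query                             # query summary per document token
    return doc * query_summary                                # multiplicative (element-wise) gating

print(gated_attention(doc, query).shape)   # -> (6, 16)
```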
Generating and Exploiting Large-scale Pseudo Training Data for Zero
Pronoun Resolution | Most existing approaches for zero pronoun resolution rely heavily on
annotated data, which is often released by shared task organizers. Therefore,
the lack of annotated data becomes a major obstacle in the progress of zero
pronoun resolution task. Also, it is expensive to spend manpower on labeling
the data for better performance. To alleviate the problem above, in this paper,
we propose a simple but novel approach to automatically generate large-scale
pseudo training data for zero pronoun resolution. Furthermore, we successfully
transfer the cloze-style reading comprehension neural network model into zero
pronoun resolution task and propose a two-step training mechanism to overcome
the gap between the pseudo training data and the real one. Experimental results
show that the proposed approach significantly outperforms the state-of-the-art
systems with an absolute improvement of 3.1% in F-score on OntoNotes 5.0 data.
| 2,017 | Computation and Language |
Adversarial Deep Averaging Networks for Cross-Lingual Sentiment
Classification | In recent years great success has been achieved in sentiment classification
for English, thanks in part to the availability of copious annotated resources.
Unfortunately, most languages do not enjoy such an abundance of labeled data.
To tackle the sentiment classification problem in low-resource languages
without adequate annotated data, we propose an Adversarial Deep Averaging
Network (ADAN) to transfer the knowledge learned from labeled data on a
resource-rich source language to low-resource languages where only unlabeled
data exists. ADAN has two discriminative branches: a sentiment classifier and
an adversarial language discriminator. Both branches take input from a shared
feature extractor to learn hidden representations that are simultaneously
indicative for the classification task and invariant across languages.
Experiments on Chinese and Arabic sentiment classification demonstrate that
ADAN significantly outperforms state-of-the-art systems.
| 2,018 | Computation and Language |
Gated Word-Character Recurrent Language Model | We introduce a recurrent neural network language model (RNN-LM) with long
short-term memory (LSTM) units that utilizes both character-level and
word-level inputs. Our model has a gate that adaptively finds the optimal
mixture of the character-level and word-level inputs. The gate creates the
final vector representation of a word by combining two distinct representations
of the word. The character-level inputs are converted into vector
representations of words using a bidirectional LSTM. The word-level inputs are
projected into another high-dimensional space by a word lookup table. The final
vector representations of words are used in the LSTM language model which
predicts the next word given all the preceding words. Our model with the gating
mechanism effectively utilizes the character-level inputs for rare and
out-of-vocabulary words and outperforms word-level language models on several
English corpora.
| 2,016 | Computation and Language |
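The gate described in the abstract above adaptively mixes a word-lookup embedding with a character-derived embedding. A minimal sketch of one such mixing step; the character vector is a random placeholder for the bidirectional LSTM output, and the scalar sigmoid gate parameterization is an assumption for illustration:

```python
# Mix a word embedding and a character-derived embedding with a learned gate.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
word_vec = rng.normal(size=dim)   # from the word lookup table
char_vec = rng.normal(size=dim)   # placeholder for a character-level BiLSTM output
v = rng.normal(size=dim)          # gate parameters

def gated_mix(word_vec, char_vec, v):
    g = 1.0 / (1.0 + np.exp(-(v @ word_vec)))   # sigmoid gate computed from the word vector
    return g * char_vec + (1.0 - g) * word_vec

print(gated_mix(word_vec, char_vec, v))
```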
Very Deep Convolutional Networks for Text Classification | The dominant approaches for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing.
| 2,017 | Computation and Language |
Neural Machine Translation with External Phrase Memory | In this paper, we propose phraseNet, a neural machine translator with a
phrase memory which stores phrase pairs in symbolic form, mined from corpus or
specified by human experts. For any given source sentence, phraseNet scans the
phrase memory to determine the candidate phrase pairs and integrates tagging
information in the representation of source sentence accordingly. The decoder
utilizes a mixture of word-generating component and phrase-generating
component, with a specifically designed strategy to generate a sequence of
multiple words all at once. The phraseNet not only takes a step towards
incorporating external knowledge into neural machine translation, but also
extends the word-by-word generation mechanism of recurrent neural networks.
Our empirical study on Chinese-to-English translation shows that, with a
carefully chosen phrase table in memory, phraseNet yields 3.45 BLEU
improvement over the generic neural machine translator.
| 2,016 | Computation and Language |
A Decomposable Attention Model for Natural Language Inference | We propose a simple neural architecture for natural language inference. Our
approach uses attention to decompose the problem into subproblems that can be
solved separately, thus making it trivially parallelizable. On the Stanford
Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results
with almost an order of magnitude fewer parameters than previous work and
without relying on any word-order information. Adding intra-sentence attention
that takes a minimum amount of order into account yields further improvements.
| 2,016 | Computation and Language |
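The decomposition in the abstract above proceeds in three steps: attend (soft-align the two sentences), compare (combine each token with its alignment), and aggregate (pool and classify). A minimal numpy sketch of that pipeline, with random embeddings, identity comparison networks, and a random final classifier standing in for the learned components:

```python
# Attend / compare / aggregate over a toy premise-hypothesis pair.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 16))   # premise token embeddings
b = rng.normal(size=(7, 16))   # hypothesis token embeddings

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Attend: soft-align each token of one sentence to the other.
scores = a @ b.T
beta = softmax(scores, axis=1) @ b      # aligned hypothesis for each premise token
alpha = softmax(scores, axis=0).T @ a   # aligned premise for each hypothesis token

# Compare: pair tokens with their alignments (identity comparison network here).
v1 = np.concatenate([a, beta], axis=1)
v2 = np.concatenate([b, alpha], axis=1)

# Aggregate: sum each side and feed a final linear classifier (random weights).
features = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)])
W = rng.normal(size=(3, features.shape[0]))   # 3 classes: entail / neutral / contradict
print(softmax(W @ features, axis=0))
```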
Neural Network Models for Implicit Discourse Relation Classification in
English and Chinese without Surface Features | Inferring implicit discourse relations in natural language text is the most
difficult subtask in discourse parsing. Surface features achieve good
performance, but they are not readily applicable to other languages without
semantic lexicons. Previous neural models require parses, surface features, or
a small label set to work well. Here, we propose neural network models that are
based on feedforward and long-short term memory architecture without any
surface features. To our surprise, our best configured feedforward architecture
outperforms the LSTM-based model in most cases despite thorough tuning. Under
various fine-grained label sets and a cross-linguistic setting, our feedforward
models perform consistently better or at least just as well as systems that
require hand-crafted surface features. Our models present the first neural
Chinese discourse parser in the style of Chinese Discourse Treebank, showing
that our results hold cross-linguistically.
| 2,016 | Computation and Language |
CFO: Conditional Focused Neural Question Answering with Large-scale
Knowledge Bases | How can we enable computers to automatically answer questions like "Who
created the character Harry Potter"? Carefully built knowledge bases provide
rich sources of facts. However, it remains a challenge to answer factoid
questions raised in natural language due to numerous expressions of one
question. In particular, we focus on the most common questions --- ones that
can be answered with a single fact in the knowledge base. We propose CFO, a
Conditional Focused neural-network-based approach to answering factoid
questions with knowledge bases. Our approach first zooms in on a question to find
more probable candidate subject mentions, and infers the final answers with a
unified conditional probabilistic framework. Powered by deep recurrent neural
networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7%
on a dataset of 108k questions - the largest public one to date. It outperforms
the current state of the art by an absolute margin of 11.8%.
| 2,016 | Computation and Language |
Memory-enhanced Decoder for Neural Machine Translation | We propose to enhance the RNN decoder in a neural machine translator (NMT)
with external memory, as a natural but powerful extension to the state in the
decoding RNN. This memory-enhanced RNN decoder is called \textsc{MemDec}. At
each time during decoding, \textsc{MemDec} will read from this memory and write
to this memory once, both with content-based addressing. Unlike the unbounded
memory in previous work\cite{RNNsearch} to store the representation of source
sentence, the memory in \textsc{MemDec} is a matrix with pre-determined size
designed to better capture the information important for the decoding process
at each time step. Our empirical study on Chinese-English translation shows
that it can improve by $4.8$ BLEU upon Groundhog and $5.3$ BLEU upon Moses,
yielding the best performance achieved with the same training set.
| 2,016 | Computation and Language |
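The fixed-size memory described in the abstract above is read from and written to once per step with content-based addressing. A minimal sketch of one read/write cycle; the memory size, the use of the decoder state as the key, and the erase-then-add write rule are illustrative assumptions rather than the exact MemDec equations:

```python
# One content-addressed read and write over a fixed-size memory matrix.
import numpy as np

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))   # fixed-size memory: 8 slots of dimension 16
state = rng.normal(size=16)         # current decoder state used as the key

def address(memory, key):
    scores = memory @ key
    e = np.exp(scores - scores.max())
    return e / e.sum()              # content-based addressing weights

def read(memory, key):
    return address(memory, key) @ memory        # weighted sum over slots

def write(memory, key, content, erase=0.5):
    w = address(memory, key)[:, None]
    return memory * (1 - erase * w) + w * content   # erase a little, then add

r = read(memory, state)
memory = write(memory, state, content=state)
print(r.shape, memory.shape)
```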
Incorporating Discrete Translation Lexicons into Neural Machine
Translation | Neural machine translation (NMT) often makes mistakes in translating
low-frequency content words that are essential to understanding the meaning of
the sentence. We propose a method to alleviate this problem by augmenting NMT
systems with discrete translation lexicons that efficiently encode translations
of these low-frequency words. We describe a method to calculate the lexicon
probability of the next word in the translation candidate by using the
attention vector of the NMT model to select which source word lexical
probabilities the model should focus on. We test two methods to combine this
probability with the standard NMT probability: (1) using it as a bias, and (2)
linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3
BLEU and 0.13-0.44 NIST score, and faster convergence time.
| 2,016 | Computation and Language |
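The linear-interpolation variant described in the abstract above mixes the NMT softmax with a lexicon probability obtained by weighting per-source-word translation distributions with the attention vector. A small sketch of that combination; the toy lexicon, attention weights, and interpolation coefficient are assumptions for illustration:

```python
# Combine an NMT distribution with an attention-weighted lexicon distribution.
import numpy as np

target_vocab = ["gato", "perro", "casa"]
source = ["the", "cat"]
# Hypothetical discrete lexicon: p_lex(target | source_word)
lexicon = {
    "the": np.array([0.1, 0.1, 0.8]),
    "cat": np.array([0.9, 0.05, 0.05]),
}

attention = np.array([0.2, 0.8])      # attention over the source words
p_nmt = np.array([0.4, 0.35, 0.25])   # NMT softmax over the target vocabulary
lam = 0.3                             # interpolation weight

p_lex = sum(a * lexicon[w] for a, w in zip(attention, source))
p_final = (1 - lam) * p_nmt + lam * p_lex
print(dict(zip(target_vocab, np.round(p_final, 3))))
```

The bias variant mentioned in the abstract would instead add (a transform of) the lexicon probability inside the softmax rather than interpolating afterwards.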
Can neural machine translation do simultaneous translation? | We investigate the potential of attention-based neural machine translation in
simultaneous translation. We introduce a novel decoding algorithm, called
simultaneous greedy decoding, that allows an existing neural machine
translation model to begin translating before a full source sentence is
received. This approach is unique from previous works on simultaneous
translation in that segmentation and translation are done jointly to maximize
the translation quality and that translating each segment is strongly
conditioned on all the previous segments. This paper presents a first step
toward building a full simultaneous translation system based on neural machine
translation.
| 2,016 | Computation and Language |
Supervised Syntax-based Alignment between English Sentences and Abstract
Meaning Representation Graphs | As alignment links are not given between English sentences and Abstract
Meaning Representation (AMR) graphs in the AMR annotation, automatic alignment
becomes indispensable for training an AMR parser. Previous studies formalize it
as a string-to-string problem and solve it in an unsupervised way, which
suffers from data sparseness due to the small size of training data for
English-AMR alignment. In this paper, we formalize it as a syntax-based
alignment problem and solve it in a supervised manner based on syntax trees,
which can address the data sparseness problem by generalizing English-AMR
tokens to syntax tags. Experiments verify the effectiveness of the proposed
method not only for English-AMR alignment, but also for AMR parsing.
| 2,017 | Computation and Language |
Iterative Alternating Neural Attention for Machine Reading | We propose a novel neural attention architecture to tackle machine
comprehension tasks, such as answering Cloze-style queries with respect to a
document. Unlike previous models, we do not collapse the query into a single
vector, instead we deploy an iterative alternating attention mechanism that
allows a fine-grained exploration of both the query and the document. Our model
outperforms state-of-the-art baselines in standard machine comprehension
benchmarks such as CNN news articles and the Children's Book Test (CBT)
dataset.
| 2,016 | Computation and Language |
Natural Language Comprehension with the EpiReader | We present the EpiReader, a novel model for machine comprehension of text.
Machine comprehension of unstructured, real-world text is a major research goal
for natural language processing. Current tests of machine comprehension pose
questions whose answers can be inferred from some supporting text, and evaluate
a model's response to the questions. The EpiReader is an end-to-end neural
model comprising two components: the first component proposes a small set of
candidate answers after comparing a question to its supporting text, and the
second component formulates hypotheses using the proposed candidates and the
question, then reranks the hypotheses based on their estimated concordance with
the supporting text. We present experiments demonstrating that the EpiReader
sets a new state-of-the-art on the CNN and Children's Book Test machine
comprehension benchmarks, outperforming previous neural models by a significant
margin.
| 2,016 | Computation and Language |
Multilingual Visual Sentiment Concept Matching | The impact of culture in visual emotion perception has recently captured the
attention of multimedia research. In this study, we provide powerful
computational linguistics tools to explore, retrieve and browse a dataset of
16K multilingual affective visual concepts and 7.3M Flickr images. First, we
design an effective crowdsourcing experiment to collect human judgements of
sentiment connected to the visual concepts. We then use word embeddings to
represent these concepts in a low dimensional vector space, allowing us to
expand the meaning around concepts, and thus enabling insight about
commonalities and differences among different languages. We compare a variety
of concept representations through a novel evaluation task based on the notion
of visual semantic relatedness. Based on these representations, we design
clustering schemes to group multilingual visual concepts, and evaluate them
with novel metrics based on the crowdsourced sentiment annotations as well as
visual semantic relatedness. The proposed clustering framework enables us to
analyze the full multilingual dataset in-depth and also show an application on
a facial data subset, exploring cultural insights of portrait-related
affective visual concepts.
| 2,016 | Computation and Language |
Optimizing Spectral Learning for Parsing | We describe a search algorithm for optimizing the number of latent states
when estimating latent-variable PCFGs with spectral methods. Our results show
that contrary to the common belief that the number of latent states for each
nonterminal in an L-PCFG can be decided in isolation with spectral methods,
parsing results significantly improve if the number of latent states for each
nonterminal is globally optimized, while taking into account interactions
between the different nonterminals. In addition, we contribute an empirical
analysis of spectral algorithms on eight morphologically rich languages:
Basque, French, German, Hebrew, Hungarian, Korean, Polish and Swedish. Our
results show that our estimation consistently performs better or close to
coarse-to-fine expectation-maximization techniques for these languages.
| 2,016 | Computation and Language |
On the Place of Text Data in Lifelogs, and Text Analysis via Semantic
Facets | Current research in lifelog data has not paid enough attention to analysis of
cognitive activities in comparison to physical activities. We argue that as we
look into the future, wearable devices are going to be cheaper and more
prevalent and textual data will play a more significant role. Data captured by
lifelogging devices will increasingly include speech and text, potentially
useful in analysis of intellectual activities. Analyzing what a person hears,
reads, and sees, we should be able to measure the extent of cognitive activity
devoted to a certain topic or subject by a learner. Text-based lifelog records
can benefit from semantic analysis tools developed for natural language
processing. We show how semantic analysis of such text data can be achieved
through the use of taxonomic subject facets and how these facets might be
useful in quantifying cognitive activity devoted to various topics in a
person's day. We are currently developing a method to automatically create
taxonomic topic vocabularies that can be applied to this detection of
intellectual activity.
| 2,016 | Computation and Language |
Learning Language Games through Interaction | We introduce a new language learning setting relevant to building adaptive
natural language interfaces. It is inspired by Wittgenstein's language games: a
human wishes to accomplish some task (e.g., achieving a certain configuration
of blocks), but can only communicate with a computer, who performs the actual
actions (e.g., removing all red blocks). The computer initially knows nothing
about language and therefore must learn it from scratch through interaction,
while the human adapts to the computer's capabilities. We created a game in a
blocks world and collected interactions from 100 people playing it. First, we
analyze the humans' strategies, showing that using compositionality and
avoiding synonyms correlates positively with task performance. Second, we
compare computer strategies, showing how to quickly learn a semantic parsing
model from scratch, and that modeling pragmatics further accelerates learning
for successful players.
| 2,016 | Computation and Language |