Titles | Abstracts | Years | Categories
---|---|---|---|
Semantic Parsing to Probabilistic Programs for Situated Question
Answering | Situated question answering is the problem of answering questions about an
environment such as an image or diagram. This problem requires jointly
interpreting a question and an environment using background knowledge to select
the correct answer. We present Parsing to Probabilistic Programs (P3), a novel
situated question answering model that can use background knowledge and global
features of the question/environment interpretation while retaining efficient
approximate inference. Our key insight is to treat semantic parses as
probabilistic programs that execute nondeterministically and whose possible
executions represent environmental uncertainty. We evaluate our approach on a
new, publicly-released data set of 5000 science diagram questions,
outperforming several competitive classical and neural baselines.
| 2016 | Computation and Language |
Gender and Interest Targeting for Sponsored Post Advertising at Tumblr | As one of the leading platforms for creative content, Tumblr offers
advertisers a unique way of creating brand identity. Advertisers can tell their
story through images, animation, text, music, video, and more, and promote that
content by sponsoring it to appear as an advertisement in the streams of Tumblr
users. In this paper we present a framework that enabled one of the key
targeted advertising components for Tumblr, specifically gender and interest
targeting. We describe the main challenges involved in the development of the
framework, which include creating the ground truth for training gender
prediction models, as well as mapping Tumblr content to an interest taxonomy.
For the purpose of inferring user interests, we propose a novel semi-supervised
neural language model for categorization of Tumblr content (i.e., post tags and
post keywords). The model was trained on a large-scale data set consisting of
6.8 billion user posts with a very limited amount of categorized keywords, and
was shown to have superior performance over the bag-of-words model. We
successfully deployed gender and interest targeting capability in Yahoo
production systems, delivering inference for users that cover more than 90% of
daily activities at Tumblr. Online performance results indicate advantages of
the proposed approach, where we observed 20% lift in user engagement with
sponsored posts as compared to untargeted campaigns.
| 2015 | Computation and Language |
Explaining Predictions of Non-Linear Classifiers in NLP | Layer-wise relevance propagation (LRP) is a recently proposed technique for
explaining predictions of complex non-linear classifiers in terms of input
variables. In this paper, we apply LRP for the first time to natural language
processing (NLP). More precisely, we use it to explain the predictions of a
convolutional neural network (CNN) trained on a topic categorization task. Our
analysis highlights which words are relevant for a specific prediction of the
CNN. We compare our technique to standard sensitivity analysis, both
qualitatively and quantitatively, using a "word deleting" perturbation
experiment, a PCA analysis, and various visualizations. All experiments
validate the suitability of LRP for explaining the CNN predictions, which is
also in line with results reported in recent image classification studies.
| 2016 | Computation and Language |
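The layer-wise relevance propagation described in the abstract above redistributes a prediction score backwards through the network, layer by layer. As a rough illustration, here is a minimal NumPy sketch of the commonly used epsilon-rule for a single dense layer; it is not the authors' implementation, and the shapes and variable names are invented for the example.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-2):
    """Redistribute relevance R_out of a dense layer z = a @ W + b
    back onto its inputs a, using the epsilon-stabilized LRP rule."""
    z = a @ W + b                 # pre-activations, shape (n_out,)
    z = z + eps * np.sign(z)      # stabilizer avoids division by ~0
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # relevance assigned to each input neuron

# toy example: 4 input features, 3 output neurons
rng = np.random.default_rng(0)
a = rng.normal(size=4)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
R_out = np.array([0.0, 1.0, 0.0])  # relevance starts at the predicted class
print(lrp_epsilon_dense(a, W, b, R_out))
```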
Analyzing the Behavior of Visual Question Answering Models | Recently, a number of deep-learning based models have been proposed for the
task of Visual Question Answering (VQA). The performance of most models is
clustered around 60-70%. In this paper we propose systematic methods to analyze
the behavior of these models as a first step towards recognizing their
strengths and weaknesses, and identifying the most fruitful directions for
progress. We analyze two models, one each from two major classes of VQA models
-- with-attention and without-attention -- and show the similarities and
differences in the behavior of these models. We also analyze the winning entry
of the VQA Challenge 2016.
Our behavior analysis reveals that despite recent progress, today's VQA
models are "myopic" (tend to fail on sufficiently novel instances), often "jump
to conclusions" (converge on a predicted answer after 'listening' to just half
the question), and are "stubborn" (do not change their answers across images).
| 2016 | Computation and Language |
LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in
Recurrent Neural Networks | Recurrent neural networks, and in particular long short-term memory (LSTM)
networks, are a remarkably effective tool for sequence modeling that learn a
dense black-box hidden representation of their sequential input. Researchers
interested in better understanding these models have studied the changes in
hidden state representations over time and noticed some interpretable patterns
but also significant noise. In this work, we present LSTMVIS, a visual analysis
tool for recurrent neural networks with a focus on understanding these hidden
state dynamics. The tool allows users to select a hypothesis input range to
focus on local state changes, to match these state changes to similar patterns
in a large data set, and to align these results with structural annotations
from their domain. We show several use cases of the tool for analyzing specific
hidden state properties on datasets containing nesting, phrase structure, and
chord progressions, and demonstrate how the tool can be used to isolate
patterns for further statistical analysis. We characterize the domain, the
different stakeholders, and their goals and tasks.
| 2017 | Computation and Language |
NN-grams: Unifying neural network and n-gram language models for Speech
Recognition | We present NN-grams, a novel, hybrid language model integrating n-grams and
neural networks (NN) for speech recognition. The model takes as input both word
histories and n-gram counts. Thus, it combines the memorization capacity
and scalability of an n-gram model with the generalization ability of neural
networks. We report experiments where the model is trained on 26B words.
NN-grams are efficient at run-time since they do not include an output soft-max
layer. The model is trained using noise contrastive estimation (NCE), an
approach that transforms the estimation problem of neural networks into one of
binary classification between data samples and noise samples. We present
results with noise samples derived from either an n-gram distribution or from
speech recognition lattices. NN-grams outperforms an n-gram model on an Italian
speech recognition dictation task.
| 2016 | Computation and Language |
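As a rough sketch of the noise contrastive estimation objective mentioned in the abstract above (binary classification between data samples and noise samples), the snippet below computes the NCE loss for one data word and k noise words. The scoring values, the noise distribution q, and all names are illustrative, not taken from the paper.

```python
import numpy as np

def nce_loss(data_score, noise_scores, log_q_data, log_q_noise, k):
    """NCE loss for one (history, word) pair.
    data_score / noise_scores: unnormalized model scores s(w, h).
    log_q_*: log-probabilities of the words under the noise distribution q.
    k: number of noise samples per data sample."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # P(D=1 | w, h) = sigmoid(s(w, h) - log(k * q(w)))
    p_data = sigmoid(data_score - (np.log(k) + log_q_data))
    p_noise = sigmoid(noise_scores - (np.log(k) + log_q_noise))
    return -(np.log(p_data) + np.sum(np.log(1.0 - p_noise)))

# toy numbers: one true word and k=3 noise words
print(nce_loss(data_score=2.1,
               noise_scores=np.array([-0.3, 0.4, -1.2]),
               log_q_data=np.log(0.01),
               log_q_noise=np.log(np.array([0.02, 0.005, 0.03])),
               k=3))
```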
CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation
Tasks | Neural sequence to sequence learning recently became a very promising
paradigm in machine translation, achieving competitive results with statistical
phrase-based systems. In this system description paper, we attempt to utilize
several recently published methods used for neural sequential learning in order
to build systems for WMT 2016 shared tasks of Automatic Post-Editing and
Multimodal Machine Translation.
| 2016 | Computation and Language |
Sort Story: Sorting Jumbled Images and Captions into Stories | Temporal common sense has applications in AI tasks such as QA, multi-document
summarization, and human-AI communication. We propose the task of sequencing --
given a jumbled set of aligned image-caption pairs that belong to a story, the
task is to sort them such that the output sequence forms a coherent story. We
present multiple approaches, via unary (position) and pairwise (order)
predictions, and their ensemble-based combinations, achieving strong results on
this task. We use both text-based and image-based features, which provide
complementary improvements. Using qualitative examples, we demonstrate that our
models have learnt interesting aspects of temporal common sense.
| 2016 | Computation and Language |
Interactive Semantic Featuring for Text Classification | In text classification, dictionaries can be used to define
human-comprehensible features. We propose an improvement to dictionary features
called smoothed dictionary features. These features recognize document contexts
instead of n-grams. We describe a principled methodology to solicit dictionary
features from a teacher, and present results showing that models built using
these human-comprehensible features are competitive with models trained with
Bag of Words features.
| 2016 | Computation and Language |
A Sentence Compression Based Framework to Query-Focused Multi-Document
Summarization | We consider the problem of using sentence compression techniques to
facilitate query-focused multi-document summarization. We present a
sentence-compression-based framework for the task, and design a series of
learning-based compression models built on parse trees. An innovative beam
search decoder is proposed to efficiently find highly probable compressions.
Under this framework, we show how to integrate various indicative metrics such
as linguistic motivation and query relevance into the compression process by
deriving a novel formulation of a compression scoring function. Our best model
achieves statistically significant improvements over state-of-the-art systems
on several metrics (e.g. 8.0% and 5.4% improvements in ROUGE-2 on the DUC 2006
and 2007 summarization tasks, respectively).
| 2016 | Computation and Language |
Evaluation method of word embedding by roots and affixes | Word embedding has been shown to be remarkably effective in a lot of Natural
Language Processing tasks. However, existing models still have a couple of
limitations in interpreting the dimensions of word vectors. In this paper, we
provide a new approach---the roots and affixes model (RAAM)---to interpret them
from the intrinsic structures of natural language. It can also be used as an
evaluation measure of the quality of word embedding. We introduce information
entropy into our model and divide the dimensions into two categories, just like
roots and affixes in lexical semantics, and then consider each category as a
whole rather than individually. We experimented with the English Wikipedia
corpus. Our results show that there is a negative linear relation between the
two attributes and a high positive correlation between our model and downstream
semantic evaluation tasks.
| 2016 | Computation and Language |
Issues in evaluating semantic spaces using word analogies | The offset method for solving word analogies has become a standard evaluation
tool for vector-space semantic models: it is considered desirable for a space
to represent semantic relations as consistent vector offsets. We show that the
method's reliance on cosine similarity conflates offset consistency with
largely irrelevant neighborhood structure, and propose simple baselines that
should be used to improve the utility of the method in vector space evaluation.
| 2016 | Computation and Language |
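For reference, the offset method discussed in the abstract above answers an analogy a:b :: c:? by picking the vocabulary word whose vector is most cosine-similar to b - a + c. A minimal NumPy sketch follows; the vocabulary and vectors are placeholders.

```python
import numpy as np

def solve_analogy(a, b, c, vocab, vectors):
    """Return the word d maximizing cos(vec(d), vec(b) - vec(a) + vec(c)),
    excluding the three query words themselves."""
    target = vectors[vocab.index(b)] - vectors[vocab.index(a)] + vectors[vocab.index(c)]
    target /= np.linalg.norm(target)
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ target                 # cosine similarity to every word
    for w in (a, b, c):                  # standard practice: exclude query words
        sims[vocab.index(w)] = -np.inf
    return vocab[int(np.argmax(sims))]

# toy 2-D example where the offset structure holds by construction
vocab = ["king", "queen", "man", "woman"]
vectors = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.1]])
print(solve_analogy("man", "woman", "king", vocab, vectors))  # -> "queen"
```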
The emotional arcs of stories are dominated by six basic shapes | Advances in computing power, natural language processing, and digitization of
text now make it possible to study a culture's evolution through its texts
using a "big data" lens. Our ability to communicate relies in part upon a
shared emotional experience, with stories often following distinct emotional
trajectories and forming patterns that are meaningful to us. Here, by
classifying the emotional arcs for a filtered subset of 1,327 stories from
Project Gutenberg's fiction collection, we find a set of six core emotional
arcs which form the essential building blocks of complex emotional
trajectories. We strengthen our findings by separately applying matrix
decomposition, supervised learning, and unsupervised learning. For each of
these six core emotional arcs, we examine the closest characteristic stories in
publication today and find that particular emotional arcs enjoy greater
success, as measured by downloads.
| 2016 | Computation and Language |
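One of the three techniques named in the abstract above, matrix decomposition, can be pictured with a short sketch: stack per-book sentiment time series into a matrix and take its SVD, so that the leading right-singular vectors play the role of candidate arc shapes. The data here are random placeholders, not the Project Gutenberg arcs.

```python
import numpy as np

# rows = books, columns = positions in the text (sentiment sampled on a common grid)
rng = np.random.default_rng(1)
arcs = rng.normal(size=(200, 100))        # placeholder sentiment time series
arcs -= arcs.mean(axis=1, keepdims=True)  # center each arc

# SVD: right-singular vectors are orthogonal "modes" of emotional arcs
U, S, Vt = np.linalg.svd(arcs, full_matrices=False)
core_modes = Vt[:6]                       # analogous to a small set of core shapes
explained = (S[:6] ** 2) / (S ** 2).sum() # variance captured by each mode
print(core_modes.shape, explained.round(3))
```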
Sequential Convolutional Neural Networks for Slot Filling in Spoken
Language Understanding | We investigate the usage of convolutional neural networks (CNNs) for the slot
filling task in spoken language understanding. We propose a novel CNN
architecture for sequence labeling which takes into account the previous
context words with preserved order information and pays special attention to
the current word with its surrounding context. Moreover, it combines the
information from the past and the future words for classification. Our proposed
CNN architecture outperforms even the previously best ensembling recurrent
neural network model and achieves state-of-the-art results with an F1-score of
95.61% on the ATIS benchmark dataset without using any additional linguistic
knowledge and resources.
| 2016 | Computation and Language |
Efficient Parallel Learning of Word2Vec | Since its introduction, Word2Vec and its variants are widely used to learn
semantics-preserving representations of words or entities in an embedding
space, which can be used to produce state-of-art results for various Natural
Language Processing tasks. Existing implementations aim to learn efficiently by
running multiple threads in parallel while operating on a single model in
shared memory, ignoring incidental memory update collisions. We show that these
collisions can degrade the efficiency of parallel learning, and propose a
straightforward caching strategy that improves the efficiency by a factor of 4.
| 2016 | Computation and Language |
Unsupervised Topic Modeling Approaches to Decision Summarization in
Spoken Meetings | We present a token-level decision summarization framework that utilizes the
latent topic structures of utterances to identify "summary-worthy" words.
Concretely, a series of unsupervised topic models is explored and experimental
results show that fine-grained topic models, which discover topics at the
utterance-level rather than the document-level, can better identify the gist of
the decision-making process. Moreover, our proposed token-level summarization
approach, which is able to remove redundancies within utterances, outperforms
existing utterance ranking based summarization methods. Finally, context
information is also investigated to add additional relevant information to the
summary.
| 2016 | Computation and Language |
Focused Meeting Summarization via Unsupervised Relation Extraction | We present a novel unsupervised framework for focused meeting summarization
that views the problem as an instance of relation extraction. We adapt an
existing in-domain relation learner (Chen et al., 2011) by exploiting a set of
task-specific constraints and features. We evaluate the approach on a decision
summarization task and show that it outperforms unsupervised utterance-level
extractive summarization baselines as well as an existing generic
relation-extraction-based summarization method. Moreover, our approach produces
summaries competitive with those generated by supervised methods in terms of
the standard ROUGE score.
| 2016 | Computation and Language |
Corpus-level Fine-grained Entity Typing Using Contextual Information | This paper addresses the problem of corpus-level entity typing, i.e.,
inferring from a large corpus that an entity is a member of a class such as
"food" or "artist". The application of entity typing we are interested in is
knowledge base completion, specifically, to learn which classes an entity is a
member of. We propose FIGMENT to tackle this problem. FIGMENT is
embedding-based and combines (i) a global model that scores based on aggregated
contextual information of an entity and (ii) a context model that first scores
the individual occurrences of an entity and then aggregates the scores. In our
evaluation, FIGMENT strongly outperforms an approach to entity typing that
relies on relations obtained by an open information extraction system.
| 2016 | Computation and Language |
Intrinsic Subspace Evaluation of Word Embedding Representations | We introduce a new methodology for intrinsic evaluation of word
representations. Specifically, we identify four fundamental criteria based on
the characteristics of natural language that pose difficulties to NLP systems;
and develop tests that directly show whether or not representations contain the
subspaces necessary to satisfy these criteria. Current intrinsic evaluations
are mostly based on the overall similarity or full-space similarity of words
and thus view vector representations as points. We show the limits of these
point-based intrinsic evaluations. We apply our evaluation methodology to the
comparison of a count vector model and several neural network models and
demonstrate important properties of these models.
| 2016 | Computation and Language |
Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However,
to reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU.
| 2016 | Computation and Language |
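As a small illustration of the word-level knowledge distillation baseline discussed in the abstract above (the student is trained toward the teacher's per-token output distribution), here is a NumPy sketch of the soft-target cross-entropy for one sentence; shapes and names are invented for the example.

```python
import numpy as np

def word_level_kd_loss(student_logits, teacher_probs):
    """Cross-entropy of the student against the teacher's distribution,
    averaged over the target positions of one sentence.
    student_logits: (seq_len, vocab), teacher_probs: (seq_len, vocab)."""
    # log-softmax of the student, computed stably
    shifted = student_logits - student_logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -(teacher_probs * log_probs).sum(axis=-1).mean()

# toy: 3 target positions, vocabulary of 5 words
rng = np.random.default_rng(2)
student_logits = rng.normal(size=(3, 5))
teacher_logits = rng.normal(size=(3, 5))
teacher_probs = np.exp(teacher_logits) / np.exp(teacher_logits).sum(axis=-1, keepdims=True)
print(word_level_kd_loss(student_logits, teacher_probs))
```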
Word sense disambiguation: a complex network approach | In recent years, concepts and methods of complex networks have been employed
to tackle the word sense disambiguation (WSD) task by representing words as
nodes, which are connected if they are semantically similar. Despite the
increasing number of studies carried out with such models, most of them use
networks just to represent the data, while the pattern recognition on the
attribute space is performed using traditional learning techniques. In other
words, the structural relationships between words have not been explicitly used
in the pattern recognition process. In addition, only a few investigations have
probed the suitability of representations based on bipartite networks and
graphs (bigraphs) for the problem, as many approaches consider all possible
links between words. In this context, we assess the relevance of a bipartite
network model representing both feature words (i.e. the words characterizing
the context) and target (ambiguous) words to solve ambiguities in written
texts. Here, we focus on the semantical relationships between these two types
of words, disregarding the relationships between feature words. In particular, the
proposed method not only serves to represent texts as graphs, but also
constructs a structure on which the discrimination of senses is accomplished.
Our results revealed that the proposed learning algorithm in such bipartite
networks provides excellent results mostly when topical features are employed
to characterize the context. Surprisingly, our method even outperformed the
support vector machine algorithm in particular cases, with the advantage of
being robust even if a small training dataset is available. Taken together, the
results obtained here show that the proposed representation/classification
method might be useful to improve the semantical characterization of written
texts.
| 2018 | Computation and Language |
Bidirectional Recurrent Neural Networks for Medical Event Detection in
Electronic Health Records | Sequence labeling for extraction of medical events and their attributes from
unstructured text in Electronic Health Record (EHR) notes is a key step towards
semantic understanding of EHRs. It has important applications in health
informatics including pharmacovigilance and drug surveillance. The state of the
art supervised machine learning models in this domain are based on Conditional
Random Fields (CRFs) with features calculated from fixed context windows. In
this application, we explored various recurrent neural network frameworks and
show that they significantly outperformed the CRF models.
| 2016 | Computation and Language |
Summarizing Decisions in Spoken Meetings | This paper addresses the problem of summarizing decisions in spoken meetings:
our goal is to produce a concise {\it decision abstract} for each meeting
decision. We explore and compare token-level and dialogue act-level automatic
summarization methods using both unsupervised and supervised learning
frameworks. In the supervised summarization setting, and given true clusterings
of decision-related utterances, we find that token-level summaries that employ
discourse context can approach an upper bound for decision abstracts derived
directly from dialogue acts. In the unsupervised summarization setting, we find
that summaries based on unsupervised partitioning of decision-related
utterances perform comparably to those based on partitions generated using
supervised techniques (0.22 ROUGE-F1 using LDA-based topic models vs. 0.23
using SVMs).
| 2016 | Computation and Language |
Leveraging Semantic Web Search and Browse Sessions for Multi-Turn Spoken
Dialog Systems | Training statistical dialog models in spoken dialog systems (SDS) requires
large amounts of annotated data. The lack of scalable methods for data mining
and annotation poses a significant hurdle for state-of-the-art statistical
dialog managers. This paper presents an approach that directly leverages
billions of web search and browse sessions to overcome this hurdle. The key
insight is that task completion through web search and browse sessions is (a)
predictable and (b) generalizes to spoken dialog task completion. The new
method automatically mines behavioral search and browse patterns from web logs
and translates them into spoken dialog models. We experiment with naturally
occurring spoken dialogs and large scale web logs. Our session-based models
outperform the state-of-the-art method for the entity extraction task in SDS. We
also achieve better performance for both entity and relation extraction on web
search queries when compared with nontrivial baselines.
| 2016 | Computation and Language |
Learning for Biomedical Information Extraction: Methodological Review of
Recent Advances | Biomedical information extraction (BioIE) is important to many applications,
including clinical decision support, integrative biology, and
pharmacovigilance, and has therefore been an active research area. Unlike
existing reviews covering a holistic view of BioIE, this review focuses mainly
on recent advances in learning-based approaches, systematically summarizing
them into different aspects of methodological development. In addition, we dive
into open information extraction and deep learning, two emerging and
influential techniques, and envision the next generation of BioIE.
| 2016 | Computation and Language |
Functional Distributional Semantics | Vector space models have become popular in distributional semantics, despite
the challenges they face in capturing various semantic phenomena. We propose a
novel probabilistic framework which draws on both formal semantics and recent
advances in machine learning. In particular, we separate predicates from the
entities they refer to, allowing us to perform Bayesian inference based on
logical forms. We describe an implementation of this framework using a
combination of Restricted Boltzmann Machines and feedforward neural networks.
Finally, we demonstrate the feasibility of this approach by training it on a
parsed corpus and evaluating it on established similarity datasets.
| 2016 | Computation and Language |
This before That: Causal Precedence in the Biomedical Domain | Causal precedence between biochemical interactions is crucial in the
biomedical domain, because it transforms collections of individual
interactions, e.g., bindings and phosphorylations, into the causal mechanisms
needed to inform meaningful search and inference. Here, we analyze causal
precedence in the biomedical domain as distinct from open-domain, temporal
precedence. First, we describe a novel, hand-annotated text corpus of causal
precedence in the biomedical domain. Second, we use this corpus to investigate
a battery of models of precedence, covering rule-based, feature-based, and
latent representation models. The highest-performing individual model achieved
a micro F1 of 43 points, approaching the best performers on the simpler
temporal-only precedence tasks. Feature-based and latent representation models
each outperform the rule-based models, but their performance is complementary
to one another. We apply a sieve-based architecture to capitalize on this lack
of overlap, achieving a micro F1 score of 46 points.
| 2016 | Computation and Language |
STransE: a novel embedding model of entities and relationships in
knowledge bases | Knowledge bases of real-world facts about entities and their relationships
are useful resources for a variety of natural language processing tasks.
However, because knowledge bases are typically incomplete, it is useful to be
able to perform link prediction or knowledge base completion, i.e., predict
whether a relationship not in the knowledge base is likely to be true. This
paper combines insights from several previous link prediction models into a new
embedding model STransE that represents each entity as a low-dimensional
vector, and each relation by two matrices and a translation vector. STransE is
a simple combination of the SE and TransE models, but it obtains better link
prediction performance on two benchmark datasets than previous embedding
models. Thus, STransE can serve as a new baseline for the more complex models
in the link prediction task.
| 2017 | Computation and Language |
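For concreteness, the STransE scoring function described above combines the two relation-specific matrices with the translation vector: a triple (h, r, t) is scored by the distance between W_{r,1} h + r and W_{r,2} t, with lower scores indicating more plausible triples. A NumPy sketch with made-up dimensions:

```python
import numpy as np

def stranse_score(h, t, W_r1, W_r2, r, norm=1):
    """STransE plausibility score for a triple: ||W_r1 @ h + r - W_r2 @ t||_norm.
    Lower scores indicate more plausible triples."""
    return np.linalg.norm(W_r1 @ h + r - W_r2 @ t, ord=norm)

# toy entities and relation with 4-dimensional embeddings
rng = np.random.default_rng(3)
h, t, r = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
W_r1, W_r2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(stranse_score(h, t, W_r1, W_r2, r))
```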
Evaluating Informal-Domain Word Representations With UrbanDictionary | Existing corpora for intrinsic evaluation are not targeted towards tasks in
informal domains such as Twitter or news comment forums. We want to test
whether a representation of informal words fulfills the promise of eliding
explicit text normalization as a preprocessing step. One possible evaluation
metric for such domains is the proximity of spelling variants. We propose how
such a metric might be computed and how a spelling variant dataset can be
collected using UrbanDictionary.
| 2016 | Computation and Language |
Topic Aware Neural Response Generation | We consider incorporating topic information into the sequence-to-sequence
framework to generate informative and interesting responses for chatbots. To
this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The
model utilizes topics to simulate the prior knowledge of humans that guides them to
form informative and interesting responses in conversation, and leverages the
topic information in generation by a joint attention mechanism and a biased
generation probability. The joint attention mechanism summarizes the hidden
vectors of an input message as context vectors by message attention,
synthesizes topic vectors by topic attention from the topic words of the
message obtained from a pre-trained LDA model, and lets these vectors jointly
affect the generation of words in decoding. To increase the possibility of
topic words appearing in responses, the model modifies the generation
probability of topic words by adding an extra probability item to bias the
overall distribution. Empirical study on both automatic evaluation metrics and
human annotations shows that TA-Seq2Seq can generate more informative and
interesting responses, and significantly outperform state-of-the-art
response generation models.
| 2016 | Computation and Language |
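The biased generation probability described in the abstract above can be pictured as adding an extra probability mass to topic words before sampling the next token. The following sketch is a simplified stand-in for the TA-Seq2Seq mechanism (which couples the bias to the topic attention); all names and values are invented.

```python
import numpy as np

def biased_generation_probs(decoder_probs, topic_word_ids, topic_word_probs):
    """Add an extra probability item for topic words and renormalize.
    decoder_probs: (vocab,) distribution from the decoder.
    topic_word_ids / topic_word_probs: topic words and their extra mass."""
    biased = decoder_probs.copy()
    biased[topic_word_ids] += topic_word_probs   # bias toward topic words
    return biased / biased.sum()                 # renormalize to a distribution

# toy vocabulary of 6 words; words 2 and 4 are topic words
decoder_probs = np.array([0.30, 0.25, 0.10, 0.20, 0.05, 0.10])
print(biased_generation_probs(decoder_probs, np.array([2, 4]), np.array([0.15, 0.10])))
```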
Predicting the Relative Difficulty of Single Sentences With and Without
Surrounding Context | The problem of accurately predicting relative reading difficulty across a set
of sentences arises in a number of important natural language applications,
such as finding and curating effective usage examples for intelligent language
tutoring systems. Yet while significant research has explored document- and
passage-level reading difficulty, the special challenges involved in assessing
aspects of readability for single sentences have received much less attention,
particularly when considering the role of surrounding passages. We introduce
and evaluate a novel approach for estimating the relative reading difficulty of
a set of sentences, with and without surrounding context. Using different sets
of lexical and grammatical features, we explore models for predicting pairwise
relative difficulty using logistic regression, and examine rankings generated
by aggregating pairwise difficulty labels using a Bayesian rating system to
form a final ranking. We also compare rankings derived for sentences assessed
with and without context, and find that contextual features can help predict
differences in relative difficulty judgments across these two conditions.
| 2016 | Computation and Language |
Network-Efficient Distributed Word2vec Training System for Large
Vocabularies | Word2vec is a popular family of algorithms for unsupervised training of dense
vector representations of words on large text corpuses. The resulting vectors
have been shown to capture semantic relationships among their corresponding
words, and have shown promise in reducing a number of natural language
processing (NLP) tasks to mathematical operations on these vectors. While
heretofore applications of word2vec have centered around vocabularies with a
few million words, wherein the vocabulary is the set of words for which vectors
are simultaneously trained, novel applications are emerging in areas outside of
NLP with vocabularies comprising several hundred million words. Existing word2vec
training systems are impractical for training such large vocabularies as they
either require that the vectors of all vocabulary words be stored in the memory
of a single server or suffer unacceptable training latency due to massive
network data transfer. In this paper, we present a novel distributed, parallel
training system that enables unprecedented practical training of vectors for
vocabularies with several hundred million words on a shared cluster of commodity
servers, using far less network traffic than the existing solutions. We
evaluate the proposed system on a benchmark dataset, showing that the quality
of vectors does not degrade relative to non-distributed training. Finally, for
several quarters, the system has been deployed for the purpose of matching
queries to ads in Gemini, the sponsored search advertising platform at Yahoo,
resulting in significant improvement of business metrics.
| 2016 | Computation and Language |
SelQA: A New Benchmark for Selection-based Question Answering | This paper presents a new selection-based question answering dataset, SelQA.
The dataset consists of questions generated through crowdsourcing and
sentence-length answers drawn from the ten most prevalent topics in the English
Wikipedia. We introduce a corpus annotation scheme that enhances the generation
of large, diverse, and challenging datasets by explicitly aiming to reduce word
co-occurrences between the question and answers. Our annotation scheme is
composed of a series of crowdsourcing tasks with a view to more effectively
utilize crowdsourcing in the creation of question answering datasets in various
domains. Several systems are compared on the tasks of answer sentence selection
and answer triggering, providing strong baseline results for future work to
improve upon.
| 2016 | Computation and Language |
Hierarchical Neural Language Models for Joint Representation of
Streaming Documents and their Content | We consider the problem of learning distributed representations for documents
in data streams. The documents are represented as low-dimensional vectors and
are jointly learned with distributed vector representations of word tokens
using a hierarchical framework with two embedded neural language models. In
particular, we exploit the context of documents in streams and use one of the
language models to model the document sequences, and the other to model word
sequences within them. The models learn continuous vector representations for
both word tokens and documents such that semantically similar documents and
words are close in a common vector space. We discuss extensions to our model,
which can be applied to personalized recommendation and social relationship
mining by adding further user layers to the hierarchy, thus learning
user-specific vectors to represent individual preferences. We validated the
learned representations on a public movie rating data set from MovieLens, as
well as on a large-scale Yahoo News dataset comprising three months of user
activity logs collected on Yahoo servers. The results indicate that the
proposed model can learn useful representations of both documents and word
tokens, outperforming the current state-of-the-art by a large margin.
| 2016 | Computation and Language |
Recurrent Neural Networks for Dialogue State Tracking | This paper discusses models for dialogue state tracking using recurrent
neural networks (RNN). We present experiments on the standard dialogue state
tracking (DST) dataset, DSTC2. On the one hand, RNN models have become the
state-of-the-art models in DST; on the other hand, most state-of-the-art models
are only turn-based and require dataset-specific preprocessing (e.g.
DSTC2-specific) in order to achieve such results. We implemented two architectures which can be
used in incremental settings and require almost no preprocessing. We compare
their performance to the benchmarks on DSTC2 and discuss their properties. With
only trivial preprocessing, the performance of our models is close to the
state-of-the-art results.
| 2016 | Computation and Language |
"Show me the cup": Reference with Continuous Representations | One of the most basic functions of language is to refer to objects in a
shared scene. Modeling reference with continuous representations is challenging
because it requires individuation, i.e., tracking and distinguishing an
arbitrary number of referents. We introduce a neural network model that, given
a definite description and a set of objects represented by natural images,
points to the intended object if the expression has a unique referent, or
indicates a failure, if it does not. The model, directly trained on reference
acts, is competitive with a pipeline manually engineered to perform the same
task, both when referents are purely visual, and when they are characterized by
a combination of visual and linguistic properties.
| 2017 | Computation and Language |
Generation and Pruning of Pronunciation Variants to Improve ASR Accuracy | Speech recognition, especially name recognition, is widely used in phone
services such as company directory dialers, stock quote providers or location
finders. It is usually challenging due to pronunciation variations. This paper
proposes an efficient and robust data-driven technique which automatically
learns acceptable word pronunciations and updates the pronunciation dictionary
to build a better lexicon without affecting recognition of other words similar
to the target word. It generalizes well on datasets with various sizes, and
reduces the error rate on a database with 13000+ human names by 42%, compared
to a baseline with regular dictionaries already covering canonical
pronunciations of 97%+ words in names, plus a well-trained
spelling-to-pronunciation (STP) engine.
| 2016 | Computation and Language |
Greedy, Joint Syntactic-Semantic Parsing with Stack LSTMs | We present a transition-based parser that jointly produces syntactic and
semantic dependencies. It learns a representation of the entire algorithm
state, using stack long short-term memories. Our greedy inference algorithm
runs in linear time, including feature extraction. On the CoNLL 2008--9 English shared
tasks, we obtain the best published parsing performance among models that
jointly learn syntax and semantics.
| 2018 | Computation and Language |
A Distributional Semantics Approach to Implicit Language Learning | In the present paper we show that distributional information is particularly
important when considering concept availability under implicit language
learning conditions. Based on results from different behavioural experiments we
argue that the implicit learnability of semantic regularities depends on the
degree to which the relevant concept is reflected in language use. In our
simulations, we train a Vector-Space model on either an English or a Chinese
corpus and then feed the resulting representations to a feed-forward neural
network. The task of the neural network was to find a mapping between the word
representations and the novel words. Using datasets from four behavioural
experiments, which used different semantic manipulations, we were able to
obtain learning patterns very similar to those obtained by humans.
| 2016 | Computation and Language |
Optimising The Input Window Alignment in CD-DNN Based Phoneme
Recognition for Low Latency Processing | We present a systematic analysis on the performance of a phonetic recogniser
when the window of input features is not symmetric with respect to the current
frame. The recogniser is based on Context Dependent Deep Neural Networks
(CD-DNNs) and Hidden Markov Models (HMMs). The objective is to reduce the
latency of the system by reducing the number of future feature frames required
to estimate the current output. Our tests performed on the TIMIT database show
that the performance does not degrade when the input window is shifted up to 5
frames in the past compared to common practice (no future frame). This
corresponds to improving the latency by 50 ms in our settings. Our tests also
show that the best results are not obtained with the symmetric window commonly
employed, but with an asymmetric window with eight past and two future context
frames, although this observation should be confirmed on other data sets. The
reduction in latency suggested by our results is critical for specific
applications such as real-time lip synchronisation for tele-presence, but may
also be beneficial in general applications to improve the lag in human-machine
spoken interaction.
| 2016 | Computation and Language |
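To make the asymmetric input window from the abstract above concrete, the sketch below stacks eight past frames, the current frame, and two future frames into a single network input vector (edge frames are handled by clamping); the frame dimensions are placeholders, not the paper's feature setup.

```python
import numpy as np

def stack_asymmetric_window(features, t, n_past=8, n_future=2):
    """Stack frames [t-n_past, ..., t, ..., t+n_future] into one input vector.
    features: (n_frames, feat_dim) array; out-of-range indices are clamped."""
    n_frames = features.shape[0]
    idx = np.clip(np.arange(t - n_past, t + n_future + 1), 0, n_frames - 1)
    return features[idx].reshape(-1)

# toy utterance: 100 frames of 40-dimensional filterbank-like features
feats = np.random.default_rng(4).normal(size=(100, 40))
x = stack_asymmetric_window(feats, t=50)
print(x.shape)  # (8 + 1 + 2) * 40 = (440,)
```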
Learning Concept Taxonomies from Multi-modal Data | We study the problem of automatically building hypernym taxonomies from
textual and visual data. Previous works in taxonomy induction generally ignore
the increasingly prominent visual data, which encode important perceptual
semantics. Instead, we propose a probabilistic model for taxonomy induction by
jointly leveraging text and images. To avoid hand-crafted feature engineering,
we design end-to-end features based on distributed representations of images
and words. The model is discriminatively trained given a small set of existing
ontologies and is capable of building full taxonomies from scratch for a
collection of unseen conceptual label items with associated images. We evaluate
our model and features on the WordNet hierarchies, where our system outperforms
previous approaches by a large gap.
| 2016 | Computation and Language |
Relation extraction from clinical texts using domain invariant
convolutional neural network | In recent years extracting relevant information from biomedical and clinical
texts such as research articles, discharge summaries, or electronic health
records have been a subject of many research efforts and shared challenges.
Relation extraction is the process of detecting and classifying the semantic
relation among entities in a given piece of texts. Existing models for this
task in biomedical domain use either manually engineered features or kernel
methods to create feature vector. These features are then fed to classifier for
the prediction of the correct class. It turns out that the results of these
methods are highly dependent on quality of user designed features and also
suffer from curse of dimensionality. In this work we focus on extracting
relations from clinical discharge summaries. Our main objective is to exploit
the power of convolution neural network (CNN) to learn features automatically
and thus reduce the dependency on manual feature engineering. We evaluate
performance of the proposed model on i2b2-2010 clinical relation extraction
challenge dataset. Our results indicate that convolution neural network can be
a good model for relation exaction in clinical text without being dependent on
expert's knowledge on defining quality features.
| 2016 | Computation and Language |
Recurrent neural network models for disease name recognition using
domain invariant features | Hand-crafted features based on linguistic and domain-knowledge play crucial
role in determining the performance of disease name recognition systems. Such
methods are further limited by the scope of these features or in other words,
their ability to cover the contexts or word dependencies within a sentence. In
this work, we focus on reducing such dependencies and propose a
domain-invariant framework for the disease name recognition task. In
particular, we propose various end-to-end recurrent neural network (RNN) models
for the tasks of disease name recognition and their classification into four
pre-defined categories. We also utilize convolution neural network (CNN) in
cascade of RNN to get character-based embedded features and employ it with
word-embedded features in our model. We compare our models with the
state-of-the-art results for the two tasks on NCBI disease dataset. Our results
for the disease mention recognition task indicate that state-of-the-art
performance can be obtained without relying on feature engineering. Further the
proposed models obtained improved performance on the classification task of
disease names.
| 2016 | Computation and Language |
Learning Crosslingual Word Embeddings without Bilingual Corpora | Crosslingual word embeddings represent lexical items from different languages
in the same vector space, enabling transfer of NLP tools. However, previous
attempts had expensive resource requirements, difficulty incorporating
monolingual data or were unable to handle polysemy. We address these drawbacks
in our method which takes advantage of a high coverage dictionary in an EM
style training algorithm over monolingual corpora in two languages. Our model
achieves state-of-the-art performance on the bilingual lexicon induction task,
exceeding models that use large bilingual corpora, and competitive results on
the monolingual word similarity and cross-lingual document classification tasks.
| 2016 | Computation and Language |
Neural Network-based Word Alignment through Score Aggregation | We present a simple neural network for word alignment that builds source and
target word window representations to compute alignment scores for sentence
pairs. To enable unsupervised training, we use an aggregation operation that
summarizes the alignment scores for a given target word. A soft-margin
objective increases scores for true target words while decreasing scores for
target words that are not present. Compared to the popular Fast Align model,
our approach improves alignment accuracy by 7 AER on English-Czech, by 6 AER on
Romanian-English and by 1.7 AER on English-French alignment.
| 2016 | Computation and Language |
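One plausible reading of the aggregation plus soft-margin objective sketched in the abstract above (not necessarily the authors' exact formulation): per-target-word alignment scores are aggregated over source positions, and the loss pushes aggregated scores up for words present in the target sentence and down for sampled absent words. A NumPy illustration with invented shapes:

```python
import numpy as np

def soft_margin_alignment_loss(scores, negative_scores):
    """scores: (n_target_words, n_source_words) alignment scores for words
    actually in the target sentence; negative_scores: same shape for sampled
    words not present. Aggregation here is a max over source positions."""
    softplus = lambda x: np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)  # stable log(1 + e^x)
    pos = scores.max(axis=1)            # aggregated score per true target word
    neg = negative_scores.max(axis=1)   # aggregated score per sampled negative word
    return softplus(-pos).sum() + softplus(neg).sum()

rng = np.random.default_rng(5)
print(soft_margin_alignment_loss(rng.normal(size=(6, 8)), rng.normal(size=(6, 8))))
```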
Exploring Prediction Uncertainty in Machine Translation Quality
Estimation | Machine Translation Quality Estimation is a notoriously difficult task, which
lessens its usefulness in real-world translation environments. Such scenarios
can be improved if quality predictions are accompanied by a measure of
uncertainty. However, models in this task are traditionally evaluated only in
terms of point estimate metrics, which do not take prediction uncertainty into
account. We investigate probabilistic methods for Quality Estimation that can
provide well-calibrated uncertainty estimates and evaluate them in terms of
their full posterior predictive distributions. We also show how this posterior
information can be useful in an asymmetric risk scenario, which aims to capture
typical situations in translation workflows.
| 2016 | Computation and Language |
SnapToGrid: From Statistical to Interpretable Models for Biomedical
Information Extraction | We propose an approach for biomedical information extraction that marries the
advantages of machine learning models, e.g., learning directly from data, with
the benefits of rule-based approaches, e.g., interpretability. Our approach
starts by training a feature-based statistical model, then converts this model
to a rule-based variant by converting its features to rules, and "snapping to
grid" the feature weights to discrete votes. In doing so, our proposal takes
advantage of the large body of work in machine learning, but it produces an
interpretable model, which can be directly edited by experts. We evaluate our
approach on the BioNLP 2009 event extraction task. Our results show that there
is a small performance penalty when converting the statistical model to rules,
but the gain in interpretability compensates for that: with minimal effort,
human experts improve this model to have similar performance to the statistical
model that served as starting point.
| 2016 | Computation and Language |
Representation of texts as complex networks: a mesoscopic approach | Statistical techniques that analyze texts, referred to as text analytics,
have departed from the use of simple word count statistics towards a new
paradigm. Text mining now hinges on a more sophisticated set of methods,
including the representations in terms of complex networks. While
well-established word-adjacency (co-occurrence) methods successfully grasp
syntactical features of written texts, they are unable to represent important
aspects of textual data, such as its topical structure, i.e. the sequence of
subjects developing at a mesoscopic level along the text. Such aspects are
often overlooked by current methodologies. In order to grasp the mesoscopic
characteristics of semantical content in written texts, we devised a network
model which is able to analyze documents in a multi-scale fashion. In the
proposed model, a limited number of adjacent paragraphs are represented as
nodes, which are connected whenever they share a minimum amount of semantical content. To
illustrate the capabilities of our model, we present, as a case example, a
qualitative analysis of "Alice's Adventures in Wonderland". We show that the
mesoscopic structure of a document, modeled as a network, reveals many semantic
traits of texts. Such an approach paves the way to a myriad of semantic-based
applications. In addition, our approach is illustrated in a machine learning
context, in which texts are classified as either real texts or randomized
instances.
| 2018 | Computation and Language |
HUME: Human UCCA-Based Evaluation of Machine Translation | Human evaluation of machine translation normally uses sentence-level measures
such as relative ranking or adequacy scales. However, these provide no insight
into possible errors, and do not scale well with sentence length. We argue for
a semantics-based evaluation, which captures what meaning components are
retained in the MT output, thus providing a more fine-grained analysis of
translation quality, and enabling the construction and tuning of
semantics-based MT. We present a novel human semantic evaluation measure, Human
UCCA-based MT Evaluation (HUME), building on the UCCA semantic representation
scheme. HUME covers a wider range of semantic phenomena than previous methods
and does not rely on semantic annotation of the potentially garbled MT output.
We experiment with four language pairs, demonstrating HUME's broad
applicability, and report good inter-annotator agreement rates and correlation
with human adequacy scores.
| 2016 | Computation and Language |
A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue
Systems | User simulation is essential for generating enough data to train a
statistical spoken dialogue system. Previous models for user simulation suffer
from several drawbacks, such as the inability to take dialogue history into
account, the need for a rigid structure to ensure coherent user behaviour, heavy
dependence on a specific domain, the inability to output several user
intentions during one dialogue turn, or the requirement of a summarized action
space for tractability. This paper introduces a data-driven user simulator
based on an encoder-decoder recurrent neural network. The model takes as input
a sequence of dialogue contexts and outputs a sequence of dialogue acts
corresponding to user intentions. The dialogue contexts include information
about the machine acts and the status of the user goal. We show on the Dialogue
State Tracking Challenge 2 (DSTC2) dataset that the sequence-to-sequence model
outperforms an agenda-based simulator and an n-gram simulator, according to
F-score. Furthermore, we show how this model can be used on the original action
space, thereby modelling user behaviour with finer granularity.
| 2016 | Computation and Language |
TensiStrength: Stress and relaxation magnitude detection for social
media texts | Computer systems need to be able to react to stress in order to perform
optimally on some tasks. This article describes TensiStrength, a system to
detect the strength of stress and relaxation expressed in social media text
messages. TensiStrength uses a lexical approach and a set of rules to detect
direct and indirect expressions of stress or relaxation, particularly in the
context of transportation. It is slightly more effective than a comparable
sentiment analysis program, although their similar performances occur despite
differences on almost half of the tweets gathered. The effectiveness of
TensiStrength depends on the nature of the tweets classified, with tweets that
are rich in stress-related terms being particularly problematic. Although
generic machine learning methods can give better performance than TensiStrength
overall, they exploit topic-related terms in a way that may be undesirable in
practical applications and that may not work as well in more focused contexts.
In conclusion, TensiStrength and generic machine learning approaches work well
enough to be practical choices for intelligent applications that need to take
advantage of stress information, and the decision about which to use depends on
the nature of the texts analysed and the purpose of the task.
| 2016 | Computation and Language |
Throwing fuel on the embers: Probability or Dichotomy, Cognitive or
Linguistic? | Prof. Robert Berwick's abstract for his forthcoming invited talk at the
ACL2016 workshop on Cognitive Aspects of Computational Language Learning
revives an ancient debate. Entitled "Why take a chance?", Berwick seems to
refer implicitly to Chomsky's critique of the statistical approach of Harris as
well as the currently dominant paradigms in CoNLL.
Berwick avoids Chomsky's use of "innate" but states that "the debate over the
existence of sophisticated mental grammars was settled with Chomsky's Logical
Structure of Linguistic Theory (1957/1975)", acknowledging that "this debate
has often been revived".
This paper agrees with the view that this debate has long since been settled,
but with the opposite outcome! Given the embers have not yet died away, and the
questions remain fundamental, perhaps it is appropriate to refuel the debate,
so I would like to join Bob in throwing fuel on this fire by reviewing the
evidence against the Chomskian position!
| 2016 | Computation and Language |
Sharing Network Parameters for Crosslingual Named Entity Recognition | Most state-of-the-art approaches for Named Entity Recognition rely on
hand-crafted features and annotated corpora. Recently, neural network based
models have been proposed which do not require hand-crafted features but still
require annotated corpora. However, such annotated corpora may not be available
for many languages. In this paper, we propose a neural network based model
which allows sharing the decoder as well as word and character level parameters
between two languages, thereby allowing a resource-fortunate language to aid a
resource-deprived language. Specifically, we focus on the case when limited
annotated corpora are available in one language ($L_1$) and abundant annotated
corpora are available in another language ($L_2$). Sharing the network
architecture and parameters between $L_1$ and $L_2$ leads to improved
performance in $L_1$. Further, our approach does not require any hand-crafted
features but instead directly learns meaningful feature representations from
the training data itself. We experiment with 4 language pairs and show that,
indeed, in a resource-constrained setup (less annotated corpora), a model
jointly trained with data from another language performs better than a model
trained only on the limited corpora in one language.
| 2016 | Computation and Language |
Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource | Word embeddings have recently seen a strong increase in interest as a result
of strong performance gains on a variety of tasks. However, most of this
research also underlined the importance of benchmark datasets, and the
difficulty of constructing these for a variety of language-specific tasks.
Still, many of the datasets used in these tasks could prove to be fruitful
linguistic resources, allowing for unique observations into language use and
variability. In this paper we demonstrate the performance of multiple types of
embeddings, created with both count and prediction-based architectures on a
variety of corpora, in two language-specific tasks: relation evaluation, and
dialect identification. For the latter, we compare unsupervised methods with a
traditional, hand-crafted dictionary. With this research, we provide the
embeddings themselves, the relation evaluation task benchmark for use in
further research, and demonstrate how the benchmarked embeddings prove a useful
unsupervised linguistic resource, effectively used in a downstream task.
| 2016 | Computation and Language |
Permutation Invariant Training of Deep Models for Speaker-Independent
Multi-talker Speech Separation | We propose a novel deep learning model, which supports permutation invariant
training (PIT), for speaker independent multi-talker speech separation,
commonly known as the cocktail-party problem. Different from most of the prior
arts that treat speech separation as a multi-class regression problem and the
deep clustering technique that considers it a segmentation (or clustering)
problem, our model optimizes for the separation regression error, ignoring the
order of mixing sources. This strategy cleverly solves the long-lasting label
permutation problem that has prevented progress on deep learning based
techniques for speech separation. Experiments on the equal-energy mixing setup
of a Danish corpus confirm the effectiveness of PIT. We believe improvements
built upon PIT can eventually solve the cocktail-party problem and enable
real-world adoption of, e.g., automatic meeting transcription and multi-party
human-computer interaction, where overlapping speech is common.
| 2018 | Computation and Language |
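The permutation-invariant training objective described above can be summarized in a few lines: compute the separation error under every assignment of model outputs to reference sources and train on the minimum. A NumPy sketch for a two-talker mixture (mask and feature details omitted, shapes invented):

```python
import itertools
import numpy as np

def pit_mse(estimates, references):
    """Minimum mean-squared separation error over all output-to-source
    permutations. estimates/references: (n_sources, n_frames, n_bins)."""
    n = estimates.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n)):
        err = np.mean((estimates[list(perm)] - references) ** 2)
        best = min(best, err)
    return best

# toy two-speaker example: the outputs happen to be in swapped order
rng = np.random.default_rng(6)
refs = rng.normal(size=(2, 50, 129))
ests = refs[::-1] + 0.01 * rng.normal(size=refs.shape)
print(pit_mse(ests, refs))  # small, because the swapped assignment is found
```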
Moving Toward High Precision Dynamical Modelling in Hidden Markov Models | Hidden Markov Model (HMM) is often regarded as the dynamical model of choice
in many fields and applications. It is also at the heart of most
state-of-the-art speech recognition systems since the 70's. However, from
Gaussian mixture models HMMs (GMM-HMM) to deep neural network HMMs (DNN-HMM),
the underlying Markovian chain of state-of-the-art models has not changed much.
The "left-to-right" topology is almost always employed because very few other
alternatives exist. In this paper, we propose that finely-tuned HMM topologies
are essential for precise temporal modelling and that this approach should be
investigated in state-of-the-art HMM systems. As such, we propose a
proof-of-concept framework for learning efficient topologies by pruning down
complex generic models. Speech recognition experiments that were conducted
indicate that complex time dependencies can be better learned by this approach
than with classical "left-to-right" models.
| 2,016 | Computation and Language |
Domain Adaptation for Neural Networks by Parameter Augmentation | We propose a simple domain adaptation method for neural networks in a
supervised setting. Supervised domain adaptation is a way of improving the
generalization performance on the target domain by using the source domain
dataset, assuming that both of the datasets are labeled. Recently, recurrent
neural networks have been shown to be successful on a variety of NLP tasks such
as caption generation; however, the existing domain adaptation techniques are
limited to (1) tuning the model parameters with the target dataset after
training with the source dataset, or (2) designing the network to have dual outputs,
one for the source domain and the other for the target domain. Reformulating
the idea of the domain adaptation technique proposed by Daume (2007), we
propose a simple domain adaptation method, which can be applied to neural
networks trained with a cross-entropy loss. On captioning datasets, we show
performance improvements over other domain adaptation methods.
| 2,016 | Computation and Language |
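For readers unfamiliar with the Daume (2007) technique that the entry above reformulates, here is a hedged sketch of the original feature-augmentation idea: each example gets a shared copy of its features plus a domain-specific copy, with the other domain's slot zeroed. The function name and shapes are illustrative, not from the paper.

```python
import numpy as np

def augment(x, domain, num_domains=2):
    """Return [shared copy, domain-0 copy, domain-1 copy, ...] for a feature vector x."""
    d = x.shape[0]
    out = np.zeros(d * (num_domains + 1))
    out[:d] = x                         # shared (general-domain) copy
    start = d * (domain + 1)
    out[start:start + d] = x            # copy visible only to this domain
    return out

x = np.array([1.0, 0.5, 0.0])
print(augment(x, domain=0))             # source example: [x, x, 0]
print(augment(x, domain=1))             # target example: [x, 0, x]
```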
Text comparison using word vector representations and dimensionality
reduction | This paper describes a technique to compare large text sources using word
vector representations (word2vec) and dimensionality reduction (t-SNE) and how
it can be implemented using Python. The technique provides a bird's-eye view of
text sources, e.g. text summaries and their source material, and enables users
to explore text sources like a geographical map. Word vector representations
capture many linguistic properties such as gender, tense, plurality and even
semantic concepts like "capital city of". Using dimensionality reduction, a 2D
map can be computed where semantically similar words are close to each other.
The technique uses the word2vec model from the gensim Python library and t-SNE
from scikit-learn.
| 2,016 | Computation and Language |
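A minimal sketch of the pipeline described in the entry above, assuming a recent gensim (4.x) and scikit-learn; the toy corpus and hyperparameters are placeholders for real text sources.

```python
from gensim.models import Word2Vec
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]   # toy corpus
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=20)

words = list(model.wv.index_to_key)
vectors = model.wv[words]

# Project the word vectors to 2-D so semantically similar words land close together.
coords = TSNE(n_components=2, perplexity=2, init="random").fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))
plt.show()
```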
Context-Dependent Word Representation for Neural Machine Translation | We first observe a potential weakness of continuous vector representations of
symbols in neural machine translation. That is, the continuous vector
representation, or a word embedding vector, of a symbol encodes multiple
dimensions of similarity, equivalent to encoding more than one meaning of the
word. This has the consequence that the encoder and decoder recurrent networks
in neural machine translation need to spend a substantial amount of their
capacity in disambiguating source and target words based on the context which
is defined by a source sentence. Based on this observation, in this paper we
propose to contextualize the word embedding vectors using a nonlinear
bag-of-words representation of the source sentence. Additionally, we propose to
represent special tokens (such as numbers, proper nouns and acronyms) with
typed symbols to facilitate translating those words that are not well-suited to
be translated via continuous vectors. The experiments on En-Fr and En-De reveal
that the proposed approaches of contextualization and symbolization improve
the translation quality of neural machine translation systems significantly.
| 2,016 | Computation and Language |
Visualizing Natural Language Descriptions: A Survey | A natural language interface exploits the conceptual simplicity and
naturalness of the language to create a high-level user-friendly communication
channel between humans and machines. One of the promising applications of such
interfaces is generating visual interpretations of semantic content of a given
natural language that can be then visualized either as a static scene or a
dynamic animation. This survey discusses requirements and challenges of
developing such systems and reports on 26 graphical systems that exploit natural
language interfaces, addressing both artificial intelligence and
visualization aspects. This work serves as a frame of reference for researchers
and aims to enable further advances in the field.
| 2,016 | Computation and Language |
Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent
Unit for Summarization | In this work, we introduce temporal hierarchies to the sequence to sequence
(seq2seq) model to tackle the problem of abstractive summarization of
scientific articles. The proposed Multiple Timescale model of the Gated
Recurrent Unit (MTGRU) is implemented in the encoder-decoder setting to better
deal with the presence of multiple compositionalities in larger texts. The
proposed model is compared to the conventional RNN encoder-decoder, and the
results demonstrate that our model trains faster and shows significant
performance gains. The results also show that the temporal hierarchies help
improve the ability of seq2seq models to capture compositionalities better
without the presence of highly complex architectural hierarchies.
| 2,016 | Computation and Language |
Sequence to Backward and Forward Sequences: A Content-Introducing
Approach to Generative Short-Text Conversation | Using neural networks to generate replies in human-computer dialogue systems
has been attracting increasing attention over the past few years. However, the
performance is not satisfactory: the neural network tends to generate safe,
universally relevant replies which carry little meaning. In this paper, we
propose a content-introducing approach to neural network-based generative
dialogue systems. We first use pointwise mutual information (PMI) to predict a
noun as a keyword, reflecting the main gist of the reply. We then propose
seq2BF, a "sequence to backward and forward sequences" model, which generates a
reply containing the given keyword. Experimental results show that our approach
significantly outperforms traditional sequence-to-sequence models in terms of
human evaluation and the entropy measure, and that the predicted keyword can
appear at an appropriate position in the reply.
| 2,016 | Computation and Language |
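A hedged sketch of the keyword-prediction step mentioned in the entry above: score candidate reply nouns by their pointwise mutual information with the query words, estimated from co-occurrence counts, and keep the best one. The toy counts and the summing of per-word PMI scores are illustrative simplifications, not necessarily the authors' exact formulation.

```python
import math
from collections import Counter

query_word_count = Counter({"where": 900, "eat": 400})
noun_count = Counter({"restaurant": 300, "weather": 500})
cooc_count = Counter({("eat", "restaurant"): 120, ("where", "restaurant"): 80,
                      ("where", "weather"): 60})
total = 10000   # total number of observed (query word, reply noun) pairs

def pmi(q, n):
    p_joint = cooc_count[(q, n)] / total
    p_q = query_word_count[q] / total
    p_n = noun_count[n] / total
    return math.log(p_joint / (p_q * p_n)) if p_joint > 0 else float("-inf")

def predict_keyword(query, candidate_nouns):
    # Combine per-word scores by summation and keep the highest-scoring noun.
    return max(candidate_nouns, key=lambda n: sum(pmi(q, n) for q in query))

print(predict_keyword(["where", "eat"], ["restaurant", "weather"]))   # restaurant
```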
Modelling Context with User Embeddings for Sarcasm Detection in Social
Media | We introduce a deep neural network for automated sarcasm detection. Recent
work has emphasized the need for models to capitalize on contextual features,
beyond lexical and syntactic cues present in utterances. For example, different
speakers will tend to employ sarcasm regarding different subjects and, thus,
sarcasm detection models ought to encode such speaker information. Current
methods have achieved this by way of laborious feature engineering. By
contrast, we propose to automatically learn and then exploit user embeddings,
to be used in concert with lexical signals to recognize sarcasm. Our approach
does not require elaborate feature engineering (and concomitant data scraping);
fitting user embeddings requires only the text from their previous posts. The
experimental results show that our model outperforms a state-of-the-art
approach leveraging an extensive set of carefully crafted features.
| 2,016 | Computation and Language |
Learning when to trust distant supervision: An application to
low-resource POS tagging using cross-lingual projection | Cross-lingual projection of linguistic annotation suffers from many sources
of bias and noise, leading to unreliable annotations that cannot be used
directly. In this paper, we introduce a novel approach to sequence tagging that
learns to correct the errors from cross-lingual projection using an explicit
debiasing layer. This is framed as joint learning over two corpora, one tagged
with gold standard and the other with projected tags. We evaluated with only
1,000 tokens tagged with gold standard tags, along with more plentiful parallel
data. Our system equals or exceeds the state-of-the-art on eight simulated
low-resource settings, as well as two real low-resource languages, Malagasy and
Kinyarwanda.
| 2,016 | Computation and Language |
Target-Side Context for Discriminative Models in Statistical Machine
Translation | Discriminative translation models utilizing source context have been shown to
help statistical machine translation performance. We propose a novel extension
of this work using target context information. Surprisingly, we show that this
model can be efficiently integrated directly in the decoding process. Our
approach scales to large training data sizes and results in consistent
improvements in translation quality on four language pairs. We also provide an
analysis comparing the strengths of the baseline source-context model with our
extended source-context and target-context model and we show that our extension
allows us to better capture morphological coherence. Our work is freely
available as part of Moses.
| 2,016 | Computation and Language |
Temporal Topic Analysis with Endogenous and Exogenous Processes | We consider the problem of modeling temporal textual data taking endogenous
and exogenous processes into account. Such text documents arise in real world
applications, including job advertisements and economic news articles, which
are influenced by the fluctuations of the general economy. We propose a
hierarchical Bayesian topic model which imposes a "group-correlated"
hierarchical structure on the evolution of topics over time incorporating both
processes, and show that this model can be estimated using Markov chain Monte
Carlo sampling methods. We further demonstrate that this model captures the
intrinsic relationships between the topic distribution and the time-dependent
factors, and compare its performance with latent Dirichlet allocation (LDA) and
two other related models. The model is applied to two collections of documents
to illustrate its empirical performance: online job advertisements from
DirectEmployers Association and journalists' postings on BusinessInsider.com.
| 2,016 | Computation and Language |
Chains of Reasoning over Entities, Relations, and Text using Recurrent
Neural Networks | Our goal is to combine the rich multistep inference of symbolic logical
reasoning with the generalization capabilities of neural networks. We are
particularly interested in complex reasoning about entities and relations in
text and large-scale knowledge bases (KBs). Neelakantan et al. (2015) use RNNs
to compose the distributed semantics of multi-hop paths in KBs; however for
multiple reasons, the approach lacks accuracy and practicality. This paper
proposes three significant modeling advances: (1) we learn to jointly reason
about relations, entities, and entity-types; (2) we use neural attention
modeling to incorporate multiple paths; (3) we learn to share strength in a
single RNN that represents logical composition across all relations. On a
large-scale Freebase+ClueWeb prediction task, we achieve a 25% error reduction,
and a 53% error reduction on sparse relations due to shared strength. On chains
of reasoning in WordNet we reduce error in mean quantile by 84% versus previous
state-of-the-art. The code and data are available at
https://rajarshd.github.io/ChainsofReasoning
| 2,017 | Computation and Language |
Global Neural CCG Parsing with Optimality Guarantees | We introduce the first global recursive neural parsing model with optimality
guarantees during decoding. To support global features, we give up dynamic
programs and instead search directly in the space of all possible subtrees.
Although this space is exponentially large in the sentence length, we show it
is possible to learn an efficient A* parser. We augment existing parsing
models, which have informative bounds on the outside score, with a global model
that has loose bounds but only needs to model non-local phenomena. The global
model is trained with a new objective that encourages the parser to explore a
tiny fraction of the search space. The approach is applied to CCG parsing,
improving state-of-the-art accuracy by 0.4 F1. The parser finds the optimal
parse for 99.9% of held-out sentences, exploring on average only 190 subtrees.
| 2,016 | Computation and Language |
Extracting Formal Models from Normative Texts | Normative texts are documents based on the deontic notions of obligation,
permission, and prohibition. Our goal is to model such texts using the C-O
Diagram formalism, making them amenable to formal analysis, in particular
verifying that a text satisfies properties concerning causality of actions and
timing constraints. We present an experimental, semi-automatic aid to bridge
the gap between a normative text and its formal representation. Our approach
uses dependency trees combined with our own rules and heuristics for extracting
the relevant components. The resulting tabular data can then be converted into
a C-O Diagram.
| 2,016 | Computation and Language |
Guided Alignment Training for Topic-Aware Neural Machine Translation | In this paper, we propose an effective way for biasing the attention
mechanism of a sequence-to-sequence neural machine translation (NMT) model
towards the well-studied statistical word alignment models. We show that our
novel guided alignment training approach improves translation quality on
real-life e-commerce texts consisting of product titles and descriptions,
overcoming the problems posed by many unknown words and a large type/token
ratio. We also show that meta-data associated with input texts such as topic or
category information can significantly improve translation quality when used as
an additional signal to the decoder part of the network. With both novel
features, the BLEU score of the NMT system on a product title set improves from
18.6% to 21.3%. Even larger MT quality gains are obtained through domain
adaptation of a general domain NMT system to e-commerce data. The developed NMT
system also performs well on the IWSLT speech translation task, where an
ensemble of four variant systems outperforms the phrase-based baseline by 2.1%
BLEU absolute.
| 2,016 | Computation and Language |
Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore CPU,
and classify half a million sentences among 312K classes in less than a minute.
| 2,016 | Computation and Language |
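A hedged sketch using the open-source fastText Python bindings; the tiny inline training file, labels and hyperparameters are placeholders only. Training data is expected one example per line in the `__label__<class> <text>` format.

```python
import fasttext

# Write a tiny placeholder training file (real data would have many examples).
with open("train.txt", "w") as f:
    f.write("__label__positive great plot and wonderful acting\n")
    f.write("__label__negative dull story and far too long\n")

model = fasttext.train_supervised(
    input="train.txt",
    lr=0.5,
    epoch=5,
    wordNgrams=2,        # add bigram features, one of the "tricks" in the paper
)
print(model.predict("surprisingly good film"))
```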
Neural Name Translation Improves Neural Machine Translation | In order to control computational complexity, neural machine translation
(NMT) systems convert all rare words outside the vocabulary into a single unk
symbol. A previous solution (Luong et al., 2015) resorts to using multiple numbered
unks to learn the correspondence between source and target rare words. However,
test words unseen in the training corpus cannot be handled by this method,
and it also suffers from noisy word alignment. In this paper, we focus on a
major type of rare words -- named entities (NEs) -- and propose to translate them
with a character-level sequence-to-sequence model. The NE translation model is
further used to derive high quality NE alignment in the bilingual training
corpus. With the integration of NE translation and alignment modules, our NMT
system is able to surpass the baseline system by 2.9 BLEU points on the Chinese
to English task.
| 2,016 | Computation and Language |
Stock trend prediction using news sentiment analysis | The Efficient Market Hypothesis is a popular theory about stock prediction.
With its failure, much research has been carried out in the area of stock
prediction. This project takes non-quantifiable data, such as financial
news articles about a company, and predicts its future stock trend with news
sentiment classification. Assuming that news articles have an impact on the
stock market, this is an attempt to study the relationship between news and
stock trends. To show this, we created three different classification models
which depict the polarity of news articles as positive or negative. Observations
show that RF and SVM perform well in all types of testing. Naïve Bayes gives
good results, but not as good as the other two. Experiments are conducted to
evaluate various aspects of the proposed model and encouraging results are
obtained in all of the experiments. The accuracy of the prediction model is
more than 80%; compared with random labeling of news at 50% accuracy, the model
increases accuracy by 30 percentage points.
| 2,016 | Computation and Language |
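A minimal sketch of comparing the three classifiers mentioned in the entry above (RF, SVM, Naïve Bayes) on news-headline polarity with TF-IDF features; the tiny inline dataset is illustrative only and not the project's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = ["profits soar after record quarter", "shares plunge on weak guidance",
         "company beats earnings expectations", "regulator opens fraud probe"] * 10
labels = ["positive", "negative", "positive", "negative"] * 10

for name, clf in [("RF", RandomForestClassifier()),
                  ("SVM", LinearSVC()),
                  ("NB", MultinomialNB())]:
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipeline, texts, labels, cv=3)
    print(name, round(scores.mean(), 3))
```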
Sequence Training and Adaptation of Highway Deep Neural Networks | Highway deep neural network (HDNN) is a type of depth-gated feedforward
neural network, which has been shown to be easier to train with more hidden layers
and also generalise better compared to conventional plain deep neural networks
(DNNs). Previously, we investigated a structured HDNN architecture for speech
recognition, in which the two gate functions were tied across all the hidden
layers, and we were able to train a much smaller model without sacrificing the
recognition accuracy. In this paper, we carry on the study of this architecture
with sequence-discriminative training criterion and speaker adaptation
techniques on the AMI meeting speech recognition corpus. We show that these two
techniques improve speech recognition accuracy on top of the model trained with
the cross entropy criterion. Furthermore, we demonstrate that the two gate
functions that are tied across all the hidden layers are able to control the
information flow over the whole network, and we can achieve considerable
improvements by only updating these gate functions in both sequence training
and adaptation experiments.
| 2,017 | Computation and Language |
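A hedged PyTorch sketch of a single highway layer, the building block behind the HDNNs discussed in the entry above: a transform gate T mixes a nonlinear transformation H(x) with the untransformed input. In the structured HDNN the gate parameters would additionally be tied across all hidden layers; only one layer is shown here, with illustrative dimensions.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # H(x): the usual hidden transformation
        self.gate = nn.Linear(dim, dim)        # T(x): how much new information to pass

    def forward(self, x):
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        return t * h + (1.0 - t) * x           # gated mix of transformed and carried input

layer = HighwayLayer(16)
print(layer(torch.randn(4, 16)).shape)         # torch.Size([4, 16])
```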
Representing Verbs with Rich Contexts: an Evaluation on Verb Similarity | Several studies on sentence processing suggest that the mental lexicon keeps
track of the mutual expectations between words. Current DSMs, however,
represent context words as separate features, thereby losing important
information for word expectations, such as word interrelations. In this paper,
we present a DSM that addresses this issue by defining verb contexts as joint
syntactic dependencies. We test our representation in a verb similarity task on
two datasets, showing that joint contexts achieve performance comparable to
single dependencies or even better. Moreover, they are able to overcome the
data sparsity problem of joint feature spaces, in spite of the limited size of
our training corpus.
| 2,016 | Computation and Language |
Predicting and Understanding Law-Making with Word Vectors and an
Ensemble Model | Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015,
only 2,513 were enacted. We developed a machine learning approach to
forecasting the probability that any bill will become law. Starting in 2001
with the 107th Congress, we trained models on data from previous Congresses,
predicted all bills in the current Congress, and repeated until the 113th
Congress served as the test. For prediction we scored each sentence of a bill
with a language model that embeds legislative vocabulary into a
high-dimensional, semantic-laden vector space. This language representation
enables our investigation into which words increase the probability of
enactment for any topic. To test the relative importance of text and context,
we compared the text model to a context-only model that uses variables such as
whether the bill's sponsor is in the majority party. To test the effect of
changes to bills after their introduction on our ability to predict their final
outcome, we compared using the bill text and meta-data available at the time of
introduction with using the most recent data. At the time of introduction
context-only predictions outperform text-only, and with the newest data
text-only outperforms context-only. Combining text and context always performs
best. We conducted a global sensitivity analysis on the combined model to
determine important variables predicting enactment.
| 2,017 | Computation and Language |
Consensus Attention-based Neural Networks for Chinese Reading
Comprehension | Reading comprehension research has boomed recently in NLP. Several
institutes have released Cloze-style reading comprehension datasets, which
have greatly accelerated research on machine comprehension. In this work,
we first present Chinese reading comprehension datasets, which consist of a
People Daily news dataset and a Children's Fairy Tale (CFT) dataset. We also
propose a consensus attention-based neural network architecture to tackle the
Cloze-style reading comprehension problem, which aims to induce a consensus
attention over every word in the query. Experimental results show that the
proposed neural network significantly outperforms state-of-the-art
baselines on several public datasets. Furthermore, we set up a baseline for the
Chinese reading comprehension task, which we hope will speed up
future research.
| 2,018 | Computation and Language |
Collaborative Training of Tensors for Compositional Distributional
Semantics | Type-based compositional distributional semantic models present an
interesting line of research into functional representations of linguistic
meaning. One of the drawbacks of such models, however, is the lack of training
data required to train each word-type combination. In this paper we address
this by introducing training methods that share parameters between similar
words. We show that these methods enable zero-shot learning for words that have
no training data at all, as well as enabling construction of high-quality
tensors from very few training examples per word.
| 2,017 | Computation and Language |
Lexical Based Semantic Orientation of Online Customer Reviews and Blogs | The rapid increase in internet users, along with the growing power of online
review sites and social media, has given birth to sentiment analysis or opinion
mining, which aims at determining what other people think and comment. Sentiments
or opinions contain public-generated content about products, services, policies
and politics. People are usually interested in seeking the positive and negative
opinions, containing likes and dislikes, shared by users about features of a
particular product or service. This paper proposes a sentence-level, lexicon-based,
domain-independent sentiment classification method for different types of data
such as reviews and blogs. The proposed method is based on general lexicons,
i.e. WordNet, SentiWordNet, and user-defined lexical dictionaries for semantic
orientation. The relations and glosses of these dictionaries provide a solution
to the domain portability problem. The method performs better than word- and
text-level corpus-based machine learning methods for semantic orientation. The
results show the proposed method performs better, with precision of 87%
and 83% at the document and sentence levels respectively for online comments.
| 2,014 | Computation and Language |
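A hedged sketch of lexicon-based scoring with SentiWordNet through NLTK, a strong simplification of the method in the entry above: it averages positive-minus-negative scores of the first synset of each word. The example sentences are placeholders, and the NLTK corpora must be downloaded first.

```python
# Requires: nltk.download("wordnet"); nltk.download("sentiwordnet")
from nltk.corpus import sentiwordnet as swn

def sentence_polarity(sentence):
    scores = []
    for token in sentence.lower().split():
        synsets = list(swn.senti_synsets(token))
        if synsets:
            s = synsets[0]                     # crude: take the first (most common) sense
            scores.append(s.pos_score() - s.neg_score())
    return sum(scores) / len(scores) if scores else 0.0

print(sentence_polarity("the camera produces wonderful pictures"))
print(sentence_polarity("the battery life is terribly short"))
```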
Actionable and Political Text Classification using Word Embeddings and
LSTM | In this work, we apply word embeddings and neural networks with Long
Short-Term Memory (LSTM) to text classification problems, where the
classification criteria are decided by the context of the application. We
examine two applications in particular. The first is that of Actionability,
where we build models to classify social media messages from customers of
service providers as Actionable or Non-Actionable. We build models for over 30
different languages for actionability, and most of the models achieve accuracy
around 85%, with some reaching over 90% accuracy. We also show that using LSTM
neural networks with word embeddings vastly outperforms traditional techniques.
Second, we explore classification of messages with respect to political
leaning, where social media messages are classified as Democratic or
Republican. The model is able to classify messages with a high accuracy of
87.57%. As part of our experiments, we vary different hyperparameters of the
neural networks, and report the effect of such variation on the accuracy. These
actionability models have been deployed to production and help company agents
provide customer support by prioritizing which messages to respond to. The
model for political leaning has been released and made available for wider use.
| 2,016 | Computation and Language |
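A hedged Keras (TensorFlow) sketch of the architecture family described in the entry above: word embeddings feeding an LSTM with a sigmoid output for a binary decision such as Actionable vs. Non-Actionable. Vocabulary size, dimensions and the random toy data are placeholders.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, seq_len = 10000, 50
model = Sequential([
    Embedding(vocab_size, 128),       # word embeddings, learned or pre-initialised
    LSTM(64),                         # sequence encoder
    Dense(1, activation="sigmoid"),   # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy integer-encoded messages and labels standing in for real social media data.
x = np.random.randint(0, vocab_size, size=(32, seq_len))
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```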
Analysis of opinionated text for opinion mining | In sentiment analysis, the polarities of the opinions expressed on an
object/feature are determined to assess whether the sentiment of a sentence or
document is positive, negative, or neutral. Naturally, the object/feature is a
noun that refers to a product or a component of a product, let
us say, the "lens" in a camera, and opinions about it are expressed through
adjectives, verbs, adverbs and nouns themselves. Apart from such words,
other meta-information and diverse effective features are also going to play an
important role in influencing the sentiment polarity and contribute
significantly to the performance of the system. In this paper, some of the
associated information/meta-data are explored and investigated in the sentiment
text. Based on the analysis results presented here, there is scope for further
assessment and utilization of the meta-information as features in text
categorization, ranking text document, identification of spam documents and
polarity classification problems.
| 2,016 | Computation and Language |
Open Information Extraction | Open Information Extraction (Open IE) systems aim to obtain relation tuples
with highly scalable extraction that is portable across domains, by identifying a
variety of relation phrases and their arguments in arbitrary sentences. The
first generation of Open IE learns linear-chain models based on unlexicalized
features such as Part-of-Speech (POS) or shallow tags to label the intermediate
words between pairs of potential arguments, identifying extractable
relations. The second generation of Open IE is able
to extract instances of the most frequently observed relation types, such as
Verb, Noun and Prep, Verb and Prep, and Infinitive, with deep linguistic
analysis. These systems expose simple yet principled ways in which verbs express
relationships in language, such as verb phrase-based extraction or
clause-based extraction, and obtain significantly higher performance than
systems of the first generation. In this paper, we give an
overview of the two Open IE generations, including their strengths, weaknesses and
application areas.
| 2,016 | Computation and Language |
Charagram: Embedding Words and Sentences via Character n-grams | We present Charagram embeddings, a simple approach for learning
character-based compositional models to embed textual sequences. A word or
sentence is represented using a character n-gram count vector, followed by a
single nonlinear transformation to yield a low-dimensional embedding. We use
three tasks for evaluation: word similarity, sentence similarity, and
part-of-speech tagging. We demonstrate that Charagram embeddings outperform
more complex architectures based on character-level recurrent and convolutional
neural networks, achieving new state-of-the-art performance on several
similarity tasks.
| 2,016 | Computation and Language |
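A hedged numpy sketch of the Charagram idea in the entry above: represent a string by its character n-gram counts and apply one nonlinear transformation. The n-gram inventory, output dimension and random weights are illustrative placeholders rather than trained parameters.

```python
import numpy as np

def char_ngrams(text, n_values=(2, 3, 4)):
    padded = "#" + text + "#"          # simple boundary markers
    return [padded[i:i + n] for n in n_values for i in range(len(padded) - n + 1)]

vocab = {g: i for i, g in enumerate(sorted(set(char_ngrams("charagram")
                                               + char_ngrams("character"))))}
W = np.random.randn(25, len(vocab)) * 0.1     # projection to a 25-d embedding
b = np.zeros(25)

def embed(text):
    counts = np.zeros(len(vocab))
    for g in char_ngrams(text):
        if g in vocab:
            counts[vocab[g]] += 1
    return np.tanh(W @ counts + b)            # the single nonlinear transformation

# Related strings share n-grams and therefore end up with similar embeddings.
print(float(embed("charagram") @ embed("character")))
```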
Syntactic Phylogenetic Trees | In this paper we identify several serious problems that arise in the use of
syntactic data from the SSWL database for the purpose of computational
phylogenetic reconstruction. We show that the most naive approach fails to
produce reliable linguistic phylogenetic trees. We identify some of the sources
of the observed problems and we discuss how they may be, at least partly,
corrected by using additional information, such as prior subdivision into
language families and subfamilies, and a better use of the information about
ancient languages. We also describe how the use of phylogenetic algebraic
geometry can help in estimating to what extent the probability distribution at
the leaves of the phylogenetic tree obtained from the SSWL data can be
considered reliable, by testing it on phylogenetic trees established by other
forms of linguistic analysis. In simple examples, we find that, after
restricting to smaller language subfamilies and considering only those SSWL
parameters that are fully mapped for the whole subfamily, the SSWL data match
extremely well reliable phylogenetic trees, according to the evaluation of
phylogenetic invariants. This is a promising sign for the use of SSWL data for
linguistic phylogenetics.
| 2,016 | Computation and Language |
Mapping distributional to model-theoretic semantic spaces: a baseline | Word embeddings have been shown to be useful across state-of-the-art systems
in many natural language processing tasks, ranging from question answering
systems to dependency parsing. (Herbelot and Vecchi, 2015) explored word
embeddings and their utility for modeling language semantics. In particular,
they presented an approach to automatically map a standard distributional
semantic space onto a set-theoretic model using partial least squares
regression. We show in this paper that a simple baseline achieves a +51%
relative improvement compared to their model on one of the two datasets they
used, and yields competitive results on the second dataset.
| 2,016 | Computation and Language |
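A hedged sketch of the mapping referenced in the entry above (Herbelot and Vecchi, 2015): learn a partial least squares regression from distributional vectors X to model-theoretic target vectors Y. The random matrices are placeholders for real paired word representations.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))     # distributional word embeddings
Y = rng.normal(size=(200, 25))      # model-theoretic (set-theoretic) vectors

pls = PLSRegression(n_components=10)
pls.fit(X, Y)
print(pls.predict(X[:5]).shape)     # (5, 25): predicted model-theoretic vectors
```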
The Benefits of Word Embeddings Features for Active Learning in Clinical
Information Extraction | This study investigates the use of unsupervised word embeddings and sequence
features for sample representation in an active learning framework built to
extract clinical concepts from clinical free text. The objective is to further
reduce the manual annotation effort while achieving higher effectiveness
compared to a set of baseline features. Unsupervised features are derived from
skip-gram word embeddings and a sequence representation approach. The
comparative performance of unsupervised features and baseline hand-crafted
features in an active learning framework are investigated using a wide range of
selection criteria including least confidence, information diversity,
information density and diversity, and domain knowledge informativeness. Two
clinical datasets are used for evaluation: the i2b2/VA 2010 NLP challenge and
the ShARe/CLEF 2013 eHealth Evaluation Lab. Our results demonstrate significant
improvements in terms of effectiveness as well as annotation effort savings
across both datasets. Using unsupervised features along with baseline features
for sample representation leads to further savings of up to 9% and 10% of the
token and concept annotation rates, respectively.
| 2,016 | Computation and Language |
Exploring the Political Agenda of the European Parliament Using a
Dynamic Topic Modeling Approach | This study analyzes the political agenda of the European Parliament (EP)
plenary, how it has evolved over time, and the manner in which Members of the
European Parliament (MEPs) have reacted to external and internal stimuli when
making plenary speeches. To unveil the plenary agenda and detect latent themes
in legislative speeches over time, MEP speech content is analyzed using a new
dynamic topic modeling method based on two layers of Non-negative Matrix
Factorization (NMF). This method is applied to a new corpus of all English
language legislative speeches in the EP plenary from the period 1999-2014. Our
findings suggest that two-layer NMF is a valuable alternative to existing
dynamic topic modeling approaches found in the literature, and can unveil niche
topics and associated vocabularies not captured by existing methods.
Substantively, our findings suggest that the political agenda of the EP evolves
significantly over time and reacts to exogenous events such as EU Treaty
referenda and the emergence of the Euro-crisis. MEP contributions to the
plenary agenda are also found to be influenced by voting behaviour and the
committee structure of the Parliament.
| 2,016 | Computation and Language |
Separating Answers from Queries for Neural Reading Comprehension | We present a novel neural architecture for answering queries, designed to
optimally leverage explicit support in the form of query-answer memories. Our
model is able to refine and update a given query while separately accumulating
evidence for predicting the answer. Its architecture reflects this separation
with dedicated embedding matrices and loosely connected information pathways
(modules) for updating the query and accumulating evidence. This separation of
responsibilities effectively decouples the search for query related support and
the prediction of the answer. On recent benchmark datasets for reading
comprehension, our model achieves state-of-the-art results. A qualitative
analysis reveals that the model effectively accumulates weighted evidence from
the query over multiple support retrieval cycles, which results in a robust
answer prediction.
| 2,016 | Computation and Language |
Open-Vocabulary Semantic Parsing with both Distributional Statistics and
Formal Knowledge | Traditional semantic parsers map language onto compositional, executable
queries in a fixed schema. This mapping allows them to effectively leverage the
information contained in large, formal knowledge bases (KBs, e.g., Freebase) to
answer questions, but it is also fundamentally limiting---these semantic
parsers can only assign meaning to language that falls within the KB's
manually-produced schema. Recently proposed methods for open vocabulary
semantic parsing overcome this limitation by learning execution models for
arbitrary language, essentially using a text corpus as a kind of knowledge
base. However, all prior approaches to open vocabulary semantic parsing replace
a formal KB with textual information, making no use of the KB in their models.
We show how to combine the disparate representations used by these two
approaches, presenting for the first time a semantic parser that (1) produces
compositional, executable representations of language, (2) can successfully
leverage the information contained in both a formal KB and a large corpus, and
(3) is not limited to the schema of the underlying KB. We demonstrate
significantly improved performance over state-of-the-art baselines on an
open-domain natural language question answering task.
| 2,016 | Computation and Language |
Re-presenting a Story by Emotional Factors using Sentimental Analysis
Method | Remembering an event is affected by personal emotional status. We examined
the psychological status and personal factors; depression (Center for
Epidemiological Studies - Depression, Radloff, 1977), present affective
(Positive Affective and Negative Affective Schedule, Watson et al., 1988), life
orient (Life Orient Test, Scheier & Carver, 1985), self-awareness (Core Self
Evaluation Scale, Judge et al., 2003), and social factor (Social Support,
Sarason et al., 1983) of undergraduate students (N=64) and got summaries of a
story, Chronicle of a Death Foretold (Gabriel Garcia Marquez, 1981) from them.
We implement a sentiment analysis model based on a convolutional neural network
(LeCun & Bengio, 1995) to evaluate each summary. In the same vein as
transfer learning (Pan & Yang, 2010), we collected 38,265 movie reviews to
train the model and then used it to score each student's summary. The
results of CES-D and PANAS show the relationship between emotion and memory
retrieval as follows: depressed people showed a tendency to represent a
story more negatively, and they seemed less expressive. People full of
emotion - high in PANAS - retrieved their memories more expressively than
others, using more negative words. The contributions of this study
can be summarized as follows: first, shedding light on the relationship between
emotion and its effect when storing or retrieving a memory; second,
suggesting objective methods to evaluate the intensity of emotion in natural
language, using a sentiment analysis model.
| 2,016 | Computation and Language |
A Vector Space for Distributional Semantics for Entailment | Distributional semantics creates vector-space representations that capture
many forms of semantic similarity, but their relation to semantic entailment
has been less clear. We propose a vector-space model which provides a formal
foundation for a distributional semantics of entailment. Using a mean-field
approximation, we develop approximate inference procedures and entailment
operators over vectors of probabilities of features being known (versus
unknown). We use this framework to reinterpret an existing
distributional-semantic model (Word2Vec) as approximating an entailment-based
model of the distributions of words in contexts, thereby predicting lexical
entailment relations. In both unsupervised and semi-supervised experiments on
hyponymy detection, we get substantial improvements over previous results.
| 2,016 | Computation and Language |
Tie-breaker: Using language models to quantify gender bias in sports
journalism | Gender bias is an increasingly important issue in sports journalism. In this
work, we propose a language-model-based approach to quantify differences in
questions posed to female vs. male athletes, and apply it to tennis post-match
interviews. We find that journalists ask male players questions that are
generally more focused on the game when compared with the questions they ask
their female counterparts. We also provide a fine-grained analysis of the
extent to which the salience of this bias depends on various factors, such as
question type, game outcome or player rank.
| 2,016 | Computation and Language |
Using Recurrent Neural Network for Learning Expressive Ontologies | Recently, Neural Networks have been proven extremely effective in many
natural language processing tasks such as sentiment analysis, question
answering, or machine translation. Aiming to exploit such advantages in the
Ontology Learning process, in this technical report we present a detailed
description of a Recurrent Neural Network-based system designed to pursue this
goal.
| 2,016 | Computation and Language |
Attention-over-Attention Neural Networks for Reading Comprehension | Cloze-style queries are representative problems in reading comprehension.
Over the past few months, we have seen much progress in utilizing neural
network approaches to solve Cloze-style questions. In this paper, we present a
novel model called attention-over-attention reader for the Cloze-style reading
comprehension task. Our model aims to place another attention mechanism over
the document-level attention, and induces "attended attention" for final
predictions. Unlike previous works, our neural network model requires fewer
pre-defined hyper-parameters and uses an elegant architecture for modeling.
Experimental results show that the proposed attention-over-attention model
significantly outperforms various state-of-the-art systems by a large margin on
public datasets such as the CNN and Children's Book Test datasets.
| 2,017 | Computation and Language |
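A hedged numpy sketch of the attention-over-attention computation named in the entry above, not the full reader: build a pairwise matching matrix between document and query states, softmax it along each axis, and use the averaged query-level attention to weight the document-level attention. Dimensions and random states are placeholders.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

doc = np.random.randn(30, 64)            # document token states (len_doc x hidden)
qry = np.random.randn(8, 64)             # query token states   (len_qry x hidden)

M = doc @ qry.T                          # pairwise matching scores (len_doc x len_qry)
alpha = softmax(M, axis=0)               # document-level attention, one column per query word
beta = softmax(M, axis=1).mean(axis=0)   # query-level attention, averaged over document words
scores = alpha @ beta                    # "attended attention" over document positions

print(scores.shape, round(float(scores.sum()), 3))   # one weight per document token, sums to 1
```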
Neural Tree Indexers for Text Understanding | Recurrent neural networks (RNNs) process input text sequentially and model
the conditional transition between word tokens. In contrast, the advantages of
recursive networks include that they explicitly model the compositionality and
the recursive structure of natural language. However, the current recursive
architecture is limited by its dependence on a syntactic tree. In this paper, we
introduce a robust syntactic parsing-independent tree structured model, Neural
Tree Indexers (NTI) that provides a middle ground between the sequential RNNs
and the syntactic tree-based recursive models. NTI constructs a full n-ary tree
by processing the input text with its node function in a bottom-up fashion.
An attention mechanism can then be applied to both structure and node function. We
implemented and evaluated a binary-tree model of NTI, showing the model achieved
the state-of-the-art performance on three different NLP tasks: natural language
inference, answer sentence selection, and sentence classification,
outperforming state-of-the-art recurrent and recursive neural networks.
| 2,017 | Computation and Language |
Neural Discourse Modeling of Conversations | Deep neural networks have shown recent promise in many language-related tasks
such as the modeling of conversations. We extend RNN-based sequence to sequence
models to capture the long range discourse across many turns of conversation.
We perform a sensitivity analysis on how much additional context affects
performance, and provide quantitative and qualitative evidence that these
models are able to capture discourse relationships across multiple utterances.
Our results quantify how adding an additional RNN layer for modeling
discourse improves the quality of output utterances, and how providing more of the
previous conversation as input also improves performance. By searching the
generated outputs for specific discourse markers we show how neural discourse
models can exhibit increased coherence and cohesion in conversations.
| 2,016 | Computation and Language |
Enriching Word Vectors with Subword Information | Continuous word representations, trained on large unlabeled corpora, are
useful for many natural language processing tasks. Popular models that learn
such representations ignore the morphology of words, by assigning a distinct
vector to each word. This is a limitation, especially for languages with large
vocabularies and many rare words. In this paper, we propose a new approach
based on the skipgram model, where each word is represented as a bag of
character $n$-grams. A vector representation is associated to each character
$n$-gram; words being represented as the sum of these representations. Our
method is fast, allowing us to train models on large corpora quickly and
to compute word representations for words that did not appear in the training
data. We evaluate our word representations on nine different languages, both on
word similarity and analogy tasks. By comparing to recently proposed
morphological word representations, we show that our vectors achieve
state-of-the-art performance on these tasks.
| 2,017 | Computation and Language |
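A hedged sketch of the subword idea in the entry above: a word vector is the sum of the vectors of its character n-grams (with boundary symbols), so a vector can be composed even for words unseen during training. The randomly initialised n-gram table stands in for trained skipgram parameters.

```python
import numpy as np

DIM = 20
ngram_vectors = {}                        # n-gram -> vector, lazily initialised here

def ngrams(word, n_min=3, n_max=6):
    w = "<" + word + ">"                  # boundary markers, as in the paper
    return [w[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(w) - n + 1)]

def word_vector(word):
    vecs = []
    for g in ngrams(word):
        if g not in ngram_vectors:
            ngram_vectors[g] = np.random.randn(DIM) * 0.1
        vecs.append(ngram_vectors[g])
    return np.sum(vecs, axis=0)           # word = sum of its character n-gram vectors

# Morphologically related words share n-grams, hence end up with similar vectors.
a, b = word_vector("running"), word_vector("runner")
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```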
Identification of promising research directions using machine learning
aided medical literature analysis | The rapidly expanding corpus of medical research literature presents major
challenges in the understanding of previous work, the extraction of maximum
information from collected data, and the identification of promising research
directions. We present a case for the use of advanced machine learning
techniques as an aide in this task and introduce a novel methodology that is
shown to be capable of extracting meaningful information from large
longitudinal corpora, and of tracking complex temporal changes within it.
| 2,016 | Computation and Language |
An Empirical Evaluation of various Deep Learning Architectures for
Bi-Sequence Classification Tasks | Several tasks in argumentation mining and debating, question-answering, and
natural language inference involve classifying a sequence in the context of
another sequence (referred as bi-sequence classification). For several single
sequence classification tasks, the current state-of-the-art approaches are
based on recurrent and convolutional neural networks. On the other hand, for
bi-sequence classification problems, there is not much understanding as to the
best deep learning architecture. In this paper, we attempt to get an
understanding of this category of problems by extensive empirical evaluation of
19 different deep learning architectures (specifically on different ways of
handling context) for various problems originating in natural language
processing like debating, textual entailment and question-answering. Following
the empirical evaluation, we offer our insights and conclusions regarding the
architectures we have considered. We also establish the first deep learning
baselines for three argumentation mining tasks.
| 2,016 | Computation and Language |
Dependency Language Models for Transition-based Dependency Parsing | In this paper, we present an approach to improve the accuracy of a strong
transition-based dependency parser by exploiting dependency language models
that are extracted from a large parsed corpus. We integrated a small number of
features based on the dependency language models into the parser. To
demonstrate the effectiveness of the proposed approach, we evaluate our parser
on standard English and Chinese data where the base parser could achieve
competitive accuracy scores. Our enhanced parser achieved state-of-the-art
accuracy on Chinese data and competitive results on English data. We gained a
large absolute improvement of one point (UAS) on Chinese and 0.5 points for
English.
| 2,017 | Computation and Language |