Titles | Abstracts | Years | Categories
---|---|---|---|
A Robust Framework for Classifying Evolving Document Streams in an
Expert-Machine-Crowd Setting | An emerging challenge in the online classification of social media data
streams is to keep the categories used for classification up-to-date. In this
paper, we propose an innovative framework based on an Expert-Machine-Crowd
(EMC) triad to help categorize items by continuously identifying novel concepts
in heterogeneous data streams often riddled with outliers. We unify constrained
clustering and outlier detection by formulating a novel optimization problem:
COD-Means. We design an algorithm to solve the COD-Means problem and show that
COD-Means will not only help detect novel categories but also seamlessly
discover human annotation errors and improve the overall quality of the
categorization process. Experiments on diverse real data sets demonstrate that
our approach is both effective and efficient.
| 2016 | Computation and Language |
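The COD-Means objective is specified in the paper and not reproduced here; as a hedged sketch of the general idea it unifies (centroid-based clustering that jointly sets aside the points farthest from any centroid as outliers), the following minimal Python routine uses an illustrative fixed outlier budget `n_outliers`; all names and defaults are assumptions:

```python
import numpy as np

def cod_means_sketch(X, k, n_outliers, n_iter=50, seed=0):
    """Toy clustering-with-outlier-detection loop (illustration only):
    alternate k-means-style updates while excluding the points farthest
    from their nearest centroid from the centroid updates."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
        assign, nearest = dists.argmin(axis=1), dists.min(axis=1)
        outliers = np.argsort(nearest)[-n_outliers:]   # farthest points
        keep = np.setdiff1d(np.arange(len(X)), outliers)
        for j in range(k):
            members = keep[assign[keep] == j]
            if len(members) > 0:
                centroids[j] = X[members].mean(axis=0)
    return assign, outliers
```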
Neural-based Noise Filtering from Word Embeddings | Word embeddings have been demonstrated to benefit NLP tasks impressively.
Yet, there is room for improvement in the vector representations, because
current word embeddings typically contain unnecessary information, i.e., noise.
We propose two novel models to improve word embeddings by unsupervised
learning, in order to yield word denoising embeddings. The word denoising
embeddings are obtained by strengthening salient information and weakening
noise in the original word embeddings, based on a deep feed-forward neural
network filter. Results from benchmark tasks show that the filtered word
denoising embeddings outperform the original word embeddings.
| 2016 | Computation and Language |
A New Data Representation Based on Training Data Characteristics to
Extract Drug Named-Entity in Medical Text | One essential task in information extraction from the medical corpus is drug
name recognition. Compared with text from other domains, medical text has unique characteristics, and medical text mining poses additional challenges, e.g., more unstructured text, the fast-growing addition of new terms, and a wide range of name variations for the same drug. The mining is even more challenging due to the lack of labeled datasets and external knowledge, as well as multiple token representations for a single drug name, which is common in real application settings. Although many approaches have been proposed for this task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities resulting from word embedding training. The first technique is evaluated with a standard NN model, i.e., an MLP (Multi-Layer Perceptron). The second technique involves two deep network classifiers, i.e., DBN (Deep Belief Networks) and SAE (Stacked Denoising Encoders). The third technique represents the sentence as a sequence and is evaluated with a recurrent NN model, i.e., LSTM (Long Short-Term Memory). In extracting drug name entities, the third technique gives the best F-score performance compared to the state of the art, with an average F-score of 0.8645.
| 2016 | Computation and Language |
Toward Automatic Understanding of the Function of Affective Language in
Support Groups | Understanding expressions of emotions in support forums has considerable
value and NLP methods are key to automating this. Many approaches
understandably use subjective categories which are more fine-grained than a
straightforward polarity-based spectrum. However, the definition of such
categories is non-trivial and, in fact, we argue for a need to incorporate
communicative elements even beyond subjectivity. To support our position, we
report experiments on a sentiment-labelled corpus of posts taken from a medical
support forum. We argue that not only is a more fine-grained approach to text
analysis important, but also that simultaneously recognising the social function behind affective expressions enables a more accurate and valuable level of understanding.
| 2016 | Computation and Language |
Scalable Machine Translation in Memory Constrained Environments | Machine translation is the discipline concerned with developing automated
tools for translating from one human language to another. Statistical machine
translation (SMT) is the dominant paradigm in this field. In SMT, translations
are generated by means of statistical models whose parameters are learned from
bilingual data. Scalability is a key concern in SMT, as one would like to make
use of as much data as possible to train better translation systems.
In recent years, mobile devices with adequate computing power have become
widely available. Despite being very successful, mobile applications relying on
NLP systems continue to follow a client-server architecture, which is of
limited use because internet access is often limited and expensive. The goal
of this dissertation is to show how to construct a scalable machine translation
system that can operate with the limited resources available on a mobile
device.
The main challenge in porting translation systems to mobile devices is
memory usage. The amount of memory available on a mobile device is far less
than what is typically available on the server side of a client-server
application. In this thesis, we investigate alternatives for the two components
which prevent standard translation systems from working on mobile devices due
to high memory usage. We show that once these standard components are replaced
with our proposed alternatives, we obtain a scalable translation system that
can work on a device with limited memory.
| 2016 | Computation and Language |
There's No Comparison: Reference-less Evaluation Metrics in Grammatical
Error Correction | Current methods for automatically evaluating grammatical error correction
(GEC) systems rely on gold-standard references. However, these methods suffer
from penalizing grammatical edits that are correct but not in the gold
standard. We show that reference-less grammaticality metrics correlate very
strongly with human judgments and are competitive with the leading
reference-based evaluation metrics. By interpolating both methods, we achieve
state-of-the-art correlation with human judgments. Finally, we show that GEC
metrics are much more reliable when they are calculated at the sentence level
instead of the corpus level. We have set up a CodaLab site for benchmarking GEC
output using a common dataset and different evaluation metrics.
| 2016 | Computation and Language |
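The interpolation mentioned in the abstract can be illustrated with a hedged one-liner; the convex weight `lam` and the assumption that both scores live on a comparable [0, 1] scale are placeholders, not the paper's tuned setup:

```python
def interpolated_gec_score(grammaticality, reference_score, lam=0.5):
    """Convex combination of a reference-less grammaticality score and a
    reference-based metric score (both assumed to be in [0, 1])."""
    return lam * grammaticality + (1.0 - lam) * reference_score

# e.g. a correction rated 0.9 by a grammaticality model and 0.6 by a
# reference-based metric:
print(interpolated_gec_score(0.9, 0.6))  # 0.75
```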
Morphology Generation for Statistical Machine Translation using Deep
Learning Techniques | Morphology in unbalanced languages remains a big challenge in the context of
machine translation. In this paper, we propose to de-couple machine translation
from morphology generation in order to better deal with the problem. We
investigate the morphology simplification with a reasonable trade-off between
expected gain and generation complexity. For the Chinese-Spanish task, optimum
morphological simplification is in gender and number. For this purpose, we
design a new classification architecture which, compared to other standard
machine learning techniques, obtains the best results. This proposed
neural-based architecture consists of several layers: an embedding layer, a convolutional layer followed by a recurrent neural network, and, finally, sigmoid and softmax output layers. We obtain classification results of over 98% accuracy
in gender classification, over 93% in number classification, and an overall
translation improvement of 0.7 METEOR.
| 2017 | Computation and Language |
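A hedged PyTorch sketch of the layer stack the abstract describes (embedding, convolution, recurrent layer, then sigmoid and softmax heads); all sizes, the GRU choice, and the three-way number head are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class MorphClassifier(nn.Module):
    """Sketch: embedding -> convolution -> recurrent layer ->
    sigmoid (binary gender) and softmax (number) heads."""
    def __init__(self, vocab=10000, emb=128, conv=64, hid=64, n_number=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=3, padding=1)
        self.rnn = nn.GRU(conv, hid, batch_first=True)
        self.gender = nn.Linear(hid, 1)         # sigmoid head
        self.number = nn.Linear(hid, n_number)  # softmax head

    def forward(self, tokens):                   # tokens: (batch, seq)
        x = self.emb(tokens).transpose(1, 2)     # (batch, emb, seq) for Conv1d
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, h = self.rnn(x)                       # h: (1, batch, hid)
        h = h.squeeze(0)
        return torch.sigmoid(self.gender(h)), torch.softmax(self.number(h), -1)
```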
Challenges of Computational Processing of Code-Switching | This paper addresses challenges of Natural Language Processing (NLP) on
non-canonical multilingual data in which two or more languages are mixed. It
refers to code-switching, which has become more popular in our daily life and therefore attracts an increasing amount of attention from the research community. We report on our experience, which covers not only core NLP tasks such
as normalisation, language identification, language modelling, part-of-speech
tagging and dependency parsing but also more downstream ones such as machine
translation and automatic speech recognition. We highlight and discuss the key
problems for each of the tasks with supporting examples from different language
pairs and relevant previous work.
| 2016 | Computation and Language |
A Semantic Analyzer for the Comprehension of the Spontaneous Arabic
Speech | This work is part of a large research project entitled "Or\'eodule" aimed at
developing tools for automatic speech recognition, translation, and synthesis
for the Arabic language. Our attention has mainly been focused on an attempt to
improve the probabilistic model on which our semantic decoder is based. To
achieve this goal, we test the influence of using pertinent context, and of integrating contextual data of different types, on the
effectiveness of the semantic decoder. The findings are quite satisfactory.
| 2008 | Computation and Language |
Computational linking theory | A linking theory explains how verbs' semantic arguments are mapped to their
syntactic arguments---the inverse of the Semantic Role Labeling task from the
shallow semantic parsing literature. In this paper, we develop the
Computational Linking Theory framework as a method for implementing and testing
linking theories proposed in the theoretical literature. We deploy this
framework to assess two cross-cutting types of linking theory: local v. global
models and categorical v. featural models. To further investigate the behavior
of these models, we develop a measurement model in the spirit of previous work
in semantic role induction: the Semantic Proto-Role Linking Model. We use this
model, which implements a generalization of Dowty's seminal Proto-Role Theory,
to induce semantic proto-roles, which we compare to those Dowty proposes.
| 2016 | Computation and Language |
Mining the Web for Pharmacovigilance: the Case Study of Duloxetine and
Venlafaxine | Adverse reactions caused by drugs following their release into the market are
among the leading causes of death in many countries. The rapid growth of
electronically available health-related information, and the ability to process large volumes of it automatically using natural language processing (NLP) and machine learning algorithms, have opened new opportunities for pharmacovigilance. Surveys have found that more than 70% of US Internet users consult the Internet when they require medical information. In recent years, research in this area has addressed Adverse Drug Reaction (ADR) pharmacovigilance using social media, mainly Twitter, as well as medical forums and websites. This paper shows what information can be collected from a variety of Internet data sources and search engines, mainly Google Trends and Google Correlate. Considering the case study of two popular Major Depressive Disorder (MDD) drugs, Duloxetine and Venlafaxine, we provide a comparative analysis of their reactions using publicly available alternative data sources.
| 2016 | Computation and Language |
Enabling Medical Translation for Low-Resource Languages | We present research towards bridging the language gap between migrant workers
in Qatar and medical staff. In particular, we present the first steps towards
the development of a real-world Hindi-English machine translation system for
doctor-patient communication. As this is a low-resource language pair,
especially for speech and for the medical domain, our initial focus has been on
gathering suitable training data from various sources. We applied a variety of
methods ranging from fully automatic extraction from the Web to manual
annotation of test data. Moreover, we developed a method for automatically
augmenting the training data with synthetically generated variants, which
yielded a very sizable improvement of more than 3 BLEU points absolute.
| 2016 | Computation and Language |
Interpreting Neural Networks to Improve Politeness Comprehension | We present an interpretable neural network approach to predicting and
understanding politeness in natural language requests. Our models are based on
simple convolutional neural networks applied directly to raw text, avoiding any manual
identification of complex sentiment or syntactic features, while performing
better than such feature-based models from previous work. More importantly, we
use the challenging task of politeness prediction as a testbed to next present
a much-needed understanding of what these successful networks are actually
learning. For this, we present several network visualizations based on
activation clusters, first derivative saliency, and embedding space
transformations, helping us automatically identify several subtle linguistic
markers of politeness theories. Further, this analysis reveals multiple novel,
high-scoring politeness strategies which, when added back as new features,
reduce the accuracy gap between the original featurized system and the neural
model, thus providing a clear quantitative interpretation of the success of
these neural networks.
| 2016 | Computation and Language |
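Among the visualizations listed, first-derivative saliency is the easiest to sketch: the gradient magnitude of the network's score with respect to each input token embedding. The interface below (a model mapping an embedded sequence to a scalar politeness score) is an assumption:

```python
import torch
import torch.nn as nn

def token_saliency(model, embeddings, tokens):
    """Hedged sketch of first-derivative saliency. `embeddings` is an
    nn.Embedding, `tokens` a LongTensor of shape (seq,), and `model` is
    assumed to map a (seq, emb_dim) float tensor to a scalar score."""
    x = embeddings(tokens).detach().requires_grad_(True)  # (seq, emb_dim)
    score = model(x)              # scalar politeness score
    score.backward()              # gradients w.r.t. the input embeddings
    return x.grad.norm(dim=1)     # one saliency value per token
```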
Open-Ended Visual Question-Answering | This thesis report studies methods to solve Visual Question-Answering (VQA)
tasks with a Deep Learning framework. As a preliminary step, we explore Long
Short-Term Memory (LSTM) networks used in Natural Language Processing (NLP) to
tackle Question-Answering (text based). We then modify the previous model to
accept an image as an input in addition to the question. For this purpose, we
explore the VGG-16 and K-CNN convolutional neural networks to extract visual
features from the image. These are merged with the word embedding or with a
sentence embedding of the question to predict the answer. This work was
successfully submitted to the Visual Question Answering Challenge 2016, where
it achieved 53.62% accuracy on the test dataset. The developed software
has followed the best programming practices and Python code style, providing a
consistent baseline in Keras for different configurations.
| 2016 | Computation and Language |
A Dynamic Window Neural Network for CCG Supertagging | Combinatory Categorial Grammar (CCG) supertagging is the task of assigning lexical categories to each word in a sentence. Almost all previous methods use fixed context window sizes as input features. However, different tags usually rely on different context window sizes. This motivates us to build a supertagger with a dynamic window approach, which can be treated as an attention mechanism over the local contexts. Applying dropout on the dynamic filters can be seen as dropping words directly, which is superior to regular dropout on word embeddings. With this approach we achieve state-of-the-art CCG supertagging performance on the standard test set.
| 2016 | Computation and Language |
A New Theoretical and Technological System of Imprecise-Information
Processing | Imprecise-information processing will play an indispensable role in
intelligent systems, especially in anthropomorphic intelligent systems (such as intelligent robots). A new theoretical and technological system of imprecise-information processing was founded in Principles of Imprecise-Information Processing: A New Theoretical and Technological System [1], and it is different from fuzzy technology. The system has a clear hierarchy and rigorous structure, results from the formation principle of imprecise information, rests on solid mathematical and logical bases, and has many advantages over fuzzy technology. The system provides a technological
platform for relevant applications and lays a theoretical foundation for
further research.
| 2016 | Computation and Language |
Modelling Sentence Pairs with Tree-structured Attentive Encoder | We describe an attentive encoder that combines tree-structured recursive
neural networks and sequential recurrent neural networks for modelling sentence
pairs. Since existing attentive models exert attention on the sequential
structure, we propose a way to incorporate attention into the tree topology.
Specifically, given a pair of sentences, our attentive encoder uses the representation of one sentence, generated via an RNN, to guide the
structural encoding of the other sentence on the dependency parse tree. We
evaluate the proposed attentive encoder on three tasks: semantic similarity,
paraphrase identification and true-false question selection. Experimental
results show that our encoder outperforms all baselines and achieves
state-of-the-art results on two tasks.
| 2016 | Computation and Language |
Fully Character-Level Neural Machine Translation without Explicit
Segmentation | Most existing machine translation systems operate at the level of words,
relying on explicit segmentation to extract tokens. We introduce a neural
machine translation (NMT) model that maps a source character sequence to a
target character sequence without any segmentation. We employ a character-level
convolutional network with max-pooling at the encoder to reduce the length of
source representation, allowing the model to be trained at a speed comparable
to subword-level models while capturing local regularities. Our
character-to-character model outperforms a recently proposed baseline with a
subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable
performance on FI-EN and RU-EN. We then demonstrate that it is possible to
share a single character-level encoder across multiple languages by training a
model on a many-to-one translation task. In this multilingual setting, the
character-level encoder significantly outperforms the subword-level encoder on
all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality
of the multilingual character-level translation even surpasses the models
specifically trained on that language pair alone, both in terms of BLEU score
and human judgment.
| 2017 | Computation and Language |
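A hedged PyTorch sketch of the encoder-side idea (character embeddings, convolution for local regularities, max-pooling to shorten the source); kernel size, pooling stride, and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CharConvEncoder(nn.Module):
    """Sketch of a character-level encoder that shortens the source
    sequence with convolution + max-pooling."""
    def __init__(self, n_chars=256, emb=64, channels=128, pool=4):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(kernel_size=pool, stride=pool)

    def forward(self, chars):                 # chars: (batch, seq)
        x = self.emb(chars).transpose(1, 2)   # (batch, emb, seq)
        x = torch.relu(self.conv(x))          # local character n-gram features
        x = self.pool(x)                      # sequence shortened by `pool`x
        return x.transpose(1, 2)              # (batch, seq/pool, channels)
```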
Very Deep Convolutional Networks for End-to-End Speech Recognition | Sequence-to-sequence models have shown success in end-to-end speech
recognition. However, these models have only used shallow acoustic encoder
networks. In our work, we successively train very deep convolutional networks
to add more expressive power and better generalization for end-to-end ASR
models. We apply network-in-network principles, batch normalization, residual
connections and convolutional LSTMs to build very deep recurrent and
convolutional structures. Our models exploit the spectral structure in the
feature space and add computational depth without overfitting issues. We
experiment with the WSJ ASR task and achieve a 10.5% word error rate without any dictionary or language model, using a 15-layer deep network.
| 2016 | Computation and Language |
Neural Paraphrase Generation with Stacked Residual LSTM Networks | In this paper, we propose a novel neural approach for paraphrase generation.
Conventional paraphrase generation methods either leverage hand-written rules
and thesauri-based alignments, or use statistical machine learning principles.
To the best of our knowledge, this work is the first to explore deep learning
models for paraphrase generation. Our primary contribution is a stacked
residual LSTM network, where we add residual connections between LSTM layers.
This allows for efficient training of deep LSTMs. We evaluate our model and
other state-of-the-art deep learning models on three different datasets: PPDB,
WikiAnswers and MSCOCO. Evaluation results demonstrate that our model
outperforms sequence-to-sequence, attention-based, and bidirectional LSTM
models on BLEU, METEOR, TER and an embedding-based sentence similarity metric.
| 2016 | Computation and Language |
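The primary contribution, residual connections between stacked LSTM layers, can be sketched in a few lines of PyTorch; layer count and sizes are assumptions:

```python
import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    """Sketch of stacking LSTM layers with identity (residual)
    connections between them, which eases training of deep stacks."""
    def __init__(self, dim=256, layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(dim, dim, batch_first=True) for _ in range(layers)])

    def forward(self, x):                 # x: (batch, seq, dim)
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out                   # residual connection between layers
        return x
```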
Supervised Term Weighting Metrics for Sentiment Analysis in Short Text | Term weighting metrics assign weights to terms in order to discriminate the
important terms from the less crucial ones. Due to this characteristic, these
metrics have attracted growing attention in text classification and recently in
sentiment analysis. Using the weights given by such metrics could lead to more
accurate document representation which may improve the performance of the
classification. While previous studies have focused on proposing or comparing
different weighting metrics for two-class document-level sentiment analysis, this study analyses the results given by each metric in order to find out the characteristics of good and bad weighting metrics. We present an empirical study of fifteen global supervised weighting metrics combined with four local weighting metrics adopted from information retrieval. We also analyse how each metric distributes the terms, and deduce some characteristics that may distinguish the good metrics from the bad ones. The evaluation has been done using a
Support Vector Machine on three different datasets: Twitter, restaurant and
laptop reviews.
| 2016 | Computation and Language |
Leveraging Recurrent Neural Networks for Multimodal Recognition of
Social Norm Violation in Dialog | Social norms are shared rules that govern and facilitate social interaction.
Violating such social norms via teasing and insults may serve to upend power
imbalances or, on the contrary, reinforce solidarity and rapport in conversation; such rapport is highly situated and context-dependent. In this
work, we investigate the task of automatically identifying the phenomena of
social norm violation in discourse. Towards this goal, we leverage the power of
recurrent neural networks and multimodal information present in the
interaction, and propose a predictive model to recognize social norm violation.
Using long-term temporal and contextual information, our model achieves an F1
score of 0.705. Implications of our work regarding developing a social-aware
agent are discussed.
| 2016 | Computation and Language |
Correlation-Based Method for Sentiment Classification | The classic supervised classification algorithms are efficient, but
time-consuming, complicated, and not interpretable, which makes it difficult to analyze their results and limits the possibility of improving them based on real observations. In this paper, we propose a new, simple classifier to predict the sentiment label of a short text. This model keeps the capacity for human interpretability and can be extended to integrate NLP techniques in a
more interpretable way. Our model is based on a correlation metric which
measures the degree of association between a sentiment label and a word. Ten
correlation metrics are proposed and evaluated intrinsically. Then a
classifier based on each metric is proposed, evaluated and compared to the
classic classification algorithms which have proved their performance in many
studies. Our model outperforms these algorithms with several correlation
metrics.
| 2018 | Computation and Language |
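The paper proposes and evaluates ten correlation metrics; as a hedged illustration of the general recipe, here is one plausible instance, the Pearson correlation between a word's presence and the sentiment label (not necessarily one of the ten):

```python
import numpy as np

def word_label_correlation(docs, labels, word):
    """Pearson correlation between a word's presence in a document and
    the document's sentiment label (0 = negative, 1 = positive)."""
    presence = np.array([word in doc.split() for doc in docs], dtype=float)
    y = np.asarray(labels, dtype=float)
    if presence.std() == 0 or y.std() == 0:
        return 0.0                       # constant vectors: no correlation
    return float(np.corrcoef(presence, y)[0, 1])

docs = ["great phone", "bad battery", "great screen", "bad service"]
print(word_label_correlation(docs, [1, 0, 1, 0], "great"))  # -> 1.0
```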
Long Short-Term Memory based Convolutional Recurrent Neural Networks for
Large Vocabulary Speech Recognition | Long short-term memory (LSTM) recurrent neural networks (RNNs) have been
shown to give state-of-the-art performance on many speech recognition tasks, as
they can learn a dynamically changing contextual window over the whole sequence history. On the other hand, convolutional neural networks
(CNNs) have brought significant improvements to deep feed-forward neural
networks (FFNNs), as they are able to better reduce spectral variation in the
input signal. In this paper, a network architecture called the convolutional recurrent neural network (CRNN) is proposed by combining the CNN and the LSTM RNN.
In the proposed CRNNs, each speech frame, without adjacent context frames, is
organized as a number of local feature patches along the frequency axis, and
then an LSTM network is applied to each feature patch along the time axis. We train and compare FFNNs, LSTM RNNs and the proposed LSTM CRNNs in various configurations. Experimental results show that the LSTM CRNNs can
exceed state-of-the-art speech recognition performance.
| 2016 | Computation and Language |
An Empirical Exploration of Skip Connections for Sequential Tagging | In this paper, we empirically explore the effects of various kinds of skip
connections in stacked bidirectional LSTMs for sequential tagging. We
investigate three kinds of skip connections connecting to LSTM cells: (a) skip
connections to the gates, (b) skip connections to the internal states and (c)
skip connections to the cell outputs. We present comprehensive experiments
showing that skip connections to cell outputs outperform the remaining two.
Furthermore, we observe that using gated identity functions as skip mappings
works well. Based on these novel skip connections, we successfully train
deep stacked bidirectional LSTM models and obtain state-of-the-art results on
CCG supertagging and comparable results on POS tagging.
| 2016 | Computation and Language |
Toward a new instances of NELL | We are developing a method to start new instances of NELL in various languages and thus develop NELL multilingualism. We base our method on our experience with NELL Portuguese and NELL French. This report explains our method and outlines some research perspectives.
| 2016 | Computation and Language |
GMM-Free Flat Start Sequence-Discriminative DNN Training | Recently, attempts have been made to remove Gaussian mixture models (GMM)
from the training process of deep neural network-based hidden Markov models
(HMM/DNN). For the GMM-free training of an HMM/DNN hybrid, we have to solve two
problems, namely the initial alignment of the frame-level state labels and the
creation of context-dependent states. Although flat-start training via
iteratively realigning and retraining the DNN using a frame-level error
function is viable, it is quite cumbersome. Here, we propose to use a
sequence-discriminative training criterion for flat start. While
sequence-discriminative training is routinely applied only in the final phase
of model training, we show that with proper caution it is also suitable for
getting an alignment of context-independent DNN models. For the construction of
tied states we apply a recently proposed KL-divergence-based state clustering
method, hence our whole training process is GMM-free. In the experimental
evaluation we found that the sequence-discriminative flat start training method
is not only significantly faster than the straightforward approach of iterative
retraining and realignment, but the word error rates attained are slightly
better as well.
| 2016 | Computation and Language |
Keystroke dynamics as signal for shallow syntactic parsing | Keystroke dynamics have been extensively used in psycholinguistic and writing
research to gain insights into cognitive processing. But do keystroke logs
contain actual signal that can be used to learn better natural language
processing models?
We postulate that keystroke dynamics contain information about syntactic
structure that can inform shallow syntactic parsing. To test this hypothesis,
we explore labels derived from keystroke logs as auxiliary task in a multi-task
bidirectional Long Short-Term Memory (bi-LSTM). We obtain promising results on two shallow syntactic parsing tasks, chunking and CCG supertagging.
Our model is simple, has the advantage that data can come from distinct
sources, and produces models that are significantly better than models trained
on the text annotations alone.
| 2016 | Computation and Language |
From phonemes to images: levels of representation in a recurrent neural
model of visually-grounded language learning | We present a model of visually-grounded language learning based on stacked
gated recurrent neural networks which learns to predict visual features given
an image description in the form of a sequence of phonemes. The learning task
resembles that faced by human language learners who need to discover both
structure and meaning from noisy and ambiguous data across modalities. We show
that our model indeed learns to predict features of the visual context given
phonetically transcribed image descriptions, and show that it represents
linguistic information in a hierarchy of levels: lower layers in the stack are
comparatively more sensitive to form, whereas higher layers are more sensitive
to meaning.
| 2016 | Computation and Language |
Survey on the Use of Typological Information in Natural Language
Processing | In recent years linguistic typology, which classifies the world's languages
according to their functional and structural properties, has been widely used
to support multilingual NLP. While the growing importance of typological
information in supporting multilingual tasks has been recognised, no systematic
survey of existing typological resources and their use in NLP has been
published. This paper provides such a survey, as well as a discussion that we
hope will both inform and inspire future work in the area.
| 2016 | Computation and Language |
A Paradigm for Situated and Goal-Driven Language Learning | A distinguishing property of human intelligence is the ability to flexibly
use language in order to communicate complex ideas with other humans in a
variety of contexts. Research in natural language dialogue should focus on
designing communicative agents which can integrate themselves into these
contexts and productively collaborate with humans. In this abstract, we propose
a general situated language learning paradigm which is designed to bring about
robust language agents able to cooperate productively with humans.
| 2016 | Computation and Language |
Semi-supervised Discovery of Informative Tweets During the Emerging
Disasters | The first objective towards the effective use of microblogging services such
as Twitter for situational awareness during emerging disasters is the discovery of disaster-related postings. Given the wide range of possible disasters,
using a pre-selected set of disaster-related keywords for the discovery is
suboptimal. An alternative that we focus on in this work is to train a
classifier using a small set of labeled postings that are becoming available as
a disaster is emerging. Our hypothesis is that utilizing large quantities of
historical microblogs could improve the quality of classification, as compared
to training a classifier only on the labeled data. We propose to use unlabeled
microblogs to cluster words into a limited number of clusters and use the word
clusters as features for classification. To evaluate the proposed
semi-supervised approach, we used Twitter data from 6 different disasters. Our
results indicate that when the number of labeled tweets is 100 or less, the
proposed approach is superior to standard classification based on the bag-of-words feature representation. Our results also reveal that the choice of the
unlabeled corpus, the choice of word clustering algorithm, and the choice of
hyperparameters can have a significant impact on the classification accuracy.
| 2016 | Computation and Language |
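A hedged sketch of the word-cluster feature idea: cluster embeddings learned from unlabeled microblogs, then represent each tweet by its cluster counts. The `embeddings` dict interface and cluster count are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_features(tweets, embeddings, n_clusters=100):
    """Represent each tweet as a bag of word clusters. `embeddings` is an
    assumed dict mapping word -> vector, learned from unlabeled data."""
    words = sorted(embeddings)
    km = KMeans(n_clusters=min(n_clusters, len(words)), n_init=10).fit(
        np.stack([embeddings[w] for w in words]))
    word2cluster = dict(zip(words, km.labels_))
    feats = np.zeros((len(tweets), km.n_clusters))
    for i, tweet in enumerate(tweets):
        for w in tweet.split():
            if w in word2cluster:
                feats[i, word2cluster[w]] += 1  # count words per cluster
    return feats
```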
Language Models with Pre-Trained (GloVe) Word Embeddings | In this work we implement the training of a Language Model (LM) using a Recurrent Neural Network (RNN) and GloVe word embeddings, introduced by Pennington et al. in [1]. The implementation follows the general idea of training RNNs for LM tasks presented in [2], but uses a Gated Recurrent Unit (GRU) [3] as the memory cell rather than the more commonly used LSTM [4].
| 2017 | Computation and Language |
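A hedged PyTorch sketch of the described setup, a GRU language model whose embedding layer is initialized from a GloVe text file; the file parsing follows the standard GloVe format, while sizes and the trainable-embedding choice are assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

def load_glove(path, vocab, dim=100):
    """Read a GloVe text file into an embedding matrix for a fixed
    vocabulary (word -> row index); unseen words stay randomly initialized."""
    mat = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *vec = line.rstrip().split(" ")
            if word in vocab:
                mat[vocab[word]] = np.asarray(vec, dtype="float32")
    return torch.from_numpy(mat)

class GRULanguageModel(nn.Module):
    """GRU language model with pre-trained, fine-tunable embeddings."""
    def __init__(self, emb_matrix, hidden=256):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(emb_matrix, freeze=False)
        self.gru = nn.GRU(emb_matrix.size(1), hidden, batch_first=True)
        self.out = nn.Linear(hidden, emb_matrix.size(0))  # next-word logits

    def forward(self, tokens):            # tokens: (batch, seq)
        h, _ = self.gru(self.emb(tokens))
        return self.out(h)                # (batch, seq, vocab)
```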
SentiHood: Targeted Aspect Based Sentiment Analysis Dataset for Urban
Neighbourhoods | In this paper, we introduce the task of targeted aspect-based sentiment
analysis. The goal is to extract fine-grained information with respect to
entities mentioned in user comments. This work extends both aspect-based
sentiment analysis that assumes a single entity per document and targeted
sentiment analysis that assumes a single sentiment towards a target entity. In
particular, we identify the sentiment towards each aspect of one or more
entities. As a testbed for this task, we introduce the SentiHood dataset,
extracted from a question answering (QA) platform where urban neighbourhoods
are discussed by users. In this context, units of text often mention several aspects of one or more neighbourhoods. This is the first time that a generic social media platform, in this case a QA platform, is used for fine-grained opinion mining. Text coming from QA platforms is far less constrained compared to text from review-specific platforms, on which current datasets are based. We
develop several strong baselines, relying on logistic regression and
state-of-the-art recurrent neural networks.
| 2016 | Computation and Language |
Question Generation from a Knowledge Base with Web Exploration | Question generation from a knowledge base (KB) is the task of generating
questions related to the domain of the input KB. We propose a system for
generating fluent and natural questions from a KB, which significantly reduces
the human effort by leveraging massive web resources. In more detail, a seed
question set is first generated by applying a small number of hand-crafted
templates on the input KB; then more questions are retrieved by iteratively submitting already obtained questions as search queries to a standard search engine, before finally questions are selected by estimating their fluency and domain relevance. Evaluated on 500 randomly selected triples from Freebase, questions generated by our system are judged by human graders to be more fluent than those of Serban et al. (2016).
| 2017 | Computation and Language |
A Survey of Voice Translation Methodologies - Acoustic Dialect Decoder | Speech Translation has always been about giving source text or audio input
and waiting for the system to give translated output in the desired form. In this
paper, we present the Acoustic Dialect Decoder (ADD) - a voice to voice
ear-piece translation device. We introduce and survey the recent advances made
in the field of Speech Engineering, to employ in the ADD, particularly focusing
on the three major processing steps of Recognition, Translation and Synthesis.
We tackle the problem of machine understanding of natural language by designing
a recognition unit for source audio to text, a translation unit for source
language text to target language text, and a synthesis unit for target language
text to target language speech. Speech from the surroundings will be recorded
by the recognition unit present on the ear-piece and translation will start as
soon as one sentence is successfully read. This way, we hope to give translated
output as and when the input is being read. The recognition unit will use the Hidden Markov Model Tool-Kit (HTK) and hybrid RNN systems with gated memory cells, and the synthesis unit will use the HMM-based speech synthesis system HTS. This
system will initially be built as an English to Tamil translation device.
| 2016 | Computation and Language |
A Neural Network for Coordination Boundary Prediction | We propose a neural-network based model for coordination boundary prediction.
The network is designed to incorporate two signals: the similarity between
conjuncts and the observation that replacing the whole coordination phrase with
a conjunct tends to produce a coherent sentence. The model makes use of
several LSTM networks. The model is trained solely on conjunction annotations
in a Treebank, without using external resources. We show improvements on
predicting coordination boundaries on the PTB compared to two state-of-the-art
parsers, as well as an improvement over previous coordination boundary prediction
systems on the Genia corpus.
| 2016 | Computation and Language |
Compressing Neural Language Models by Sparse Word Representations | Neural networks are among the state-of-the-art techniques for language
modeling. Existing neural language models typically map discrete words to
distributed, dense vector representations. After information processing of the
preceding context words by hidden layers, an output layer estimates the
probability of the next word. Such approaches are time- and memory-intensive
because of the large numbers of parameters for word embeddings and the output
layer. In this paper, we propose to compress neural language models by sparse
word representations. In the experiments, the number of parameters in our model
increases almost imperceptibly with the growth of the vocabulary size. Moreover, our approach not only reduces the parameter space to a
large extent, but also improves the performance in terms of the perplexity
measure.
| 2016 | Computation and Language |
Dialogue Session Segmentation by Embedding-Enhanced TextTiling | In human-computer conversation systems, the context of a user-issued
utterance is particularly important because it provides useful background
information of the conversation. However, it is unwise to track all previous
utterances in the current session as not all of them are equally important. In
this paper, we address the problem of session segmentation. We propose an
embedding-enhanced TextTiling approach, inspired by the observation that
conversation utterances are highly noisy, and that word embeddings provide a
robust way of capturing semantics. Experimental results show that our approach
achieves better performance than the TextTiling and MMD approaches.
| 2016 | Computation and Language |
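A hedged sketch of embedding-enhanced TextTiling's core step: compare mean utterance embeddings on either side of each candidate gap and place a boundary where cosine similarity drops; the window size and threshold are illustrative assumptions (TextTiling proper uses depth scores):

```python
import numpy as np

def segment_session(utterance_vecs, window=3, thresh=0.3):
    """Place a session boundary at gaps where the mean embeddings of the
    `window` utterances before and after are dissimilar."""
    V = np.asarray(utterance_vecs)
    boundaries = []
    for i in range(window, len(V) - window + 1):
        left = V[i - window:i].mean(axis=0)
        right = V[i:i + window].mean(axis=0)
        cos = left @ right / (np.linalg.norm(left) * np.linalg.norm(right) + 1e-9)
        if cos < thresh:
            boundaries.append(i)   # low similarity suggests a topic shift
    return boundaries
```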
Gated End-to-End Memory Networks | Machine reading using differentiable reasoning models has recently shown
remarkable progress. In this context, End-to-End trainable Memory Networks,
MemN2N, have demonstrated promising performance on simple natural language
based reasoning tasks such as factual reasoning and basic deduction. However,
other tasks, namely multi-fact question-answering, positional reasoning or
dialog related tasks, remain challenging particularly due to the necessity of
more complex interactions between the memory and controller modules composing
this family of models. In this paper, we introduce a novel end-to-end memory
access regulation mechanism inspired by the current progress on the connection
short-cutting principle in the field of computer vision. Concretely, we develop
a Gated End-to-End trainable Memory Network architecture, GMemN2N. From the
machine learning perspective, this new capability is learned in an end-to-end
fashion without the use of any additional supervision signal which is, as far
as our knowledge goes, the first of its kind. Our experiments show significant
improvements on the most challenging tasks in the 20 bAbI dataset, without the
use of any domain knowledge. Then, we show improvements on the dialog bAbI
tasks, including the real human-bot conversation-based Dialog State Tracking
Challenge (DSTC-2) dataset. On these two datasets, our model sets the new state
of the art.
| 2016 | Computation and Language |
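A hedged PyTorch sketch of the regulated memory access idea: a learned gate mixes the memory readout with the previous controller state, in the spirit of the highway-style short-cutting the abstract cites; the exact parameterization is an assumption:

```python
import torch
import torch.nn as nn

class GatedMemoryHop(nn.Module):
    """One gated memory hop: a per-dimension gate decides how much of the
    memory readout `o` versus the previous controller state `u` to keep."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, u, o):             # u: controller state, o: memory readout
        t = torch.sigmoid(self.gate(u))  # gate T(u) in (0, 1)
        return t * o + (1.0 - t) * u     # shortcut-style regulated update
```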
Fast, Scalable Phrase-Based SMT Decoding | The utilization of statistical machine translation (SMT) has grown enormously
over the last decade, with many users relying on open-source software developed by the NLP community. As commercial use has increased, there is a need for software that is
optimized for commercial requirements, in particular, fast phrase-based
decoding and more efficient utilization of modern multicore servers.
In this paper we re-examine the major components of phrase-based decoding and
decoder implementation with particular emphasis on speed and scalability on
multicore machines. The result is a drop-in replacement for the Moses decoder
which is up to fifteen times faster and scales monotonically with the number of
cores.
| 2016 | Computation and Language |
A Language-independent and Compositional Model for Personality Trait
Recognition from Short Texts | Many methods have been used to recognize author personality traits from text,
typically combining linguistic feature engineering with shallow learning
models, e.g. linear regression or Support Vector Machines. This work uses
deep-learning-based models and atomic features of text, the characters, to
build hierarchical, vectorial word and sentence representations for trait
inference. This method, applied to a corpus of tweets, shows state-of-the-art
performance across five traits and three languages (English, Spanish and
Italian) compared with prior work in author profiling. The results, supported
by preliminary visualisation work, are encouraging for the ability to detect
complex human traits.
| 2016 | Computation and Language |
Civique: Using Social Media to Detect Urban Emergencies | We present the Civique system for emergency detection in urban areas by
monitoring microblogs like tweets. The system detects emergency-related
events, and classifies them into appropriate categories like "fire",
"accident", "earthquake", etc. We demonstrate our ideas by classifying Twitter
posts in real time, visualizing the ongoing event on a map interface and
alerting users with options to contact relevant authorities, both online and
offline. We evaluate our classifiers for both the steps, i.e., emergency
detection and categorization, and obtain F-scores exceeding 70% and 90%,
respectively. We demonstrate Civique using a web interface and an Android application, in real time, and show its use for both tweet detection and
visualization.
| 2016 | Computation and Language |
Distributional Inclusion Hypothesis for Tensor-based Composition | According to the distributional inclusion hypothesis, entailment between
words can be measured via the feature inclusions of their distributional
vectors. In recent work, we showed how this hypothesis can be extended from
words to phrases and sentences in the setting of compositional distributional
semantics. This paper focuses on inclusion properties of tensors; its main
contribution is a theoretical and experimental analysis of how feature
inclusion works in different concrete models of verb tensors. We present
results for relational, Frobenius, projective, and holistic methods and compare
them to the simple vector addition, multiplication, min, and max models. The
degrees of entailment thus obtained are evaluated via a variety of existing
word-based measures, such as Weeds' and Clarke's, KL-divergence, APinc,
balAPinc, and two of our previously proposed metrics at the phrase/sentence
level. We perform experiments on three entailment datasets, investigating which
version of tensor-based composition achieves the highest performance when
combined with the sentence-level measures.
| 2016 | Computation and Language |
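As a worked example of feature inclusion at the word level, here is Clarke's degree of entailment as commonly defined in this literature (the tensor-based extensions in the paper build on word-level measures like this one); the toy vectors are illustrative:

```python
import numpy as np

def clarke_de(u, v):
    """Clarke's degree of entailment: the fraction of u's distributional
    weight covered by v. Values near 1 suggest u's features are included
    in v's, i.e. u entails v."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.minimum(u, v).sum() / (u.sum() + 1e-9))

dog = np.array([3.0, 2.0, 0.0, 1.0])      # toy feature weights
animal = np.array([3.0, 4.0, 2.0, 1.0])
print(clarke_de(dog, animal))              # -> 1.0 (full inclusion)
```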
Translation Quality Estimation using Recurrent Neural Network | This paper describes our submission to the shared task on word/phrase level
Quality Estimation (QE) in the First Conference on Statistical Machine
Translation (WMT16). The objective of the shared task was to predict if the
given word/phrase is a correct/incorrect (OK/BAD) translation in the given
sentence. In this paper, we propose a novel approach for word level Quality
Estimation using Recurrent Neural Network Language Model (RNN-LM) architecture.
RNN-LMs have been found very effective in different Natural Language Processing
(NLP) applications. RNN-LM is mainly used for vector space language modeling
for different NLP problems. For this task, we modify the architecture of
RNN-LM. The modified system predicts a label (OK/BAD) in the slot rather than
predicting the word. The input to the system is a word sequence, similar to the
standard RNN-LM. The approach is language independent and requires only the
translated text for QE. To estimate the phrase level quality, we use the output
of the word level QE system.
| 2016 | Computation and Language |
Cached Long Short-Term Memory Neural Networks for Document-Level
Sentiment Classification | Recently, neural networks have achieved great success on sentiment
classification due to their ability to alleviate feature engineering. However,
one of the remaining challenges is to model long texts in document-level
sentiment classification under a recurrent architecture because of the
deficiency of the memory unit. To address this problem, we present Cached
Long Short-Term Memory neural networks (CLSTM) to capture the overall semantic
information in long texts. CLSTM introduces a cache mechanism, which divides
memory into several groups with different forgetting rates and thus enables the
network to keep sentiment information better within a recurrent unit. The
proposed CLSTM outperforms the state-of-the-art models on three publicly
available document-level sentiment analysis datasets.
| 2016 | Computation and Language |
Interactive Attention for Neural Machine Translation | Conventional attention-based Neural Machine Translation (NMT) conducts
dynamic alignment in generating the target sentence. By repeatedly reading the
representation of source sentence, which keeps fixed after generated by the
encoder (Bahdanau et al., 2015), the attention mechanism has greatly enhanced
state-of-the-art NMT. In this paper, we propose a new attention mechanism,
called INTERACTIVE ATTENTION, which models the interaction between the decoder
and the representation of source sentence during translation by both reading
and writing operations. INTERACTIVE ATTENTION can keep track of the interaction
history and therefore improve the translation performance. Experiments on NIST
Chinese-English translation task show that INTERACTIVE ATTENTION can achieve
significant improvements over both the previous attention-based NMT baseline
and some state-of-the-art variants of attention-based NMT (i.e., coverage
models (Tu et al., 2016)). A neural machine translator with our INTERACTIVE ATTENTION outperforms the open-source attention-based NMT system Groundhog by 4.22 BLEU points and the open-source phrase-based system Moses by 3.94 BLEU points on average over multiple test sets.
| 2016 | Computation and Language |
Neural Machine Translation Advised by Statistical Machine Translation | Neural Machine Translation (NMT) is a new approach to machine translation
that has made great progress in recent years. However, recent studies show that
NMT generally produces fluent but inadequate translations (Tu et al. 2016b; Tu
et al. 2016a; He et al. 2016; Tu et al. 2017). This is in contrast to
conventional Statistical Machine Translation (SMT), which usually yields
adequate but non-fluent translations. It is natural, therefore, to leverage the
advantages of both models for better translations, and in this work we propose
to incorporate an SMT model into the NMT framework. More specifically, at each
decoding step, SMT offers additional recommendations of generated words based
on the decoding information from NMT (e.g., the generated partial translation
and attention history). Then we employ an auxiliary classifier to score the SMT
recommendations and a gating function to combine the SMT recommendations with
NMT generations, both of which are jointly trained within the NMT architecture
in an end-to-end manner. Experimental results on Chinese-English translation
show that the proposed approach achieves significant and consistent
improvements over state-of-the-art NMT and SMT systems on multiple NIST test
sets.
| 2017 | Computation and Language |
Pre-Translation for Neural Machine Translation | Recently, the development of neural machine translation (NMT) has
significantly improved the translation quality of automatic machine
translation. While most sentences are more accurate and fluent than
translations by statistical machine translation (SMT)-based systems, in some
cases, the NMT system produces translations that have a completely different
meaning. This is especially the case when rare words occur.
When using statistical machine translation, it has already been shown that
significant gains can be achieved by simplifying the input in a preprocessing
step. A commonly used example is the pre-reordering approach.
In this work, we used phrase-based machine translation to pre-translate the
input into the target language. Then a neural machine translation system
generates the final hypothesis using the pre-translation. Thereby, we use
either only the output of the phrase-based machine translation (PBMT) system or
a combination of the PBMT output and the source sentence.
We evaluate the technique on the English to German translation task. Using
this approach we are able to outperform the PBMT system as well as the baseline
neural MT system by up to 2 BLEU points. We analyzed the influence of the
quality of the initial system on the final result.
| 2016 | Computation and Language |
Achieving Human Parity in Conversational Speech Recognition | Conversational speech recognition has served as a flagship speech recognition
task since the release of the Switchboard corpus in the 1990s. In this paper,
we measure the human error rate on the widely used NIST 2000 test set, and find
that our latest automated system has reached human parity. The error rate of
professional transcribers is 5.9% for the Switchboard portion of the data, in
which newly acquainted pairs of people discuss an assigned topic, and 11.3% for
the CallHome portion where friends and family members have open-ended
conversations. In both cases, our automated system establishes a new state of
the art, and edges past the human benchmark, achieving error rates of 5.8% and
11.0%, respectively. The key to our system's performance is the use of various
convolutional and LSTM acoustic model architectures, combined with a novel
spatial smoothing method and lattice-free MMI acoustic training, multiple
recurrent neural network language modeling approaches, and a systematic use of
system combination.
| 2018 | Computation and Language |
End-to-end attention-based distant speech recognition with Highway LSTM | End-to-end attention-based models have been shown to be competitive
alternatives to conventional DNN-HMM models in speech recognition systems. In this paper, we extend existing end-to-end attention-based models to the Distant Speech Recognition (DSR) task. Specifically, we propose an
end-to-end attention-based speech recognizer with multichannel input that
performs sequence prediction directly at the character level. To gain a better
performance, we also incorporate Highway long short-term memory (HLSTM), which outperforms previous models on the AMI distant speech recognition task.
| 2016 | Computation and Language |
Personalized Machine Translation: Preserving Original Author Traits | The language that we produce reflects our personality, and various personal
and demographic characteristics can be detected in natural language texts. We
focus on one particular personal trait of the author, gender, and study how it
is manifested in original texts and in translations. We show that an author's gender has a powerful, clear signal in original texts, but this signal is
obfuscated in human and machine translation. We then propose simple
domain-adaptation techniques that help retain the original gender traits in the
translation, without harming the quality of the translation, thereby creating
more personalized machine translation systems.
| 2017 | Computation and Language |
Addressing Community Question Answering in English and Arabic | This paper studies the impact of different types of features applied to
learning to re-rank questions in community Question Answering. We tested our
models on two datasets released in SemEval-2016 Task 3 on "Community Question
Answering". Task 3 targeted real-life Web fora both in English and Arabic. Our
models include bag-of-words features (BoW), syntactic tree kernels (TKs), rank
features, embeddings, and machine translation evaluation features. To the best
of our knowledge, structural kernels have barely been applied to the question
reranking task, where they have to model paraphrase relations. In the case of
the English question re-ranking task, we compare our learning to rank (L2R)
algorithms against a strong baseline given by the Google-generated ranking
(GR). The results show that i) the shallow structures used in our TKs are
robust to noisy data, and ii) improving GR is possible, but effective BoW
features and TKs along with an accurate model of GR features in the used L2R
algorithm are required. In the case of the Arabic question re-ranking task, for
the first time we applied tree kernels on syntactic trees of Arabic sentences.
Our approaches to both tasks obtained the second best results on SemEval-2016
subtasks B (English) and D (Arabic).
| 2016 | Computation and Language |
SYSTRAN's Pure Neural Machine Translation Systems | Since the first online demonstration of Neural Machine Translation (NMT) by
LISA, NMT development has recently moved from laboratory to production systems
as demonstrated by several entities announcing roll-out of NMT engines to
replace their existing technologies. NMT systems have a large number of
training configurations and the training process of such systems is usually
very long, often a few weeks, so the role of experimentation is critical and
important to share. In this work, we present our approach to production-ready
systems simultaneously with release of online demonstrators covering a large
variety of languages (12 languages, for 32 language pairs). We explore
different practical choices: an efficient and evolutive open-source framework;
data preparation; network architecture; additional implemented features; tuning
for production; etc. We discuss evaluation methodology, present our first findings, and finally outline further work.
Our ultimate goal is to share our expertise to build competitive production
systems for "generic" translation. We aim at contributing to set up a
collaborative framework to speed-up adoption of the technology, foster further
research efforts and enable the delivery and adoption to/by industry of
use-case specific engines integrated in real production workflows. Mastering
the technology would allow us to build translation engines suited for
particular needs, outperforming current simplest/uniform systems.
| 2016 | Computation and Language |
Vietnamese Named Entity Recognition using Token Regular Expressions and
Bidirectional Inference | This paper describes an efficient approach to improve the accuracy of a named
entity recognition system for Vietnamese. The approach combines regular
expressions over tokens and a bidirectional inference method in a sequence
labelling model. The proposed method achieves an overall $F_1$ score of 89.66%
on a test set of an evaluation campaign, organized in late 2016 by the
Vietnamese Language and Speech Processing (VLSP) community.
| 2016 | Computation and Language |
Stylometric Analysis of Early Modern Period English Plays | Function word adjacency networks (WANs) are used to study the authorship of
plays from the Early Modern English period. In these networks, nodes are
function words and directed edges between two nodes represent the relative
frequency of directed co-appearance of the two words. For every analyzed play,
a WAN is constructed and these are aggregated to generate author profile
networks. We first study the similarity of writing styles between Early English
playwrights by comparing the profile WANs. The accuracy of using WANs for
authorship attribution is then demonstrated by attributing known plays among
six popular playwrights. Moreover, the WAN method is shown to outperform other
frequency-based methods on attributing Early English plays. In addition, WANs
are shown to be reliable classifiers even when attributing collaborative plays.
For several plays of disputed co-authorship, a deeper analysis is performed by
attributing every act and scene separately, in which we both corroborate
existing breakdowns and provide evidence of new assignments.
| 2017 | Computation and Language |
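A hedged sketch of building one function word adjacency network: directed edges weight how often one function word is followed by another within a short window, normalized per source word; the toy function-word set and window size are assumptions:

```python
from collections import defaultdict

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "with"}  # toy set

def build_wan(tokens, window=5):
    """Return directed edge weights {(w1, w2): relative frequency} counting
    how often function word w2 follows w1 within `window` tokens."""
    counts = defaultdict(float)
    positions = [(i, t) for i, t in enumerate(tokens) if t in FUNCTION_WORDS]
    for a, (i, w1) in enumerate(positions):
        for j, w2 in positions[a + 1:]:
            if j - i > window:
                break                     # positions are sorted; stop early
            counts[(w1, w2)] += 1.0
    totals = defaultdict(float)
    for (w1, _), c in counts.items():
        totals[w1] += c
    return {e: c / totals[e[0]] for e, c in counts.items()}
```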
Low-rank and Sparse Soft Targets to Learn Better DNN Acoustic Models | Conventional deep neural networks (DNN) for speech acoustic modeling rely on
Gaussian mixture models (GMM) and hidden Markov model (HMM) to obtain binary
class labels as the targets for DNN training. Subword classes in speech
recognition systems correspond to context-dependent tied states or senones. The
present work addresses some limitations of GMM-HMM senone alignments for DNN
training. We hypothesize that the senone probabilities obtained from a DNN
trained with binary labels can provide more accurate targets to learn better
acoustic models. However, DNN outputs bear inaccuracies which are exhibited as
high dimensional unstructured noise, whereas the informative components are
structured and low-dimensional. We exploit principal component analysis (PCA)
and sparse coding to characterize the senone subspaces. Enhanced probabilities
obtained from low-rank and sparse reconstructions are used as soft targets for
DNN acoustic modeling, which also enables training with untranscribed data.
Experiments conducted on the AMI corpus show a 4.6% relative reduction in word
error rate.
| 2,017 | Computation and Language |
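As a rough illustration of the low-rank half of this idea (the sparse-coding
component is omitted), the sketch below reconstructs a senone posterior matrix
from its top principal components; the matrix sizes and the rank are
illustrative assumptions.

```python
# Low-rank "soft target" sketch: project DNN posteriors onto the top
# principal components and renormalize the reconstruction.
import numpy as np
from sklearn.decomposition import PCA

posteriors = np.random.rand(1000, 400)      # frames x senones, stand-in for DNN outputs
posteriors /= posteriors.sum(axis=1, keepdims=True)

pca = PCA(n_components=50)                  # keep the structured, low-dimensional part
low_rank = pca.inverse_transform(pca.fit_transform(posteriors))
low_rank = np.clip(low_rank, 0.0, None)     # posteriors must stay non-negative
soft_targets = low_rank / low_rank.sum(axis=1, keepdims=True)
```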
Small-footprint Highway Deep Neural Networks for Speech Recognition | State-of-the-art speech recognition systems typically employ neural network
acoustic models. However, compared to Gaussian mixture models, deep neural
network (DNN) based acoustic models often have many more model parameters,
making it challenging for them to be deployed on resource-constrained
platforms, such as mobile devices. In this paper, we study the application of
the recently proposed highway deep neural network (HDNN) for training
small-footprint acoustic models. HDNNs are depth-gated feedforward neural
networks that include two types of gate functions to facilitate information
flow through different layers. Our study demonstrates that HDNNs
are more compact than regular DNNs for acoustic modeling, i.e., they can
achieve comparable recognition accuracy with many fewer model parameters.
Furthermore, HDNNs are more controllable than DNNs: the gate functions of an
HDNN can control the behavior of the whole network using a very small number of
model parameters. Finally, we show that HDNNs are more adaptable than DNNs. For
example, simply updating the gate functions using adaptation data can result in
considerable gains in accuracy. We demonstrate these aspects by experiments
using the publicly available AMI corpus, which has around 80 hours of training
data.
| 2,017 | Computation and Language |
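The gating mechanism at the heart of a highway layer can be written in a few
lines of NumPy; tying the carry gate to (1 - transform gate) is the common
highway simplification and may differ from the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    h = np.tanh(x @ W_h + b_h)     # candidate transformation
    t = sigmoid(x @ W_t + b_t)     # transform gate in (0, 1)
    return t * h + (1.0 - t) * x   # carry gate tied as (1 - t)
```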
Bidirectional LSTM-CRF for Clinical Concept Extraction | Extraction of concepts present in patient clinical records is an essential
step in clinical research. The 2010 i2b2/VA Workshop on Natural Language
Processing Challenges for clinical records presented a concept extraction (CE)
task, with the aim of identifying concepts (such as treatments, tests,
problems) and classifying them into predefined categories. State-of-the-art CE
approaches rely heavily on hand-crafted features and domain-specific
resources, which are hard to collect and tune. For this reason, this paper
employs a bidirectional LSTM with CRF decoding, initialized with
general-purpose off-the-shelf word embeddings, for CE. The experimental
results achieved on the 2010 i2b2/VA reference standard corpora rank closely
with those of the top-ranked systems.
| 2,016 | Computation and Language |
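As a minimal illustration of the CRF decoding step on top of BiLSTM emission
scores, the sketch below implements Viterbi decoding; the emission and
transition matrices are stand-ins for quantities a real tagger would learn.

```python
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, K) token/tag scores; transitions: (K, K) tag -> tag scores."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[i, j] = best score ending in tag i, then moving to tag j
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]                 # most likely tag sequence
```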
Chinese Restaurant Process for cognate clustering: A threshold free
approach | In this paper, we introduce a threshold-free approach, motivated by the
Chinese Restaurant Process, for cognate clustering. We show that our approach
yields similar results to a linguistically motivated cognate clustering system
known as LexStat. Our Chinese Restaurant Process system is fast, requires no
threshold, and can be applied to any language family of the world.
| 2,016 | Computation and Language |
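A minimal sketch of a CRP-style assignment loop for cognate clustering is
given below; the similarity-weighted seating rule and the concentration
parameter alpha are illustrative assumptions rather than the paper's exact
model.

```python
import random

def crp_cluster(words, similarity, alpha=1.0):
    clusters = []                          # each cluster is a list of words
    for w in words:
        # weight of joining a cluster grows with similarity to its members
        weights = [sum(similarity(w, m) for m in c) for c in clusters]
        weights.append(alpha)              # weight of opening a new cluster
        r, acc = random.random() * sum(weights), 0.0
        choice = len(clusters)
        for i, wt in enumerate(weights):
            acc += wt
            if r <= acc:
                choice = i
                break
        if choice == len(clusters):
            clusters.append([w])
        else:
            clusters[choice].append(w)
    return clusters
```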
A Theme-Rewriting Approach for Generating Algebra Word Problems | Texts present coherent stories that have a particular theme or overall
setting, for example science fiction or western. In this paper, we present a
text generation method called {\it rewriting} that edits existing
human-authored narratives to change their theme without changing the underlying
story. We apply the approach to math word problems, where it might help
students stay more engaged by quickly transforming all of their homework
assignments to the theme of their favorite movie without changing the math
concepts that are being taught. Our rewriting method uses a two-stage decoding
process, which proposes new words from the target theme and scores the
resulting stories according to a number of factors defining aspects of
syntactic, semantic, and thematic coherence. Experiments demonstrate that the
final stories typically represent the new theme well while still testing the
original math concepts, outperforming a number of baselines. We also release a
new dataset of human-authored rewrites of math word problems in several themes.
| 2,016 | Computation and Language |
Cross-Lingual Syntactic Transfer with Limited Resources | We describe a simple but effective method for cross-lingual syntactic
transfer of dependency parsers, in the scenario where a large amount of
translation data is not available. The method makes use of three steps: 1) a
method for deriving cross-lingual word clusters, which can then be used in a
multilingual parser; 2) a method for transferring lexical information from a
target language to source language treebanks; 3) a method for integrating these
steps with the density-driven annotation projection method of Rasooli and
Collins (2015). Experiments show improvements over the state-of-the-art in
several languages used in previous work, in a setting where the only source of
translation data is the Bible, a considerably smaller corpus than the Europarl
corpus used in previous work. Results using the Europarl corpus as a source of
translation data show additional improvements over the results of Rasooli and
Collins (2015). We conclude with results on 38 datasets from the Universal
Dependencies corpora.
| 2,017 | Computation and Language |
Lexicon Integrated CNN Models with Attention for Sentiment Analysis | With the advent of word embeddings, lexicons are no longer fully utilized for
sentiment analysis although they still provide important features in the
traditional setting. This paper introduces a novel approach to sentiment
analysis that integrates lexicon embeddings and an attention mechanism into
Convolutional Neural Networks. Our approach performs separate convolutions for
word and lexicon embeddings and provides a global view of the document using
attention. We evaluate our models on both the SemEval'16 Task 4 dataset and
the Stanford Sentiment Treebank, and they show comparable or better results
against the existing state-of-the-art systems. Our analysis shows that lexicon
embeddings allow us to build high-performing models with much smaller word
embeddings, and that the attention mechanism effectively dims out noisy words for
sentiment analysis.
| 2,017 | Computation and Language |
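The following PyTorch sketch illustrates the overall architecture described
above, with separate convolutions over word and lexicon embeddings followed by
attention pooling; all dimensions and the pooling scheme are illustrative
assumptions.

```python
import torch
import torch.nn as nn

class LexiconCNN(nn.Module):
    def __init__(self, word_dim=300, lex_dim=10, n_filters=100, n_classes=3):
        super().__init__()
        self.conv_word = nn.Conv1d(word_dim, n_filters, kernel_size=3, padding=1)
        self.conv_lex = nn.Conv1d(lex_dim, n_filters, kernel_size=3, padding=1)
        self.attn = nn.Linear(2 * n_filters, 1)   # scores each token position
        self.out = nn.Linear(2 * n_filters, n_classes)

    def forward(self, words, lex):                # (B, T, word_dim), (B, T, lex_dim)
        w = torch.relu(self.conv_word(words.transpose(1, 2)))   # (B, F, T)
        l = torch.relu(self.conv_lex(lex.transpose(1, 2)))      # (B, F, T)
        h = torch.cat([w, l], dim=1).transpose(1, 2)            # (B, T, 2F)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)      # (B, T)
        doc = (a.unsqueeze(-1) * h).sum(dim=1)                  # attention pooling
        return self.out(doc)
```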
Clinical Text Prediction with Numerically Grounded Conditional Language
Models | Assisted text input techniques can save time and effort and improve text
quality. In this paper, we investigate how grounded and conditional extensions
to standard neural language models can bring improvements in the tasks of word
prediction and completion. These extensions incorporate a structured knowledge
base and numerical values from the text into the context used to predict the
next word. Our automated evaluation on a clinical dataset shows extended models
significantly outperform standard models. Our best system uses both
conditioning and grounding, because of their orthogonal benefits. For word
prediction with a list of 5 suggestions, it improves recall from 25.03% to
71.28% and for word completion it improves keystroke savings from 34.35% to
44.81%, where the theoretical bound for this dataset is 58.78%. We also perform a
qualitative investigation of how models with lower perplexity occasionally fare
better at the tasks. We found that at test time numbers have more influence on
the document level than on individual word probabilities.
| 2,016 | Computation and Language |
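For concreteness, the sketch below shows one common way to compute a
keystroke-savings metric for word completion; the acceptance rule (the user
takes the suggestion as soon as it matches the intended word) is an
assumption, and `completion` is a hypothetical prefix-to-word predictor.

```python
def keystroke_savings(words, completion):
    """Percentage of characters the user avoids typing."""
    saved = total = 0
    for word in words:
        total += len(word)
        for typed in range(len(word) + 1):
            if completion(word[:typed]) == word:
                saved += len(word) - typed   # remaining characters are free
                break
    return 100.0 * saved / total
```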
Reasoning with Memory Augmented Neural Networks for Language
Comprehension | Hypothesis testing is an important cognitive process that supports human
reasoning. In this paper, we introduce a computational hypothesis testing
approach based on memory augmented neural networks. Our approach involves a
hypothesis testing loop that reconsiders and progressively refines a previously
formed hypothesis in order to generate new hypotheses to test. We apply the
proposed approach to the language comprehension task using Neural Semantic
Encoders (NSE). Our NSE models achieve the state-of-the-art results showing an
absolute improvement of 1.2% to 2.6% accuracy over previous results obtained by
single and ensemble systems on standard machine comprehension benchmarks such
as the Children's Book Test (CBT) and Who-Did-What (WDW) news article datasets.
| 2,017 | Computation and Language |
Authorship Attribution Based on Life-Like Network Automata | The authorship attribution is a problem of considerable practical and
technical interest. Several methods have been designed to infer the authorship
of disputed documents in multiple contexts. While traditional statistical
methods based solely on word counts and related measurements have provided a
simple yet effective solution in particular cases, they are prone to
manipulation. Recently, texts have been successfully modeled as networks, where
words are represented by nodes linked according to textual similarity
measurements. Such models are useful to identify informative topological
patterns for the authorship recognition task. However, there is no consensus on
which measurements should be used. Thus, we proposed a novel method to
characterize text networks, by considering both topological and dynamical
aspects of networks. Using concepts and methods from cellular automata theory,
we devised a strategy to grasp informative spatio-temporal patterns from this
model. Our experiments revealed that this approach outperforms traditional
analysis relying only on topological measurements. Remarkably, we found that
the obtained results depend on pre-processing steps (such as lemmatization), a
factor that has mostly been disregarded in related works. The
optimized results obtained here pave the way for a better characterization of
textual networks.
| 2,018 | Computation and Language |
Learning variable length units for SMT between related languages via
Byte Pair Encoding | We explore the use of segments learnt using Byte Pair Encoding (referred to
as BPE units) as basic units for statistical machine translation between
related languages and compare it with orthographic syllables, which are
currently the best performing basic units for this translation task. BPE
identifies the most frequent character sequences as basic units, while
orthographic syllables are linguistically motivated pseudo-syllables. We show
that BPE units modestly outperform orthographic syllables as units of
translation, with up to an 11% increase in BLEU score. While orthographic
syllables can be used only for languages whose writing systems use vowel
representations, BPE is writing system independent and we show that BPE
outperforms other units for non-vowel writing systems too. Our results are
supported by extensive experimentation spanning multiple language families and
writing systems.
| 2,017 | Computation and Language |
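The BPE unit learning referred to above can be summarized in a few lines; this
is a simplified sketch in the spirit of Sennrich et al.'s algorithm, with a
toy vocabulary.

```python
from collections import Counter

def merge_word(symbols, pair):
    """Replace every adjacent occurrence of `pair` with its concatenation."""
    out, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def learn_bpe(vocab, num_merges):
    """vocab: {tuple of symbols: frequency}. Returns the learned merge list."""
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        vocab = {tuple(merge_word(list(s), best)): f for s, f in vocab.items()}
    return merges

toy = {("l", "o", "w", "</w>"): 5, ("l", "o", "w", "e", "r", "</w>"): 2,
       ("n", "e", "w", "e", "s", "t", "</w>"): 6}
print(learn_bpe(toy, 5))
```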
Jointly Learning to Align and Convert Graphemes to Phonemes with Neural
Attention Models | We propose an attention-enabled encoder-decoder model for the problem of
grapheme-to-phoneme conversion. Most previous work has tackled the problem via
joint sequence models that require explicit alignments for training. In
contrast, the attention-enabled encoder-decoder model allows for jointly
learning to align and convert characters to phonemes. We explore different
types of attention models, including global and local attention, and our best
models achieve state-of-the-art results on three standard data sets (CMUDict,
Pronlex, and NetTalk).
| 2,016 | Computation and Language |
Lexicons and Minimum Risk Training for Neural Machine Translation:
NAIST-CMU at WAT2016 | This year, the Nara Institute of Science and Technology (NAIST)/Carnegie
Mellon University (CMU) submission to the Japanese-English translation track of
the 2016 Workshop on Asian Translation was based on attentional neural machine
translation (NMT) models. In addition to the standard NMT model, we make a
number of improvements, most notably the use of discrete translation lexicons
to improve probability estimates, and the use of minimum risk training to
optimize the MT system for BLEU score. As a result, our system achieved the
highest translation evaluation scores for the task.
| 2,016 | Computation and Language |
Neural Machine Translation with Characters and Hierarchical Encoding | Most existing Neural Machine Translation models use groups of characters or
whole words as their unit of input and output. We propose a model with a
hierarchical char2word encoder that takes individual characters both as input
and output. We first argue that this hierarchical representation of the
character encoder reduces computational complexity, and show that it improves
translation performance. Secondly, by qualitatively studying attention plots
from the decoder we find that the model learns to compress common words into a
single embedding whereas rare words, such as names and places, are represented
character by character.
| 2,016 | Computation and Language |
An Approach to Speed-up the Word Sense Disambiguation Procedure through
Sense Filtering | In this paper, we focus on speeding up the Word Sense
Disambiguation procedure by filtering the relevant senses of an ambiguous word
through Part-of-Speech Tagging. First, the proposed approach performs
Part-of-Speech Tagging before the disambiguation procedure using a Bigram
approximation. As a result, the exact Part-of-Speech of the ambiguous word at
a particular text instance is derived. In the next stage, only those
dictionary definitions (glosses) that are associated with that particular
Part-of-Speech are retrieved from an online dictionary to disambiguate the
exact sense of the ambiguous word. In the training phase, we have used the
Brown Corpus for Part-of-Speech Tagging and WordNet as the online dictionary.
The proposed approach reduces the execution time to approximately half of the
normal execution time for a text containing around 200 sentences. Moreover, we
have found several instances where the correct sense of an ambiguous word is
found only because Part-of-Speech Tagging is applied before the disambiguation
procedure.
| 2,016 | Computation and Language |
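The sense-filtering step is easy to reproduce with NLTK's WordNet interface,
as the sketch below shows; it assumes the WordNet data has been downloaded
(nltk.download('wordnet')), and the tag mapping is the usual
Penn-Treebank-to-WordNet convention rather than the paper's exact setup.

```python
from nltk.corpus import wordnet as wn

def filtered_senses(word, treebank_tag):
    """Keep only the senses whose part of speech matches the tagged POS."""
    pos_map = {"N": wn.NOUN, "V": wn.VERB, "J": wn.ADJ, "R": wn.ADV}
    wn_pos = pos_map.get(treebank_tag[0])
    if wn_pos is None:
        return wn.synsets(word)            # fall back to all senses
    return wn.synsets(word, pos=wn_pos)

# e.g., "bank" tagged as a noun keeps only noun glosses for disambiguation
print([s.definition() for s in filtered_senses("bank", "NN")])
```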
Iterative Refinement for Machine Translation | Existing machine translation decoding algorithms generate translations in a
strictly monotonic fashion and never revisit previous decisions. As a result,
earlier mistakes cannot be corrected at a later stage. In this paper, we
present a translation scheme that starts from an initial guess and then makes
iterative improvements that may revisit previous decisions. We parameterize our
model as a convolutional neural network that predicts discrete substitutions to
an existing translation based on an attention mechanism over both the source
sentence as well as the current translation output. By making fewer than one
modification per sentence, we improve the output of a phrase-based translation
system by up to 0.4 BLEU on WMT15 German-English translation.
| 2,018 | Computation and Language |
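A minimal sketch of such a refinement loop is shown below; `score` and
`candidate_substitutions` are hypothetical stand-ins for the paper's
attention-based convolutional model, which predicts discrete substitutions
directly.

```python
def refine(source, translation, score, candidate_substitutions, max_iters=5):
    """Greedily apply single-word substitutions while they improve the score."""
    for _ in range(max_iters):
        best, best_score = None, score(source, translation)
        for position, word in candidate_substitutions(source, translation):
            cand = translation[:position] + [word] + translation[position + 1:]
            s = score(source, cand)
            if s > best_score:
                best, best_score = cand, s
        if best is None:                   # no substitution improves the output
            break
        translation = best
    return translation
```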
Proposing Plausible Answers for Open-ended Visual Question Answering | Answering open-ended questions is an essential capability for any intelligent
agent. One of the most interesting recent open-ended question answering
challenges is Visual Question Answering (VQA) which attempts to evaluate a
system's visual understanding through its answers to natural language questions
about images. There exist many approaches to VQA, the majority of which do not
exhibit deeper semantic understanding of the candidate answers they produce. We
study the importance of generating plausible answers to a given question by
introducing the novel task of `Answer Proposal': for a given open-ended
question, a system should generate a ranked list of candidate answers informed
by the semantics of the question. We experiment with various models including a
neural generative model as well as a semantic graph matching one. We provide
both intrinsic and extrinsic evaluations for the task of Answer Proposal,
showing that our best model learns to propose plausible answers with a high
recall and performs competitively with some other solutions to VQA.
| 2,016 | Computation and Language |
End-to-End Training Approaches for Discriminative Segmental Models | Recent work on discriminative segmental models has shown that they can
achieve competitive speech recognition performance, using features based on
deep neural frame classifiers. However, segmental models can be more
challenging to train than standard frame-based approaches. While some segmental
models have been successfully trained end to end, there is a lack of
understanding of their training under different settings and with different
losses.
We investigate a model class based on recent successful approaches,
consisting of a linear model that combines segmental features based on an LSTM
frame classifier. Similarly to hybrid HMM-neural network models, segmental
models of this class can be trained in two stages (frame classifier training
followed by linear segmental model weight training), end to end (joint training
of both frame classifier and linear weights), or with end-to-end fine-tuning
after two-stage training.
We study segmental models trained end to end with hinge loss, log loss,
latent hinge loss, and marginal log loss. We consider several losses for the
case where training alignments are available as well as where they are not.
We find that in general, marginal log loss provides the most consistent
strong performance without requiring ground-truth alignments. We also find that
training with dropout is very important in obtaining good performance with
end-to-end training. Finally, the best results are typically obtained by a
combination of two-stage training and fine-tuning.
| 2,016 | Computation and Language |
Automatic Identification of Sarcasm Target: An Introductory Approach | Past work in computational sarcasm deals primarily with sarcasm detection. In
this paper, we introduce a novel, related problem: sarcasm target
identification (i.e., extracting the target of ridicule in a sarcastic
sentence). We present an introductory approach for sarcasm target
identification. Our approach employs two types of extractors: one based on
rules, and another consisting of a statistical classifier. To compare our
approach, we use two baselines: a na\"ive baseline and another baseline based
on work in sentiment target identification. We perform our experiments on book
snippets and tweets, and show that our hybrid approach performs better than
the two baselines and than either extractor used individually. Our
introductory approach establishes the viability of sarcasm
target identification, and will serve as a baseline for future work.
| 2,017 | Computation and Language |
Two are Better than One: An Ensemble of Retrieval- and Generation-Based
Dialog Systems | Open-domain human-computer conversation has attracted much attention in the
field of NLP. Contrary to rule- or template-based domain-specific dialog
systems, open-domain conversation usually requires data-driven approaches,
which can be roughly divided into two categories: retrieval-based and
generation-based systems. Retrieval systems search a user-issued utterance
(called a query) in a large database, and return a reply that best matches the
query. Generative approaches, typically based on recurrent neural networks
(RNNs), can synthesize new replies, but they suffer from the problem of
generating short, meaningless utterances. In this paper, we propose a novel
ensemble of retrieval-based and generation-based dialog systems in the open
domain. In our approach, the retrieved candidate, in addition to the original
query, is fed to an RNN-based reply generator, so that the neural model is
aware of more information. The generated reply is then fed back as a new
candidate for post-reranking. Experimental results show that such an ensemble
outperforms each of its components by a large margin.
| 2,016 | Computation and Language |
Bridging Neural Machine Translation and Bilingual Dictionaries | Neural Machine Translation (NMT) has become the new state-of-the-art in
several language pairs. However, it remains challenging to integrate NMT with
a bilingual dictionary, which mainly contains words rarely or never seen in
the bilingual training data. In this paper, we propose two methods to bridge
NMT and bilingual dictionaries. The core idea is to design novel models that
transform the bilingual dictionaries into adequate sentence pairs, so that NMT
can distil latent bilingual mappings from the ample and repetitive examples.
One method leverages a mixed word/character model and
the other attempts at synthesizing parallel sentences guaranteeing massive
occurrence of the translation lexicon. Extensive experiments demonstrate that
the proposed methods can remarkably improve the translation quality, and most
of the rare words in the test sentences can obtain correct translations if they
are covered by the dictionary.
| 2,016 | Computation and Language |
Learning Reporting Dynamics during Breaking News for Rumour Detection in
Social Media | Breaking news leads to situations of fast-paced reporting in social media,
producing all kinds of updates related to news stories, albeit with the caveat
that some of those early updates tend to be rumours, i.e., information with an
unverified status at the time of posting. Flagging information that is
unverified can be helpful to avoid the spread of information that may turn out
to be false. Detection of rumours can also feed a rumour tracking system that
ultimately determines their veracity. In this paper we introduce a novel
approach to rumour detection that learns from the sequential dynamics of
reporting during breaking news in social media to detect rumours in new
stories. Using Twitter datasets collected during five breaking news stories, we
experiment with Conditional Random Fields as a sequential classifier that
leverages context learnt during an event for rumour detection, which we compare
with the state-of-the-art rumour detection system as well as other baselines.
In contrast to existing work, our classifier does not need to observe tweets
querying a piece of information to deem it a rumour, but instead we detect
rumours from the tweet alone by exploiting context learnt during the event. Our
classifier achieves competitive performance, beating the state-of-the-art
classifier that relies on querying tweets with improved precision and recall,
as well as outperforming our best baseline with nearly 40% improvement in terms
of F1 score. The scale and diversity of our experiments reinforce the
generalisability of our classifier.
| 2,016 | Computation and Language |
Introduction: Cognitive Issues in Natural Language Processing | This special issue is dedicated to getting a better picture of the relationships
between computational linguistics and cognitive science. It specifically raises
two questions: "what is the potential contribution of computational language
modeling to cognitive science?" and conversely: "what is the influence of
cognitive science in contemporary computational linguistics?"
| 2,014 | Computation and Language |
Statistical Machine Translation for Indian Languages: Mission Hindi | This paper discusses Centre for Development of Advanced Computing Mumbai's
(CDACM) submission to the NLP Tools Contest on Statistical Machine Translation
in Indian Languages (ILSMT) 2014 (collocated with ICON 2014). The objective of
the contest was to explore the effectiveness of Statistical Machine Translation
(SMT) for Indian language to Indian language and English-Hindi machine
translation. In this paper, we show that suffix separation and word
splitting for SMT from agglutinative languages to Hindi significantly improve
over the baseline (BL). We also show that the factored model with
reordering outperforms the phrase-based SMT for English-Hindi (\enhi). We
report our work on all five pairs of languages, namely Bengali-Hindi (\bnhi),
Marathi-Hindi (\mrhi), Tamil-Hindi (\tahi), Telugu-Hindi (\tehi), and \enhi for
Health, Tourism, and General domains.
| 2,016 | Computation and Language |
Reordering rules for English-Hindi SMT | Reordering is a preprocessing stage for Statistical Machine Translation (SMT)
system where the words of the source sentence are reordered as per the syntax
of the target language. We propose a rich set of rules for better reordering.
The idea is to facilitate the training process through better alignments and
parallel phrase extraction for a phrase-based SMT system. Reordering also
helps the decoding process and hence improves machine translation quality.
We have observed significant improvements in the translation quality by using
our approach over the baseline SMT. We have used BLEU, NIST, multi-reference
word error rate, and multi-reference position-independent error rate to judge
the improvements. We have used the open-source SMT toolkit Moses to develop
the system.
| 2,016 | Computation and Language |
Geometry of Polysemy | Vector representations of words have heralded a transformational approach to
classical problems in NLP; the most popular example is word2vec. However, a
single vector does not suffice to model the polysemous nature of many
(frequent) words, i.e., words with multiple meanings. In this paper, we propose
a three-fold approach for unsupervised polysemy modeling: (a) context
representations, (b) sense induction and disambiguation and (c) lexeme (as a
word and sense pair) representations. A key feature of our work is the finding
that a sentence containing a target word is well represented by a low rank
subspace, instead of a point in a vector space. We then show that the subspaces
associated with a particular sense of the target word tend to intersect over a
line (one-dimensional subspace), which we use to disambiguate senses using a
clustering algorithm that harnesses the Grassmannian geometry of the
representations. The disambiguation algorithm, which we call $K$-Grassmeans,
leads to a procedure to label the different senses of the target word in the
corpus -- yielding lexeme vector representations, all in an unsupervised manner
starting from a large (Wikipedia) corpus in English. Apart from several
prototypical target (word,sense) examples and a host of empirical studies to
intuit and justify the various geometric representations, we validate our
algorithms on standard sense induction and disambiguation datasets and present
new state-of-the-art results.
| 2,016 | Computation and Language |
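The subspace representation at the core of this approach is straightforward to
compute, as the sketch below illustrates; the rank, the centering step, and
the affinity measure based on principal angles are illustrative assumptions.

```python
import numpy as np

def context_subspace(word_vectors, rank=3):
    """word_vectors: (n_words, dim). Returns an orthonormal (rank, dim) basis."""
    X = np.asarray(word_vectors)
    X = X - X.mean(axis=0)                 # center the context vectors
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:rank]                       # top right-singular directions

def subspace_affinity(basis_a, basis_b):
    """Similarity via cosines of principal angles between two subspaces."""
    return np.linalg.svd(basis_a @ basis_b.T, compute_uv=False).sum()
```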
Learning to Reason With Adaptive Computation | Multi-hop inference is necessary for machine learning systems to successfully
solve tasks such as Recognising Textual Entailment and Machine Reading. In this
work, we demonstrate the effectiveness of adaptive computation for learning
the number of inference steps required for examples of different complexity,
and we show that learning the correct number of inference steps is difficult.
We introduce
the first model involving Adaptive Computation Time which provides a small
performance benefit on top of a similar model without an adaptive component as
well as enabling considerable insight into the reasoning process of the model.
| 2,016 | Computation and Language |
UTD-CRSS Systems for 2016 NIST Speaker Recognition Evaluation | This document briefly describes the systems submitted by the Center for
Robust Speech Systems (CRSS) from The University of Texas at Dallas (UTD) to
the 2016 National Institute of Standards and Technology (NIST) Speaker
Recognition Evaluation (SRE). We developed several UBM and DNN i-Vector based
speaker recognition systems with different data sets and feature
representations. Given that the emphasis of the NIST SRE 2016 is on language
mismatch between training and enrollment/test data, so-called domain mismatch,
in our system development we focused on: (1) using unlabeled in-domain data for
centralizing data to alleviate the domain mismatch problem, (2) finding the
best data set for training LDA/PLDA, (3) using a newly proposed dimension
reduction technique incorporating unlabeled in-domain data before PLDA
training, (4) unsupervised speaker clustering of unlabeled data and using them
alone or with previous SREs for PLDA training, (5) score calibration using only
unlabeled data and combination of unlabeled and development (Dev) data as
separate experiments.
| 2,016 | Computation and Language |
EmojiNet: Building a Machine Readable Sense Inventory for Emoji | Emoji are a contemporary and extremely popular way to enhance electronic
communication. Without rigid semantics attached to them, emoji symbols take on
different meanings based on the context of a message. Thus, like the word sense
disambiguation task in natural language processing, machines also need to
disambiguate the meaning or sense of an emoji. In a first step toward achieving
this goal, this paper presents EmojiNet, the first machine readable sense
inventory for emoji. EmojiNet is a resource enabling systems to link emoji with
their context-specific meaning. It is automatically constructed by integrating
multiple emoji resources with BabelNet, which is the most comprehensive
multilingual sense inventory available to date. The paper discusses its
construction, evaluates the automatic resource creation process, and presents a
use case where EmojiNet disambiguates emoji usage in tweets. EmojiNet is
available online for use at http://emojinet.knoesis.org.
| 2,016 | Computation and Language |
Still not there? Comparing Traditional Sequence-to-Sequence Models to
Encoder-Decoder Neural Networks on Monotone String Translation Tasks | We analyze the performance of encoder-decoder neural models and compare them
with well-known established methods. The latter represent different classes of
traditional approaches that are applied to the monotone sequence-to-sequence
tasks OCR post-correction, spelling correction, grapheme-to-phoneme conversion,
and lemmatization. Such tasks are of practical relevance for various
higher-level research fields including digital humanities, automatic text
correction, and speech recognition. We investigate how well generic
deep-learning approaches adapt to these tasks, and how they perform in
comparison with established and more specialized methods, including our own
adaptation of pruned CRFs.
| 2,016 | Computation and Language |
How Document Pre-processing affects Keyphrase Extraction Performance | The SemEval-2010 benchmark dataset has brought renewed attention to the task
of automatic keyphrase extraction. This dataset is made up of scientific
articles that were automatically converted from PDF format to plain text and
thus require careful preprocessing so that irrelevant spans of text do not
negatively affect keyphrase extraction performance. In previous work, a wide
range of document preprocessing techniques were described but their impact on
the overall performance of keyphrase extraction models is still unexplored.
Here, we re-assess the performance of several keyphrase extraction models and
measure their robustness against increasingly sophisticated levels of document
preprocessing.
| 2,016 | Computation and Language |
Improving historical spelling normalization with bi-directional LSTMs
and multi-task learning | Natural-language processing of historical documents is complicated by the
abundance of variant spellings and lack of annotated data. A common approach is
to normalize the spelling of historical words to modern forms. We explore the
suitability of a deep neural network architecture for this task, particularly a
deep bi-LSTM network applied on a character level. Our model compares well to
previously established normalization algorithms when evaluated on a diverse set
of texts from Early New High German. We show that multi-task learning with
additional normalization data can improve our model's performance further.
| 2,016 | Computation and Language |
Sequence Segmentation Using Joint RNN and Structured Prediction Models | We describe and analyze a simple and effective algorithm for sequence
segmentation applied to speech processing tasks. We propose a neural
architecture that is composed of two modules trained jointly: a recurrent
neural network (RNN) module and a structured prediction model. The RNN outputs
are considered as feature functions to the structured model. The overall model
is trained with a structured loss function which can be designed to the given
segmentation task. We demonstrate the effectiveness of our method by applying
it to two simple tasks commonly used in phonetic studies: word segmentation and
voice onset time segmentation. Results suggest the proposed model is superior
to previous methods, obtaining state-of-the-art results on the tested
datasets.
| 2,016 | Computation and Language |
Statistical Machine Translation for Indian Languages: Mission Hindi 2 | This paper presents Centre for Development of Advanced Computing Mumbai's
(CDACM) submission to NLP Tools Contest on Statistical Machine Translation in
Indian Languages (ILSMT) 2015 (collocated with ICON 2015). The aim of the
contest was to collectively explore the effectiveness of Statistical Machine
Translation (SMT) while translating within Indian languages and between English
and Indian languages. In this paper, we report our work on all five language
pairs, namely Bengali-Hindi (\bnhi), Marathi-Hindi (\mrhi), Tamil-Hindi
(\tahi), Telugu-Hindi (\tehi), and English-Hindi (\enhi) for Health, Tourism,
and General domains. We have used suffix separation, compound splitting and
preordering prior to SMT training and testing.
| 2,015 | Computation and Language |
Dis-S2V: Discourse Informed Sen2Vec | Vector representation of sentences is important for many text processing
tasks that involve clustering, classifying, or ranking sentences. Recently,
distributed representation of sentences learned by neural models from unlabeled
data has been shown to outperform the traditional bag-of-words representation.
However, most of these learning methods consider only the content of a sentence
and largely disregard the relations among sentences in a discourse.
In this paper, we propose a series of novel models for learning latent
representations of sentences (Sen2Vec) that consider the content of a sentence
as well as inter-sentence relations. We first represent the inter-sentence
relations with a language network and then use the network to induce contextual
information into the content-based Sen2Vec models. Two different approaches are
introduced to exploit the information in the network. Our first approach
retrofits (already trained) Sen2Vec vectors with respect to the network in two
different ways: (1) using the adjacency relations of a node, and (2) using a
stochastic sampling method which is more flexible in sampling neighbors of a
node. The second approach uses a regularizer to encode the information in the
network into the existing Sen2Vec model. Experimental results show that our
proposed models outperform existing methods in three fundamental information
system tasks, demonstrating the effectiveness of our approach. The models
leverage the computational power of multi-core CPUs to achieve fine-grained
computational efficiency. We make our code publicly available upon acceptance.
| 2,016 | Computation and Language |
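The first, retrofitting-style approach can be sketched as below, in the spirit
of Faruqui et al. (2015); the uniform alpha and beta weights are illustrative
assumptions.

```python
import numpy as np

def retrofit(vectors, graph, iters=10, alpha=1.0, beta=1.0):
    """vectors: {node: np.array}; graph: {node: [neighbor, ...]}."""
    new = {n: v.copy() for n, v in vectors.items()}
    for _ in range(iters):
        for node, neighbors in graph.items():
            if not neighbors:
                continue
            neighbor_sum = sum(new[m] for m in neighbors)
            # closed-form update: stay close to the original vector while
            # moving toward the average of discourse neighbors
            new[node] = (alpha * vectors[node] + beta * neighbor_sum) / (
                alpha + beta * len(neighbors))
    return new
```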
Content Selection in Data-to-Text Systems: A Survey | Data-to-text systems are powerful tools for automatically generating reports
from data, and thus they simplify the presentation of complex data. Rather
than presenting data using visualisation techniques, data-to-text systems use
natural (human) language, which is the most common way for human-human
communication. In addition, data-to-text systems can adapt their output content
to users' preferences, background or interests and therefore they can be
pleasant for users to interact with. Content selection is an important part of
every data-to-text system, because it is the module that determines which from
the available information should be conveyed to the user. This survey initially
introduces the field of data-to-text generation, describes the general
data-to-text system architecture and then it reviews the state-of-the-art
content selection methods. Finally, it provides recommendations for choosing an
approach and discusses opportunities for future research.
| 2,016 | Computation and Language |
Broad Context Language Modeling as Reading Comprehension | Progress in text understanding has been driven by large datasets that test
particular capabilities, like recent datasets for reading comprehension
(Hermann et al., 2015). We focus here on the LAMBADA dataset (Paperno et al.,
2016), a word prediction task requiring broader context than the immediate
sentence. We view LAMBADA as a reading comprehension problem and apply
comprehension models based on neural networks. Though these models are
constrained to choose a word from the context, they improve the state of the
art on LAMBADA from 7.3% to 49%. We analyze 100 instances, finding that neural
network readers perform well in cases that involve selecting a name from the
context based on dialogue or discourse cues but struggle when coreference
resolution or external knowledge is needed.
| 2,017 | Computation and Language |
Distraction-Based Neural Networks for Document Summarization | Distributed representation learned with neural networks has recently been
shown to be effective in modeling natural languages at fine granularities such as words,
phrases, and even sentences. Whether and how such an approach can be extended
to help model larger spans of text, e.g., documents, is intriguing, and further
investigation would still be desirable. This paper aims to enhance neural
network models for such a purpose. A typical problem of document-level modeling
is automatic summarization, which aims to model documents in order to generate
summaries. In this paper, we propose neural models that train computers not
just to pay attention to specific regions and content of input documents with
attention models, but also to distract them to traverse between different
parts of a document so as to better grasp the overall meaning for
summarization.
Without engineering any features, we train the models on two large datasets.
The models achieve the state-of-the-art performance, and they significantly
benefit from the distraction modeling, particularly when input documents are
long.
| 2,016 | Computation and Language |
Knowledge-Based Biomedical Word Sense Disambiguation with Neural Concept
Embeddings | Biomedical word sense disambiguation (WSD) is an important intermediate task
in many natural language processing applications such as named entity
recognition, syntactic parsing, and relation extraction. In this paper, we
employ knowledge-based approaches that also exploit recent advances in neural
word/concept embeddings to improve over the state-of-the-art in biomedical WSD
using the MSH WSD dataset as the test set. Our methods involve weak supervision
- we do not use any hand-labeled examples for WSD to build our prediction
models; however, we employ an existing well known named entity recognition and
concept mapping program, MetaMap, to obtain our concept vectors. Over the MSH
WSD dataset, our linear time (in terms of numbers of senses and words in the
test instance) method achieves an accuracy of 92.24% which is an absolute 3%
improvement over the best known results obtained via unsupervised or
knowledge-based means. A more expensive approach that we developed relies on a
nearest neighbor framework and achieves an accuracy of 94.34%. Employing dense
vector representations learned from unlabeled free text has been shown to
benefit many language processing tasks recently and our efforts show that
biomedical WSD is no exception to this trend. For a complex and rapidly
evolving domain such as biomedicine, building labeled datasets for larger sets
of ambiguous terms may be impractical. Here, we show that weak supervision that
leverages recent advances in representation learning can rival supervised
approaches in biomedical WSD. However, external knowledge bases (here sense
inventories) play a key role in the improvements achieved.
| 2,017 | Computation and Language |
CogALex-V Shared Task: LexNET - Integrated Path-based and Distributional
Method for the Identification of Semantic Relations | We present a submission to the CogALex 2016 shared task on the corpus-based
identification of semantic relations, using LexNET (Shwartz and Dagan, 2016),
an integrated path-based and distributional method for semantic relation
classification. The reported results in the shared task bring this submission
to the third place on subtask 1 (word relatedness), and the first place on
subtask 2 (semantic relation classification), demonstrating the utility of
integrating the complementary path-based and distributional information sources
in recognizing concrete semantic relations. Combined with a common similarity
measure, LexNET performs fairly well on the word relatedness task (subtask 1).
The relatively low performance of LexNET and all other systems on subtask 2,
however, confirms the difficulty of the semantic relation classification task,
and stresses the need to develop additional methods for this task.
| 2,016 | Computation and Language |
CoType: Joint Extraction of Typed Entities and Relations with Knowledge
Bases | Extracting entities and relations for types of interest from text is
important for understanding massive text corpora. Traditionally, systems of
entity relation extraction have relied on human-annotated corpora for training
and adopted an incremental pipeline. Such systems require additional human
expertise to be ported to a new domain, and are vulnerable to errors cascading
down the pipeline. In this paper, we investigate joint extraction of typed
entities and relations with labeled data heuristically obtained from knowledge
bases (i.e., distant supervision). As our algorithm for type labeling via
distant supervision is context-agnostic, noisy training data poses unique
challenges for the task. We propose a novel domain-independent framework,
called CoType, that runs a data-driven text segmentation algorithm to extract
entity mentions, and jointly embeds entity mentions, relation mentions, text
features and type labels into two low-dimensional spaces (for entity and
relation mentions respectively), where, in each space, objects whose types are
close will also have similar representations. CoType then uses these learned
embeddings to estimate the types of test (unlinkable) mentions. We formulate a
joint optimization problem to learn embeddings from text corpora and knowledge
bases, adopting a novel partial-label loss function for noisy labeled data and
introducing an object "translation" function to capture the cross-constraints
of entities and relations on each other. Experiments on three public datasets
demonstrate the effectiveness of CoType across different domains (e.g., news,
biomedical), with an average of 25% improvement in F1 score compared to the
next best method.
| 2,017 | Computation and Language |
A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural
Networks | Sarcasm detection is a key task for many natural language processing applications.
In sentiment analysis, for example, sarcasm can flip the polarity of an
"apparently positive" sentence and, hence, negatively affect polarity detection
performance. To date, most approaches to sarcasm detection have treated the
task primarily as a text categorization problem. Sarcasm, however, can be
expressed in very subtle ways and requires a deeper understanding of natural
language that standard text categorization techniques cannot grasp. In this
work, we develop models based on a pre-trained convolutional neural network for
extracting sentiment, emotion and personality features for sarcasm detection.
Such features, along with the network's baseline features, allow the proposed
models to outperform the state of the art on benchmark datasets. We also
address the often ignored generalizability issue of classifying data that have
not been seen by the models during the learning phase.
| 2,017 | Computation and Language |
Ex Machina: Personal Attacks Seen at Scale | The damage personal attacks cause to online discourse motivates many
platforms to try to curb the phenomenon. However, understanding the prevalence
and impact of personal attacks in online platforms at scale remains
surprisingly difficult. The contribution of this paper is to develop and
illustrate a method that combines crowdsourcing and machine learning to analyze
personal attacks at scale. We show an evaluation method for a classifier in
terms of the aggregated number of crowd-workers it can approximate. We apply
our methodology to English Wikipedia, generating a corpus of over 100k high
quality human-labeled comments and 63M machine-labeled ones from a classifier
that is as good as the aggregate of 3 crowd-workers, as measured by the area
under the ROC curve and Spearman correlation. Using this corpus of
machine-labeled scores, our methodology allows us to explore some of the open
questions about the nature of online personal attacks. This reveals that the
majority of personal attacks on Wikipedia are not the result of a few malicious
users, nor primarily the consequence of allowing anonymous contributions from
unregistered users.
| 2,017 | Computation and Language |
Representation Learning Models for Entity Search | We focus on the problem of learning distributed representations for entity
search queries, named entities, and their short descriptions. With our
representation learning models, the entity search query, named entity and
description can be represented as low-dimensional vectors. Our goal is to
develop a simple but effective model that can make the distributed
representations of query related entities similar to the query in the vector
space. Hence, we propose three kinds of learning strategies, and the difference
between them mainly lies in how to deal with the relationship between an entity
and its description. We analyze the strengths and weaknesses of each learning
strategy and validate our methods on public datasets which contain four kinds
of named entities, i.e., movies, TV shows, restaurants and celebrities. The
experimental results indicate that our proposed methods can adapt to different
types of entity search queries, and outperform the current state-of-the-art
methods based on keyword matching and vanilla word2vec models. Moreover, the
proposed methods can be trained quickly and easily extended to other similar
tasks.
| 2,017 | Computation and Language |