Titles | Abstracts | Years | Categories |
---|---|---|---|
Data Selection Strategies for Multi-Domain Sentiment Analysis | Domain adaptation is important in sentiment analysis as sentiment-indicating
words vary between domains. Recently, multi-domain adaptation has become more
pervasive, but existing approaches train on all available source domains
including dissimilar ones. However, the selection of appropriate training data
is as important as the choice of algorithm. We undertake -- to our knowledge
for the first time -- an extensive study of domain similarity metrics in the
context of sentiment analysis and propose novel representations, metrics, and a
new scope for data selection. We evaluate the proposed methods on two
large-scale multi-domain adaptation settings on tweets and reviews and
demonstrate that they consistently outperform strong random and balanced
baselines, while our proposed selection strategy outperforms instance-level
selection and yields the best score on a large reviews corpus.
| 2017 | Computation and Language |
Trainable Greedy Decoding for Neural Machine Translation | Recent research in neural machine translation has largely focused on two
aspects: neural network architectures and end-to-end learning algorithms. The
problem of decoding, however, has received relatively little attention from the
research community. In this paper, we solely focus on the problem of decoding
given a trained neural machine translation model. Instead of trying to build a
new decoding algorithm for any specific decoding objective, we propose the idea
of a trainable decoding algorithm in which we train a decoder to find
a translation that maximizes an arbitrary decoding objective. More
specifically, we design an actor that observes and manipulates the hidden state
of the neural machine translation decoder and propose to train it using a
variant of deterministic policy gradient. We extensively evaluate the proposed
algorithm using four language pairs and two decoding objectives and show that
we can indeed train a trainable greedy decoder that generates a better
translation (in terms of a target decoding objective) with minimal
computational overhead.
| 2017 | Computation and Language |
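
The actor idea above is easy to sketch. Below is a minimal PyTorch illustration, assuming an additive actor over a fixed-size hidden state; the paper's actual actor design and its deterministic-policy-gradient training loop are more involved.

```python
import torch
import torch.nn as nn

class DecodingActor(nn.Module):
    """Observes the frozen NMT decoder's hidden state at each greedy step and
    nudges it toward states that score better under the target decoding
    objective (e.g., BLEU). Only the actor's parameters are trained."""
    def __init__(self, hidden_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, hidden_size))

    def forward(self, h):
        # Additive manipulation keeps the actor close to the original state.
        return h + self.net(h)

actor = DecodingActor(512)
h_t = torch.randn(1, 512)   # decoder hidden state (toy value)
h_t = actor(h_t)            # substituted for h_t before the output softmax
```

During training, gradients from the decoding objective flow only into the actor; the underlying translation model stays fixed.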
Exploiting Domain Knowledge via Grouped Weight Sharing with Application
to Text Categorization | A fundamental advantage of neural models for NLP is their ability to learn
representations from scratch. However, in practice this often means ignoring
existing external linguistic resources, e.g., WordNet or domain-specific
ontologies such as the Unified Medical Language System (UMLS). We propose a
general, novel method for exploiting such resources via weight sharing. Prior
work on weight sharing in neural networks has considered it largely as a means
of model compression. In contrast, we treat weight sharing as a flexible
mechanism for incorporating prior knowledge into neural models. We show that
this approach consistently yields improved performance on classification tasks
compared to baseline strategies that do not exploit weight sharing.
| 2017 | Computation and Language |
Automatic Rule Extraction from Long Short Term Memory Networks | Although deep learning models have proven effective at solving problems in
natural language processing, the mechanism by which they come to their
conclusions is often unclear. As a result, these models are generally treated
as black boxes, yielding no insight into the underlying learned patterns. In this
paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new
approach for tracking the importance of a given input to the LSTM for a given
output. By identifying consistently important patterns of words, we are able to
distill state-of-the-art LSTMs on sentiment analysis and question answering
into a set of representative phrases. This representation is then
quantitatively validated by using the extracted phrases to construct a simple,
rule-based classifier which approximates the output of the LSTM.
| 2017 | Computation and Language |
Predicting Audience's Laughter Using Convolutional Neural Network | For the purpose of automatically evaluating speakers' humor usage, we build a
presentation corpus containing humorous utterances based on TED talks. Compared
to previous data resources supporting humor recognition research, ours has
several advantages, including (a) both positive and negative instances coming
from a homogeneous data set, (b) containing a large number of speakers, and (c)
being open. Focusing on using lexical cues for humor recognition, we
systematically compare a newly emerging text classification method based on
Convolutional Neural Networks (CNNs) with a well-established conventional
method using linguistic knowledge. The CNN method has two advantages: it
achieves higher detection accuracies and it learns essential features
automatically.
| 2017 | Computation and Language |
Character-level Deep Conflation for Business Data Analytics | Connecting different text attributes associated with the same entity
(conflation) is important in business data analytics since it could help merge
two different tables in a database to provide a more comprehensive profile of
an entity. However, the conflation task is challenging because two text strings
that describe the same entity could be quite different from each other for
reasons such as misspelling. It is therefore critical to develop a conflation
model that is able to truly understand the semantic meaning of the strings and
match them at the semantic level. To this end, we develop a character-level
deep conflation model that encodes the input text strings from character level
into finite dimension feature vectors, which are then used to compute the
cosine similarity between the text strings. The model is trained in an
end-to-end manner using back propagation and stochastic gradient descent to
maximize the likelihood of the correct association. Specifically, we propose
two variants of the deep conflation model, based on long short-term memory
(LSTM) recurrent neural network (RNN) and convolutional neural network (CNN),
respectively. Both models perform well on a real-world business analytics
dataset and significantly outperform the baseline bag-of-character (BoC) model.
| 2017 | Computation and Language |
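
A hedged sketch of the LSTM variant of this setup: encode each string into a fixed-size vector and train with a softmax over cosine similarities so that the correct association gets the highest likelihood. Dimensions, vocabulary size, and the single-layer encoder are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharLSTMEncoder(nn.Module):
    """Encode a character-ID sequence into a fixed-size feature vector."""
    def __init__(self, n_chars=128, dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, char_ids):          # (B, L) integer tensor
        _, (h, _) = self.lstm(self.emb(char_ids))
        return h[-1]                      # (B, dim) final hidden state

def matching_loss(encoder, query_ids, candidate_ids, gold_index):
    """Negative log-likelihood of the correct candidate under a softmax
    over cosine similarities between the query and all candidates."""
    q = F.normalize(encoder(query_ids), dim=-1)       # (1, dim)
    c = F.normalize(encoder(candidate_ids), dim=-1)   # (N, dim)
    scores = c @ q.squeeze(0)                         # (N,) cosine scores
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([gold_index]))
```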
Challenges in Providing Automatic Affective Feedback in Instant
Messaging Applications | Instant messaging is one of the major channels of computer mediated
communication. However, humans are known to be very limited in understanding
others' emotions via text-based communication. Aiming to introduce emotion
sensing technologies into instant messaging, we developed EmotionPush, a system
that automatically detects the emotions of the messages end-users received on
Facebook Messenger and provides colored cues on their smartphones accordingly.
We conducted a deployment study with 20 participants during a time span of two
weeks. In this paper, we reveal five challenges, along with examples, that we
observed in our study based on both users' feedback and chat logs, including
(i) the continuum of emotions, (ii) multi-user conversations, (iii) different
dynamics between different users, (iv) misclassification of emotions, and
(v) unconventional content. We believe this discussion will benefit the future
exploration of affective computing for instant messaging, and also shed light
on research of conversational emotion sensing.
| 2017 | Computation and Language |
Local System Voting Feature for Machine Translation System Combination | In this paper, we enhance the traditional confusion network system
combination approach with an additional model trained by a neural network. This
work is motivated by the fact that the commonly used binary system voting
models only assign each input system a global weight which is responsible for
the global impact of each input system on all translations. This prevents
individual systems with low system weights from having influence on the system
combination output, although in some situations this could be helpful. Further,
words which have only been seen by one or a few systems rarely have a chance of
being present in the combined output. We train a local system voting model by a
neural network which is based on the words themselves and the combinatorial
occurrences of the different system outputs. This gives system combination the
option to prefer other systems at different word positions even for the same
sentence.
| 2015 | Computation and Language |
Using Word Embedding for Cross-Language Plagiarism Detection | This paper proposes to use distributed representation of words (word
embeddings) in cross-language textual similarity detection. The main
contributions of this paper are the following: (a) we introduce new
cross-language similarity detection methods based on distributed representation
of words; (b) we combine the different methods proposed to verify their
complementarity and finally obtain an overall F1 score of 89.15% for
English-French similarity detection at chunk level (88.5% at sentence level) on
a very challenging corpus.
| 2017 | Computation and Language |
Modeling Semantic Expectation: Using Script Knowledge for Referent
Prediction | Recent research in psycholinguistics has provided increasing evidence that
humans predict upcoming content. Prediction also affects perception and might
be a key to robustness in human language processing. In this paper, we
investigate the factors that affect human prediction by building a
computational model that can predict upcoming discourse referents based on
linguistic knowledge alone vs. linguistic knowledge jointly with common-sense
knowledge in the form of scripts. We find that script knowledge significantly
improves model estimates of human predictions. In a second study, we test the
highly controversial hypothesis that predictability influences referring
expression type but do not find evidence for such an effect.
| 2017 | Computation and Language |
Universal Semantic Parsing | Universal Dependencies (UD) offer a uniform cross-lingual syntactic
representation, with the aim of advancing multilingual applications. Recent
work shows that semantic parsing can be accomplished by transforming syntactic
dependencies to logical forms. However, this work is limited to English, and
cannot process dependency graphs, which allow handling complex phenomena such
as control. In this work, we introduce UDepLambda, a semantic interface for UD,
which maps natural language to logical forms in an almost language-independent
fashion and can process dependency graphs. We perform experiments on question
answering against Freebase and provide German and Spanish translations of the
WebQuestions and GraphQuestions datasets to facilitate multilingual evaluation.
Results show that UDepLambda outperforms strong baselines across languages and
datasets. For English, it achieves a 4.9 F1 point improvement over the
state-of-the-art on GraphQuestions. Our code and data can be downloaded at
https://github.com/sivareddyg/udeplambda.
| 2017 | Computation and Language |
Arabic Language Sentiment Analysis on Health Services | The social media network phenomenon leads to a massive amount of valuable
data that is available online and easy to access. Many users share images,
videos, comments, reviews, news and opinions on different social networks
sites, with Twitter being one of the most popular ones. Data collected from
Twitter is highly unstructured, and extracting useful information from tweets
is a challenging task. Twitter has a huge number of Arabic users who mostly
post and write their tweets using the Arabic language. While there has been a
lot of research on sentiment analysis in English, research and datasets in the
Arabic language remain limited. This paper introduces an Arabic
language dataset which is about opinions on health services and has been
collected from Twitter. The paper will first detail the process of collecting
the data from Twitter and also the process of filtering, pre-processing and
annotating the Arabic text in order to build a big sentiment analysis dataset
in Arabic. Several Machine Learning algorithms (Naive Bayes, Support Vector
Machine and Logistic Regression) alongside Deep and Convolutional Neural
Networks were utilized in our experiments of sentiment analysis on our health
dataset.
| 2017 | Computation and Language |
Universal Dependencies to Logical Forms with Negation Scope | Many language technology applications would benefit from the ability to
represent negation and its scope on top of widely-used linguistic resources. In
this paper, we investigate the possibility of obtaining a first-order logic
representation with negation scope marked using Universal Dependencies. To do
so, we enhance UDepLambda, a framework that converts dependency graphs to
logical forms. The resulting UDepLambda¬ is able to handle phenomena
related to scope by means of a higher-order type theory, relevant not only to
negation but also to universal quantification and other complex semantic
phenomena. Our initial conversion for English is promising, in that one
can represent the scope of negation also in the presence of more complex
phenomena such as universal quantifiers.
| 2017 | Computation and Language |
Learning Concept Embeddings for Efficient Bag-of-Concepts Densification | Explicit concept space models have proven effective for text representation in
many natural language and text mining applications. The idea is to embed
textual structures into a semantic space of concepts which captures the main
ideas, objects, and the characteristics of these structures. The so-called Bag
of Concepts (BoC) representation suffers from data sparsity causing low
similarity scores between similar texts due to low concept overlap. To address
this problem, we propose two neural embedding models to learn continuous
concept vectors. Once they are learned, we propose an efficient vector
aggregation method to generate fully continuous BoC representations. We
evaluate our concept embedding models on three tasks: 1) measuring entity
semantic relatedness and ranking where we achieve 1.6% improvement in
correlation scores, 2) dataless concept categorization where we achieve
state-of-the-art performance and reduce the categorization error rate by more
than 5% compared to five prior word and entity embedding models, and 3)
dataless document classification where our models outperform the sparse BoC
representations. In addition, by exploiting our efficient linear time vector
aggregation method, we achieve better accuracy scores with far fewer concept
dimensions than previous BoC densification methods, which operate in
polynomial time and require hundreds of dimensions in the BoC representation.
| 2018 | Computation and Language |
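
The aggregation step is small enough to sketch. Assuming the concept vectors are already learned, a fully continuous BoC representation can be built as a weighted average in linear time; the paper's exact weighting may differ.

```python
import numpy as np

def densify_boc(boc, concept_vectors):
    """boc: dict concept -> weight; concept_vectors: dict concept -> vector.
    Returns the weighted average of the concept embeddings."""
    vecs = np.array([concept_vectors[c] for c in boc])
    w = np.array([boc[c] for c in boc], dtype=float)
    return (w[:, None] * vecs).sum(axis=0) / w.sum()

rng = np.random.default_rng(2)
concept_vectors = {c: rng.normal(size=8) for c in ("physics", "energy", "mass")}
doc = {"physics": 0.5, "energy": 0.3, "mass": 0.2}   # sparse BoC weights
print(densify_boc(doc, concept_vectors).shape)        # (8,) dense vector
```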
Vector Embedding of Wikipedia Concepts and Entities | Using deep learning for different machine learning tasks such as image
classification and word embedding has recently gained much attention. Its
appealing performance reported across specific Natural Language Processing
(NLP) tasks in comparison with other approaches is the reason for its
popularity. Word embedding is the task of mapping words or phrases to a low
dimensional numerical vector. In this paper, we use deep learning to embed
Wikipedia Concepts and Entities. The English version of Wikipedia contains more
than five million pages, which suggests its capacity to cover many English
Entities, Phrases, and Concepts. Each Wikipedia page is considered as a
concept. Some concepts correspond to entities, such as a person's name, an
organization or a place. Contrary to word embedding, Wikipedia Concepts
Embedding is not ambiguous, so there are different vectors for concepts with
similar surface form but different mentions. We propose several approaches and
evaluated their performance based on Concept Analogy and Concept Similarity
tasks. The results show that the proposed approaches perform comparably to,
and in some cases even better than, the state-of-the-art methods.
| 2017 | Computation and Language |
Learning to Parse and Translate Improves Neural Machine Translation | Relatively little attention has been paid to incorporating linguistic priors
into neural machine translation. Much of the previous work was further
constrained to considering linguistic priors on the source side. In this paper,
we propose a hybrid model, called NMT+RNNG, that learns to parse and translate
by incorporating the recurrent neural network grammar into attention-based
neural machine translation. Our approach encourages the neural machine
translation model to incorporate linguistic priors during training, and lets it
translate on its own afterward. Extensive experiments with four language pairs
show the effectiveness of the proposed NMT+RNNG.
| 2017 | Computation and Language |
A Morphology-aware Network for Morphological Disambiguation | Agglutinative languages such as Turkish, Finnish and Hungarian require
morphological disambiguation before further processing due to the complex
morphology of words. A morphological disambiguator is used to select the
correct morphological analysis of a word. Morphological disambiguation is
important because it generally is one of the first steps of natural language
processing and its performance affects subsequent analyses. In this paper, we
propose a system that uses deep learning techniques for morphological
disambiguation. Many of the state-of-the-art results in computer vision, speech
recognition and natural language processing have been obtained through deep
learning models. However, applying deep learning techniques to morphologically
rich languages is not well studied. In this work, while we focus on Turkish
morphological disambiguation, we also present results for French and German in
order to show that the proposed architecture achieves high accuracy with no
language-specific feature engineering or additional resources. In the
experiments, we achieve morphological disambiguation accuracies of 84.12,
88.35, and 93.78 among the ambiguous words for Turkish, German, and French,
respectively.
| 2017 | Computation and Language |
Multitask Learning with Deep Neural Networks for Community Question
Answering | In this paper, we develop a deep neural network (DNN) that learns to
simultaneously solve the three tasks of the cQA challenge proposed by the
SemEval-2016 Task 3, i.e., question-comment similarity, question-question
similarity and new question-comment similarity. The latter is the main task,
which can exploit the previous two for achieving better results. Our DNN is
trained jointly on all the three cQA tasks and learns to encode questions and
comments into a single vector representation shared across the multiple tasks.
The results on the official challenge test set show that our approach produces
higher accuracy and faster convergence rates than the individual neural
networks. Additionally, our method, which does not use any manual feature
engineering, approaches the state of the art established with methods that make
heavy use of it.
| 2017 | Computation and Language |
Towards speech-to-text translation without speech recognition | We explore the problem of translating speech to text in low-resource
scenarios where neither automatic speech recognition (ASR) nor machine
translation (MT) are available, but we have training data in the form of audio
paired with text translations. We present the first system for this problem
applied to a realistic multi-speaker dataset, the CALLHOME Spanish-English
speech translation corpus. Our approach uses unsupervised term discovery (UTD)
to cluster repeated patterns in the audio, creating a pseudotext, which we pair
with translations to create a parallel text and train a simple bag-of-words MT
model. We identify the challenges faced by the system, finding that the
difficulty of cross-speaker UTD results in low recall, but that our system is
still able to correctly translate some content words in test data.
| 2017 | Computation and Language |
Offline bilingual word vectors, orthogonal transformations and the
inverted softmax | Usually bilingual word vectors are trained "online". Mikolov et al. showed
they can also be found "offline", whereby two pre-trained embeddings are
aligned with a linear transformation, using dictionaries compiled from expert
knowledge. In this work, we prove that the linear transformation between two
spaces should be orthogonal. This transformation can be obtained using the
singular value decomposition. We introduce a novel "inverted softmax" for
identifying translation pairs, with which we improve the precision @1 of
Mikolov's original mapping from 34% to 43%, when translating a test set
composed of both common and rare English words into Italian. Orthogonal
transformations are more robust to noise, enabling us to learn the
transformation without expert bilingual signal by constructing a
"pseudo-dictionary" from the identical character strings which appear in both
languages, achieving 40% precision on the same test set. Finally, we extend our
method to retrieve the true translations of English sentences from a corpus of
200k Italian sentences with a precision @1 of 68%.
| 2017 | Computation and Language |
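
Both ingredients, the SVD-based orthogonal map and the inverted softmax, fit in a short numpy sketch. The data below is synthetic and the temperature beta is an illustrative choice, not the paper's tuned value.

```python
import numpy as np

def orthogonal_map(X, Y):
    """Orthogonal W minimising ||X @ W - Y||_F (Procrustes, via SVD)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def inverted_softmax(S, beta=10.0):
    """Normalise similarities over source words (rows) rather than target
    words, which suppresses 'hub' targets that are close to everything."""
    E = np.exp(beta * (S - S.max()))
    return E / E.sum(axis=0, keepdims=True)

def unit_rows(A):
    return A / np.linalg.norm(A, axis=1, keepdims=True)

# Toy demo: recover a random rotation from noisy paired vectors.
rng = np.random.default_rng(0)
d, n = 50, 500
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # ground-truth rotation
X = rng.normal(size=(n, d))
Y = X @ Q + 0.01 * rng.normal(size=(n, d))
W = orthogonal_map(X, Y)
S = unit_rows(X @ W) @ unit_rows(Y).T          # cosine similarities
P = inverted_softmax(S)
print("precision@1:", (P.argmax(axis=1) == np.arange(n)).mean())
```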
The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations
Annotated with Compositional Meaning Representations | The Parallel Meaning Bank is a corpus of translations annotated with shared,
formal meaning representations comprising over 11 million words divided over
four languages (English, German, Italian, and Dutch). Our approach is based on
cross-lingual projection: automatically produced (and manually corrected)
semantic annotations for English sentences are mapped onto their word-aligned
translations, assuming that the translations are meaning-preserving. The
semantic annotation consists of five main steps: (i) segmentation of the text
in sentences and lexical items; (ii) syntactic parsing with Combinatory
Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and
(v) compositional semantic analysis based on Discourse Representation Theory.
These steps are performed using statistical models trained in a semi-supervised
manner. The employed annotation models are all language-neutral. Our first
results are promising.
| 2017 | Computation and Language |
JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction | We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for
developing and evaluating grammatical error correction (GEC). Unlike other
corpora, it represents a broad range of language proficiency levels and uses
holistic fluency edits to not only correct grammatical errors but also make the
original text more native sounding. We describe the types of corrections made
and benchmark four leading GEC systems on this corpus, identifying specific
areas in which they do well and how they can improve. JFLEG fulfills the need
for a new gold standard to properly assess the current state of GEC.
| 2017 | Computation and Language |
Detection of Slang Words in e-Data using semi-Supervised Learning | The proposed algorithmic approach deals with finding the sense of a word in
electronic data. Nowadays, in communication media such as the internet and
mobile services, people use words that are slang in nature. This
approach detects those abusive words using a supervised learning procedure.
In real-life scenarios, however, slang words are not always used in their
complete word forms; most of the time, they appear in abbreviated forms such
as sound-alike forms or taboo morphemes. The proposed approach can also detect
those abbreviated forms, using a semi-supervised learning procedure. Using
synset and concept analysis of the text, the probability that a suspicious
word is a slang word is also evaluated.
| 2017 | Computation and Language |
On the Relevance of Auditory-Based Gabor Features for Deep Learning in
Automatic Speech Recognition | Previous studies support the idea of merging auditory-based Gabor features
with deep learning architectures to achieve robust automatic speech
recognition; however, the cause of the gain from such a combination is still
unknown. We believe these representations provide the deep learning decoder
with more discriminable cues. Our aim with this paper is to validate this
hypothesis by performing experiments with three different recognition tasks
(Aurora 4, CHiME 2 and CHiME 3) and assess the discriminability of the
information encoded by Gabor filterbank features. Additionally, to identify the
contribution of low, medium, and high temporal modulation frequencies, subsets
of the Gabor filterbank were used as features (dubbed LTM, MTM, and HTM,
respectively). With temporal modulation frequencies between 16 and 25 Hz, HTM
consistently outperformed the remaining ones in every condition, highlighting
the robustness of these representations against channel distortions, low
signal-to-noise ratios and acoustically challenging real-life scenarios with
relative improvements from 11 to 56% against a Mel-filterbank-DNN baseline. To
explain the results, a measure of similarity between phoneme classes from DNN
activations is proposed and linked to their acoustic properties. We find this
measure to be consistent with the observed error rates and highlight specific
differences at the phoneme level to pinpoint the benefit of the proposed features.
| 2017 | Computation and Language |
A case study on using speech-to-translation alignments for language
documentation | For many low-resource or endangered languages, spoken language resources are
more likely to be annotated with translations than with transcriptions. Recent
work exploits such annotations to produce speech-to-translation alignments,
without access to any text transcriptions. We investigate whether providing
such information can aid in producing better (mismatched) crowdsourced
transcriptions, which in turn could be valuable for training speech recognition
systems, and show that they can indeed be beneficial through a small-scale case
study as a proof-of-concept. We also present a simple phonetically aware string
averaging technique that produces transcriptions of higher quality.
| 2017 | Computation and Language |
Automated Phrase Mining from Massive Text Corpora | As one of the fundamental tasks in text analysis, phrase mining aims at
extracting quality phrases from a text corpus. Phrase mining is important in
various tasks such as information extraction/retrieval, taxonomy construction,
and topic modeling. Most existing methods rely on complex, trained linguistic
analyzers, and thus likely have unsatisfactory performance on text corpora of
new domains and genres without extra but expensive adaptation. Recently, a few
data-driven methods have been developed successfully for extraction of phrases
from massive domain-specific text. However, none of the state-of-the-art models
is fully automated because they require human experts for designing rules or
labeling phrases.
Since one can easily obtain many quality phrases from public knowledge bases
at a scale much larger than what human experts can produce, in this
paper, we propose a novel framework for automated phrase mining, AutoPhrase,
which leverages this large amount of high-quality phrases in an effective way
and achieves better performance compared to limited human labeled phrases. In
addition, we develop a POS-guided phrasal segmentation model, which
incorporates the shallow syntactic information in part-of-speech (POS) tags to
further enhance the performance, when a POS tagger is available. Note that,
AutoPhrase can support any language as long as a general knowledge base (e.g.,
Wikipedia) in that language is available, while benefiting from, but not
requiring, a POS tagger. Compared to the state-of-the-art methods, the new
method has shown significant improvements in effectiveness on five real-world
datasets across different domains and languages.
| 2017 | Computation and Language |
Transfer Deep Learning for Low-Resource Chinese Word Segmentation with a
Novel Neural Network | Recent studies have shown the effectiveness of neural networks for Chinese
word segmentation. However, these models rely on large-scale data and are less
effective for low-resource datasets because of insufficient training data. We
propose a transfer learning method to improve low-resource word segmentation by
leveraging high-resource corpora. First, we train a teacher model on
high-resource corpora and then use the learned knowledge to initialize a
student model. Second, a weighted data similarity method is proposed to train
the student model on low-resource data. Experiment results show that our work
significantly improves performance on low-resource datasets, by 2.3% and 1.5%
F-score on the PKU and CTB datasets respectively. Furthermore, this paper
achieves state-of-the-art results: 96.1% and 96.2% F-score on the PKU and CTB
datasets.
| 2017 | Computation and Language |
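
The transfer recipe reduces to weight copying plus weighted fine-tuning. A hedged sketch, with a toy tagger standing in for the paper's actual architecture and the training loops left as comments:

```python
import torch
import torch.nn as nn

class Segmenter(nn.Module):
    """Toy character tagger (e.g., BMES tags) standing in for the real model."""
    def __init__(self, vocab=5000, dim=64, tags=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tags)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

teacher = Segmenter()
# ... train the teacher on the high-resource corpus ...
student = Segmenter()
student.load_state_dict(teacher.state_dict())  # step 1: initialise from teacher
# ... step 2: fine-tune the student on the low-resource corpus, weighting each
# example by its similarity to the high-resource domain.
```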
A Dependency-Based Neural Reordering Model for Statistical Machine
Translation | In machine translation (MT) that involves translating between two languages
with significant differences in word order, determining the correct word order
of translated words is a major challenge. The dependency parse tree of a source
sentence can help to determine the correct word order of the translated words.
In this paper, we present a novel reordering approach utilizing a neural
network and dependency-based embeddings to predict whether the translations of
two source words linked by a dependency relation should remain in the same
order or should be swapped in the translated sentence. Experiments on
Chinese-to-English translation show that our approach yields a statistically
significant improvement of 0.57 BLEU point on benchmark NIST test sets,
compared to our prior state-of-the-art statistical MT system that uses sparse
dependency-based reordering features.
| 2017 | Computation and Language |
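
At its core, the reordering decision is a binary classification over a pair of dependency-based embeddings. A minimal PyTorch sketch (dimensions and the bare two-vector input are assumptions; the paper's feature set is richer):

```python
import torch
import torch.nn as nn

class ReorderingClassifier(nn.Module):
    """Predict whether the translations of a head word and its dependent
    keep their source order or are swapped."""
    def __init__(self, emb_dim=200, hidden=100):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))           # classes: {in-order, swapped}

    def forward(self, head_vec, dep_vec):
        return self.mlp(torch.cat([head_vec, dep_vec], dim=-1))

clf = ReorderingClassifier()
head, dep = torch.randn(1, 200), torch.randn(1, 200)
print(clf(head, dep).softmax(-1))           # P(in-order), P(swapped)
```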
Frustratingly Short Attention Spans in Neural Language Modeling | Neural language models predict the next token using a latent representation
of the immediate token history. Recently, various methods for augmenting neural
language models with an attention mechanism over a differentiable memory have
been proposed. For predicting the next token, these models query information
from a memory of the recent history which can facilitate learning mid- and
long-range dependencies. However, conventional attention mechanisms used in
memory-augmented neural language models produce a single output vector per time
step. This vector is used both for predicting the next token as well as for the
key and value of a differentiable memory of a token history. In this paper, we
propose a neural language model with a key-value attention mechanism that
outputs separate representations for the key and value of a differentiable
memory, as well as for encoding the next-word distribution. This model
outperforms existing memory-augmented neural language models on two corpora.
Yet, we found that our method mainly utilizes a memory of the five most recent
output representations. This led to the unexpected main finding that a much
simpler model based only on the concatenation of recent output representations
from previous time steps is on par with more sophisticated memory-augmented
neural language models.
| 2017 | Computation and Language |
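
A sketch of the key-value idea: split each hidden state into separate key, value, and predict parts, and attend over a short window of past keys. The simple additive mixing at the end is a simplification of the paper's learned combination.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValuePredictAttention(nn.Module):
    """Separate key / value / predict representations per time step, with
    attention over a short window of past keys."""
    def __init__(self, hidden, window=5):
        super().__init__()
        self.proj = nn.Linear(hidden, 3 * hidden)   # -> [key, value, predict]
        self.window = window

    def forward(self, h_t, keys, values):           # keys/values: python lists
        k, v, p = self.proj(h_t).chunk(3, dim=-1)
        if keys:
            K = torch.stack(keys[-self.window:], dim=1)     # (B, W, H)
            V = torch.stack(values[-self.window:], dim=1)
            a = F.softmax((K @ k.unsqueeze(-1)).squeeze(-1), dim=-1)
            context = (a.unsqueeze(-1) * V).sum(dim=1)
        else:
            context = torch.zeros_like(p)
        keys.append(k); values.append(v)
        return context + p    # simplified mixing, fed to the output softmax

att = KeyValuePredictAttention(hidden=32)
keys, values = [], []
for _ in range(3):
    out = att(torch.randn(2, 32), keys, values)
print(out.shape)              # torch.Size([2, 32])
```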
Automated Identification of Drug-Drug Interactions in Pediatric
Congestive Heart Failure Patients | Congestive Heart Failure, or CHF, is a serious medical condition in which a
weak heart can cause fluid buildup in the body. When the heart
can't pump enough blood to efficiently deliver nutrients and oxygen to the
body, kidney function may be impaired, resulting in fluid retention. CHF
patients require a broad drug regimen to maintain the delicate system balance,
particularly between their heart and kidneys. These drugs include ACE
inhibitors and Beta Blockers to control blood pressure, anticoagulants to
prevent blood clots, and diuretics to reduce fluid overload. Many of these
drugs may interact, and potential effects of these interactions must be weighed
against their benefits. For this project, we consider a set of 44 drugs
identified as specifically relevant for treating CHF by pediatric cardiologists
at Lucile Packard Children's Hospital. This list was generated as part of our
current work at the LPCH Heart Center. The goal of this project is to identify
and evaluate potentially harmful drug-drug interactions (DDIs) within pediatric
patients with Congestive Heart Failure. This identification will be done
autonomously, so that it may continuously update by evaluating newly published
literature.
| 2017 | Computation and Language |
Training Language Models Using Target-Propagation | While Truncated Back-Propagation through Time (BPTT) is the most popular
approach to training Recurrent Neural Networks (RNNs), it suffers from being
inherently sequential (making parallelization difficult) and from truncating
gradient flow between distant time-steps. We investigate whether Target
Propagation (TPROP) style approaches can address these shortcomings.
Unfortunately, extensive experiments suggest that TPROP generally underperforms
BPTT; we end with an analysis of this phenomenon and suggestions for
future work.
| 2017 | Computation and Language |
Understanding Deep Learning Performance through an Examination of Test
Set Difficulty: A Psychometric Case Study | Interpreting the performance of deep learning models beyond test set accuracy
is challenging. Characteristics of individual data points are often not
considered during evaluation, and each data point is treated equally. We
examine the impact of a test set question's difficulty to determine if there is
a relationship between difficulty and performance. We model difficulty using
well-studied psychometric methods on human response patterns. Experiments on
Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the
likelihood of answering a question correctly is impacted by the question's
difficulty. As DNNs are trained with more data, easy examples are learned more
quickly than hard examples.
| 2018 | Computation and Language |
Fast and unsupervised methods for multilingual cognate clustering | In this paper we explore the use of unsupervised methods for detecting
cognates in multilingual word lists. We use online EM to train sound segment
similarity weights for computing similarity between two words. We tested our
online systems on geographically spread sixteen different language groups of
the world and show that the Online PMI system (Pointwise Mutual Information)
outperforms a HMM based system and two linguistically motivated systems:
LexStat and ALINE. Our results suggest that a PMI system trained in an online
fashion can be used by historical linguists for fast and accurate
identification of cognates in not so well-studied language families.
| 2017 | Computation and Language |
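
The PMI weights at the heart of the system are straightforward to compute from aligned segment pairs. A small sketch (the actual system trains these weights iteratively with online EM; the counts below are toy data):

```python
import math
from collections import Counter

def pmi_weights(aligned_pairs):
    """aligned_pairs: (segment_a, segment_b) tuples from alignments of
    putative cognate pairs. Returns PMI for each observed segment pair."""
    joint = Counter(aligned_pairs)
    left = Counter(a for a, _ in aligned_pairs)
    right = Counter(b for _, b in aligned_pairs)
    n = sum(joint.values())
    return {(a, b): math.log((c / n) / ((left[a] / n) * (right[b] / n)))
            for (a, b), c in joint.items()}

pairs = [("p", "f"), ("p", "f"), ("a", "a"), ("t", "d"), ("a", "a"), ("p", "b")]
for pair, score in pmi_weights(pairs).items():
    print(pair, round(score, 2))   # e.g. ('p', 'f') 0.69
```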
Addressing the Data Sparsity Issue in Neural AMR Parsing | Neural attention models have achieved great success in different NLP tasks.
However, they have not fulfilled their promise on the AMR parsing task due to
the data sparsity issue. In this paper, we describe a sequence-to-sequence
model for AMR parsing and present different ways to tackle the data sparsity
problem. We show that our methods achieve significant improvement over a
baseline neural attention model and our results are also competitive
against state-of-the-art systems that do not use extra linguistic resources.
| 2017 | Computation and Language |
Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers
from Vision | People can refer to quantities in a visual scene by using either exact
cardinals (e.g. one, two, three) or natural language quantifiers (e.g. few,
most, all). In humans, these two processes underlie fairly different cognitive
and neural mechanisms. Inspired by this evidence, the present study proposes
two models for learning the objective meaning of cardinals and quantifiers from
visual scenes containing multiple objects. We show that a model capitalizing on
a 'fuzzy' measure of similarity is effective for learning quantifiers, whereas
the learning of exact cardinals is better accomplished when information about
number is provided.
| 2017 | Computation and Language |
Experiment Segmentation in Scientific Discourse as Clause-level
Structured Prediction using Recurrent Neural Networks | We propose a deep learning model for identifying structure within experiment
narratives in scientific literature. We take a sequence labeling approach to
this problem, and label clauses within experiment narratives to identify the
different parts of the experiment. Our dataset consists of paragraphs taken
from open access PubMed papers labeled with rhetorical information as a result
of our pilot annotation. Our model is a Recurrent Neural Network (RNN) with
Long Short-Term Memory (LSTM) cells that labels clauses. The clause
representations are computed by combining word representations using a novel
attention mechanism that involves a separate RNN. We compare this model against
LSTMs where the input layer has simple or no attention, and against a
feature-rich CRF model. Furthermore, we describe how our work could be useful for information
extraction from scientific literature.
| 2017 | Computation and Language |
Analysis and Optimization of fastText Linear Text Classifier | The paper [1] shows that a simple linear classifier can compete with complex
deep learning algorithms in text classification applications. Combining bag of
words (BoW) and linear classification techniques, fastText [1] attains the same
or only slightly lower accuracy than deep learning algorithms [2-9] that are
orders of magnitude slower. We prove formally that fastText can be transformed
into a simpler equivalent classifier which, unlike fastText, does not have any
hidden layer. We also prove that the necessary and sufficient dimensionality
of the word vector embedding space is exactly the number of document classes.
These results help in constructing more optimal linear text classifiers with
guaranteed maximum classification capabilities. The results are proven exactly
by purely formal algebraic methods without appealing to any empirical data.
| 2017 | Computation and Language |
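
The central identity is elementary: an averaged bag-of-words input passed through a linear embedding layer and a linear output layer is equivalent to a single linear map, so the hidden layer can be collapsed. A numpy illustration (toy sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, C = 10000, 100, 4        # vocabulary, hidden dim, document classes
A = rng.normal(size=(d, V))    # embedding (input) matrix
B = rng.normal(size=(C, d))    # output matrix

x = np.zeros(V)
x[[3, 17, 256]] = 1.0 / 3      # averaged bag-of-words document

logits_two_layer = B @ (A @ x)
logits_collapsed = (B @ A) @ x  # one matrix, no hidden layer
assert np.allclose(logits_two_layer, logits_collapsed)
# rank(B @ A) <= min(d, C): embedding dimensions beyond the number of
# classes add no classification capacity, matching the paper's claim.
```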
Reproducing and learning new algebraic operations on word embeddings
using genetic programming | Word-vector representations associate a high-dimensional real vector with every
word from a corpus. Recently, neural-network based methods have been proposed
for learning this representation from large corpora. This type of
word-to-vector embedding is able to keep, in the learned vector space, some of
the syntactic and semantic relationships present in the original word corpus.
This, in turn, serves to address different types of language classification
tasks by doing algebraic operations defined on the vectors. The general
practice is to assume that the semantic relationships between the words can be
inferred by the application of a-priori specified algebraic operations. Our
general goal in this paper is to show that it is possible to learn methods for
word composition in semantic spaces. Instead of expressing the compositional
method as an algebraic operation, we will encode it as a program, which can be
linear, nonlinear, or involve more intricate expressions. More remarkably, this
program will be evolved from a set of initial random programs by means of
genetic programming (GP). We show that our method is able to reproduce the same
behavior as human-designed algebraic operators. Using a word analogy task as
benchmark, we also show that GP-generated programs are able to obtain accuracy
values above those produced by the commonly used human-designed rule for
algebraic manipulation of word vectors. Finally, we show the robustness of our
approach by executing the evolved programs on the word2vec GoogleNews vectors,
learned over 3 billion running words, and assessing their accuracy in the same
word analogy task.
| 2017 | Computation and Language |
A Stylometric Inquiry into Hyperpartisan and Fake News | This paper reports on a writing style analysis of hyperpartisan (i.e.,
extremely one-sided) news in connection to fake news. It presents a large
corpus of 1,627 articles that were manually fact-checked by professional
journalists from BuzzFeed. The articles originated from 9 well-known political
publishers, 3 each from the mainstream, the hyperpartisan left-wing, and the
hyperpartisan right-wing. In sum, the corpus contains 299 fake news articles, 97% of
which originated from hyperpartisan publishers.
We propose and demonstrate a new way of assessing style similarity between
text categories via Unmasking---a meta-learning approach originally devised for
authorship verification---revealing that the styles of left-wing and
right-wing news have far more in common with each other than either has with
the mainstream. Furthermore, we show that hyperpartisan news can be discriminated
well by its style from the mainstream (F1=0.78), as can be satire from both
(F1=0.81). Unsurprisingly, style-based fake news detection does not come up to
scratch (F1=0.46). Nevertheless, the former results are important for
implementing pre-screening for fake news detectors.
| 2017 | Computation and Language |
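
Unmasking itself is compact enough to sketch with scikit-learn: repeatedly fit a linear classifier, record its cross-validated accuracy, and knock out the most discriminative features. How fast the curve degrades indicates how deep the stylistic difference runs. The toy features below stand in for the paper's style features, and the curves would normally be fed to a meta-learner.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def unmasking_curve(X, y, rounds=5, k=3):
    """Accuracy curve under iterative removal of the k strongest features."""
    X = X.astype(float).copy()
    curve = []
    for _ in range(rounds):
        clf = LinearSVC(dual=False)
        curve.append(cross_val_score(clf, X, y, cv=3).mean())
        clf.fit(X, y)
        top = np.argsort(np.abs(clf.coef_[0]))[-k:]
        X[:, top] = 0.0            # remove the strongest stylistic cues
    return curve

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 40))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :5] += 1.0               # a handful of informative features
print([round(a, 2) for a in unmasking_curve(X, y)])
```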
Harmonic Grammar, Optimality Theory, and Syntax Learnability: An
Empirical Exploration of Czech Word Order | This work presents a systematic theoretical and empirical comparison of the
major algorithms that have been proposed for learning Harmonic and Optimality
Theory grammars (HG and OT, respectively). By comparing learning algorithms, we
are also able to compare the closely related OT and HG frameworks themselves.
Experimental results show that the additional expressivity of the HG framework
over OT affords performance gains in the task of predicting the surface word
order of Czech sentences. We compare the perceptron with the classic Gradual
Learning Algorithm (GLA), which learns OT grammars, as well as the popular
Maximum Entropy model. In addition to showing that the perceptron is
theoretically appealing, our work shows that the performance of the HG model it
learns approaches that of the upper bound in prediction accuracy on a held-out
test set and that it is capable of accurately modeling observed variation.
| 2017 | Computation and Language |
Post-edit Analysis of Collective Biography Generation | Text generation is increasingly common but often requires manual post-editing
where high precision is critical to end users. However, manual editing is
expensive, so we want to ensure this effort is focused on high-value tasks. And
we want to maintain stylistic consistency, a particular challenge in crowd
settings. We present a case study, analysing human post-editing in the context
of a template-based biography generation system. An edit flow visualisation
combined with manual characterisation of edits helps identify and prioritise
work for improving end-to-end efficiency and accuracy.
| 2017 | Computation and Language |
Latent Variable Dialogue Models and their Diversity | We present a dialogue generation model that directly captures the variability
in possible responses to a given input, which reduces the 'boring output' issue
of deterministic dialogue models. Experiments show that our model generates
more diverse outputs than baseline models, and also generates more consistently
acceptable output than sampling from a deterministic encoder-decoder model.
| 2017 | Computation and Language |
Parent Oriented Teacher Selection Causes Language Diversity | An evolutionary model for the emergence of diversity in language is developed. We
investigate the effects of two real-life observations: people prefer
people that they communicate with well, and people interact with people who
are physically close to each other. Clearly these groups are relatively small
compared to the entire population. We restrict selection of the teachers from
such small groups, called imitation sets, around parents. Then the child learns
language from a teacher selected within the imitation set of her parent. As a
result, subcommunities develop their own languages. Within-subcommunity
comprehension is found to be high. The number of languages is
related to the relative size of imitation set by a power law.
| 2017 | Computation and Language |
Enabling Multi-Source Neural Machine Translation By Concatenating Source
Sentences In Multiple Languages | In this paper, we explore a simple solution to "Multi-Source Neural Machine
Translation" (MSNMT) which only relies on preprocessing a N-way multilingual
corpus without modifying the Neural Machine Translation (NMT) architecture or
training procedure. We simply concatenate the source sentences to form a single
long multi-source input sentence while keeping the target side sentence as it
is and train an NMT system using this preprocessed corpus. We evaluate our
method in resource poor as well as resource rich settings and show its
effectiveness (up to 4 BLEU using 2 source languages and up to 6 BLEU using 5
source languages). We also compare against existing methods for MSNMT and show
that our solution gives competitive results despite its simplicity. We also
provide some insights on how the NMT system leverages multilingual information
in such a scenario by visualizing attention.
| 2019 | Computation and Language |
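
The preprocessing is trivial by design, which is the point of the paper. A sketch of the corpus transformation, assuming line-aligned files and plain whitespace concatenation (whether to insert a special separator token between sources is a preprocessing choice):

```python
def make_multi_source(corpora, sep=" "):
    """corpora: list of N line-aligned lists of sentences; the last list is
    the target side. Returns (concatenated_sources, target) pairs."""
    *sources, target = corpora
    multi_source = [sep.join(lines) for lines in zip(*sources)]
    return list(zip(multi_source, target))

# Toy 3-way example: French and German sources, English target.
fr = ["le chat dort"]
de = ["die katze schlaeft"]
en = ["the cat sleeps"]
print(make_multi_source([fr, de, en]))
# [('le chat dort die katze schlaeft', 'the cat sleeps')]
```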
Filtering Tweets for Social Unrest | Since the events of the Arab Spring, there has been increased interest in
using social media to anticipate social unrest. While efforts have been made
toward automated unrest prediction, we focus on filtering the vast volume of
tweets to identify tweets relevant to unrest, which can be provided to
downstream users for further analysis. We train a supervised classifier that is
able to label Arabic language tweets as relevant to unrest with high
reliability. We examine the relationship between training data size and
performance and investigate ways to optimize the model building process while
minimizing cost. We also explore how confidence thresholds can be set to
achieve desired levels of performance.
| 2017 | Computation and Language |
Learning to generate one-sentence biographies from Wikidata | We investigate the generation of one-sentence Wikipedia biographies from
facts derived from Wikidata slot-value pairs. We train a recurrent neural
network sequence-to-sequence model with attention to select facts and generate
textual summaries. Our model incorporates a novel secondary objective that
helps ensure it generates sentences that contain the input facts. The model
achieves a BLEU score of 41, improving significantly upon the vanilla
sequence-to-sequence model and scoring roughly twice that of a simple template
baseline. Human preference evaluation suggests the model is nearly as good as
the Wikipedia reference. Manual analysis explores content selection, suggesting
the model can trade the ability to infer knowledge against the risk of
hallucinating incorrect information.
| 2017 | Computation and Language |
Reinforcement Learning Based Argument Component Detection | Argument component detection (ACD) is an important sub-task in argumentation
mining. ACD aims at detecting and classifying different argument components in
natural language texts. Historical annotations (HAs) are important features the
human annotators consider when they manually perform the ACD task. However, HAs
are largely ignored by existing automatic ACD techniques. Reinforcement
learning (RL) has proven to be an effective method for using HAs in some
natural language processing tasks. In this work, we propose a RL-based ACD
technique, and evaluate its performance on two well-annotated corpora. Results
suggest that, in terms of classification accuracy, HAs-augmented RL outperforms
plain RL by up to 17.85%, and outperforms the state-of-the-art supervised
learning algorithm by up to 11.94%.
| 2017 | Computation and Language |
Hybrid Dialog State Tracker with ASR Features | This paper presents a hybrid dialog state tracker enhanced by trainable
Spoken Language Understanding (SLU) for slot-filling dialog systems. Our
architecture is inspired by previously proposed neural-network-based
belief-tracking systems. In addition, we extended some parts of our modular
architecture with differentiable rules to allow end-to-end training. We
hypothesize that these rules allow our tracker to generalize better than pure
machine-learning based systems. For evaluation, we used the Dialog State
Tracking Challenge (DSTC) 2 dataset - a popular belief tracking testbed with
dialogs from a restaurant information system. To our knowledge, our hybrid
tracker sets a new state-of-the-art result in three out of four categories
within the DSTC2.
| 2017 | Computation and Language |
Multitask Learning with CTC and Segmental CRF for Speech Recognition | Segmental conditional random fields (SCRFs) and connectionist temporal
classification (CTC) are two sequence labeling methods used for end-to-end
training of speech recognition models. Both models define a transcription
probability by marginalizing decisions about latent segmentation alternatives
to derive a sequence probability: the former uses a globally normalized joint
model of segment labels and durations, and the latter classifies each frame as
either an output symbol or a "continuation" of the previous label. In this
paper, we train a recognition model by optimizing an interpolation between the
SCRF and CTC losses, where the same recurrent neural network (RNN) encoder is
used for feature extraction for both outputs. We find that this multitask
objective improves recognition accuracy when decoding with either the SCRF or
CTC models. Additionally, we show that CTC can also be used to pretrain the RNN
encoder, which improves the convergence rate when learning the joint model.
| 2017 | Computation and Language |
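
The multitask objective is just an interpolation of two losses computed from a shared encoder. A hedged PyTorch sketch: the CTC term uses the built-in loss, while the segmental CRF term is left as a placeholder, since a full SCRF implementation is beyond a few lines.

```python
import torch
import torch.nn as nn

class MultitaskASR(nn.Module):
    """Shared RNN encoder with an interpolated CTC + SCRF training loss."""
    def __init__(self, feat_dim=40, hidden=128, vocab=30, lam=0.5):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.ctc_head = nn.Linear(hidden, vocab + 1)   # +1 for the CTC blank
        self.ctc = nn.CTCLoss(blank=vocab)
        self.lam = lam

    def scrf_loss(self, enc, targets):
        # Placeholder: a real SCRF marginalises over segmentations with a
        # globally normalised segment-level model.
        return torch.zeros((), device=enc.device)

    def forward(self, feats, targets, in_lens, tgt_lens):
        enc, _ = self.encoder(feats)                               # (B, T, H)
        log_probs = self.ctc_head(enc).log_softmax(-1).transpose(0, 1)
        ctc = self.ctc(log_probs, targets, in_lens, tgt_lens)
        return self.lam * ctc + (1 - self.lam) * self.scrf_loss(enc, targets)

model = MultitaskASR()
feats = torch.randn(2, 100, 40)
targets = torch.randint(0, 30, (2, 12))
lens = torch.full((2,), 100, dtype=torch.long)
tlens = torch.full((2,), 12, dtype=torch.long)
print(model(feats, targets, lens, tlens))
```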
Systèmes du LIA à DEFT'13 (LIA Systems at DEFT'13) | The 2013 Défi de Fouille de Textes (DEFT) campaign is interested in two
types of language analysis tasks: document classification and
information extraction in the specialized domain of cooking recipes. We present
the systems that the LIA used in DEFT 2013. Our systems show interesting
results despite the complexity of the proposed tasks.
| 2013 | Computation and Language |
Neural Multi-Step Reasoning for Question Answering on Semi-Structured
Tables | Advances in natural language processing have gained momentum in recent
years due to the increasingly popular neural network methods. In this paper, we
explore deep learning techniques for answering multi-step reasoning questions
that operate on semi-structured tables. Challenges here arise from the level of
logical compositionality expressed by questions, as well as the domain
openness. Our approach is weakly supervised, trained on question-answer-table
triples without requiring intermediate strong supervision. It proceeds in two
phases: first, machine-understandable logical forms (programs) are generated
from natural language questions following the work of [Pasupat and Liang,
2015]. Second, paraphrases of logical forms and questions are embedded in a
jointly learned vector space using word and character convolutional neural
networks. A neural scoring function is further used to rank and retrieve the
most probable logical form (interpretation) of a question. Our best single
model achieves 34.8% accuracy on the WikiTableQuestions dataset, while the best
ensemble of our models pushes the state-of-the-art score on this task to 38.7%,
thus slightly surpassing both the engineered feature scoring baseline, as well
as the Neural Programmer model of [Neelakantan et al., 2016].
| 2018 | Computation and Language |
On the Complexity of CCG Parsing | We study the parsing complexity of Combinatory Categorial Grammar (CCG) in
the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove
that any parsing algorithm for this formalism will take in the worst case
exponential time when the size of the grammar, and not only the length of the
input sentence, is included in the analysis. This sets the formalism of
Vijay-Shanker and Weir (1994) apart from weakly equivalent formalisms such as
Tree-Adjoining Grammar (TAG), for which parsing can be performed in time
polynomial in the combined size of grammar and input sentence. Our results
contribute to a refined understanding of the class of mildly context-sensitive
grammars, and inform the search for new, mildly context-sensitive versions of
CCG.
| 2018 | Computation and Language |
Guided Deep List: Automating the Generation of Epidemiological Line
Lists from Open Sources | Real-time monitoring and responses to emerging public health threats rely on
the availability of timely surveillance data. During the early stages of an
epidemic, the ready availability of line lists with detailed tabular
information about laboratory-confirmed cases can assist epidemiologists in
making reliable inferences and forecasts. Such inferences are crucial to
understand the epidemiology of a specific disease early enough to stop or
control the outbreak. However, construction of such line lists requires
considerable human supervision, and such lists are therefore difficult to
generate in real time. In this paper, we motivate Guided Deep List, the first tool for
building automated line lists (in near real-time) from open source reports of
emerging disease outbreaks. Specifically, we focus on deriving epidemiological
characteristics of an emerging disease and the affected population from reports
of illness. Guided Deep List uses distributed vector representations (à la
word2vec) to discover a set of indicators for each line list feature. This
discovery of indicators is followed by the use of dependency parsing based
techniques for final extraction in tabular form. We evaluate the performance of
Guided Deep List against a human annotated line list provided by HealthMap
corresponding to MERS outbreaks in Saudi Arabia. We demonstrate that Guided
Deep List extracts line list features with increased accuracy compared to a
baseline method. We further show how these automatically extracted line list
features can be used for making epidemiological inferences, such as inferring
demographics and symptoms-to-hospitalization period of affected individuals.
| 2017 | Computation and Language |
Calculating Probabilities Simplifies Word Learning | Children can use the statistical regularities of their environment to learn
word meanings, a mechanism known as cross-situational learning. We take a
computational approach to investigate how the information present during each
observation in a cross-situational framework can affect the overall acquisition
of word meanings. We do so by formulating various in-the-moment learning
mechanisms that are sensitive to different statistics of the environment, such
as counts and conditional probabilities. Each mechanism introduces a unique
source of competition or mutual exclusivity bias to the model; the mechanism
that maximally uses the model's knowledge of word meanings performs the best.
Moreover, the gap between this mechanism and others is amplified in more
challenging learning scenarios, such as learning from few examples.
| 2017 | Computation and Language |
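
One of the in-the-moment mechanisms described above, scoring candidate meanings by the conditional probability P(referent | word), can be sketched in a few lines (toy scenes; the paper's models and competition mechanisms are richer):

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Count word-referent co-occurrences across scenes and score candidate
    meanings by the conditional probability P(referent | word)."""
    def __init__(self):
        self.joint = defaultdict(float)
        self.word = defaultdict(float)

    def observe(self, words, referents):
        for w in words:
            self.word[w] += 1
            for r in referents:
                self.joint[(w, r)] += 1

    def p_referent_given_word(self, w, r):
        return self.joint[(w, r)] / self.word[w] if self.word[w] else 0.0

learner = CrossSituationalLearner()
learner.observe(["ball", "dog"], ["BALL", "DOG"])  # ambiguous scene
learner.observe(["ball", "cup"], ["BALL", "CUP"])  # disambiguating scene
print(learner.p_referent_given_word("ball", "BALL"))  # 1.0
print(learner.p_referent_given_word("ball", "DOG"))   # 0.5
```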
Context-Aware Prediction of Derivational Word-forms | Derivational morphology is a fundamental and complex characteristic of
language. In this paper we propose the new task of predicting the derivational
form of a given base-form lemma that is appropriate for a given context. We
present an encoder--decoder style neural network to produce a derived form
character-by-character, based on its corresponding character-level
representation of the base form and the context. We demonstrate that our model
is able to generate valid context-sensitive derivations from known base forms,
but is less accurate under a lexicon agnostic setting.
| 2017 | Computation and Language |
One Representation per Word - Does it make Sense for Composition? | In this paper, we investigate whether an a priori disambiguation of word
senses is strictly necessary or whether the meaning of a word in context can be
disambiguated through composition alone. We evaluate the performance of
off-the-shelf single-vector and multi-sense vector models on a benchmark phrase
similarity task and a novel task for word-sense discrimination. We find that
single-sense vector models perform as well or better than multi-sense vector
models despite arguably less clean elementary representations. Our findings
furthermore show that simple composition functions such as pointwise addition
are able to recover sense specific information from a single-sense vector model
remarkably well.
| 2,017 | Computation and Language |
Data Distillation for Controlling Specificity in Dialogue Generation | People speak at different levels of specificity in different situations,
depending on their knowledge, interlocutors, mood, etc. A conversational agent
should have this ability and know when to be specific and when to be general.
We propose an approach that gives a neural network--based conversational agent
this ability. Our approach involves alternating between \emph{data
distillation} and model training: removing training examples that are closest
to the responses most commonly produced by the model trained in the previous
round, and then retraining the model on the remaining dataset. Dialogue generation
models trained with different degrees of data distillation manifest different
levels of specificity.
We then train a reinforcement learning system for selecting among this pool
of generation models, to choose the best level of specificity for a given
input. Compared to the original generative model trained without distillation,
the proposed system is capable of generating more interesting and
higher-quality responses, in addition to appropriately adjusting specificity
depending on the context.
Our research constitutes a specific case of a broader approach involving
training multiple subsystems from a single dataset distinguished by differences
in a specific property one wishes to model. We show that from such a set of
subsystems, one can use reinforcement learning to build a system that tailors
its output to different input contexts at test time.
| 2,017 | Computation and Language |
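The alternating distillation loop can be sketched compactly. The version below is a schematic reading of the procedure, with a hypothetical train_model callable and a token-overlap stand-in for response similarity.

```python
from collections import Counter

def similarity(a, b):  # stand-in: Jaccard overlap of tokens
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def distill(pairs, train_model, rounds=3, drop_frac=0.1):
    models = []
    for _ in range(rounds):
        model = train_model(pairs)  # hypothetical training call
        models.append(model)
        # Most common response produced by the current model.
        top = Counter(model.respond(src) for src, _ in pairs).most_common(1)[0][0]
        # Drop the training examples whose responses are closest to it.
        pairs = sorted(pairs, key=lambda p: similarity(p[1], top))
        pairs = pairs[: int(len(pairs) * (1 - drop_frac))]
    return models  # one generation model per specificity level
```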
Fine-Grained Entity Type Classification by Jointly Learning
Representations and Label Embeddings | Fine-grained entity type classification (FETC) is the task of classifying an
entity mention into a broad set of types. The distant supervision paradigm is
extensively used to generate training data for this task. However, the
generated training data assigns the same set of labels to every mention of an
entity without considering its local context. Existing FETC systems have two
major drawbacks: they assume the training data to be noise-free and they rely
on hand-crafted features. Our work overcomes both drawbacks. We propose a
neural network model that jointly learns representations of entity mentions
and their context, eliminating the use of hand-crafted features. Our model
treats the training data as noisy and uses a non-parametric variant of the
hinge loss function. Experiments show that the
proposed model outperforms previous state-of-the-art methods on two publicly
available datasets, namely FIGER (GOLD) and BBN with an average relative
improvement of 2.69% in micro-F1 score. Knowledge learnt by our model on one
dataset can be transferred to other datasets, either with the same model or
with other FETC systems, and such transfer further improves the performance of
the respective models.
| 2,017 | Computation and Language |
Improving a Strong Neural Parser with Conjunction-Specific Features | While dependency parsers reach very high overall accuracy, some dependency
relations are much harder than others. In particular, dependency parsers
perform poorly in coordination construction (i.e., correctly attaching the
"conj" relation). We extend a state-of-the-art dependency parser with
conjunction-specific features, focusing on the similarity between the conjuncts'
head words. Training the extended parser yields an improvement in "conj"
attachment as well as in overall dependency parsing accuracy on the Stanford
dependency conversion of the Penn TreeBank.
| 2,017 | Computation and Language |
Improving Chinese SRL with Heterogeneous Annotations | Previous studies on Chinese semantic role labeling (SRL) have concentrated on
a single semantically annotated corpus, whose training data is often limited.
Meanwhile, other semantically annotated corpora for Chinese SRL are usually
available, scattered across different annotation frameworks, yet data sparsity
remains a bottleneck. This situation calls for larger training datasets, or
for effective approaches that can take advantage of highly heterogeneous data.
In this paper, we focus mainly on the latter, that is, improving Chinese SRL
by using heterogeneous corpora together. We propose a novel
progressive learning model which augments the Progressive Neural Network with
Gated Recurrent Adapters. The model can accommodate heterogeneous inputs and
effectively transfer knowledge between them. We also release a new corpus,
Chinese SemBank, for Chinese SRL. Experiments on CPB 1.0 show that our model
outperforms state-of-the-art methods.
| 2,017 | Computation and Language |
Dialectometric analysis of language variation in Twitter | In the last few years, microblogging platforms such as Twitter have given
rise to a deluge of textual data that can be used for the analysis of informal
communication between millions of individuals. In this work, we propose an
information-theoretic approach to geographic language variation using a corpus
based on Twitter. We test our models with tens of concepts and their associated
keywords detected in Spanish tweets geolocated in Spain. We employ
dialectometric measures (cosine similarity and Jensen-Shannon divergence) to
quantify the linguistic distance on the lexical level between cells created in
a uniform grid over the map. This can be done for a single concept or in the
general case taking into account an average of the considered variants. The
latter permits an analysis of the dialects that naturally emerge from the data.
Interestingly, our results reveal the existence of two dialect macrovarieties.
The first group includes a region-specific speech spoken in small towns and
rural areas whereas the second cluster encompasses cities that tend to use a
more uniform variety. Since the results obtained with the two different metrics
qualitatively agree, our work suggests that social media corpora can be
efficiently used for dialectometric analyses.
| 2,017 | Computation and Language |
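The cell-to-cell lexical distance reduces to a standard information-theoretic quantity, sketched below; the variant counts are invented toy data.

```python
import numpy as np

def jensen_shannon(p, q):
    """JS divergence (base 2, bounded in [0, 1]) between two
    distributions over a concept's lexical variants."""
    p = np.asarray(p, float); p /= p.sum()
    q = np.asarray(q, float); q /= q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy variant counts for one concept ("car": coche/auto/carro) in two cells.
print(jensen_shannon([120, 30, 5], [40, 90, 10]))  # 0 = identical usage
```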
Tackling Error Propagation through Reinforcement Learning: A Case of
Greedy Dependency Parsing | Error propagation is a common problem in NLP. Reinforcement learning explores
erroneous states during training and can therefore be more robust when mistakes
are made early in a process. In this paper, we apply reinforcement learning to
greedy dependency parsing which is known to suffer from error propagation.
Reinforcement learning improves accuracy of both labeled and unlabeled
dependencies of the Stanford Neural Dependency Parser, a high performance
greedy parser, while maintaining its efficiency. We investigate the portion of
errors which are the result of error propagation and confirm that reinforcement
learning reduces the occurrence of error propagation.
| 2,017 | Computation and Language |
Triaging Content Severity in Online Mental Health Forums | Mental health forums are online communities where people express their issues
and seek help from moderators and other users. In such forums, there are often
posts with severe content indicating that the user is in acute distress and
there is a risk of attempted self-harm. Moderators need to respond to these
severe posts in a timely manner to prevent potential self-harm. However, the
large volume of daily posted content makes it difficult for the moderators to
locate and respond to these critical posts. We present a framework for triaging
user content into four severity categories which are defined based on
indications of self-harm ideation. Our models are based on a feature-rich
classification framework which includes lexical, psycholinguistic, contextual
and topic modeling features. Our approaches improve the state of the art in
triaging the content severity in mental health forums by large margins (up to
17% improvement in F-1 score). Using the proposed model, we analyze the
mental state of users and we show that overall, long-term users of the forum
demonstrate a decreased severity of risk over time. Our analysis on the
interaction of the moderators with the users further indicates that without an
automatic way to identify critical content, it is indeed challenging for the
moderators to provide timely response to the users in need.
| 2,017 | Computation and Language |
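The lexical part of such a feature-rich classifier follows a familiar pattern. Below is a minimal sketch with TF-IDF features and a linear model only; the psycholinguistic, contextual, and topic features are omitted, and the posts and severity labels are invented illustrations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["feeling okay today", "a rough week but coping",
         "everything feels hopeless", "I want to hurt myself tonight"]
labels = ["green", "amber", "red", "crisis"]  # illustrative four-way scheme

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(posts, labels)
print(clf.predict(["I cannot cope and want to hurt myself"]))
```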
EVE: Explainable Vector Based Embedding Technique Using Wikipedia | We present an unsupervised explainable word embedding technique, called EVE,
which is built upon the structure of Wikipedia. The proposed model defines the
dimensions of a semantic vector representing a word using human-readable
labels, thereby making it readily interpretable. Specifically, each vector is
constructed using the Wikipedia category graph structure together with the
Wikipedia article link structure. To test the effectiveness of the proposed
word embedding model, we consider its usefulness in three fundamental tasks: 1)
intruder detection - to evaluate its ability to identify a non-coherent vector
from a list of coherent vectors, 2) ability to cluster - to evaluate its
tendency to group related vectors together while keeping unrelated vectors in
separate clusters, and 3) sorting relevant items first - to evaluate its
ability to rank vectors (items) relevant to the query in the top order of the
result. For each task, we also propose a strategy to generate a task-specific
human-interpretable explanation from the model. These demonstrate the overall
effectiveness of the explainable embeddings generated by EVE. Finally, we
compare EVE with the Word2Vec, FastText, and GloVe embedding techniques across
the three tasks, and report improvements over the state-of-the-art.
| 2,017 | Computation and Language |
Unsupervised Learning of Morphological Forests | This paper focuses on unsupervised modeling of morphological families,
collectively comprising a forest over the language vocabulary. This formulation
enables us to capture edgewise properties reflecting single-step morphological
derivations, along with global distributional properties of the entire forest.
These global properties constrain the size of the affix set and encourage
formation of tight morphological families. The resulting objective is solved
using Integer Linear Programming (ILP) paired with contrastive estimation. We
train the model by alternating between optimizing the local log-linear model
and the global ILP objective. We evaluate our system on three tasks: root
detection, clustering of morphological families and segmentation. Our
experiments demonstrate that our model yields consistent gains in all three
tasks compared with the best published results.
| 2,017 | Computation and Language |
Feature Generation for Robust Semantic Role Labeling | Hand-engineered feature sets are a well understood method for creating robust
NLP models, but they require a lot of expertise and effort to create. In this
work we describe how to automatically generate rich feature sets from simple
units called featlets, requiring less engineering. Using information gain to
guide the generation process, we train models which rival the state of the art
on two standard Semantic Role Labeling datasets with almost no task or
linguistic insight.
| 2,017 | Computation and Language |
Pronunciation recognition of English phonemes /\textipa{@}/, /{\ae}/,
/\textipa{A}:/ and /\textipa{2}/ using Formants and Mel Frequency Cepstral
Coefficients | The Vocal Joystick Vowel Corpus, by Washington University, was used to study
monophthongs pronounced by native English speakers. The objective of this study
was to quantitatively measure the extent to which speech recognition methods
can distinguish between similar sounding vowels. In particular, the phonemes
/\textipa{@}/, /{\ae}/, /\textipa{A}:/ and /\textipa{2}/ were analysed. 748
sound files from the corpus were used and subjected to Linear Predictive Coding
(LPC) to compute their formants, and to the Mel Frequency Cepstral
Coefficients (MFCC) algorithm to compute their cepstral coefficients. A Decision Tree
Classifier was used to build a predictive model that learnt the patterns of the
first two formants measured in the data set, as well as the patterns of the 13
cepstral coefficients. An accuracy of 70\% was achieved using formants for the
mentioned phonemes. For the MFCC analysis, an accuracy of 52\% was achieved, and
an accuracy of 71\% when /\textipa{@}/ was ignored. The results obtained show
that the studied algorithms are far from mimicking the ability of
distinguishing subtle differences in sounds like human hearing does.
| 2,017 | Computation and Language |
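The MFCC branch of the experiment follows a standard recipe, sketched below with librosa and scikit-learn; the file names and labels are placeholders, not the actual corpus layout.

```python
import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier

def mfcc_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dim vector per recording

files = ["ae_001.wav", "aa_001.wav", "ah_001.wav"]  # placeholder paths
labels = ["ae", "aa", "ah"]                          # phoneme labels

X = np.stack([mfcc_features(f) for f in files])
clf = DecisionTreeClassifier().fit(X, labels)
```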
A Neural Attention Model for Categorizing Patient Safety Events | Medical errors are leading causes of death in the US and as such, prevention
of these errors is paramount to promoting health care. Patient Safety Event
reports are narratives describing potential adverse events to the patients and
are important in identifying and preventing medical errors. We present a neural
network architecture for identifying the type of safety events which is the
first step in understanding these narratives. Our proposed model is based on a
soft neural attention model to improve the effectiveness of encoding long
sequences. Empirical results on two large-scale real-world datasets of patient
safety reports demonstrate the effectiveness of our method with significant
improvements over existing methods.
| 2,017 | Computation and Language |
LTSG: Latent Topical Skip-Gram for Mutually Learning Topic Model and
Vector Representations | Topic models have been widely used in discovering latent topics which are
shared across documents in text mining. Vector representations, word embeddings
and topic embeddings, map words and topics into a low-dimensional and dense
real-value vector space, which have obtained high performance in NLP tasks.
However, most existing models assume that the results produced by one of them
are perfectly correct, and use them as prior knowledge for improving the other
model. Some other models use information learned from a large external corpus
to help improve a smaller one. In this paper, we aim to build an algorithmic
framework that makes topic models and vector representations mutually improve
each other within the same corpus. An EM-style algorithm framework is employed
to iteratively optimize both topic model and vector representations.
Experimental results show that our model outperforms state-of-the-art methods on
various NLP tasks.
| 2,017 | Computation and Language |
Utilizing Lexical Similarity between Related, Low-resource Languages for
Pivot-based SMT | We investigate pivot-based translation between related languages in a low
resource, phrase-based SMT setting. We show that a subword-level pivot-based
SMT model using a related pivot language is substantially better than word and
morpheme-level pivot models. It is also highly competitive with the best direct
translation model, which is encouraging as no direct source-target training
corpus is used. We also show that combining multiple related language pivot
models can rival a direct translation model. Thus, the use of subwords as
translation units coupled with multiple related pivot languages can compensate
for the lack of a direct parallel corpus.
| 2,017 | Computation and Language |
Are Emojis Predictable? | Emojis are ideograms which are naturally combined with plain text to visually
complement or condense the meaning of a message. Despite being widely used in
social media, their underlying semantics have received little attention from a
Natural Language Processing standpoint. In this paper, we investigate the
relation between words and emojis, studying the novel task of predicting which
emojis are evoked by text-based tweet messages. We train several models based
on Long Short-Term Memory networks (LSTMs) in this task. Our experimental
results show that our neural model outperforms two baselines as well as humans
solving the same task, suggesting that computational models are able to better
capture the underlying semantics of emojis.
| 2,017 | Computation and Language |
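A minimal version of such an LSTM classifier is sketched below, assuming tweets are already tokenized into padded integer id sequences; all sizes are illustrative, not the paper's settings.

```python
import tensorflow as tf

VOCAB, N_EMOJIS = 20000, 20  # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_EMOJIS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_token_ids, emoji_labels, epochs=5)  # with real tweet data
```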
Inherent Biases of Recurrent Neural Networks for Phonological
Assimilation and Dissimilation | A recurrent neural network model of phonological pattern learning is
proposed. The model is a relatively simple neural network with one recurrent
layer, and displays biases in learning that mimic observed biases in human
learning. Single-feature patterns are learned faster than two-feature patterns,
and vowel or consonant-only patterns are learned faster than patterns involving
vowels and consonants, mimicking the results of laboratory learning
experiments. In non-recurrent models, capturing these biases requires the use
of alpha features or some other representation of repeated features, but with a
recurrent neural network, these elaborations are not necessary.
| 2,017 | Computation and Language |
Dirichlet-vMF Mixture Model | This document is about the multi-document Von-Mises-Fisher mixture model with
a Dirichlet prior, referred to as VMFMix. VMFMix is analogous to Latent
Dirichlet Allocation (LDA) in that both can capture co-occurrence patterns
across multiple documents. The difference is that in VMFMix, the topic-word
distribution is defined on a continuous n-dimensional hypersphere. Hence VMFMix
is used to derive topic embeddings, i.e., representative vectors, from multiple
sets of embedding vectors. An efficient Variational Expectation-Maximization
inference algorithm is derived. The performance of VMFMix on two document
classification tasks is reported, with some preliminary analysis.
| 2,017 | Computation and Language |
Use Generalized Representations, But Do Not Forget Surface Features | Only a year ago, all state-of-the-art coreference resolvers were using an
extensive amount of surface features. Recently, there was a paradigm shift
towards using word embeddings and deep neural networks, where the use of
surface features is very limited. In this paper, we show that a simple SVM
model with surface features outperforms more complex neural models for
detecting anaphoric mentions. Our analysis suggests that using generalized
representations and surface features have different strengths that should both
be taken into account for improving coreference resolution.
| 2,017 | Computation and Language |
Consistent Alignment of Word Embedding Models | Word embedding models offer continuous vector representations that can
capture rich contextual semantics based on their word co-occurrence patterns.
While these word vectors can provide very effective features used in many NLP
tasks such as clustering similar words and inferring learning relationships,
many challenges and open research questions remain. In this paper, we propose a
solution that aligns variations of the same model (or different models) in a
joint low-dimensional latent space leveraging carefully generated synthetic
data points. This generative process is inspired by the observation that a
variety of linguistic relationships is captured by simple linear operations in
embedded space. We demonstrate that our approach can lead to substantial
improvements in recovering embeddings of local neighborhoods.
| 2,017 | Computation and Language |
When confidence and competence collide: Effects on online
decision-making discussions | Group discussions are a way for individuals to exchange ideas and arguments
in order to reach better decisions than they could on their own. One of the
premises of productive discussions is that better solutions will prevail, and
that the idea selection process is mediated by the (relative) competence of the
individuals involved. However, since people may not know their actual
competence on a new task, their behavior is influenced by their self-estimated
competence --- that is, their confidence --- which can be misaligned with their
actual competence.
Our goal in this work is to understand the effects of confidence-competence
misalignment on the dynamics and outcomes of discussions. To this end, we
design a large-scale natural setting, in the form of an online team-based
geography game, that allows us to disentangle confidence from competence and
thus separate their effects.
We find that in task-oriented discussions, the more-confident individuals
have a larger impact on the group's decisions even when these individuals are
at the same level of competence as their teammates. Furthermore, this
unjustified role of confidence in the decision-making process often leads teams
to under-perform. We explore this phenomenon by investigating the effects of
confidence on conversational dynamics.
| 2,017 | Computation and Language |
Residual Convolutional CTC Networks for Automatic Speech Recognition | Deep learning approaches have been widely used in Automatic Speech
Recognition (ASR) and they have achieved a significant accuracy improvement.
Especially, Convolutional Neural Networks (CNNs) have been revisited in ASR
recently. However, most CNNs used in existing work have fewer than 10 layers
which may not be deep enough to capture all human speech signal information. In
this paper, we propose a novel deep and wide CNN architecture denoted as
RCNN-CTC, which has residual connections and Connectionist Temporal
Classification (CTC) loss function. RCNN-CTC is an end-to-end system which can
exploit temporal and spectral structures of speech signals simultaneously.
Furthermore, we introduce a CTC-based system combination, which is different
from the conventional frame-wise senone-based one. The basic subsystems adopted
in the combination are different types and thus mutually complementary to each
other. Experimental results show that our proposed single system RCNN-CTC can
achieve the lowest word error rate (WER) on WSJ and Tencent Chat data sets,
compared to several widely used neural network systems in ASR. In addition, the
proposed system combination can offer a further error reduction on these two
data sets, resulting in relative WER reductions of $14.91\%$ and $6.52\%$ on
WSJ dev93 and Tencent Chat data sets respectively.
| 2,017 | Computation and Language |
Deep Voice: Real-time Neural Text-to-Speech | We present Deep Voice, a production-quality text-to-speech system constructed
entirely from deep neural networks. Deep Voice lays the groundwork for truly
end-to-end neural speech synthesis. The system comprises five major building
blocks: a segmentation model for locating phoneme boundaries, a
grapheme-to-phoneme conversion model, a phoneme duration prediction model, a
fundamental frequency prediction model, and an audio synthesis model. For the
segmentation model, we propose a novel way of performing phoneme boundary
detection with deep neural networks using connectionist temporal classification
(CTC) loss. For the audio synthesis model, we implement a variant of WaveNet
that requires fewer parameters and trains faster than the original. By using a
neural network for each component, our system is simpler and more flexible than
traditional text-to-speech systems, where each component requires laborious
feature engineering and extensive domain expertise. Finally, we show that
inference with our system can be performed faster than real time and describe
optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x
speedups over existing implementations.
| 2,017 | Computation and Language |
Critical Survey of the Freely Available Arabic Corpora | The availability of corpora is a major factor in building natural language
processing applications. However, the costs of acquiring corpora can prevent
some researchers from going further in their endeavours. Easy access to
freely available corpora is urgently needed in the NLP research community,
especially for languages such as Arabic. Currently, there is no easy way to
access a comprehensive and updated list of freely available Arabic corpora.
In this paper, we present the results of a recent survey conducted to identify
freely available Arabic corpora and language resources. Our preliminary
results showed an initial list of 66 sources. We present our findings across
the various categories studied and provide direct links to the data where
possible.
| 2,017 | Computation and Language |
Detecting (Un)Important Content for Single-Document News Summarization | We present a robust approach for detecting intrinsic sentence importance in
news, by training on two corpora of document-summary pairs. When used for
single-document summarization, our approach, combined with the "beginning of
document" heuristic, outperforms a state-of-the-art summarizer and the
beginning-of-article baseline in both automatic and manual evaluations. These
results represent an important advance because in the absence of cross-document
repetition, single document summarizers for news have not been able to
consistently outperform the strong beginning-of-article baseline.
| 2,017 | Computation and Language |
Friends and Enemies of Clinton and Trump: Using Context for Detecting
Stance in Political Tweets | Stance detection, the task of identifying the speaker's opinion towards a
particular target, has attracted the attention of researchers. This paper
describes a novel approach for detecting stance in Twitter. We define a set of
features in order to consider the context surrounding a target of interest with
the final aim of training a model for predicting the stance towards the
mentioned targets. In particular, we are interested in investigating political
debates in social media. For this reason we evaluated our approach focusing on
two targets of the SemEval-2016 Task6 on Detecting stance in tweets, which are
related to the political campaign for the 2016 U.S. presidential elections:
Hillary Clinton vs. Donald Trump. For the sake of comparison with the state of
the art, we evaluated our model against the dataset released in the
SemEval-2016 Task 6 shared task competition. Our results outperform the best
ones obtained by participating teams, and show that information about enemies
and friends of politicians helps in detecting stance towards them.
| 2,020 | Computation and Language |
A case study on English-Malayalam Machine Translation | In this paper we present our work on a case study on Statistical Machine
Translation (SMT) and Rule based machine translation (RBMT) for translation
from English to Malayalam and Malayalam to English. One of the motivations of
our study is to make a three way performance comparison, such as, a) SMT and
RBMT b) English to Malayalam SMT and Malayalam to English SMT c) English to
Malayalam RBMT and Malayalam to English RBMT. We describe the development of
English to Malayalam and Malayalam to English baseline phrase based SMT system
and the evaluation of its performance compared against the RBMT system. Based
on our study the observations are: a) SMT systems outperform RBMT systems, b)
In the case of SMT, English - Malayalam systems perform better than that of
Malayalam - English systems, c) In the case of RBMT, Malayalam to English
systems perform better than English to Malayalam systems. Based on our
evaluations and detailed error analysis, we describe the requirements of
incorporating morphological processing into the SMT to improve the accuracy of
translation.
| 2,017 | Computation and Language |
Identifying beneficial task relations for multi-task learning in deep
neural networks | Multi-task learning (MTL) in deep neural networks for NLP has recently
received increasing interest due to some compelling benefits, including its
potential to efficiently regularize models and to reduce the need for labeled
data. While it has brought significant improvements in a number of NLP tasks,
mixed results have been reported, and little is known about the conditions
under which MTL leads to gains in NLP. This paper sheds light on the specific
task relations that can lead to gains from MTL models over single-task setups.
| 2,017 | Computation and Language |
Political Homophily in Independence Movements: Analysing and Classifying
Social Media Users by National Identity | Social media and data mining are increasingly being used to analyse political
and societal issues. Here we undertake the classification of social media users
as supporting or opposing ongoing independence movements in their territories.
Independence movements occur in territories whose citizens have conflicting
national identities; users with opposing national identities will then support
or oppose the sense of being part of an independent nation that differs from
the officially recognised country. We describe a methodology that relies on
users' self-reported location to build large-scale datasets for three
territories -- Catalonia, the Basque Country and Scotland. An analysis of these
datasets shows that homophily plays an important role in determining who people
connect with, as users predominantly choose to follow and interact with others
from the same national identity. We show that a classifier relying on users'
follow networks can achieve accurate, language-independent classification
performances ranging from 85% to 97% for the three territories.
| 2,018 | Computation and Language |
A Knowledge-Based Approach to Word Sense Disambiguation by
distributional selection and semantic features | Word sense disambiguation improves many Natural Language Processing (NLP)
applications such as Information Retrieval, Information Extraction, Machine
Translation, or Lexical Simplification. Roughly speaking, the aim is to choose
for each word in a text its best sense. One of the most popular methods
estimates the local semantic relatedness between two word senses and
then extends it to all words in the text. The most direct method computes a
rough score for every pair of word senses and chooses the lexical chain that
has the best score (one can imagine the exponential complexity that this
exhaustive approach entails). In this paper, we propose to use a combinatorial
optimization metaheuristic for choosing the nearest neighbors obtained by
distributional selection around the word to disambiguate. We test and evaluate
our method on a French corpus using the semantic network BabelNet. The
accuracy obtained is 78% on all the nouns and verbs chosen for the evaluation.
| 2,015 | Computation and Language |
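The local similarity scoring at the heart of such methods can be sketched with WordNet standing in for BabelNet (requires the NLTK wordnet data); the neighbor selection is simplified here to a given list.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def best_sense(target, neighbors):
    """Score each sense of `target` by its proximity to the senses of
    its (e.g. distributionally selected) neighbors."""
    scored = []
    for sense in wn.synsets(target):
        score = sum(
            max((sense.path_similarity(s) or 0.0 for s in wn.synsets(n)),
                default=0.0)
            for n in neighbors
        )
        scored.append((score, sense))
    return max(scored, key=lambda x: x[0])[1] if scored else None

print(best_sense("bank", ["money", "deposit", "loan"]))
```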
Distributional Analysis Approaches to Improve Semantic Disambiguation | Word sense disambiguation (WSD) improves many Natural Language Processing
(NLP) applications such as Information Retrieval, Machine Translation or
Lexical Simplification. WSD is the ability of determining a word sense among
different ones within a polysemic lexical unit taking into account the context.
The most straightforward approach uses a semantic proximity measure between the
word sense candidates of the target word and those of its context. Such a
method very easily entails a combinatorial explosion. In this paper, we propose
two methods based on distributional analysis which make it possible to reduce
the exponential complexity without losing coherence. We present a comparison
between the selection of distributional neighbors and the linearly nearest
neighbors. The figures obtained show that selecting distributional neighbors
leads to better results.
| 2,017 | Computation and Language |
Soft Label Memorization-Generalization for Natural Language Inference | Often when multiple labels are obtained for a training example it is assumed
that there is an element of noise that must be accounted for. It has been shown
that this disagreement can be considered signal instead of noise. In this work
we investigate using soft labels for training data to improve generalization in
machine learning models. However, using soft labels for training Deep Neural
Networks (DNNs) is not practical due to the costs involved in obtaining
multiple labels for large data sets. We propose soft label
memorization-generalization (SLMG), a fine-tuning approach to using soft labels
for training DNNs. We assume that differences in labels provided by human
annotators represent ambiguity about the true label instead of noise.
Experiments with SLMG demonstrate improved generalization performance on the
Natural Language Inference (NLI) task. Our experiments show that by injecting a
small percentage of soft label training data (0.03% of training set size) we
can improve generalization performance over several baselines.
| 2,019 | Computation and Language |
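The soft-label training objective itself is compact: the per-example annotator distribution is the target and the loss is a KL divergence. A minimal PyTorch sketch, with a stand-in linear classifier and random features:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(300, 3)  # stand-in NLI classifier head
x = torch.randn(8, 300)          # batch of sentence-pair features
# Annotator vote distributions as soft targets (random here).
soft_targets = torch.softmax(torch.randn(8, 3), dim=-1)

log_probs = F.log_softmax(model(x), dim=-1)
loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
loss.backward()
```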
Scaffolding Networks: Incremental Learning and Teaching Through
Questioning | We introduce a new paradigm of learning for reasoning, understanding, and
prediction, as well as the scaffolding network to implement this paradigm. The
scaffolding network embodies an incremental learning approach that is
formulated as a teacher-student network architecture to teach machines how to
understand text and do reasoning. The key to our computational scaffolding
approach is the interactions between the teacher and the student through
sequential questioning. The student observes each sentence in the text
incrementally, and it uses an attention-based neural net to discover and
register the key information in relation to its current memory. Meanwhile, the
teacher asks questions about the observed text, and the student network gets
rewarded by correctly answering these questions. The entire network is updated
continually using reinforcement learning. Our experimental results on synthetic
and real datasets show that the scaffolding network not only outperforms
state-of-the-art methods but also learns to do reasoning in a scalable way even
with little human generated input.
| 2,017 | Computation and Language |
Studying Positive Speech on Twitter | We present results of empirical studies on positive speech on Twitter. By
positive speech we understand speech that works for the betterment of a given
situation, in this case relations between different communities in a
conflict-prone country. We worked with four Twitter data sets. Through
semi-manual opinion mining, we found that positive speech accounted for < 1% of
the data. In fully automated studies, we tested two approaches: unsupervised
statistical analysis, and supervised text classification based on distributed
word representation. We discuss benefits and challenges of those approaches and
report empirical evidence obtained in the study.
| 2,017 | Computation and Language |
A Joint Identification Approach for Argumentative Writing Revisions | Prior work on revision identification typically uses a pipeline method:
revision extraction is first conducted to identify the locations of revisions
and revision classification is then conducted on the identified revisions. Such
a setting propagates the errors of the revision extraction step to the revision
classification step. This paper proposes an approach that identifies the
revision location and the revision type jointly to solve the issue of error
propagation. It utilizes a sequence representation of revisions and conducts
sequence labeling for revision identification. A mutation-based approach is
utilized to update identification sequences. Results demonstrate that our
proposed approach yields better performance on both revision location
extraction and revision type classification compared to a pipeline baseline.
| 2,017 | Computation and Language |
Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence
Labelling | Most existing sequence labelling models rely on a fixed decomposition of a
target sequence into a sequence of basic units. These methods suffer from two
major drawbacks: 1) the set of basic units is fixed, such as the set of words,
characters or phonemes in speech recognition, and 2) the decomposition of
target sequences is fixed. These drawbacks usually result in sub-optimal
performance of modeling sequences. In this paper, we extend the popular CTC
loss criterion to alleviate these limitations, and propose a new loss function
called Gram-CTC. While preserving the advantages of CTC, Gram-CTC automatically
learns the best set of basic units (grams), as well as the most suitable
decomposition of target sequences. Unlike CTC, Gram-CTC allows the model to
output a variable number of characters at each time step, which enables the model
to capture longer term dependency and improves the computational efficiency. We
demonstrate that the proposed Gram-CTC improves CTC in terms of both
performance and efficiency on the large vocabulary speech recognition task at
multiple scales of data, and that with Gram-CTC we can outperform the
state-of-the-art on a standard speech benchmark.
| 2,017 | Computation and Language |
Learning Conversational Systems that Interleave Task and Non-Task
Content | Task-oriented dialog systems have been applied in various tasks, such as
automated personal assistants, customer service providers and tutors. These
systems work well when users have clear and explicit intentions that are
well-aligned to the systems' capabilities. However, they fail if users'
intentions are not explicit. To address this shortcoming, we propose a
framework to interleave non-task content (i.e. everyday social conversation)
into task conversations. When the task content fails, the system can still keep
the user engaged with the non-task content. We trained a policy using
reinforcement learning algorithms to promote long-turn conversation coherence
and consistency, so that the system can have smooth transitions between task
and non-task content. To test the effectiveness of the proposed framework, we
developed a movie promotion dialog system. Experiments with human users
indicate that a system that interleaves social and task content achieves a
better task success rate and is also rated as more engaging compared to a pure
task-oriented system.
| 2,018 | Computation and Language |
Tracing Linguistic Relations in Winning and Losing Sides of Explicit
Opposing Groups | Linguistic relations in oral conversations reveal how opinions are
constructed and developed in a restricted time. The relations bond ideas,
arguments, thoughts, and feelings, re-shape them during a speech, and finally
build knowledge out of all information provided in the conversation. Speakers
share a common interest to discuss. It is expected that each speaker's reply
includes duplicated forms of words from previous speakers. However, linguistic
adaptation is observed and evolves in a more complex path than just
transferring slightly modified versions of common concepts. A conversation
aiming at a benefit at the end shows emergent cooperation that induces the
adaptation. Not only cooperation but also competition drives the adaptation
(or an opposite scenario), and one can capture the dynamic process by tracking how
the concepts are linguistically linked. To uncover salient complex dynamic
events in verbal communications, we attempt to discover self-organized
linguistic relations hidden in a conversation with explicitly stated winners
and losers. We examine open access data of the United States Supreme Court. Our
understanding is crucial in big data research for guiding how transition
states in opinion mining and decision-making should be modeled, and for
pinpointing, by filtering large amounts of data, the knowledge needed to guide
such models.
| 2,017 | Computation and Language |
Unsupervised Ensemble Ranking of Terms in Electronic Health Record Notes
Based on Their Importance to Patients | Background: Electronic health record (EHR) notes contain abundant medical
jargon that can be difficult for patients to comprehend. One way to help
patients is to reduce information overload and help them focus on medical terms
that matter most to them.
Objective: The aim of this work was to develop FIT (Finding Important Terms
for patients), an unsupervised natural language processing (NLP) system that
ranks medical terms in EHR notes based on their importance to patients.
Methods: We built FIT on a new unsupervised ensemble ranking model derived
from the biased random walk algorithm to combine heterogeneous information
resources for ranking candidate terms from each EHR note. Specifically, FIT
integrates four single views for term importance: patient use of medical
concepts, document-level term salience, word-occurrence based term relatedness,
and topic coherence. It also incorporates partial information of term
importance as conveyed by terms' unfamiliarity levels and semantic types. We
evaluated FIT on 90 expert-annotated EHR notes and compared it with three
benchmark unsupervised ensemble ranking methods.
Results: FIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR
notes to identify important terms. When including term identification, the
performance of FIT for identifying important terms from EHR notes was 0.813
AUC-ROC. It outperformed the three ensemble rankers for most metrics. Its
performance is relatively insensitive to its parameter.
Conclusions: FIT can automatically identify EHR terms important to patients
and may help develop personalized interventions to improve quality of care. By
using unsupervised learning as well as a robust and flexible framework for
information fusion, FIT can be readily applied to other domains and
applications.
| 2,017 | Computation and Language |
Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ | Scattertext is an open source tool for visualizing linguistic variation
between document categories in a language-independent way. The tool presents a
scatterplot, where each axis corresponds to the rank-frequency at which a term occurs in
a category of documents. Through a tie-breaking strategy, the tool is able to
display thousands of visible term-representing points and find space to legibly
label hundreds of them. Scattertext also lends itself to a query-based
visualization of how the use of terms with similar embeddings differs between
document categories, as well as a visualization for comparing the importance
scores of bag-of-words features to univariate metrics.
| 2,017 | Computation and Language |
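Scattertext is pip-installable; the snippet below follows its documented quick-start pattern (exact arguments should be checked against the library's README), with a two-row toy dataframe standing in for a real corpus.

```python
import pandas as pd
import scattertext as st
import spacy

nlp = spacy.load("en_core_web_sm")
df = pd.DataFrame({"text": ["tax cuts now", "healthcare for all"],
                   "party": ["republican", "democrat"]})

corpus = st.CorpusFromPandas(df, category_col="party", text_col="text",
                             nlp=nlp).build()
html = st.produce_scattertext_explorer(corpus, category="democrat",
                                       category_name="Democratic",
                                       not_category_name="Republican")
open("scatter.html", "w").write(html)
```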
Structural Embedding of Syntactic Trees for Machine Comprehension | Deep neural networks for machine comprehension typically utilize only word
or character embeddings without explicitly taking advantage of structured
linguistic information such as constituency trees and dependency trees. In this
paper, we propose structural embedding of syntactic trees (SEST), an algorithm
framework to utilize structured information and encode it into vector
representations that can boost the performance of algorithms for machine
comprehension. We evaluate our approach using a state-of-the-art neural
attention model on the SQuAD dataset. Experimental results demonstrate that our
model can accurately identify the syntactic boundaries of the sentences and
extract answers that are more syntactically coherent than those of the baseline methods.
| 2,017 | Computation and Language |
Dynamic Word Embeddings for Evolving Semantic Discovery | Word evolution refers to the changing meanings and associations of words
throughout time, as a byproduct of human language evolution. By studying word
evolution, we can infer social trends and language constructs over different
periods of human history. However, traditional techniques such as word
representation learning do not adequately capture the evolving language
structure and vocabulary. In this paper, we develop a dynamic statistical model
to learn time-aware word vector representation. We propose a model that
simultaneously learns time-aware embeddings and solves the resulting "alignment
problem". This model is trained on a crawled NYTimes dataset. Additionally, we
develop multiple intuitive evaluation strategies of temporal word embeddings.
Our qualitative and quantitative tests indicate that our method not only
reliably captures this evolution over time, but also consistently outperforms
state-of-the-art temporal embedding approaches on both semantic accuracy and
alignment quality.
| 2,018 | Computation and Language |
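For contrast with the joint model described above, the usual two-step baseline solves the alignment problem with orthogonal Procrustes: train one embedding matrix per time slice, then rotate each slice into a common space. A sketch with random placeholder matrices:

```python
import numpy as np

def procrustes_align(E_ref, E_slice):
    """Find the orthogonal rotation R minimizing ||E_slice @ R - E_ref||_F
    (rows are vectors for the shared vocabulary) and apply it."""
    u, _, vt = np.linalg.svd(E_slice.T @ E_ref)
    return E_slice @ (u @ vt)

E_1990 = np.random.randn(1000, 100)  # placeholder per-decade embeddings
E_2000 = np.random.randn(1000, 100)
E_2000_aligned = procrustes_align(E_1990, E_2000)
```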
Lock-Free Parallel Perceptron for Graph-based Dependency Parsing | Dependency parsing is an important NLP task. A popular approach for
dependency parsing is structured perceptron. Still, graph-based dependency
parsing has the time complexity of $O(n^3)$, and it suffers from slow training.
To deal with this problem, we propose a parallel algorithm called parallel
perceptron. The parallel algorithm can make full use of a multi-core computer
which saves a lot of training time. Based on experiments we observe that
dependency parsing with parallel perceptron can achieve 8-fold faster training
speed than traditional structured perceptron methods when using 10 threads, and
with no loss at all in accuracy.
| 2,017 | Computation and Language |
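The lock-free flavor of such training is easiest to see in a Hogwild-style sketch: worker threads apply sparse perceptron updates to shared weights without locking. Decoding is elided (gold and predicted feature sets are given directly), and note that CPython's GIL limits real parallel speedup, so this shows only the structure of the idea.

```python
import threading
import numpy as np

w = np.zeros(1_000_000)  # shared feature weights, updated without locks

def worker(examples):
    for gold_feats, pred_feats in examples:
        if gold_feats != pred_feats:      # model predicted a wrong tree
            for f in gold_feats:
                w[f] += 1.0               # promote gold-tree features
            for f in pred_feats:
                w[f] -= 1.0               # demote predicted-tree features

shards = [[({1, 2}, {2, 3})] for _ in range(4)]  # toy per-thread data
threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
for t in threads: t.start()
for t in threads: t.join()
```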
A Generic Online Parallel Learning Framework for Large Margin Models | To speed up the training process, many existing systems use parallel
technology for online learning algorithms. However, most research mainly focuses
on stochastic gradient descent (SGD) instead of other algorithms. We propose a
generic online parallel learning framework for large margin models, and also
analyze our framework on popular large margin algorithms, including MIRA and
Structured Perceptron. Our framework is lock-free and easy to implement on
existing systems. Experiments show that systems with our framework can gain
near linear speed up by increasing running threads, and with no loss in
accuracy.
| 2,017 | Computation and Language |
A Comparative Study of Word Embeddings for Reading Comprehension | The focus of past machine learning research for Reading Comprehension tasks
has been primarily on the design of novel deep learning architectures. Here we
show that seemingly minor choices made on (1) the use of pre-trained word
embeddings, and (2) the representation of out-of-vocabulary tokens at test
time, can turn out to have a larger impact than architectural choices on the
final performance. We systematically explore several options for these choices,
and provide recommendations to researchers working in this area.
| 2,017 | Computation and Language |