Titles | Abstracts | Years | Categories |
---|---|---|---|
Self-Normalization Properties of Language Modeling | Self-normalizing discriminative models approximate the normalized probability
of a class without having to compute the partition function. In the context of
language modeling, this property is particularly appealing as it may
significantly reduce run-times due to large word vocabularies. In this study,
we provide a comprehensive investigation of language modeling
self-normalization. First, we theoretically analyze the inherent
self-normalization properties of Noise Contrastive Estimation (NCE) language
models. Then, we compare them empirically to softmax-based approaches, which
are self-normalized using explicit regularization, and suggest a hybrid model
with compelling properties. Finally, we uncover a surprising negative
correlation between self-normalization and perplexity across the board, as well
as some regularity in the observed errors, which may potentially be used for
improving self-normalization algorithms in the future.
| 2018 | Computation and Language |
DRCD: a Chinese Machine Reading Comprehension Dataset | In this paper, we introduce DRCD (Delta Reading Comprehension Dataset), an
open domain traditional Chinese machine reading comprehension (MRC) dataset.
This dataset is intended to be a standard Chinese machine reading comprehension
dataset that can serve as a source dataset for transfer learning. The dataset
contains 10,014 paragraphs from 2,108 Wikipedia articles and 30,000+ questions
generated by annotators. We build a baseline model that achieves an F1 score of
89.59%; human performance reaches an F1 score of 93.30%.
| 2019 | Computation and Language |
Neural Adversarial Training for Semi-supervised Japanese
Predicate-argument Structure Analysis | Japanese predicate-argument structure (PAS) analysis involves zero anaphora
resolution, which is notoriously difficult. To improve the performance of
Japanese PAS analysis, it is straightforward to increase the size of corpora
annotated with PAS. However, since such annotation is prohibitively expensive, it is
promising to take advantage of large raw corpora instead. In this paper, we
propose a novel Japanese PAS analysis model based on semi-supervised
adversarial training with a raw corpus. In our experiments, our model
outperforms existing state-of-the-art models for Japanese PAS analysis.
| 2018 | Computation and Language |
Topic Modelling of Empirical Text Corpora: Validity, Reliability, and
Reproducibility in Comparison to Semantic Maps | Using the 6,638 case descriptions of societal impact submitted for evaluation
in the Research Excellence Framework (REF 2014), we replicate the topic model
(Latent Dirichlet Allocation or LDA) made in this context and compare the
results with factor-analytic results using a traditional word-document matrix
(Principal Component Analysis or PCA). Removing a small fraction of documents
from the sample, for example, has on average a much larger impact on LDA than
on PCA-based models to the extent that the largest distortion in the case of
PCA has less effect than the smallest distortion of LDA-based models. In terms
of semantic coherence, however, LDA models outperform PCA-based models. The
topic models inform us about the statistical properties of the document sets
under study, but the results are statistical and should not be used for a
semantic interpretation - for example, in grant selections and micro-decision
making, or scholarly work - without follow-up using domain-specific semantic
maps.
| 2018 | Computation and Language |
Efficient Online Scalar Annotation with Bounded Support | We describe a novel method for efficiently eliciting scalar annotations for
dataset construction and system quality estimation by human judgments. We
contrast direct assessment (annotators assign scores to items directly), online
pairwise ranking aggregation (scores derive from annotator comparison of
items), and a hybrid approach (EASL: Efficient Annotation of Scalar Labels)
proposed here. Our proposal leads to increased correlation with ground truth,
at far greater annotator efficiency, suggesting this strategy as an improved
mechanism for dataset creation and manual system evaluation.
| 2018 | Computation and Language |
History Playground: A Tool for Discovering Temporal Trends in Massive
Textual Corpora | Recent studies have shown that macroscopic patterns of continuity and change
over the course of centuries can be detected through the analysis of time
series extracted from massive textual corpora. Similar data-driven approaches
have already revolutionised the natural sciences, and are widely believed to
hold similar potential for the humanities and social sciences, driven by the
mass-digitisation projects that are currently under way, and coupled with the
ever-increasing number of documents which are "born digital". As such, new
interactive tools are required to discover and extract macroscopic patterns
from these vast quantities of textual data. Here we present History Playground,
an interactive web-based tool for discovering trends in massive textual
corpora. The tool makes use of scalable algorithms to first extract trends from
textual corpora, before making them available for real-time search and
discovery, presenting users with an interface to explore the data. Included in
the tool are algorithms for standardization, regression, change-point detection
in the relative frequencies of ngrams, multi-term indices and comparison of
trends across different corpora.
| 2018 | Computation and Language |
OpenTag: Open Attribute Value Extraction from Product Profiles [Deep
Learning, Active Learning, Named Entity Recognition] | Extraction of missing attribute values aims to find values describing an
attribute of interest in free text input. Most past work on extraction of
missing attribute values either operates under a closed-world assumption, with
the possible set of values known beforehand, or relies on dictionaries of values and
hand-crafted features. How can we discover new attribute values that we have
never seen before? Can we do this with limited human annotation or supervision?
We study this problem in the context of product catalogs that often have
missing values for many attributes of interest.
In this work, we leverage product profile information such as titles and
descriptions to discover missing values of product attributes. We develop a
novel deep tagging model OpenTag for this extraction problem with the following
contributions: (1) we formalize the problem as a sequence tagging task, and
propose a joint model exploiting recurrent neural networks (specifically,
bidirectional LSTM) to capture context and semantics, and Conditional Random
Fields (CRF) to enforce tagging consistency, (2) we develop a novel attention
mechanism to provide interpretable explanation for our model's decisions, (3)
we propose a novel sampling strategy exploring active learning to reduce the
burden of human annotation. OpenTag does not use any dictionary or hand-crafted
features as in prior works. Extensive experiments in real-life datasets in
different domains show that OpenTag with our active learning strategy discovers
new attribute values from as few as 150 annotated samples (a 3.3x reduction in
annotation effort) with a high F-score of 83%, outperforming
state-of-the-art models.
| 2018 | Computation and Language |
Closed Form Word Embedding Alignment | We develop a family of techniques to align word embeddings which are derived
from different source datasets or created using different mechanisms (e.g.,
GloVe or word2vec). Our methods are simple and have a closed form to optimally
rotate, translate, and scale to minimize root mean squared errors or maximize
the average cosine similarity between two embeddings of the same vocabulary
into the same dimensional space. Our methods extend approaches known as
Absolute Orientation, which are popular for aligning objects in
three dimensions, and generalize an approach by Smith et al. (ICLR 2017). We
prove new results for optimal scaling and for maximizing cosine similarity.
Then we demonstrate how to evaluate the similarity of embeddings from different
sources or mechanisms, and that certain properties like synonyms and analogies
are preserved across the embeddings and can be enhanced by simply aligning and
averaging ensembles of embeddings.
| 2020 | Computation and Language |
Document Chunking and Learning Objective Generation for Instruction
Design | Instructional Systems Design is the practice of creating instructional
experiences that make the acquisition of knowledge and skill more efficient,
effective, and appealing. Specifically in designing courses, an hour of
training material can require between 30 and 500 hours of effort in sourcing and
organizing reference data for use in just the preparation of course material.
In this paper, we present the first system of its kind that helps reduce the
effort associated with sourcing reference material and course creation. We
present algorithms for document chunking and automatic generation of learning
objectives from content, creating descriptive content metadata to improve
content-discoverability. Unlike existing methods, the learning objectives
generated by our system incorporate pedagogically motivated Bloom's verbs. We
demonstrate the usefulness of our methods using real world data from the
banking industry and through a live deployment at a large pharmaceutical
company.
| 2018 | Computation and Language |
Natural Language Generation for Electronic Health Records | A variety of methods exist for generating synthetic electronic health
records (EHRs), but they are not capable of generating unstructured text, like
emergency department (ED) chief complaints, history of present illness or
progress notes. Here, we use the encoder-decoder model, a deep learning
algorithm that features in many contemporary machine translation systems, to
generate synthetic chief complaints from discrete variables in EHRs, like age
group, gender, and discharge diagnosis. After being trained end-to-end on
authentic records, the model can generate realistic chief complaint text that
preserves much of the epidemiological information in the original data. As a
side effect of the model's optimization goal, these synthetic chief complaints
are also free of relatively uncommon abbreviations and misspellings, and they
include none of the personally-identifiable information (PII) that was in the
training data, suggesting it may be used to support the de-identification of
text in EHRs. When combined with algorithms like generative adversarial
networks (GANs), our model could be used to generate fully-synthetic EHRs,
facilitating data sharing between healthcare providers and researchers and
improving our ability to develop machine learning methods tailored to the
information in healthcare data.
| 2018 | Computation and Language |
JTAV: Jointly Learning Social Media Content Representation by Fusing
Textual, Acoustic, and Visual Features | Learning social media content is the basis of many real-world applications,
including information retrieval and recommendation systems, among others. In
contrast with previous works that focus mainly on single modal or bi-modal
learning, we propose to learn social media content by jointly fusing textual,
acoustic, and visual information (JTAV). Effective strategies are proposed to
extract fine-grained features of each modality, that is, attBiGRU and DCRNN. We
also introduce cross-modal fusion and attentive pooling techniques to integrate
multi-modal information comprehensively. Extensive experimental evaluation
conducted on real-world datasets demonstrates that our proposed model outperforms
the state-of-the-art approaches by a large margin.
| 2021 | Computation and Language |
Information Aggregation via Dynamic Routing for Sequence Encoding | While much progress has been made in how to encode a text sequence into a
sequence of vectors, less attention has been paid to how to aggregate these
preceding vectors (outputs of RNN/CNN) into a fixed-size encoding vector.
Usually, a simple max or average pooling is used, which is a bottom-up and
passive way of aggregation that lacks guidance from task information. In this
paper, we propose an aggregation mechanism to obtain a fixed-size encoding with
a dynamic routing policy. The dynamic routing policy dynamically decides
what and how much information needs to be transferred from each word to the
final encoding of the text sequence. Following the work of Capsule Network, we
design two dynamic routing policies to aggregate the outputs of RNN/CNN
encoding layer into a final encoding vector. Compared to the other aggregation
methods, dynamic routing can refine the messages according to the state of
final encoding vector. Experimental results on five text classification tasks
show that our method outperforms other aggregating models by a significant
margin. Related source code is released on our github page.
| 2018 | Computation and Language |
How Do Source-side Monolingual Word Embeddings Impact Neural Machine
Translation? | Using pre-trained word embeddings as the input layer is a common practice in many
natural language processing (NLP) tasks, but it is largely neglected for neural
machine translation (NMT). In this paper, we conducted a systematic analysis of
the effect of using pre-trained source-side monolingual word embeddings in NMT.
We compared several strategies, such as fixing or updating the embeddings
during NMT training on varying amounts of data, and we also proposed a novel
strategy called dual-embedding that blends the fixing and updating strategies.
Our results suggest that pre-trained embeddings can be helpful if properly
incorporated into NMT, especially when parallel data is limited or additional
in-domain monolingual data is readily available.
| 2018 | Computation and Language |
Multi-Task Active Learning for Neural Semantic Role Labeling on Low
Resource Conversational Corpus | Most Semantic Role Labeling (SRL) approaches are supervised methods which
require a significant amount of annotated data, and the annotation requires
linguistic expertise. In this paper, we propose a Multi-Task Active Learning
framework for Semantic Role Labeling with Entity Recognition (ER) as the
auxiliary task to alleviate the need for extensive data and use additional
information from ER to help SRL. We evaluate our approach on an Indonesian
conversational dataset. Our experiments show that multi-task active learning
can outperform the single-task active learning method and standard multi-task
learning. According to our results, active learning is more efficient, using
12% less training data compared to passive learning in both single-task and
multi-task settings. We also introduce a new dataset for SRL in the Indonesian
conversational domain to encourage further research in this area.
| 2018 | Computation and Language |
Explaining Away Syntactic Structure in Semantic Document Representations | Most generative document models act on bag-of-words input in an attempt to
focus on the semantic content and thereby partially forego syntactic
information. We argue that it is preferable to keep the original word order
intact and explicitly account for the syntactic structure instead. We propose
an extension to the Neural Variational Document Model (Miao et al., 2016) that
does exactly that to separate local (syntactic) context from the global
(semantic) representation of the document. Our model builds on the variational
autoencoder framework to define a generative document model based on next-word
prediction. We name our approach Sequence-Aware Variational Autoencoder since
in contrast to its predecessor, it operates on the true input sequence. In a
series of experiments we observe stronger topicality of the learned
representations as well as increased robustness to syntactic noise in our
training data.
| 2018 | Computation and Language |
Understanding Meanings in Multilingual Customer Feedback | Understanding and being able to react to customer feedback is the most
fundamental task in providing good customer service. However, there are two
major obstacles for international companies to automatically detect the meaning
of customer feedback in a global multilingual environment. Firstly, there is no
widely acknowledged categorisation (classes) of meaning for customer feedback.
Secondly, the applicability of one meaning categorisation, if it exists, to
customer feedback in multiple languages is questionable. In this paper, we
extracted representative real world samples of customer feedback from Microsoft
Office customers in multiple languages (English, Spanish and Japanese), and
arrived at a five-class categorisation (comment, request, bug, complaint and
meaningless) for meaning classification that could be used across languages in
the realm of customer feedback analysis.
| 2018 | Computation and Language |
Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text
Corpora and Relational Knowledge | Luminoso participated in the SemEval 2018 task on "Capturing Discriminative
Attributes" with a system based on ConceptNet, an open knowledge graph focused
on general knowledge. In this paper, we describe how we trained a linear
classifier on a small number of semantically-informed features to achieve an
$F_1$ score of 0.7368 on the task, close to the task's high score of 0.75.
| 2018 | Computation and Language |
Contextual Slot Carryover for Disparate Schemas | In the slot-filling paradigm, where a user can refer back to slots in the
context during a conversation, the goal of the contextual understanding system
is to resolve the referring expressions to the appropriate slots in the
context. In large-scale multi-domain systems, this presents two challenges -
scaling to a very large and potentially unbounded set of slot values, and
dealing with diverse schemas. We present a neural network architecture that
addresses the slot value scalability challenge by reformulating the contextual
interpretation as a decision to carryover a slot from a set of possible
candidates. To deal with heterogeneous schemas, we introduce a simple
data-driven method for transforming the candidate slots. Our experiments show
that our approach can scale to multiple domains and provides competitive
results over a strong baseline.
| 2018 | Computation and Language |
Open Domain Suggestion Mining: Problem Definition and Datasets | We propose a formal definition for the task of suggestion mining in the
context of a wide range of open domain applications. Human perception of the
term \emph{suggestion} is subjective and this affects the preparation of hand
labeled datasets for the task of suggestion mining. Existing work either lacks
a formal problem definition and annotation procedure, or provides domain and
application specific definitions. Moreover, many previously used manually
labeled datasets remain proprietary. We first present an annotation study, and
based on our observations propose a formal task definition and annotation
procedure for creating benchmark datasets for suggestion mining. With this
study, we also provide publicly available labeled datasets for suggestion
mining in multiple domains.
| 2018 | Computation and Language |
The Limitations of Cross-language Word Embeddings Evaluation | The aim of this work is to explore the possible limitations of existing
methods of cross-language word embeddings evaluation, addressing the lack of
correlation between intrinsic and extrinsic cross-language evaluation methods.
To prove this hypothesis, we construct English-Russian datasets for extrinsic
and intrinsic evaluation tasks and compare performances of 5 different
cross-language models on them. The results show that the scores even on
different intrinsic benchmarks do not correlate with each other. We conclude
that the use of human references as ground truth for cross-language word
embeddings is not appropriate unless one understands how native speakers
process semantics in their cognition.
| 2018 | Computation and Language |
Finding Convincing Arguments Using Scalable Bayesian Preference Learning | We introduce a scalable Bayesian preference learning method for identifying
convincing arguments in the absence of gold-standard ratings or rankings. In
contrast to previous work, we avoid the need for separate methods to perform
quality control on training data, predict rankings and perform pairwise
classification. Bayesian approaches are an effective solution when faced with
sparse or noisy training data, but have not previously been used to identify
convincing arguments. One issue is scalability, which we address by developing
a stochastic variational inference method for Gaussian process (GP) preference
learning. We show how our method can be applied to predict argument
convincingness from crowdsourced data, outperforming the previous
state-of-the-art, particularly when trained with small amounts of unreliable
data. We demonstrate how the Bayesian approach enables more effective active
learning, thereby reducing the amount of data required to identify convincing
arguments for new users and domains. While word embeddings are principally used
with neural networks, our results show that word embeddings in combination with
linguistic features also benefit GPs when predicting argument convincingness.
| 2018 | Computation and Language |
Studying the Difference Between Natural and Programming Language Corpora | Code corpora, as observed in large software systems, are now known to be far
more repetitive and predictable than natural language corpora. But why? Does
the difference simply arise from the syntactic limitations of programming
languages? Or does it arise from the differences in authoring decisions made by
the writers of these natural and programming language texts? We conjecture that
the differences are not entirely due to syntax, but also from the fact that
reading and writing code is un-natural for humans, and requires substantial
mental effort; so, people prefer to write code in ways that are familiar to
both reader and writer. To support this argument, we present results from two
sets of studies: 1) a first set aimed at attenuating the effects of syntax, and
2) a second, aimed at measuring repetitiveness of text written in other
settings (e.g. second language, technical/specialized jargon), which are also
effortful to write. We find that this repetition in source code is not
entirely the result of grammar constraints, and thus some repetition must
result from human choice. While the evidence we find of similar repetitive
behavior in technical and learner corpora does not conclusively show that such
language is used by humans to mitigate difficulty, it is consistent with that
theory.
| 2018 | Computation and Language |
Multi-Source Neural Machine Translation with Missing Data | Multi-source translation is an approach to exploit multiple inputs (e.g. in
two different languages) to increase translation accuracy. In this paper, we
examine approaches for multi-source neural machine translation (NMT) using an
incomplete multilingual corpus in which some translations are missing. In
practice, many multilingual corpora are not complete due to the difficulty of
providing translations in all of the relevant languages (for example, in TED
talks, most English talks only have subtitles for a small portion of the
languages that TED supports). Existing studies on multi-source translation did
not explicitly handle such situations. This study focuses on the use of
incomplete multilingual corpora in multi-encoder NMT and mixture of NMT experts
and examines a very simple implementation where missing source translations are
replaced by a special symbol <NULL>. These methods allow us to use incomplete
corpora both at training time and test time. In experiments with real
incomplete multilingual corpora of TED Talks, the multi-source NMT with the
<NULL> tokens achieved higher translation accuracies measured by BLEU than
those of any one-to-one NMT system.
| 2018 | Computation and Language |
A Challenge Set for French --> English Machine Translation | We present a challenge set for French --> English machine translation based
on the approach introduced in Isabelle, Cherry and Foster (EMNLP 2017). Such
challenge sets are made up of sentences that are expected to be relatively
difficult for machines to translate correctly because their most
straightforward translations tend to be linguistically divergent. We present
here a set of 506 manually constructed French sentences, 307 of which are
targeted to the same kinds of structural divergences as in the paper mentioned
above. The remaining 199 sentences are designed to test the ability of the
systems to correctly translate difficult grammatical words such as
prepositions. We report on the results of using this challenge set for testing
two different systems, namely Google Translate and DEEPL, each on two different
dates (October 2017 and January 2018). All the resulting data are made publicly
available.
| 2018 | Computation and Language |
Training Augmentation with Adversarial Examples for Robust Speech
Recognition | This paper explores the use of adversarial examples in training speech
recognition systems to increase robustness of deep neural network acoustic
models. During training, the fast gradient sign method is used to generate
adversarial examples augmenting the original training data. Different from
conventional data augmentation based on data transformations, the examples are
dynamically generated based on current acoustic model parameters. We assess the
impact of adversarial data augmentation in experiments on the Aurora-4 and
CHiME-4 single-channel tasks, showing improved robustness against noise and
channel variation. Further improvement is obtained when combining adversarial
examples with teacher/student training, leading to a 23% relative word error
rate reduction on Aurora-4.
| 2018 | Computation and Language |
Domain Adversarial Training for Accented Speech Recognition | In this paper, we propose a domain adversarial training (DAT) algorithm to
alleviate the accented speech recognition problem. In order to reduce the
mismatch between labeled source domain data ("standard" accent) and unlabeled
target domain data (with heavy accents), we augment the learning objective for
a Kaldi TDNN network with a domain adversarial training (DAT) objective to
encourage the model to learn accent-invariant features. In experiments with
three Mandarin accents, we show that DAT yields up to 7.45% relative character
error rate reduction when we do not have transcriptions of the accented speech,
compared with the baseline trained on standard accent data only. We also find a
benefit from DAT when used in combination with training from automatic
transcriptions on the accented data. Furthermore, we find that DAT is superior
to multi-task learning for accented speech recognition.
| 2018 | Computation and Language |
Embedding Transfer for Low-Resource Medical Named Entity Recognition: A
Case Study on Patient Mobility | Functioning is gaining recognition as an important indicator of global
health, but remains under-studied in medical natural language processing
research. We present the first analysis of automatically extracting
descriptions of patient mobility, using a recently-developed dataset of free
text electronic health records. We frame the task as a named entity recognition
(NER) problem, and investigate the applicability of NER techniques to mobility
extraction. As text corpora focused on patient functioning are scarce, we
explore domain adaptation of word embeddings for use in a recurrent neural
network NER system. We find that embeddings trained on a small in-domain corpus
perform nearly as well as those learned from large out-of-domain corpora, and
that domain adaptation techniques yield additional improvements in both
precision and recall. Our analysis identifies several significant challenges in
extracting descriptions of patient mobility, including the length and
complexity of annotated entities and high linguistic variability in mobility
descriptions.
| 2018 | Computation and Language |
Medical Concept Embedding with Time-Aware Attention | Embeddings of medical concepts such as medication, procedure and diagnosis
codes in Electronic Medical Records (EMRs) are central to healthcare analytics.
Previous work on medical concept embedding takes medical concepts and EMRs as
words and documents respectively. Nevertheless, such models overlook the
temporal nature of EMR data. On the one hand, two consecutive medical concepts
do not indicate they are temporally close, but the correlations between them
can be revealed by the time gap. On the other hand, the temporal scopes of
medical concepts often vary greatly (e.g., \textit{common cold} and
\textit{diabetes}). In this paper, we propose to incorporate the temporal
information to embed medical codes. Based on the Continuous Bag-of-Words model,
we employ the attention mechanism to learn a "soft" time-aware context window
for each medical concept. Experiments on public and proprietary datasets
through clustering and nearest neighbour search tasks demonstrate the
effectiveness of our model, showing that it outperforms five state-of-the-art
baselines.
| 2018 | Computation and Language |
An Exploration of Unreliable News Classification in Brazil and The U.S | The propagation of unreliable information is on the rise in many places
around the world. This expansion is facilitated by the rapid spread of
information and anonymity granted by the Internet. The spread of unreliable
information is a well-studied issue and it is associated with negative social
impacts. In a previous work, we have identified significant differences in the
structure of news articles from reliable and unreliable sources in the US
media. Our goal in this work was to explore such differences in the Brazilian
media. We found significant features in two data sets: one with Brazilian news
in Portuguese and another one with US news in English. Our results show that
features related to the writing style were prominent in both data sets and,
despite the language difference, some features have a universal behavior, being
significant to both US and Brazilian news articles. Finally, we combined both
data sets and used the universal features to build a machine learning
classifier to predict the source type of a news article as reliable or
unreliable.
| 2018 | Computation and Language |
Probabilistic FastText for Multi-Sense Word Embeddings | We introduce Probabilistic FastText, a new model for word embeddings that can
capture multiple word senses, sub-word structure, and uncertainty information.
In particular, we represent each word with a Gaussian mixture density, where
the mean of a mixture component is given by the sum of n-grams. This
representation allows the model to share statistical strength across sub-word
structures (e.g. Latin roots), producing accurate representations of rare,
misspelt, or even unseen words. Moreover, each component of the mixture can
capture a different word sense. Probabilistic FastText outperforms both
FastText, which has no probabilistic model, and dictionary-level probabilistic
embeddings, which do not incorporate subword structures, on several
word-similarity benchmarks, including English RareWord and foreign language
datasets. We also achieve state-of-the-art performance on benchmarks that measure
ability to discern different meanings. Thus, the proposed model is the first to
achieve multi-sense representations while having enriched semantics on rare
words.
| 2018 | Computation and Language |
Is preprocessing of text really worth your time for online comment
classification? | A large proportion of online comments present on public domains are
constructive; however, a significant proportion are toxic in nature. The
comments contain a lot of typos, which increases the number of features manifold,
making the ML model difficult to train. Considering the fact that the data
scientists spend approximately 80% of their time in collecting, cleaning and
organizing their data [1], we explored how much effort we should invest in the
preprocessing (transformation) of raw comments before feeding them to
state-of-the-art classification models. With the help of four models on the Jigsaw
toxic comment classification data, we demonstrated that training a model
without any transformation produces a relatively decent model. Applying even basic
transformations, in some cases, leads to worse performance and should be done
with caution.
| 2018 | Computation and Language |
Multimodal Relational Tensor Network for Sentiment and Emotion
Classification | Understanding Affect from video segments has brought researchers from the
language, audio and video domains together. Most of the current multimodal
research in this area deals with various techniques to fuse the modalities, and
mostly treat the segments of a video independently. Motivated by the work of
(Zadeh et al., 2017) and (Poria et al., 2017), we present our architecture,
Relational Tensor Network, where we use the inter-modal interactions within a
segment (intra-segment) and also consider the sequence of segments in a video
to model the inter-segment inter-modal interactions. We also generate rich
representations of text and audio modalities by leveraging richer audio and
linguistic context, along with fusing fine-grained knowledge-based polarity
scores from text. We present the results of our model on the CMU-MOSEI dataset and
show that our model outperforms many baselines and state-of-the-art methods for
sentiment classification and emotion recognition.
| 2018 | Computation and Language |
Findings of the Second Workshop on Neural Machine Translation and
Generation | This document describes the findings of the Second Workshop on Neural Machine
Translation and Generation, held in concert with the annual conference of the
Association for Computational Linguistics (ACL 2018). First, we summarize the
research trends of papers presented in the proceedings, and note that there is
particular interest in linguistic structure, domain adaptation, data
augmentation, handling inadequate resources, and analysis of models. Second, we
describe the results of the workshop's shared task on efficient neural machine
translation, where participants were tasked with creating MT systems that are
both accurate and efficient.
| 2018 | Computation and Language |
Representation Learning of Entities and Documents from Knowledge Base
Descriptions | In this paper, we describe TextEnt, a neural network model that learns
distributed representations of entities and documents directly from a knowledge
base (KB). Given a document in a KB consisting of words and entity annotations,
we train our model to predict the entity that the document describes and map
the document and its target entity close to each other in a continuous vector
space. Our model is trained using a large number of documents extracted from
Wikipedia. The performance of the proposed model is evaluated using two tasks,
namely fine-grained entity typing and multiclass text classification. The
results demonstrate that our model achieves state-of-the-art performance on
both tasks. The code and the trained representations are made available online
for further academic research.
| 2018 | Computation and Language |
Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text
Corpora | Methods for unsupervised hypernym detection may broadly be categorized
according to two paradigms: pattern-based and distributional methods. In this
paper, we study the performance of both approaches on several hypernymy tasks
and find that simple pattern-based methods consistently outperform
distributional methods on common benchmark datasets. Our results show that
pattern-based models provide important contextual constraints which are not yet
captured in distributional methods.
| 2018 | Computation and Language |
ChangeMyView Through Concessions: Do Concessions Increase Persuasion? | In discourse studies concessions are considered among those argumentative
strategies that increase persuasion. We aim to empirically test this hypothesis
by calculating the distribution of argumentative concessions in persuasive vs.
non-persuasive comments from the ChangeMyView subreddit. This constitutes a
challenging task since concessions are not always part of an argument. Drawing
from a theoretically-informed typology of concessions, we conduct an annotation
task to label a set of polysemous lexical markers as introducing an
argumentative concession or not and we observe their distribution in threads
that achieved and did not achieve persuasion. For the annotation, we used both
expert and novice annotators. With the ultimate goal of conducting the study on
large datasets, we present a self-training method to automatically identify
argumentative concessions using linguistically motivated features. We achieve a
moderate F1 of 57.4% on the development set and 46.0% on the test set via the
self-training method. These results are comparable to state of the art results
on similar tasks of identifying explicit discourse connective types from the
Penn Discourse Treebank. Our findings from the manual labeling and the
classification experiments indicate that the type of argumentative concessions
we investigated is almost equally likely to be used in winning and losing
arguments from the ChangeMyView dataset. While this result seems to contradict
theoretical assumptions, we provide some reasons for this discrepancy related
to the ChangeMyView subreddit.
| 2018 | Computation and Language |
Multilingual Neural Machine Translation with Task-Specific Attention | Multilingual machine translation addresses the task of translating between
multiple source and target languages. We propose task-specific attention
models, a simple but effective technique for improving the quality of
sequence-to-sequence neural multilingual translation. Our approach seeks to
retain as much of the parameter sharing generalization of NMT models as
possible, while still allowing for language-specific specialization of the
attention model to a particular language-pair or task. Our experiments on four
languages of the Europarl corpus show that using a target-specific model of
attention provides consistent gains in translation quality for all possible
translation directions, compared to a model in which all parameters are shared.
We observe improved translation quality even in the (extreme) low-resource
zero-shot translation directions for which the model never saw explicitly
paired parallel data.
| 2018 | Computation and Language |
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing | Dynamic oracles provide strong supervision for training constituency parsers
with exploration, but must be custom defined for a given parser's transition
system. We explore using a policy gradient method as a parser-agnostic
alternative. In addition to directly optimizing for a tree-level metric such as
F1, policy gradient has the potential to reduce exposure bias by allowing
exploration during training; moreover, it does not require a dynamic oracle for
supervision. On four constituency parsers in three languages, the method
substantially outperforms static oracle likelihood training in almost all
settings. For parsers where a dynamic oracle is available (including a novel
oracle which we define for the transition system of Dyer et al. 2016), policy
gradient typically recaptures a substantial fraction of the performance gain
afforded by the dynamic oracle.
| 2018 | Computation and Language |
Measuring Conversational Productivity in Child Forensic Interviews | Child Forensic Interviewing (FI) presents a challenge for effective
information retrieval and decision making. The high stakes associated with the
process demand that expert legal interviewers are able to effectively establish
a channel of communication and elicit substantive knowledge from the
child-client while minimizing potential for experiencing trauma. As a first
step toward computationally modeling and producing quality spoken interviewing
strategies and a generalized understanding of interview dynamics, we propose a
novel methodology to computationally model effectiveness criteria, by applying
summarization and topic modeling techniques to objectively measure and rank the
responsiveness and conversational productivity of a child during FI. We score
information retrieval by constructing an agenda to represent general topics of
interest and measuring alignment with a given response and leveraging lexical
entrainment for responsiveness. For comparison, we present our methods along
with traditional metrics of evaluation and discuss the use of prior information
for generating situational awareness.
| 2018 | Computation and Language |
#SarcasmDetection is soooo general! Towards a Domain-Independent
Approach for Detecting Sarcasm | Automatic sarcasm detection methods have traditionally been designed for
maximum performance on a specific domain. This poses challenges for those
wishing to transfer those approaches to other existing or novel domains, which
may be typified by very different language characteristics. We develop a
general set of features and evaluate it under different training scenarios
utilizing in-domain and/or out-of-domain training data. The best-performing
scenario, training on both while employing a domain adaptation step, achieves
an F1 of 0.780, which is well above baseline F1-measures of 0.515 and 0.345. We
also show that the approach outperforms the best results from prior work on the
same target domain.
| 2018 | Computation and Language |
Word Familiarity and Frequency | Word frequency is assumed to correlate with word familiarity, but the
strength of this correlation has not been thoroughly investigated. In this
paper, we report on our analysis of the correlation between a word familiarity
rating list obtained through a psycholinguistic experiment and the
log-frequency obtained from various corpora of different kinds and sizes (up to
the terabyte scale) for English and Japanese. Major findings are threefold:
First, for a given corpus, familiarity is necessary for a word to achieve high
frequency, but familiar words are not necessarily frequent. Second, correlation
increases with the corpus data size. Third, a corpus of spoken language
correlates better than one of written language. These findings suggest that
cognitive familiarity ratings are correlated with frequency, but more highly
with the frequency of spoken rather than written language.
| 2018 | Computation and Language |
Robust Lexical Features for Improved Neural Network Named-Entity
Recognition | Neural network approaches to Named-Entity Recognition reduce the need for
carefully hand-crafted features. While some features do remain in
state-of-the-art systems, lexical features have been mostly discarded, with the
exception of gazetteers. In this work, we show that this is unfair: lexical
features are actually quite useful. We propose to embed words and entity types
into a low-dimensional vector space we train from annotated data produced by
distant supervision thanks to Wikipedia. From this, we compute - offline - a
feature vector representing each word. When used with a vanilla recurrent
neural network model, this representation yields substantial improvements. We
establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while
matching state-of-the-art performance with an F1 score of 91.73 on the
over-studied CONLL-2003 dataset.
| 2018 | Computation and Language |
Learning to Search in Long Documents Using Document Structure | Reading comprehension models are based on recurrent neural networks that
sequentially process the document tokens. As interest turns to answering more
complex questions over longer documents, sequential reading of large portions
of text becomes a substantial bottleneck. Inspired by how humans use document
structure, we propose a novel framework for reading comprehension. We represent
documents as trees, and model an agent that learns to interleave quick
navigation through the document tree with more expensive answer extraction. To
encourage exploration of the document tree, we propose a new algorithm, based
on Deep Q-Network (DQN), which strategically samples tree nodes at training
time. Empirically we find our algorithm improves question answering performance
compared to DQN and a strong information-retrieval (IR) baseline, and that
ensembling our model with the IR baseline results in further gains in
performance.
| 2018 | Computation and Language |
Diachronic word embeddings and semantic shifts: a survey | Recent years have witnessed a surge of publications aimed at tracing temporal
changes in lexical semantics using distributional methods, particularly
prediction-based word embedding models. However, this vein of research lacks
the cohesion, common terminology and shared practices of more established areas
of natural language processing. In this paper, we survey the current state of
academic research related to diachronic word embeddings and semantic shifts
detection. We start with discussing the notion of semantic shifts, and then
continue with an overview of the existing methods for tracing such time-related
shifts with word embedding models. We propose several axes along which these
methods can be compared, and outline the main challenges facing this emerging
subfield of NLP, as well as its prospects and possible applications.
| 2018 | Computation and Language |
What Knowledge is Needed to Solve the RTE5 Textual Entailment Challenge? | This document gives a knowledge-oriented analysis of about 20 interesting
Recognizing Textual Entailment (RTE) examples, drawn from the 2005 RTE5
competition test set. The analysis ignores shallow statistical matching
techniques between T and H, and rather asks: What would it take to reasonably
infer that T implies H? What world knowledge would be needed for this task?
Although such knowledge-intensive techniques have not had much success in RTE
evaluations, ultimately an intelligent system should be expected to know and
deploy this kind of world knowledge required to perform this kind of reasoning.
The selected examples are typically ones which our RTE system (called BLUE)
got wrong and ones which require world knowledge to answer. In particular, the
analysis covers cases where there was near-perfect lexical overlap between T
and H, yet the entailment was NO, i.e., examples that most likely all current
RTE systems will have got wrong. A nice example is #341 (page 26), which
requires inferring from "a river floods" that "a river overflows its banks".
Seems it should be easy, right? Enjoy!
| 2018 | Computation and Language |
Adaptations of ROUGE and BLEU to Better Evaluate Machine Reading
Comprehension Task | Current evaluation metrics for question answering-based machine reading
comprehension (MRC) systems generally focus on the lexical overlap between the
candidate and reference answers, such as ROUGE and BLEU. However, bias may
appear when these metrics are used for specific question types, especially
questions inquiring yes-no opinions and entity lists. In this paper, we make
adaptations on the metrics to better correlate n-gram overlap with the human
judgment for answers to these two question types. Statistical analysis proves
the effectiveness of our approach. Our adaptations may provide positive
guidance for the development of real-scene MRC systems.
| 2018 | Computation and Language |
Cross-Lingual Task-Specific Representation Learning for Text
Classification in Resource Poor Languages | Neural network models have shown promising results for text classification.
However, these solutions are limited by their dependence on the availability of
annotated data.
The prospect of leveraging resource-rich languages to enhance the text
classification of resource-poor languages is fascinating. The performance on
resource-poor languages can significantly improve if the resource availability
constraints can be offset. To this end, we present a twin Bidirectional Long
Short Term Memory (Bi-LSTM) network with shared parameters consolidated by a
contrastive loss function (based on a similarity metric). The model learns the
representation of resource-poor and resource-rich sentences in a common space
by using the similarity between their assigned annotation tags. Hence, the
model projects sentences with similar tags closer and those with different tags
farther from each other. We evaluated our model on the classification tasks of
sentiment analysis and emoji prediction for resource-poor languages (Hindi and
Telugu) and resource-rich languages (English and Spanish). Our model
significantly outperforms the state-of-the-art approaches in both the tasks
across all metrics.
| 2018 | Computation and Language |
Learning Acoustic Word Embeddings with Temporal Context for
Query-by-Example Speech Search | We propose to learn acoustic word embeddings with temporal context for
query-by-example (QbE) speech search. The temporal context includes the leading
and trailing word sequences of a word. We assume that there exist spoken word
pairs in the training database. We pad the word pairs with their original
temporal context to form fixed-length speech segment pairs. We obtain the
acoustic word embeddings through a deep convolutional neural network (CNN)
which is trained on the speech segment pairs with a triplet loss. Shifting a
fixed-length analysis window through the search content, we obtain a running
sequence of embeddings. In this way, searching for the spoken query is
equivalent to the matching of acoustic word embeddings. The experiments show
that our proposed acoustic word embeddings learned with temporal context are
effective in QbE speech search. They outperform the state-of-the-art
frame-level feature representations and reduce run-time computation since no
dynamic time warping is required in QbE speech search. We also find that it is
important to have sufficient speech segment pairs to train the deep CNN for
effective acoustic word embeddings.
| 2018 | Computation and Language |
Neural Disease Named Entity Extraction with Character-based BiLSTM+CRF
in Japanese Medical Text | We propose an 'end-to-end' character-based recurrent neural network that
extracts disease named entities from a Japanese medical text and simultaneously
judges its modality as either positive or negative; i.e., the mentioned disease
or symptom is affirmed or negated. The motivation to adopt neural networks is
to learn effective lexical and structural representation features for Entity
Recognition and also for Positive/Negative classification from annotated
corpora without explicitly providing any rule-based or manual feature sets. We
confirmed the superiority of our method over previous char-based CRF or SVM
methods in the results.
| 2018 | Computation and Language |
SciDTB: Discourse Dependency TreeBank for Scientific Abstracts | Annotated corpora for discourse relations benefit NLP tasks such as machine
translation and question answering. In this paper, we present SciDTB, a
domain-specific discourse treebank annotated on scientific articles. Different
from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent
discourse structure, which is flexible and simplified to some extent but does not
sacrifice structural integrity. We discuss the labeling framework, annotation
workflow and some statistics about SciDTB. Furthermore, our treebank serves as
a benchmark for evaluating discourse dependency parsers, for which we provide
several baselines as fundamental work.
| 2018 | Computation and Language |
Incremental Decoding and Training Methods for Simultaneous Translation
in Neural Machine Translation | We address the problem of simultaneous translation by modifying the Neural MT
decoder to operate with dynamically built encoder and attention. We propose a
tunable agent which decides the best segmentation strategy for a user-defined
BLEU loss and Average Proportion (AP) constraint. Our agent outperforms
previously proposed Wait-if-diff and Wait-if-worse agents (Cho and Esipova,
2016) on BLEU with a lower latency. Secondly, we propose data-driven changes to
Neural MT training to better match the incremental decoding framework.
| 2018 | Computation and Language |
LexNLP: Natural language processing and information extraction for legal
and regulatory texts | LexNLP is an open source Python package focused on natural language
processing and machine learning for legal and regulatory text. The package
includes functionality to (i) segment documents, (ii) identify key text such as
titles and section headings, (iii) extract over eighteen types of structured
information like distances and dates, (iv) extract named entities such as
companies and geopolitical entities, (v) transform text into features for model
training, and (vi) build unsupervised and supervised models such as word
embedding or tagging models. LexNLP includes pre-trained models based on
thousands of unit tests drawn from real documents available from the SEC EDGAR
database as well as various judicial and regulatory proceedings. LexNLP is
designed for use in both academic research and industrial applications, and is
distributed at https://github.com/LexPredict/lexpredict-lexnlp.
| 2018 | Computation and Language |
Deconvolution-Based Global Decoding for Neural Machine Translation | A great proportion of sequence-to-sequence (Seq2Seq) models for Neural
Machine Translation (NMT) adopt Recurrent Neural Networks (RNNs) to generate
translation word by word in a sequential order. As studies in linguistics have
shown that language is not a linear word sequence but a sequence with complex
structure, translation at each step should be conditioned on the
whole target-side context. To tackle the problem, we propose a new NMT model
that decodes the sequence with the guidance of its structural prediction of the
context of the target sequence. Our model generates translation based on the
structural prediction of the target-side context so that the translation can be
freed from the bind of sequential order. Experimental results demonstrate that
our model is more competitive compared with the state-of-the-art methods, and
the analysis reflects that our model is also robust to translating sentences of
different lengths and it also reduces repetition with the instruction from the
target-side context for decoding.
| 2018 | Computation and Language |
Deep Reinforcement Learning for Chinese Zero pronoun Resolution | Deep neural network models for Chinese zero pronoun resolution learn semantic
information for zero pronouns and candidate antecedents, but tend to be
short-sighted---they often make local decisions. They typically predict
coreference chains between the zero pronoun and one single candidate antecedent
one link at a time, while overlooking their long-term influence on future
decisions. Ideally, modeling useful information of preceding potential
antecedents is critical when later predicting zero pronoun-candidate antecedent
pairs. In this study, we show how to integrate local and global decision-making
by exploiting deep reinforcement learning models. With the help of the
reinforcement learning agent, our model learns the policy of selecting
antecedents in a sequential manner, where useful information provided by
earlier predicted antecedents could be utilized for making later coreference
decisions. Experimental results on the OntoNotes 5.0 dataset show that our
technique surpasses the state-of-the-art models.
| 2018 | Computation and Language |
All-in-one: Multi-task Learning for Rumour Verification | Automatic resolution of rumours is a challenging task that can be broken down
into smaller components that make up a pipeline, including rumour detection,
rumour tracking and stance classification, leading to the final outcome of
determining the veracity of a rumour. In previous work, these steps in the
process of rumour verification have been developed as separate components where
the output of one feeds into the next. We propose a multi-task learning
approach that allows joint training of the main and auxiliary tasks, improving
the performance of rumour verification. We examine the connection between the
dataset properties and the outcomes of the multi-task learning models used.
| 2018 | Computation and Language |
Unsupervised Disambiguation of Syncretism in Inflected Lexicons | Lexical ambiguity makes it difficult to compute various useful statistics of
a corpus. A given word form might represent any of several morphological
feature bundles. One can, however, use unsupervised learning (as in EM) to fit
a model that probabilistically disambiguates word forms. We present such an
approach, which employs a neural network to smoothly model a prior distribution
over feature bundles (even rare ones). Although this basic model does not
consider a token's context, that very property allows it to operate on a simple
list of unigram type counts, partitioning each count among different analyses
of that unigram. We discuss evaluation metrics for this novel task and report
results on 5 languages.
| 2,020 | Computation and Language |
Are All Languages Equally Hard to Language-Model? | For general modeling methods applied to diverse languages, a natural question
is: how well should we expect our models to work on languages with differing
typological profiles? In this work, we develop an evaluation framework for fair
cross-linguistic comparison of language models, using translated text so that
all models are asked to predict approximately the same information. We then
conduct a study on 21 languages, demonstrating that in some languages, the
textual expression of the information is harder to predict with both $n$-gram
and LSTM language models. We show complex inflectional morphology to be a cause
of performance differences among languages.
| 2,020 | Computation and Language |
A Structured Variational Autoencoder for Contextual Morphological
Inflection | Statistical morphological inflectors are typically trained on fully
supervised, type-level data. One remaining open research question is the
following: How can we effectively exploit raw, token-level data to improve
their performance? To this end, we introduce a novel generative latent-variable
model for the semi-supervised learning of inflection generation. To enable
posterior inference over the latent variables, we derive an efficient
variational inference procedure based on the wake-sleep algorithm. We
experiment on 23 languages, using the Universal Dependencies corpora in a
simulated low-resource setting, and find improvements of over 10% absolute
accuracy in some cases.
| 2,020 | Computation and Language |
Part-of-Speech Tagging on an Endangered Language: a Parallel
Griko-Italian Resource | Most work on part-of-speech (POS) tagging is focused on high resource
languages, or examines low-resource and active learning settings through
simulated studies. We evaluate POS tagging techniques on an actual endangered
language, Griko. We present a resource that contains 114 narratives in Griko,
along with sentence-level translations in Italian, and provides gold
annotations for the test set. Based on a previously collected small corpus, we
investigate several traditional methods, as well as methods that take advantage
of monolingual data or project cross-lingual POS tags. We show that the
combination of a semi-supervised method with cross-lingual transfer is more
appropriate for this extremely challenging setting, with the best tagger
achieving an accuracy of 72.9%. With an applied active learning scheme, which
we use to collect sentence-level annotations over the test set, we achieve
improvements of more than 21 percentage points.
| 2,018 | Computation and Language |
Addition of Code Mixed Features to Enhance the Sentiment Prediction of
Song Lyrics | Sentiment analysis, also called opinion mining, is the field of study that
analyzes people's opinions, sentiments, attitudes and emotions. Songs are
important to sentiment analysis since songs and mood are mutually dependent:
the selected song makes it easy to infer the listener's mood, which can later
be used for recommendation. Song lyrics are a rich source of data, containing
words that help in analyzing and classifying the sentiments they convey.
Nowadays we observe a great deal of inter-sentential and intra-sentential
code-mixing in songs, which has a varying impact on the audience. To study this
impact, we created a Telugu songs dataset containing both Telugu-English
code-mixed and pure Telugu songs. In this paper, we classify songs by arousal
as exciting or non-exciting. We develop a language identification tool and
introduce code-mixing features obtained from it as additional features. With
these additional features, our system attains 4-5% higher accuracy than
traditional approaches on our dataset.
| 2,018 | Computation and Language |
Know What You Don't Know: Unanswerable Questions for SQuAD | Extractive reading comprehension systems can often locate the correct answer
to a question in a context document, but they also tend to make unreliable
guesses on questions for which the correct answer is not stated in the context.
Existing datasets either focus exclusively on answerable questions, or use
automatically generated unanswerable questions that are easy to identify. To
address these weaknesses, we present SQuAD 2.0, the latest version of the
Stanford Question Answering Dataset (SQuAD). SQuAD 2.0 combines existing SQuAD
data with over 50,000 unanswerable questions written adversarially by
crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0,
systems must not only answer questions when possible, but also determine when
no answer is supported by the paragraph and abstain from answering. SQuAD 2.0
is a challenging natural language understanding task for existing models: a
strong neural system that gets 86% F1 on SQuAD 1.1 achieves only 66% F1 on
SQuAD 2.0.
| 2,018 | Computation and Language |
Distance-Free Modeling of Multi-Predicate Interactions in End-to-End
Japanese Predicate-Argument Structure Analysis | Capturing interactions among multiple predicate-argument structures (PASs) is
a crucial issue in the task of analyzing PAS in Japanese. In this paper, we
propose new Japanese PAS analysis models that integrate the label prediction
information of arguments in multiple PASs by extending the input and last
layers of a standard deep bidirectional recurrent neural network (bi-RNN)
model. In these models, using the mechanisms of pooling and attention, we aim
to directly capture the potential interactions among multiple PASs, without
being disturbed by the word order and distance. Our experiments show that the
proposed models improve the prediction accuracy specifically for cases where
the predicate and argument are in an indirect dependency relation and achieve a
new state of the art in the overall $F_1$ on a standard benchmark corpus.
| 2,018 | Computation and Language |
Prosody Modifications for Question-Answering in Voice-Only Settings | Many popular form factors of digital assistants---such as Amazon Echo, Apple
Homepod, or Google Home---enable the user to hold a conversation with these
systems based only on the speech modality. The lack of a screen presents unique
challenges. To satisfy the information need of a user, the presentation of the
answer needs to be optimized for such voice-only interactions. In this paper,
we propose a task of evaluating the usefulness of audio transformations (i.e.,
prosodic modifications) for voice-only question answering. We introduce a
crowdsourcing setup where we evaluate the quality of our proposed modifications
along multiple dimensions corresponding to the informativeness, naturalness,
and ability of the user to identify key parts of the answer. We offer a set of
prosodic modifications that highlight potentially important parts of the answer
using various acoustic cues. Our experiments show that some of these prosodic
modifications lead to better comprehension at the expense of only slightly
degraded naturalness of the audio.
| 2,019 | Computation and Language |
A Co-Matching Model for Multi-choice Reading Comprehension | Multi-choice reading comprehension is a challenging task, which involves the
matching between a passage and a question-answer pair. This paper proposes a
new co-matching approach to this problem, which jointly models whether a
passage can match both a question and a candidate answer. Experimental results
on the RACE dataset demonstrate that our approach achieves state-of-the-art
performance.
| 2,018 | Computation and Language |
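A minimal numpy sketch of the co-matching idea described in the abstract above follows, for illustration only: the toy dimensions, the softmax helper, and the plain dot-product attention are assumptions, and the published model additionally uses learned projections and a hierarchical aggregator.

```python
# Toy co-matching: attend from the passage to the question and to the candidate
# answer separately, then concatenate the two matched views per passage token.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def match(passage, other):
    attn = softmax(passage @ other.T, axis=-1)   # passage-to-other attention weights
    return attn @ other                          # "other" summarized per passage token

rng = np.random.default_rng(0)
P, Q, A, d = 12, 6, 4, 16                        # toy sequence lengths and hidden size
passage, question, answer = (rng.standard_normal((n, d)) for n in (P, Q, A))

co_matched = np.concatenate([match(passage, question),
                             match(passage, answer)], axis=-1)
print(co_matched.shape)                          # (12, 32): question- and answer-matched states
```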
WikiRef: Wikilinks as a route to recommending appropriate references for
scientific Wikipedia pages | The exponential increase in the usage of Wikipedia as a key source of
scientific knowledge among researchers is making it absolutely necessary to
metamorphose this knowledge repository into an integral and self-contained
source of information for direct utilization. Unfortunately, the references
that support the content of each Wikipedia entity page are far from complete.
Why is the reference section ill-formed for most Wikipedia pages? Is this
section edited as frequently as the other sections of a page? Can there be
appropriate surrogates that can automatically enhance the reference section? In
this paper, we propose a novel two step approach -- WikiRef -- that (i)
leverages the wikilinks present in a scientific Wikipedia target page and,
thereby, (ii) recommends highly relevant references to be included in that
target page appropriately and automatically borrowed from the reference section
of the wikilinks. In the first step, we build a classifier to ascertain whether
a wikilink is a potential source of reference or not. In the following step, we
recommend references to the target page from the reference section of the
wikilinks that are classified as potential sources of references in the first
step. We perform an extensive evaluation of our approach on datasets from two
different domains -- Computer Science and Physics. For Computer Science we
achieve a notably good performance with a precision@1 of 0.44 for reference
recommendation as opposed to 0.38 obtained from the most competitive baseline.
For the Physics dataset, we obtain a similar performance boost of 10% with
respect to the most competitive baseline.
| 2,018 | Computation and Language |
Finding Syntax in Human Encephalography with Beam Search | Recurrent neural network grammars (RNNGs) are generative models of
(tree,string) pairs that rely on neural networks to evaluate derivational
choices. Parsing with them using beam search yields a variety of incremental
complexity metrics such as word surprisal and parser action count. When used as
regressors against human electrophysiological responses to naturalistic text,
they derive two amplitude effects: an early peak and a P600-like later peak. By
contrast, a non-syntactic neural language model yields no reliable effects.
Model comparisons attribute the early peak to syntactic composition within the
RNNG. This pattern of results recommends the RNNG+beam search combination as a
mechanistic model of the syntactic processing that occurs during normal human
language comprehension.
| 2,018 | Computation and Language |
Straight to the Tree: Constituency Parsing with Neural Syntactic
Distance | In this work, we propose a novel constituency parsing scheme. The model
predicts a vector of real-valued scalars, named syntactic distances, for each
split position in the input sentence. The syntactic distances specify the order
in which the split points will be selected, recursively partitioning the input,
in a top-down fashion. Compared to traditional shift-reduce parsing schemes,
our approach is free from the potential problem of compounding errors, while
being faster and easier to parallelize. Our model achieves competitive
performance amongst single model, discriminative parsers in the PTB dataset and
outperforms previous models in the CTB dataset.
| 2,018 | Computation and Language |
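The decoding rule sketched in the abstract above (recursively split the sentence at the position with the largest predicted syntactic distance) fits in a few lines. The sketch below is illustrative only: the function name and the toy inputs are assumptions, and a real parser would obtain the distances from a trained scoring network and attach constituent labels.

```python
# Top-down tree construction from syntactic distances.
def build_tree(words, distances):
    """Recursively split the span at the position with the largest distance."""
    if len(words) == 1:
        return words[0]
    # distances[i] scores the split point between words[i] and words[i + 1]
    split = max(range(len(distances)), key=distances.__getitem__)
    left = build_tree(words[:split + 1], distances[:split])
    right = build_tree(words[split + 1:], distances[split + 1:])
    return (left, right)

words = ["the", "cat", "sat", "down"]
distances = [1.2, 3.5, 0.7]                    # would come from the parser's network
print(build_tree(words, distances))            # (('the', 'cat'), ('sat', 'down'))
```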
A Corpus with Multi-Level Annotations of Patients, Interventions and
Outcomes to Support Language Processing for Medical Literature | We present a corpus of 5,000 richly annotated abstracts of medical articles
describing clinical randomized controlled trials. Annotations include
demarcations of text spans that describe the Patient population enrolled, the
Interventions studied and to what they were Compared, and the Outcomes measured
(the `PICO' elements). These spans are further annotated at a more granular
level, e.g., individual interventions within them are marked and mapped onto a
structured medical vocabulary. We acquired annotations from a diverse set of
workers with varying levels of expertise and cost. We describe our data
collection process and the corpus itself in detail. We then outline a set of
challenging NLP tasks that would aid searching of the medical literature and
the practice of evidence-based medicine.
| 2,018 | Computation and Language |
Navigating with Graph Representations for Fast and Scalable Decoding of
Neural Language Models | Neural language models (NLMs) have recently gained a renewed interest by
achieving state-of-the-art performance across many natural language processing
(NLP) tasks. However, NLMs are very computationally demanding largely due to
the computational cost of the softmax layer over a large vocabulary. We observe
that, when decoding for many NLP tasks, only the probabilities of the top-K
hypotheses need to be calculated precisely, and K is often much smaller than
the vocabulary size. This paper proposes a novel softmax layer approximation
algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a
given context, a set of K words that are most likely to occur according to an
NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude
while attaining close to the full softmax baseline accuracy on neural machine
translation and language modeling tasks. We also prove a theoretical guarantee
on the quality of the softmax approximation.
| 2,018 | Computation and Language |
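A rough numpy sketch of the top-K softmax interface described above follows. A brute-force maximum-inner-product search stands in for FGD's graph search; the sizes and names are illustrative assumptions, not the paper's implementation.

```python
# Top-K softmax: normalize only over the K highest-scoring vocabulary items.
import numpy as np

def topk_softmax(hidden, emb, bias, k=10):
    logits = emb @ hidden + bias                 # (V,) inner products with the context
    top = np.argpartition(-logits, k)[:k]        # ids of the K largest logits (unordered)
    scores = logits[top]
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                         # normalize over the K candidates only
    return top, probs

rng = np.random.default_rng(0)
V, d = 5_000, 64                                 # toy vocabulary and hidden size
emb, bias = rng.standard_normal((V, d)), rng.standard_normal(V)
ids, probs = topk_softmax(rng.standard_normal(d), emb, bias, k=5)
print(ids, probs.round(3))
```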
Degree based Classification of Harmful Speech using Twitter Data | Harmful speech takes various forms and plagues social media in
different ways. Cracking down on different degrees of hate speech and abusive
behavior requires a classification built on more nuanced, well-defined criteria
than simply labeling content as racist, sexist, or directed against a
particular group or community. This paper primarily describes how we created an
ontological classification of harmful speech based on the degree of hateful
intent, and used it to annotate Twitter data accordingly. The key contribution
of this paper is the new dataset of tweets we created, based on the ontological
classes and degrees of harmful speech found in the text. We also propose a
supervised classification system for recognizing these harmful speech classes
in text.
| 2,018 | Computation and Language |
Let's do it "again": A First Computational Approach to Detecting
Adverbial Presupposition Triggers | We introduce the task of predicting adverbial presupposition triggers such as
also and again. Solving such a task requires detecting recurring or similar
events in the discourse context, and has applications in natural language
generation tasks such as summarization and dialogue systems. We create two new
datasets for the task, derived from the Penn Treebank and the Annotated English
Gigaword corpora, as well as a novel attention mechanism tailored to this task.
Our attention mechanism augments a baseline recurrent neural network without
the need for additional trainable parameters, minimizing the added
computational cost of our mechanism. We demonstrate that our model
statistically outperforms a number of baselines, including an LSTM-based
language model.
| 2,018 | Computation and Language |
Learning Multilingual Topics from Incomparable Corpus | Multilingual topic models enable crosslingual tasks by extracting consistent
topics from multilingual corpora. Most models require parallel or comparable
training corpora, which limits their ability to generalize. In this paper, we
first demystify the knowledge transfer mechanism behind multilingual topic
models by defining an alternative but equivalent formulation. Based on this
analysis, we then relax the assumption of training data required by most
existing models, creating a model that only requires a dictionary for training.
Experiments show that our new method effectively learns coherent multilingual
topics from partially and fully incomparable corpora with limited amounts of
dictionary resources.
| 2,018 | Computation and Language |
iParaphrasing: Extracting Visually Grounded Paraphrases via an Image | A paraphrase is a restatement of the meaning of a text in other words.
Paraphrases have been studied to enhance the performance of many natural
language processing tasks. In this paper, we propose a novel task iParaphrasing
to extract visually grounded paraphrases (VGPs), which are different phrasal
expressions describing the same visual concept in an image. These extracted
VGPs have the potential to improve language and image multimodal tasks such as
visual question answering and image captioning. How to model the similarity
between VGPs is the key of iParaphrasing. We apply various existing methods as
well as propose a novel neural network-based method with image attention, and
report the results of the first attempt toward iParaphrasing.
| 2,018 | Computation and Language |
Challenges of language technologies for the indigenous languages of the
Americas | Indigenous languages of the American continent are highly diverse. However,
they have received little attention from the technological perspective. In this
paper, we review the research, the digital resources and the available NLP
systems that focus on these languages. We present the main challenges and
research questions that arise when distant languages and low-resource scenarios
are faced. We would like to encourage NLP research in linguistically rich and
diverse areas like the Americas.
| 2,018 | Computation and Language |
Embedding Text in Hyperbolic Spaces | Natural language text exhibits hierarchical structure in a variety of
respects. Ideally, we could incorporate our prior knowledge of this
hierarchical structure into unsupervised learning algorithms that work on text
data. Recent work by Nickel & Kiela (2017) proposed using hyperbolic instead of
Euclidean embedding spaces to represent hierarchical data and demonstrated
encouraging results when embedding graphs. In this work, we extend their method
with a re-parameterization technique that allows us to learn hyperbolic
embeddings of arbitrarily parameterized objects. We apply this framework to
learn word and sentence embeddings in hyperbolic space in an unsupervised
manner from text corpora. The resulting embeddings seem to encode certain
intuitive notions of hierarchy, such as word-context frequency and phrase
constituency. However, the implicit continuous hierarchy in the learned
hyperbolic space makes interrogating the model's learned hierarchies more
difficult than for models that learn explicit edges between items. The learned
hyperbolic embeddings show improvements over Euclidean embeddings in some --
but not all -- downstream tasks, suggesting that hierarchical organization is
more useful for some tasks than others.
| 2,018 | Computation and Language |
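To make the abstract above concrete, the sketch below maps vectors into the Poincare ball and measures their hyperbolic distance. The tanh-based re-parameterization is one simple way to keep unconstrained parameters inside the unit ball and is assumed here for illustration; it is not necessarily the parameterization used in the paper.

```python
# Poincare-ball embedding via re-parameterization, with hyperbolic distance.
import numpy as np

def to_ball(x, eps=1e-5):
    norm = np.linalg.norm(x)
    return np.tanh(norm) * x / (norm + eps)      # maps any vector into the open unit ball

def poincare_distance(u, v, eps=1e-9):
    num = 2.0 * np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + num / (den + eps))

rng = np.random.default_rng(0)
u, v = to_ball(rng.standard_normal(5)), to_ball(rng.standard_normal(5))
print(poincare_distance(u, v))
```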
ISO-Standard Domain-Independent Dialogue Act Tagging for Conversational
Agents | Dialogue Act (DA) tagging is crucial for spoken language understanding
systems, as it provides a general representation of speakers' intents, not
bound to a particular dialogue system. Unfortunately, publicly available data
sets with DA annotation are all based on different annotation schemes and thus
incompatible with each other. Moreover, their schemes often do not cover all
aspects necessary for open-domain human-machine interaction. In this paper, we
propose a methodology to map several publicly available corpora to a subset of
the ISO standard, in order to create a large task-independent training corpus
for DA classification. We show the feasibility of using this corpus to train a
domain-independent DA tagger testing it on out-of-domain conversational data,
and argue the importance of training on multiple corpora to achieve robustness
across different DA categories.
| 2,018 | Computation and Language |
Neural Network Models for Paraphrase Identification, Semantic Textual
Similarity, Natural Language Inference, and Question Answering | In this paper, we analyze several neural network designs (and their
variations) for sentence pair modeling and compare their performance
extensively across eight datasets, including paraphrase identification,
semantic textual similarity, natural language inference, and question answering
tasks. Although most of these models have claimed state-of-the-art performance,
the original papers often reported on only one or two selected datasets. We
provide a systematic study and show that (i) encoding contextual information by
LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help
as much as previously claimed but surprisingly improves performance on Twitter
datasets, (iii) the Enhanced Sequential Inference Model is the best so far for
larger datasets, while the Pairwise Word Interaction Model achieves the best
performance when less data is available. We release our implementations as an
open-source toolkit.
| 2,018 | Computation and Language |
Exploiting Document Knowledge for Aspect-level Sentiment Classification | Attention-based long short-term memory (LSTM) networks have proven to be
useful in aspect-level sentiment classification. However, due to the
difficulties in annotating aspect-level data, existing public datasets for this
task are all relatively small, which largely limits the effectiveness of those
neural models. In this paper, we explore two approaches that transfer knowledge
from document-level data, which is much less expensive to obtain, to improve
the performance of aspect-level sentiment classification. We demonstrate the
effectiveness of our approaches on 4 public datasets from SemEval 2014, 2015,
and 2016, and we show that attention-based LSTM benefits from document-level
knowledge in multiple ways.
| 2,018 | Computation and Language |
Multi-Task Neural Models for Translating Between Styles Within and
Across Languages | Generating natural language requires conveying content in an appropriate
style. We explore two related tasks on generating text of varying formality:
monolingual formality transfer and formality-sensitive machine translation. We
propose to solve these tasks jointly using multi-task learning, and show that
our models achieve state-of-the-art performance for formality transfer and are
able to perform formality-sensitive translation without being explicitly
trained on style-annotated translation examples.
| 2,018 | Computation and Language |
Projecting Embeddings for Domain Adaptation: Joint Modeling of Sentiment
Analysis in Diverse Domains | Domain adaptation for sentiment analysis is challenging due to the fact that
supervised classifiers are very sensitive to changes in domain. The two most
prominent approaches to this problem are structural correspondence learning and
autoencoders. However, they either require long training times or suffer
greatly on highly divergent domains. Inspired by recent advances in
cross-lingual sentiment analysis, we provide a novel perspective and cast the
domain adaptation problem as an embedding projection task. Our model takes as
input two mono-domain embedding spaces and learns to project them to a
bi-domain space, which is jointly optimized to (1) project across domains and
to (2) predict sentiment. We perform domain adaptation experiments on 20
source-target domain pairs for sentiment classification and report novel
state-of-the-art results on 11 domain pairs, including the Amazon domain
adaptation datasets and SemEval 2013 and 2016 datasets. Our analysis shows that
our model performs comparably to state-of-the-art approaches on domains that
are similar, while performing significantly better on highly divergent domains.
Our code is available at https://github.com/jbarnesspain/domain_blse
| 2,018 | Computation and Language |
Knowledge Amalgam: Generating Jokes and Quotes Together | Generating humor and quotes are very challenging problems in the field of
computational linguistics and are often tackled separately. In this paper, we
present a controlled Long Short-Term Memory (LSTM) architecture which is
trained with categorical data like jokes and quotes together by passing
category as an input along with the sequence of words. The idea is that a
single neural net will learn the structure of both jokes and quotes to generate
them on demand according to the input category. Importantly, we believe the
neural net gains broader knowledge by being trained on different datasets,
which enables it to generate more creative jokes or quotes from the mixture of
information. May the network generate a funny inspirational joke!
| 2,018 | Computation and Language |
Explaining and Generalizing Back-Translation through Wake-Sleep | Back-translation has become a commonly employed heuristic for semi-supervised
neural machine translation. The technique is both straightforward to apply and
has led to state-of-the-art results. In this work, we offer a principled
interpretation of back-translation as approximate inference in a generative
model of bitext and show how the standard implementation of back-translation
corresponds to a single iteration of the wake-sleep algorithm in our proposed
model. Moreover, this interpretation suggests a natural iterative
generalization, which we demonstrate leads to further improvement of up to 1.6
BLEU.
| 2,018 | Computation and Language |
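The iterative generalization described above alternates between the two translation directions, each producing synthetic bitext for the other. The skeleton below only illustrates that control flow; train() and translate() are placeholder stubs rather than a real NMT system, and the wake/sleep labels in the comments follow the paper's analogy loosely.

```python
# Iterative back-translation skeleton with placeholder models.
def train(model, bitext):
    model["data"].extend(bitext)                 # stand-in for a gradient update
    return model

def translate(model, sentences):
    return [s[::-1] for s in sentences]          # stand-in for beam-search decoding

fwd = {"name": "src->tgt", "data": []}
bwd = {"name": "tgt->src", "data": []}
mono_src = ["ein kleines beispiel"]
mono_tgt = ["a small example"]

for iteration in range(3):                       # iteration 1 = standard back-translation
    synthetic_src = translate(bwd, mono_tgt)     # "sleep": dream up source sentences
    fwd = train(fwd, list(zip(synthetic_src, mono_tgt)))
    synthetic_tgt = translate(fwd, mono_src)     # "wake": refresh the other direction
    bwd = train(bwd, list(zip(mono_src, synthetic_tgt)))

print(len(fwd["data"]), len(bwd["data"]))
```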
Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue
State Representation | Classic pipeline models for task-oriented dialogue systems require explicitly
modeling the dialogue states and hand-crafted action spaces to query a
domain-specific knowledge base. Conversely, sequence-to-sequence models learn
to map dialogue history to the response in current turn without explicit
knowledge base querying. In this work, we propose a novel framework that
leverages the advantages of classic pipeline and sequence-to-sequence models.
Our framework models a dialogue state as a fixed-size distributed
representation and uses this representation to query a knowledge base via an
attention mechanism. Experiments on the Stanford Multi-turn Multi-domain
Task-oriented Dialogue Dataset show that our framework significantly
outperforms other sequence-to-sequence based baseline models on both automatic
and human evaluation.
| 2,018 | Computation and Language |
An Ensemble Model for Sentiment Analysis of Hindi-English Code-Mixed
Data | In multilingual societies like India, code-mixed texts make up a large share of
social media content. Detecting the sentiment of code-mixed user
opinions plays a crucial role in understanding social, economic and political
trends. In this paper, we propose an ensemble of character-trigrams based LSTM
model and word-ngrams based Multinomial Naive Bayes (MNB) model to identify the
sentiments of Hindi-English (Hi-En) code-mixed data. The ensemble model
combines the strengths of rich sequential patterns from the LSTM model and
polarity of keywords from the probabilistic ngram model to identify sentiments
in sparse and inconsistent code-mixed data. Experiments on real-life user
code-mixed data reveal that our approach yields state-of-the-art results
compared to several baselines and other proposed deep learning based methods.
| 2,018 | Computation and Language |
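A small scikit-learn sketch of the ensembling idea above averages class probabilities from a word-n-gram and a character-trigram classifier. The character-level model here is also Naive Bayes, purely as a stand-in for the character-trigram LSTM, and the tiny code-mixed examples and labels are invented for illustration.

```python
# Probability-averaging ensemble of a word-ngram and a char-trigram classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["yeh movie bahut achhi thi", "worst acting ever yaar",
         "kya mast song hai", "bilkul bekaar film"]
labels = [1, 0, 1, 0]                            # 1 = positive, 0 = negative (toy data)

word_nb = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
char_nb = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(3, 3)),
                        MultinomialNB())
word_nb.fit(texts, labels)
char_nb.fit(texts, labels)

test = ["achhi film yaar"]
ensemble = (word_nb.predict_proba(test) + char_nb.predict_proba(test)) / 2.0
print(ensemble.argmax(axis=1))                   # averaged-probability vote
```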
Impersonation: Modeling Persona in Smart Responses to Email | In this paper, we present the design, implementation, and effectiveness of
generating personalized suggestions for email replies. To personalize email
responses based on a user's style and personality, we model the user's persona
based on her past responses to emails. This model is added to the
language-based model created across users from the past responses in all user
emails.
A user's model captures the typical responses of the user given a particular
context. The context includes the email received, the recipient of the email,
and other external signals such as calendar activities, preferences, etc. The
context, along with the user's personality (e.g., extrovert, formal, reserved, etc.),
is used to suggest responses. These responses can be a mixture of multiple
modes: email replies (textual), audio clips, etc. This helps in making
responses mimic the user as much as possible and helps the user to be more
productive while retaining her mark in the responses.
| 2,018 | Computation and Language |
Fusing Recency into Neural Machine Translation with an Inter-Sentence
Gate Model | Neural machine translation (NMT) systems are usually trained on a large
amount of bilingual sentence pairs and translate one sentence at a time,
ignoring inter-sentence information. This may make the translation of a
sentence ambiguous or even inconsistent with the translations of neighboring
sentences. In order to handle this issue, we propose an inter-sentence gate
model that uses the same encoder to encode two adjacent sentences and controls
the amount of information flowing from the preceding sentence to the
translation of the current sentence with an inter-sentence gate. In this way,
our proposed model can capture the connection between sentences and fuse
recency from neighboring sentences into neural machine translation. On several
NIST Chinese-English translation tasks, our experiments demonstrate that the
proposed inter-sentence gate model achieves substantial improvements over the
baseline.
| 2,018 | Computation and Language |
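The gating mechanism described above can be summarized in a few lines: a sigmoid gate decides how much of the preceding sentence's encoding is fused into the representation used for the current translation. The numpy sketch below is schematic; the weight names, shapes, and the exact fusion rule are assumptions rather than the paper's parameterization.

```python
# Inter-sentence gate: fuse the previous sentence encoding into the current one.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8
rng = np.random.default_rng(1)
W_p, W_c, b = rng.standard_normal((d, d)), rng.standard_normal((d, d)), np.zeros(d)

h_prev = rng.standard_normal(d)                  # encoding of the preceding sentence
h_cur = rng.standard_normal(d)                   # encoding of the current sentence

g = sigmoid(W_p @ h_prev + W_c @ h_cur + b)      # element-wise inter-sentence gate
context = g * h_prev + (1.0 - g) * h_cur         # gated mix fed to the decoder
print(context.shape)
```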
Design Challenges and Misconceptions in Neural Sequence Labeling | We investigate the design challenges of constructing effective and efficient
neural sequence labeling systems, by reproducing twelve neural sequence
labeling models, which include most of the state-of-the-art structures, and
conduct a systematic model comparison on three benchmarks (i.e. NER, Chunking,
and POS tagging). Misconceptions and inconsistent conclusions in existing
literature are examined and clarified under statistical experiments. In the
comparison and analysis process, we reach several practical conclusions which
can be useful to practitioners.
| 2,018 | Computation and Language |
Characterizing Departures from Linearity in Word Translation | We investigate the behavior of maps learned by machine translation methods.
The maps translate words by projecting between word embedding spaces of
different languages. We locally approximate these maps using linear maps, and
find that they vary across the word embedding space. This demonstrates that the
underlying maps are non-linear. Importantly, we show that the locally linear
maps vary by an amount that is tightly correlated with the distance between the
neighborhoods on which they are trained. Our results can be used to test
non-linear methods, and to drive the design of more accurate maps for word
translation.
| 2,018 | Computation and Language |
Dank Learning: Generating Memes Using Deep Neural Networks | We introduce a novel meme generation system, which given any image can
produce a humorous and relevant caption. Furthermore, the system can be
conditioned on not only an image but also a user-defined label relating to the
meme template, giving a handle to the user on meme content. The system uses a
pretrained Inception-v3 network to return an image embedding which is passed to
an attention-based deep-layer LSTM model producing the caption - inspired by
the widely recognised Show and Tell Model. We implement a modified beam search
to encourage diversity in the captions. We evaluate the quality of our model
using perplexity and human assessment on both the quality of memes generated
and whether they can be differentiated from real ones. Our model produces
original memes that cannot on the whole be differentiated from real ones.
| 2,018 | Computation and Language |
Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data | Sentiment analysis is a widely studied NLP task where the goal is to
determine opinions, emotions, and evaluations of users towards a product, an
entity or a service that they are reviewing. One of the biggest challenges for
sentiment analysis is that it is highly language dependent. Word embeddings,
sentiment lexicons, and even annotated data are language specific. Further,
optimizing models for each language is very time consuming and labor intensive
especially for recurrent neural network models. From a resource perspective, it
is very challenging to collect data for different languages.
In this paper, we seek an answer to the following research question: can
a sentiment analysis model trained on one language be reused for sentiment
analysis in other languages (Russian, Spanish, Turkish, and Dutch) where data
is more limited? Our goal is to build a single model in the language with
the largest dataset available for the task, and reuse it for languages that
have limited resources. For this purpose, we train a sentiment analysis model
using recurrent neural networks with reviews in English. We then translate
reviews in other languages and reuse this model to evaluate the sentiments.
Experimental results show that our robust approach of single model trained on
English reviews statistically significantly outperforms the baselines in
several different languages.
| 2,018 | Computation and Language |
Recurrent One-Hop Predictions for Reasoning over Knowledge Graphs | Large scale knowledge graphs (KGs) such as Freebase are generally incomplete.
Reasoning over multi-hop (mh) KG paths is thus an important capability that is
needed for question answering or other NLP tasks that require knowledge about
the world. mh-KG reasoning includes diverse scenarios, e.g., given a head
entity and a relation path, predict the tail entity; or given two entities
connected by some relation paths, predict the unknown relation between them. We
present ROPs, recurrent one-hop predictors, that predict entities at each step
of mh-KG paths by using recurrent neural networks and vector representations of
entities and relations, with two benefits: (i) modeling mh-paths of arbitrary
lengths while updating the entity and relation representations by the training
signal at each step; (ii) handling different types of mh-KG reasoning in a
unified framework. Our models show state-of-the-art for two important multi-hop
KG reasoning tasks: Knowledge Base Completion and Path Query Answering.
| 2,018 | Computation and Language |
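The sketch below walks a two-hop relation path with a plain recurrent update and picks the nearest entity at every hop, which is the "one-hop prediction per step" idea from the abstract above. The random embeddings, the tanh cell, and the inner-product nearest-neighbor step are illustrative assumptions, not the trained model.

```python
# Recurrent one-hop prediction over a toy knowledge-graph path.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, d = 100, 10, 16
E = rng.standard_normal((n_entities, d))         # entity embeddings
R = rng.standard_normal((n_relations, d))        # relation embeddings
W_h, W_x = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def one_hop(h, relation_id):
    h = np.tanh(W_h @ h + W_x @ R[relation_id])  # recurrent update with the relation
    predicted_entity = int((E @ h).argmax())     # nearest entity by inner product
    return h, predicted_entity

h = E[0]                                         # start from the head entity
for rel in [3, 7]:                               # a two-hop relation path
    h, entity = one_hop(h, rel)
    print("hop ->", entity)
```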
Learning to Automatically Generate Fill-In-The-Blank Quizzes | In this paper we formalize the problem of automatic fill-in-the-blank question
generation using two standard NLP machine learning schemes, proposing concrete
deep learning models for each. We present an empirical study based on data
obtained from a language learning platform showing that both of our proposed
settings offer promising results.
| 2,018 | Computation and Language |
Second Language Acquisition Modeling: An Ensemble Approach | Accurate prediction of students' knowledge is a fundamental building block of
personalized learning systems. Here, we propose a novel ensemble model to
predict student knowledge gaps. Applying our approach to student trace data
from the online educational platform Duolingo, we achieved the highest score on
both evaluation metrics for all three datasets in the 2018 Shared Task on
Second Language Acquisition Modeling. We describe our model and discuss the
relevance of the task compared to how it would be set up in a production
environment for personalized education.
| 2,018 | Computation and Language |
Term Definitions Help Hypernymy Detection | Existing methods of hypernymy detection mainly rely on statistics over a big
corpus, either mining some co-occurring patterns like "animals such as cats" or
embedding words of interest into context-aware vectors. These approaches are
therefore limited by the availability of a large enough corpus that can cover
all terms of interest and provide sufficient contextual information to
represent their meaning. In this work, we propose a new paradigm, HyperDef, for
hypernymy detection -- expressing word meaning by encoding word definitions,
along with context driven representation. This has two main benefits: (i)
Definitional sentences express (sense-specific) corpus-independent meanings of
words, hence definition-driven approaches enable strong generalization -- once
trained, the model is expected to work well in open-domain testbeds; (ii)
Global context from a large corpus and definitions provide complementary
information for words. Consequently, our model, HyperDef, once trained on
task-agnostic data, achieves state-of-the-art results on multiple benchmarks.
| 2,018 | Computation and Language |
Automatic Target Recovery for Hindi-English Code Mixed Puns | In order for our computer systems to be more human-like, with a higher
emotional quotient, they need to be able to process and understand intrinsic
human language phenomena like humour. In this paper, we consider a subtype of
humour - puns, which are a common type of wordplay-based jokes. In particular,
we consider code-mixed puns which have become increasingly mainstream on social
media, in informal conversations and advertisements and aim to build a system
which can automatically identify the pun location and recover the target of
such puns. We first study and classify code-mixed puns into two categories
namely intra-sentential and intra-word, and then propose a four-step algorithm
to recover the pun targets for puns belonging to the intra-sentential category.
Our algorithm uses language models, and phonetic similarity-based features to
get the desired results. We test our approach on a small set of code-mixed
punning advertisements, and observe that our system is successfully able to
recover the targets for 67% of the puns.
| 2,018 | Computation and Language |
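One step of the pipeline above ranks candidate target words by how phonetically close they are to the pun word. The sketch below uses plain string similarity from the standard library as a crude stand-in for a phonetic comparison; the example pun, the candidate list, and the function names are all hypothetical.

```python
# Rank candidate pun targets by surface similarity to the pun word.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def recover_target(pun_word, candidates):
    return max(candidates, key=lambda c: similarity(pun_word, c))

# Toy example with a hypothetical code-mixed pun word and candidate targets.
print(recover_target("dil-icious", ["delicious", "delirious", "dil", "serious"]))
```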
Transfer Learning from Speaker Verification to Multispeaker
Text-To-Speech Synthesis | We describe a neural network-based system for text-to-speech (TTS) synthesis
that is able to generate speech audio in the voice of many different speakers,
including those unseen during training. Our system consists of three
independently trained components: (1) a speaker encoder network, trained on a
speaker verification task using an independent dataset of noisy speech from
thousands of speakers without transcripts, to generate a fixed-dimensional
embedding vector from seconds of reference speech from a target speaker; (2) a
sequence-to-sequence synthesis network based on Tacotron 2, which generates a
mel spectrogram from text, conditioned on the speaker embedding; (3) an
auto-regressive WaveNet-based vocoder that converts the mel spectrogram into a
sequence of time domain waveform samples. We demonstrate that the proposed
model is able to transfer the knowledge of speaker variability learned by the
discriminatively-trained speaker encoder to the new task, and is able to
synthesize natural speech from speakers that were not seen during training. We
quantify the importance of training the speaker encoder on a large and diverse
speaker set in order to obtain the best generalization performance. Finally, we
show that randomly sampled speaker embeddings can be used to synthesize speech
in the voice of novel speakers dissimilar from those used in training,
indicating that the model has learned a high quality speaker representation.
| 2,018 | Computation and Language |
Evaluation of Unsupervised Compositional Representations | We evaluated various compositional models, from bag-of-words representations
to compositional RNN-based models, on several extrinsic supervised and
unsupervised evaluation benchmarks. Our results confirm that weighted vector
averaging can outperform context-sensitive models in most benchmarks, but
structural features encoded in RNN models can also be useful in certain
classification tasks. We analyzed some of the evaluation datasets to identify
the aspects of meaning they measure and the characteristics of the various
models that explain their performance variance.
| 2,018 | Computation and Language |
Using Clinical Narratives and Structured Data to Identify Distant
Recurrences in Breast Cancer | Accurately identifying distant recurrences in breast cancer from the
Electronic Health Records (EHR) is important for both clinical care and
secondary analysis. Although multiple applications have been developed for
computational phenotyping in breast cancer, distant recurrence identification
still relies heavily on manual chart review. In this study, we aim to develop a
model that identifies distant recurrences in breast cancer using clinical
narratives and structured data from EHR. We apply MetaMap to extract features
from clinical narratives and also retrieve structured clinical data from EHR.
Using these features, we train a support vector machine model to identify
distant recurrences in breast cancer patients. We train the model using 1,396
double-annotated subjects and validate the model using 599 double-annotated
subjects. In addition, we validate the model on a set of 4,904 single-annotated
subjects as a generalization test. We obtained a high area under curve (AUC)
score of 0.92 (SD=0.01) in the cross-validation using the training dataset,
then obtained AUC scores of 0.95 and 0.93 in the held-out test and
generalization test using 599 and 4,904 samples respectively. Our model can
accurately and efficiently identify distant recurrences in breast cancer by
combining features extracted from unstructured clinical narratives and
structured clinical data.
| 2,018 | Computation and Language |
Natural Language Processing for EHR-Based Computational Phenotyping | This article reviews recent advances in applying natural language processing
(NLP) to Electronic Health Records (EHRs) for computational phenotyping.
NLP-based computational phenotyping has numerous applications including
diagnosis categorization, novel phenotype discovery, clinical trial screening,
pharmacogenomics, drug-drug interaction (DDI) and adverse drug event (ADE)
detection, as well as genome-wide and phenome-wide association studies.
Significant progress has been made in algorithm development and resource
construction for computational phenotyping. Among the surveyed methods,
well-designed keyword search and rule-based systems often achieve good
performance. However, the construction of keyword and rule lists requires
significant manual effort, which is difficult to scale. Supervised machine
learning models have been favored because they are capable of acquiring both
classification patterns and structures from data. Recently, deep learning and
unsupervised learning have received growing attention, with the former favored
for its performance and the latter for its ability to find novel phenotypes.
Integrating heterogeneous data sources has become increasingly important and
has shown promise in improving model performance. Often better performance is
achieved by combining multiple modalities of information. Despite these many
advances, challenges and opportunities remain for NLP-based computational
phenotyping, including better model interpretability and generalizability, and
proper characterization of feature relations in clinical narratives.
| 2,018 | Computation and Language |
SGM: Sequence Generation Model for Multi-label Classification | Multi-label classification is an important yet challenging task in natural
language processing. It is more complex than single-label classification in
that the labels tend to be correlated. Existing methods tend to ignore the
correlations between labels. Besides, different parts of the text can
contribute differently for predicting different labels, which is not considered
by existing models. In this paper, we propose to view the multi-label
classification task as a sequence generation problem, and apply a sequence
generation model with a novel decoder structure to solve it. Extensive
experimental results show that our proposed methods outperform previous work by
a substantial margin. Further analysis of experimental results demonstrates
that the proposed methods not only capture the correlations between labels, but
also select the most informative words automatically when predicting different
labels.
| 2,018 | Computation and Language |
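Viewing multi-label classification as sequence generation means emitting labels one at a time, conditioning each step on the labels already produced and stopping at an end-of-sequence symbol. The numpy sketch below illustrates that decoding loop with random transition scores standing in for the paper's attention-based decoder; the label set and scores are made up.

```python
# Greedy label-sequence decoding for multi-label classification.
import numpy as np

labels = ["sports", "politics", "tech", "<eos>"]
rng = np.random.default_rng(0)
W = rng.standard_normal((len(labels), len(labels)))   # toy label-transition scores

def decode(start_scores, max_steps=4):
    emitted, mask = [], np.zeros(len(labels), dtype=bool)
    scores = start_scores
    for _ in range(max_steps):
        scores = np.where(mask, -np.inf, scores)      # a label cannot repeat
        nxt = int(scores.argmax())
        if labels[nxt] == "<eos>":
            break
        emitted.append(labels[nxt])
        mask[nxt] = True
        scores = W[nxt]                               # condition on the previous label
    return emitted

print(decode(rng.standard_normal(len(labels))))
```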