Titles | Abstracts | Years | Categories |
---|---|---|---|
Attribute Extraction from Product Titles in eCommerce | This paper presents a named entity extraction system for detecting attributes
in product titles of eCommerce retailers like Walmart. The absence of syntactic
structure in such short pieces of text makes extracting attribute values a
challenging problem. We find that combining sequence labeling algorithms such
as Conditional Random Fields and Structured Perceptron with a curated
normalization scheme produces an effective system for the task of extracting
product attribute values from titles. To keep the discussion concrete, we will
illustrate the mechanics of the system from the point of view of a particular
attribute - brand. We also discuss the importance of an attribute extraction
system in the context of retail websites with large product catalogs, compare
our approach to other potential approaches to this problem and end the paper
with a discussion of the performance of our system for extracting attributes.
| 2016 | Computation and Language |
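To make the sequence-labeling setup in the entry above concrete, here is a minimal Python sketch of the kind of per-token features a CRF or structured perceptron tagger might use to mark brand tokens in a product title with BIO labels. The feature set, label scheme, and toy title are illustrative assumptions, not the system described in the entry.

```python
def token_features(tokens, i):
    """Illustrative per-token features for a sequence labeler (e.g. a CRF)
    that tags product-title tokens with labels such as B-BRAND/I-BRAND/O."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title_case": tok.istitle(),
        "is_upper": tok.isupper(),
        "has_digit": any(c.isdigit() for c in tok),
        "prefix3": tok[:3].lower(),
        "suffix3": tok[-3:].lower(),
        "position": i,
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Toy product title; a real system would pair these feature dicts with gold
# BIO labels and feed them to a CRF / structured-perceptron trainer.
title = "Apple iPhone 6s 64GB Smartphone Space Gray".split()
for i, tok in enumerate(title):
    print(tok, token_features(title, i)["is_title_case"])
```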
An Efficient Character-Level Neural Machine Translation | Neural machine translation aims at building a single large neural network
that can be trained to maximize translation performance. The encoder-decoder
architecture with an attention mechanism achieves a translation performance
comparable to the existing state-of-the-art phrase-based systems on the task of
English-to-French translation. However, the use of a large vocabulary becomes the
bottleneck in both training and improving performance. In this paper, we
propose an efficient architecture to train a deep character-level neural
machine translation by introducing a decimator and an interpolator. The
decimator is used to sample the source sequence before encoding while the
interpolator is used to resample after decoding. Such a deep model has two
major advantages. It avoids the large vocabulary issue radically; at the same
time, it is much faster and more memory-efficient in training than conventional
character-based models. More interestingly, our model is able to translate
misspelled words, much as humans do.
| 2016 | Computation and Language |
Proceedings of the LexSem+Logics Workshop 2016 | Lexical semantics continues to play an important role in driving research
directions in NLP, with the recognition and understanding of context becoming
increasingly important in delivering successful outcomes in NLP tasks. Besides
traditional processing areas such as word sense and named entity
disambiguation, the creation and maintenance of dictionaries, annotated corpora
and resources have become cornerstones of lexical semantics research and
produced a wealth of contextual information that NLP processes can exploit. New
efforts both to link and construct from scratch such information - as Linked
Open Data or by way of formal tools coming from logic, ontologies and automated
reasoning - have increased the interoperability and accessibility of resources
for lexical and computational semantics, even in those languages for which they
have previously been limited.
LexSem+Logics 2016 combines the 1st Workshop on Lexical Semantics for
Lesser-Resourced Languages and the 3rd Workshop on Logics and Ontologies. The
accepted papers in our program covered topics across these two areas,
including: the encoding of plurals in Wordnets, the creation of a thesaurus
from multiple sources based on semantic similarity metrics, and the use of
cross-lingual treebanks and annotations for universal part-of-speech tagging.
We also welcomed talks from two distinguished speakers: on Portuguese lexical
knowledge bases (different approaches, results and their application in NLP
tasks) and on new strategies for open information extraction (the capture of
verb-based propositions from massive text corpora).
| 2016 | Computation and Language |
Cohesion and Coalition Formation in the European Parliament: Roll-Call
Votes and Twitter Activities | We study the cohesion within and the coalitions between political groups in
the Eighth European Parliament (2014-2019) by analyzing two entirely different
aspects of the behavior of the Members of the European Parliament (MEPs) in the
policy-making processes. On one hand, we analyze their co-voting patterns and,
on the other, their retweeting behavior. We make use of two diverse datasets in
the analysis. The first one is the roll-call vote dataset, where cohesion is
regarded as the tendency to co-vote within a group, and a coalition is formed
when the members of several groups exhibit a high degree of co-voting agreement
on a subject. The second dataset comes from Twitter; it captures the retweeting
(i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within
the same group) and coalitions (retweets between groups) from a completely
different perspective.
We employ two different methodologies to analyze the cohesion and coalitions.
The first one is based on Krippendorff's Alpha reliability, used to measure the
agreement between raters in data-analysis scenarios, and the second one is
based on Exponential Random Graph Models, often used in social-network
analysis. We give general insights into the cohesion of political groups in the
European Parliament, explore whether coalitions are formed in the same way for
different policy areas, and examine to what degree the retweeting behavior of
MEPs corresponds to their co-voting patterns. A novel and interesting aspect of
our work is the relationship between the co-voting and retweeting patterns.
| 2016 | Computation and Language |
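The following toy sketch illustrates the co-voting notion used in the entry above: a simple pairwise agreement score over roll-call votes. It is not Krippendorff's Alpha or an ERGM, and the MEP identifiers and votes are made up.

```python
from itertools import combinations

def covoting_agreement(votes_a, votes_b):
    """Fraction of roll-call votes on which two MEPs voted the same way,
    ignoring votes where either was absent (None)."""
    shared = [(a, b) for a, b in zip(votes_a, votes_b)
              if a is not None and b is not None]
    if not shared:
        return 0.0
    return sum(a == b for a, b in shared) / len(shared)

# Toy roll-call matrix: MEP -> votes ("yes"/"no"/None for absent).
roll_calls = {
    "mep_1": ["yes", "no", "yes", None],
    "mep_2": ["yes", "no", "no", "yes"],
    "mep_3": ["no", "no", "yes", "yes"],
}
for a, b in combinations(roll_calls, 2):
    print(a, b, round(covoting_agreement(roll_calls[a], roll_calls[b]), 2))
```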
Ensemble of Jointly Trained Deep Neural Network-Based Acoustic Models
for Reverberant Speech Recognition | Distant speech recognition is a challenge, particularly due to the corruption
of speech signals by reverberation caused by large distances between the
speaker and microphone. In order to cope with a wide range of reverberations in
real-world situations, we present novel approaches for acoustic modeling
including an ensemble of deep neural networks (DNNs) and an ensemble of jointly
trained DNNs. First, multiple DNNs are established, each of which corresponds
to a different reverberation time (RT60) in a setup step. Also, each model
in the ensemble of DNN acoustic models is further jointly trained, including
both feature mapping and acoustic modeling, where the feature mapping is
designed for the dereverberation as a front-end. In a testing phase, the two
most likely DNNs are chosen from the DNN ensemble using maximum a posteriori
(MAP) probabilities, computed in an online fashion by using maximum likelihood
(ML)-based blind RT60 estimation, and then the posterior probability outputs
from the two DNNs are combined using the ML-based weights as a simple average.
Extensive experiments demonstrate that the proposed approach leads to
substantial improvements in speech recognition accuracy over the conventional
DNN baseline systems under diverse reverberant conditions.
| 2016 | Computation and Language |
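A minimal numpy sketch of the final combination step described above: the posteriors of two selected DNNs are averaged with weights derived from their (assumed) RT60 log-likelihoods. The softmax-style weighting and the toy numbers are assumptions, not the paper's exact procedure.

```python
import numpy as np

def combine_posteriors(post_a, post_b, loglik_a, loglik_b):
    """Average the frame-level posteriors of two acoustic models, weighting
    each by the (normalised) likelihood of its blind RT60 estimate."""
    w = np.exp(np.array([loglik_a, loglik_b]))
    w = w / w.sum()
    return w[0] * post_a + w[1] * post_b

# Toy posteriors over four senones from the two selected DNNs for one frame.
post_a = np.array([0.70, 0.10, 0.15, 0.05])
post_b = np.array([0.40, 0.30, 0.20, 0.10])
print(combine_posteriors(post_a, post_b, loglik_a=-1.0, loglik_b=-2.0))
```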
Path-based vs. Distributional Information in Recognizing Lexical
Semantic Relations | Recognizing various semantic relations between terms is beneficial for many
NLP tasks. While path-based and distributional information sources are
considered complementary for this task, the superior results the latter showed
recently suggested that the former's contribution might have become obsolete.
We follow the recent success of an integrated neural method for hypernymy
detection (Shwartz et al., 2016) and extend it to recognize multiple relations.
The empirical results show that this method is effective in the multiclass
setting as well. We further show that the path-based information source always
contributes to the classification, and analyze the cases in which it mostly
complements the distributional information.
| 2016 | Computation and Language |
SlangSD: Building and Using a Sentiment Dictionary of Slang Words for
Short-Text Sentiment Classification | Sentiment in social media is increasingly considered as an important resource
for customer segmentation, market understanding, and tackling other
socio-economic issues. However, sentiment in social media is difficult to
measure since user-generated content is usually short and informal. Although
many traditional sentiment analysis methods have been proposed, identifying
slang sentiment words remains untackled. One of the reasons is that slang
sentiment words are not available in existing dictionaries or sentiment
lexicons. To this end, we propose to build the first sentiment dictionary of
slang words to aid sentiment analysis of social media content. It is laborious
and time-consuming to collect and label the sentiment polarity of a
comprehensive list of slang words. We present an approach to leverage web
resources to construct an extensive Slang Sentiment word Dictionary (SlangSD)
that is easy to maintain and extend. SlangSD is publicly available for research
purposes. We empirically show the advantages of using SlangSD, the newly-built
slang sentiment word dictionary for sentiment classification, and provide
examples demonstrating its ease of use with an existing sentiment system.
| 2016 | Computation and Language |
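To illustrate how a slang sentiment dictionary such as SlangSD can plug into a simple classifier, here is a toy scorer that sums lexicon polarities over a short message. The lexicon entries and the integer polarity scale are made up for illustration; they are not taken from the released resource.

```python
# Hypothetical miniature slang lexicon: word -> polarity (scale is made up).
slang_lexicon = {"dope": 2, "lit": 2, "meh": -1, "sketchy": -2, "salty": -1}

def score_message(text, lexicon):
    """Sum lexicon polarities over the tokens of a short, informal message;
    positive totals suggest positive sentiment, negative totals the opposite."""
    return sum(lexicon.get(tok, 0) for tok in text.lower().split())

print(score_message("that show was dope but the ending was meh", slang_lexicon))
```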
Multilingual Modal Sense Classification using a Convolutional Neural
Network | Modal sense classification (MSC) is a special WSD task that depends on the
meaning of the proposition in the modal's scope. We explore a CNN architecture
for classifying modal sense in English and German. We show that CNNs are
superior to manually designed feature-based classifiers and a standard NN
classifier. We analyze the feature maps learned by the CNN and identify known
and previously unattested linguistic features. We benchmark the CNN on a
standard WSD task, where it compares favorably to models using
sense-disambiguated target vectors.
| 2016 | Computation and Language |
DNN-based Speech Synthesis for Indian Languages from ASCII text | Text-to-Speech synthesis in Indian languages has seen a lot of progress over
the past decade, partly due to the annual Blizzard challenges. These systems assume
the text to be written in Devanagari or Dravidian scripts which are nearly
phonemic orthography scripts. However, the most common form of computer
interaction among Indians is ASCII written transliterated text. Such text is
generally noisy with many variations in spelling for the same word. In this
paper we evaluate three approaches to synthesize speech from such noisy ASCII
text: a naive Uni-Grapheme approach, a Multi-Grapheme approach, and a
supervised Grapheme-to-Phoneme (G2P) approach. These methods first convert the
ASCII text to a phonetic script, and then learn a Deep Neural Network to
synthesize speech from that. We train and test our models on Blizzard Challenge
datasets that were transliterated to ASCII using crowdsourcing. Our experiments
on Hindi, Tamil and Telugu demonstrate that our models generate speech of
competitive quality from ASCII text compared to the speech synthesized from the
native scripts. All the accompanying transliterated datasets are released for
public access.
| 2016 | Computation and Language |
A Strong Baseline for Learning Cross-Lingual Word Embeddings from
Sentence Alignments | While cross-lingual word embeddings have been studied extensively in recent
years, the qualitative differences between the different algorithms remain
vague. We observe that whether or not an algorithm uses a particular feature
set (sentence IDs) accounts for a significant performance gap among these
algorithms. This feature set is also used by traditional alignment algorithms,
such as IBM Model-1, which demonstrate similar performance to state-of-the-art
embedding algorithms on a variety of benchmarks. Overall, we observe that
different algorithmic approaches for utilizing the sentence ID feature space
result in similar performance. This paper draws both empirical and theoretical
parallels between the embedding and alignment literature, and suggests that
adding additional sources of information, which go beyond the traditional
signal of bilingual sentence-aligned corpora, may substantially improve
cross-lingual word embeddings, and that future baselines should at least take
such features into account.
| 2017 | Computation and Language |
Who did What: A Large-Scale Person-Centered Cloze Dataset | We have constructed a new "Who-did-What" dataset of over 200,000
fill-in-the-gap (cloze) multiple choice reading comprehension problems
constructed from the LDC English Gigaword newswire corpus. The WDW dataset has
a variety of novel features. First, in contrast with the CNN and Daily Mail
datasets (Hermann et al., 2015) we avoid using article summaries for question
formation. Instead, each problem is formed from two independent articles --- an
article given as the passage to be read and a separate article on the same
events used to form the question. Second, we avoid anonymization --- each
choice is a person named entity. Third, the problems have been filtered to
remove a fraction that are easily solved by simple baselines, while remaining
84% solvable by humans. We report performance benchmarks of standard systems
and propose the WDW dataset as a challenge task for the community.
| 2016 | Computation and Language |
Automatic Selection of Context Configurations for Improved
Class-Specific Word Representations | This paper is concerned with identifying contexts useful for training word
representation models for different word classes such as adjectives (A), verbs
(V), and nouns (N). We introduce a simple yet effective framework for an
automatic selection of class-specific context configurations. We construct a
context configuration space based on universal dependency relations between
words, and efficiently search this space with an adapted beam search algorithm.
In word similarity tasks for each word class, we show that our framework is
both effective and efficient. Particularly, it improves the Spearman's rho
correlation with human scores on SimLex-999 over the best previously proposed
class-specific contexts by 6 (A), 6 (V) and 5 (N) rho points. With our selected
context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of
all dependency-based contexts, resulting in a reduced training time. Our
results generalise: we show that the configurations our algorithm learns for
one English training setup outperform previously proposed context types in
another training setup for English. Moreover, since the configuration space is based on
universal dependencies, it is possible to transfer the learned configurations
to German and Italian. We also demonstrate improved per-class results over
other context types in these two languages.
| 2017 | Computation and Language |
Learning to Start for Sequence to Sequence Architecture | The sequence to sequence architecture is widely used in the response
generation and neural machine translation to model the potential relationship
between two sentences. It typically consists of two parts: an encoder that
reads from the source sentence and a decoder that generates the target sentence
word by word according to the encoder's output and the last generated word.
However, it faces the cold start problem when generating the first word, as
there is no previous word to refer to. Existing work mainly uses a special
start symbol </s> to generate the first word. An obvious drawback of this
practice is that there is no learnable relationship between words and the
start symbol. Furthermore, it may lead to error accumulation during decoding
when the first word is incorrectly generated. In this paper, we propose a
novel approach that learns to generate the first word in the sequence to
sequence architecture rather than using the start symbol. Experimental
results on the task of response generation for short-text conversation show
that the proposed approach outperforms the state-of-the-art approach in both
automatic and manual evaluations.
| 2016 | Computation and Language |
Modeling Human Reading with Neural Attention | When humans read text, they fixate some words and skip others. However, there
have been few attempts to explain skipping behavior with computational models,
as most existing work has focused on predicting reading times (e.g., using
surprisal). In this paper, we propose a novel approach that models both
skipping and reading, using an unsupervised architecture that combines neural
attention with autoencoding, trained on raw text using reinforcement learning.
Our model explains human reading behavior as a tradeoff between precision of
language understanding (encoding the input accurately) and economy of attention
(fixating as few words as possible). We evaluate the model on the Dundee
eye-tracking corpus, showing that it accurately predicts skipping behavior and
reading times, is competitive with surprisal, and captures known qualitative
features of human reading.
| 2017 | Computation and Language |
Using Distributed Representations to Disambiguate Biomedical and
Clinical Concepts | In this paper, we report a knowledge-based method for Word Sense
Disambiguation in the domains of biomedical and clinical text. We combine word
representations created on large corpora with a small number of definitions
from the UMLS to create concept representations, which we then compare to
representations of the context of ambiguous terms. Using no relational
information, we obtain comparable performance to previous approaches on the
MSH-WSD dataset, which is a well-known dataset in the biomedical domain.
Additionally, our method is fast and easy to set up and extend to other
domains. Supplementary materials, including source code, can be found at
https://github.com/clips/yarn
| 2016 | Computation and Language |
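A minimal sketch of the comparison step described above: the context of an ambiguous term is compared, by cosine similarity, to candidate concept representations, and the closest concept wins. The 3-dimensional vectors stand in for averaged word representations and are purely illustrative.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def disambiguate(context_vec, concept_vecs):
    """Pick the candidate concept whose representation is closest (by cosine)
    to the representation of the ambiguous term's context."""
    return max(concept_vecs, key=lambda c: cosine(context_vec, concept_vecs[c]))

# Toy 3-d vectors standing in for averaged word representations.
concepts = {"cold_temperature": [0.9, 0.1, 0.0], "common_cold": [0.1, 0.8, 0.3]}
context = [0.2, 0.7, 0.4]  # e.g. "patient presented with a cold and a cough"
print(disambiguate(context, concepts))  # -> common_cold
```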
Topic Sensitive Neural Headline Generation | Neural models have recently been used in text summarization including
headline generation. The model can be trained using a set of document-headline
pairs. However, the model does not explicitly consider topical similarities and
differences of documents. We suggest categorizing documents into various
topics so that documents within the same topic are similar in content and share
similar summarization patterns. Taking advantage of the topic information of
documents, we propose a topic-sensitive neural headline generation model. Our
model can generate more accurate summaries guided by document topics. We test
our model on the LCSTS dataset, and experiments show that our method outperforms
other baselines on each topic and achieves state-of-the-art performance.
| 2016 | Computation and Language |
phi-LSTM: A Phrase-based Hierarchical LSTM Model for Image Captioning | A picture is worth a thousand words. Only recently, however, have we seen
some success stories in the understanding of visual scenes: models that are
able to detect/name objects, describe their attributes, and recognize their
relationships/interactions. In this paper, we propose a phrase-based
hierarchical Long Short-Term Memory (phi-LSTM) model to generate image
descriptions. The proposed model encodes a sentence as a sequence of phrases
and words, instead of a sequence of words alone as in conventional solutions.
The two levels of this model are dedicated to i) learning to generate
image-relevant noun phrases, and ii) producing an appropriate image
description from the phrases and other words in the corpus. Adopting a
convolutional neural network to learn image features and the LSTM to learn the
word sequence in a sentence, the proposed model has shown better or competitive
results in comparison to the state-of-the-art models on Flickr8k and Flickr30k
datasets.
| 2017 | Computation and Language |
Learning Word Embeddings from Intrinsic and Extrinsic Views | While word embeddings are currently predominant for natural language
processing, most of existing models learn them solely from their contexts.
However, these context-based word embeddings are limited, since not every word's
meaning can be learned from context alone. Moreover, it is also difficult to
learn representations of rare words due to the data sparsity problem. In
this work, we address these issues by learning the representations of words by
integrating their intrinsic (descriptive) and extrinsic (contextual)
information. To prove the effectiveness of our model, we evaluate it on four
tasks, including word similarity, reverse dictionaries, Wiki link prediction,
and document classification. Experimental results show that our model is powerful
in both word and document modeling.
| 2016 | Computation and Language |
Using the Output Embedding to Improve Language Models | We study the topmost weight matrix of neural network language models. We show
that this matrix constitutes a valid word embedding. When training language
models, we recommend tying the input embedding and this output embedding. We
analyze the resulting update rules and show that the tied embedding evolves in
a more similar way to the output embedding than to the input embedding in the
untied model. We also offer a new method of regularizing the output embedding.
Our methods lead to a significant reduction in perplexity, as we are able to
show on a variety of neural network language models. Finally, we show that
weight tying can reduce the size of neural translation models to less than half
of their original size without harming their performance.
| 2017 | Computation and Language |
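A toy numpy illustration of the weight-tying idea recommended above: one matrix serves as the input embedding and, transposed, as the output projection. The stand-in hidden state and tiny dimensions are assumptions; the paper's LSTM language models and its output-embedding regularizer are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 10, 4

# One shared matrix: rows are input embeddings, and its transpose acts as the
# output projection; this sharing is the "tying" the entry recommends.
E = rng.normal(size=(vocab_size, emb_dim))

def embed(token_id):
    return E[token_id]            # input embedding lookup

def output_logits(hidden_state):
    return hidden_state @ E.T     # tied output embedding

h = np.tanh(embed(3))             # stand-in for an RNN hidden state
probs = np.exp(output_logits(h))
probs /= probs.sum()
print(probs.round(3))
```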
Context Gates for Neural Machine Translation | In neural machine translation (NMT), generation of a target word depends on
both source and target contexts. We find that source contexts have a direct
impact on the adequacy of a translation while target contexts affect the
fluency. Intuitively, generation of a content word should rely more on the
source context and generation of a functional word should rely more on the
target context. Due to the lack of effective control over the influence from
source and target contexts, conventional NMT tends to yield fluent but
inadequate translations. To address this problem, we propose context gates
which dynamically control the ratios at which source and target contexts
contribute to the generation of target words. In this way, we can enhance both
the adequacy and fluency of NMT with more careful control of the information
flow from contexts. Experiments show that our approach significantly improves
upon a standard attention-based NMT system by +2.3 BLEU points.
| 2017 | Computation and Language |
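A minimal numpy sketch of the context-gate idea: an element-wise sigmoid gate that interpolates between a source context vector and a target context vector when generating the next word. The parameterisation and the place where the gate is applied differ in the paper; this only illustrates the mechanism.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_context(source_ctx, target_ctx, W_s, W_t, b):
    """Element-wise gate deciding how much source vs. target context feeds
    the generation of the next target word (illustrative parameterisation)."""
    z = sigmoid(W_s @ source_ctx + W_t @ target_ctx + b)
    return z * source_ctx + (1.0 - z) * target_ctx

dim = 4
rng = np.random.default_rng(1)
W_s, W_t = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
b = np.zeros(dim)
src = rng.normal(size=dim)   # attention-derived source context
tgt = rng.normal(size=dim)   # decoder state / previous target word context
print(gated_context(src, tgt, W_s, W_t, b).round(3))
```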
An Incremental Parser for Abstract Meaning Representation | Abstract Meaning Representation (AMR) is a semantic representation for natural
language that embeds annotations related to traditional tasks such as named
entity recognition, semantic role labeling, word sense disambiguation and
co-reference resolution. We describe a transition-based parser for AMR that
parses sentences left-to-right, in linear time. We further propose a test-suite
that assesses specific subtasks that are helpful in comparing AMR parsers, and
show that our parser is competitive with the state of the art on the LDC2015E86
dataset and that it outperforms state-of-the-art parsers for recovering named
entities and handling polarity.
| 2017 | Computation and Language |
Median-Based Generation of Synthetic Speech Durations using a
Non-Parametric Approach | This paper proposes a new approach to duration modelling for statistical
parametric speech synthesis in which a recurrent statistical model is trained
to output a phone transition probability at each timestep (acoustic frame).
Unlike conventional approaches to duration modelling -- which assume that
duration distributions have a particular form (e.g., a Gaussian) and use the
mean of that distribution for synthesis -- our approach can in principle model
any distribution supported on the non-negative integers. Generation from this
model can be performed in many ways; here we consider output generation based
on the median predicted duration. The median is more typical (more probable)
than the conventional mean duration, is robust to training-data irregularities,
and enables incremental generation. Furthermore, a frame-level approach to
duration prediction is consistent with a longer-term goal of modelling
durations and acoustic features together. Results indicate that the proposed
method is competitive with baseline approaches in approximating the median
duration of held-out natural speech.
| 2020 | Computation and Language |
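The sketch below illustrates how a median duration can be read off per-frame phone-transition probabilities of the kind the model above outputs: accumulate the probability that the phone has ended and stop at the first frame where it reaches 0.5. The toy probabilities and the handling of the sequence end are simplifying assumptions.

```python
def median_duration(transition_probs):
    """Given per-frame phone-transition probabilities p_t (the chance the
    phone ends at frame t, conditional on having survived so far), return the
    median duration: the first frame at which the cumulative probability of
    having ended reaches 0.5."""
    survive, cumulative = 1.0, 0.0
    for t, p in enumerate(transition_probs, start=1):
        cumulative += survive * p   # P(duration == t)
        survive *= 1.0 - p
        if cumulative >= 0.5:
            return t
    return len(transition_probs)    # ran past the modelled horizon

# Toy per-frame transition probabilities for one phone.
print(median_duration([0.05, 0.1, 0.2, 0.4, 0.6, 0.8]))  # -> 4
```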
Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening
Comprehension Test by Machine | Multimedia or spoken content presents more attractive information than plain
text content, but it's more difficult to display on a screen and be selected by
a user. As a result, accessing large collections of the former is much more
difficult and time-consuming than the latter for humans. It's highly attractive
to develop a machine which can automatically understand spoken content and
summarize the key information for humans to browse over. In this endeavor, we
propose a new task of machine comprehension of spoken content. We define the
initial goal as the listening comprehension test of TOEFL, a challenging
academic English examination for English learners whose native language is not
English. We further propose an Attention-based Multi-hop Recurrent Neural
Network (AMRNN) architecture for this task, achieving encouraging results in
the initial tests. Initial results also have shown that word-level attention is
probably more robust than sentence-level attention for this task with ASR
errors.
| 2016 | Computation and Language |
Which techniques does your application use?: An information extraction
framework for scientific articles | Every field of research consists of multiple application areas with various
techniques routinely used to solve problems in this wide range of application
areas. With the exponential growth in research volumes, it has become difficult
to keep track of the ever-growing number of application areas as well as the
corresponding problem solving techniques. In this paper, we consider the
computational linguistics domain and present a novel information extraction
system that automatically constructs a pool of all application areas in this
domain and appropriately links them with corresponding problem solving
techniques. Further, we categorize individual research articles based on their
application area and the techniques proposed/used in the article. A k-gram based
discounting method, along with handwritten rules and bootstrapped pattern
learning, is employed to extract application areas. Subsequently, a language
modeling approach is proposed to characterize each article based on its
application area. Similarly, regular expressions and high-scoring noun phrases
are used for the extraction of the problem solving techniques. We propose a
greedy approach to characterize each article based on the techniques. Towards
the end, we present a table representing the most frequent techniques adopted
for a particular application area. Finally, we propose three use cases
presenting an extensive temporal analysis of the usage of techniques and
application areas.
| 2016 | Computation and Language |
Tracking Amendments to Legislation and Other Political Texts with a
Novel Minimum-Edit-Distance Algorithm: DocuToads | Political scientists often find themselves tracking amendments to political
texts. As different actors weigh in, texts change as they are drafted and
redrafted, reflecting political preferences and power. This study provides a
novel solution to the problem of detecting amendments to political text based
upon minimum edit distances. We demonstrate the usefulness of two
language-insensitive, transparent, and efficient minimum-edit-distance
algorithms suited for the task. These algorithms are capable of providing an
account of the types (insertions, deletions, substitutions, and
transpositions) and the substantive amount of amendments made between versions
of texts. To illustrate the usefulness and efficiency of the approach we
replicate two existing studies from the field of legislative studies. Our results
demonstrate that minimum edit distance methods can produce superior measures of
text amendments to hand-coded efforts in a fraction of the time and resource
costs.
| 2016 | Computation and Language |
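A minimal sketch of the kind of minimum-edit-distance accounting described above: a token-level Levenshtein alignment that also counts insertions, deletions, and substitutions. Transpositions, which DocuToads additionally tracks, are omitted for brevity, and the example sentences are invented.

```python
def edit_operations(old_tokens, new_tokens):
    """Token-level minimum-edit-distance alignment between two versions of a
    text, returning the distance plus counts of insertions, deletions and
    substitutions recovered by backtracing the DP table."""
    n, m = len(old_tokens), len(new_tokens)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if old_tokens[i - 1] == new_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    counts = {"insert": 0, "delete": 0, "substitute": 0}
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and old_tokens[i - 1] == new_tokens[j - 1]
                and dp[i][j] == dp[i - 1][j - 1]):
            i, j = i - 1, j - 1                       # unchanged token
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            counts["substitute"] += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            counts["delete"] += 1
            i -= 1
        else:
            counts["insert"] += 1
            j -= 1
    return dp[n][m], counts

old = "the commission shall review the directive".split()
new = "the council shall review and amend the directive".split()
print(edit_operations(old, new))  # distance 3: 1 substitution, 2 insertions
```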
Semantic descriptions of 24 evaluational adjectives, for application in
sentiment analysis | We apply the Natural Semantic Metalanguage (NSM) approach (Goddard and
Wierzbicka 2014) to the lexical-semantic analysis of English evaluational
adjectives and compare the results with the picture developed in the Appraisal
Framework (Martin and White 2005). The analysis is corpus-assisted, with
examples mainly drawn from film and book reviews, and supported by
collocational and statistical information from WordBanks Online. We propose NSM
explications for 24 evaluational adjectives, arguing that they fall into five
groups, each of which corresponds to a distinct semantic template. The groups
can be sketched as follows: "First-person thought-plus-affect", e.g. wonderful;
"Experiential", e.g. entertaining; "Experiential with bodily reaction", e.g.
gripping; "Lasting impact", e.g. memorable; "Cognitive evaluation", e.g.
complex, excellent. These groupings and semantic templates are compared with
the classifications in the Appraisal Framework's system of Appreciation. In
addition, we are particularly interested in sentiment analysis, the automatic
identification of evaluation and subjectivity in text. We discuss the relevance
of the two frameworks for sentiment analysis and other language technology
applications.
| 2016 | Computation and Language |
A Large-Scale Multilingual Disambiguation of Glosses | Linking concepts and named entities to knowledge bases has become a crucial
Natural Language Understanding task. In this respect, recent works have shown
the key advantage of exploiting textual definitions in various Natural Language
Processing applications. However, to date there are no reliable large-scale
corpora of sense-annotated textual definitions available to the research
community. In this paper we present a large-scale high-quality corpus of
disambiguated glosses in multiple languages, comprising sense annotations of
both concepts and named entities from a unified sense inventory. Our approach
for the construction and disambiguation of the corpus builds upon the structure
of a large multilingual semantic network and a state-of-the-art disambiguation
system; first, we gather complementary information of equivalent definitions
across different languages to provide context for disambiguation, and then we
combine it with a semantic similarity-based refinement. As a result we obtain a
multilingual corpus of textual definitions featuring over 38 million
definitions in 263 languages, and we make it freely available at
http://lcl.uniroma1.it/disambiguated-glosses. Experiments on Open Information
Extraction and Sense Clustering show how two state-of-the-art approaches
improve their performance by integrating our disambiguated corpus into their
pipeline.
| 2016 | Computation and Language |
Robust Named Entity Recognition in Idiosyncratic Domains | Named entity recognition often fails in idiosyncratic domains. This causes
problems for dependent tasks, such as entity linking and relation extraction. We
propose a generic and robust approach for high-recall named entity recognition.
Our approach is easy to train and offers strong generalization over diverse
domain-specific language, such as news documents (e.g. Reuters) or biomedical
text (e.g. Medline). Our approach is based on deep contextual sequence learning
and utilizes stacked bidirectional LSTM networks. Our model is trained with
only a few hundred labeled sentences and does not rely on further external
knowledge. We report F1 scores in the range of 84-94% on
standard datasets.
| 2016 | Computation and Language |
Improving Sparse Word Representations with Distributional Inference for
Semantic Composition | Distributional models are derived from co-occurrences in a corpus, where only
a small proportion of all possible plausible co-occurrences will be observed.
This results in a very sparse vector space, requiring a mechanism for inferring
missing knowledge. Most methods face this challenge in ways that render the
resulting word representations uninterpretable, with the consequence that
semantic composition becomes hard to model. In this paper we explore an
alternative which involves explicitly inferring unobserved co-occurrences using
the distributional neighbourhood. We show that distributional inference
improves sparse word representations on several word similarity benchmarks and
demonstrate that our model is competitive with the state-of-the-art for
adjective-noun, noun-noun and verb-object compositions while being fully
interpretable.
| 2016 | Computation and Language |
A Context-aware Natural Language Generator for Dialogue Systems | We present a novel natural language generation system for spoken dialogue
systems capable of entraining (adapting) to users' way of speaking, providing
contextually appropriate responses. The generator is based on recurrent neural
networks and the sequence-to-sequence approach. It is fully trainable from data
which include preceding context along with responses to be generated. We show
that the context-aware generator yields significant improvements over the
baseline in both automatic metrics and a human pairwise preference test.
| 2016 | Computation and Language |
Aligning Packed Dependency Trees: a theory of composition for
distributional semantics | We present a new framework for compositional distributional semantics in
which the distributional contexts of lexemes are expressed in terms of anchored
packed dependency trees. We show that these structures have the potential to
capture the full sentential contexts of a lexeme and provide a uniform basis
for the composition of distributional knowledge in a way that captures both
mutual disambiguation and generalization.
| 2016 | Computation and Language |
A Bi-LSTM-RNN Model for Relation Classification Using Low-Cost Sequence
Features | Relation classification is associated with many potential applications in the
artificial intelligence area. Recent approaches usually leverage neural
networks based on structure features such as syntactic or dependency features
to solve this problem. However, high-cost structure features make such
approaches inconvenient to be directly used. In addition, structure features
are probably domain-dependent. Therefore, this paper proposes a bi-directional
long-short-term-memory recurrent-neural-network (Bi-LSTM-RNN) model based on
low-cost sequence features to address relation classification. This model
divides a sentence or text segment into five parts, namely two target entities
and their three contexts. It learns the representations of entities and their
contexts, and uses them to classify relations. We evaluate our model on two
standard benchmark datasets in different domains, namely SemEval-2010 Task 8
and BioNLP-ST 2016 Task BB3. In the former dataset, our model achieves
comparable performance compared with other models using sequence features. In
the latter dataset, our model obtains the third best results compared with
other models in the official evaluation. Moreover, we find that the context
between two target entities plays the most important role in relation
classification. Furthermore, statistical experiments show that the context
between two target entities can be used as an approximate replacement of the
shortest dependency path when dependency parsing is not used.
| 2016 | Computation and Language |
Testing APSyn against Vector Cosine on Similarity Estimation | In Distributional Semantic Models (DSMs), Vector Cosine is widely used to
estimate similarity between word vectors, although this measure has been noted to
suffer from several shortcomings. The recent literature has proposed other
methods which attempt to mitigate such biases. In this paper, we intend to
investigate APSyn, a measure that computes the extent of the intersection
between the most associated contexts of two target words, weighting it by
context relevance. We evaluated this metric in a similarity estimation task on
several popular test sets, and our results show that APSyn is in fact highly
competitive, even with respect to the results reported in the literature for
word embeddings. On top of it, APSyn addresses some of the weaknesses of Vector
Cosine, performing well also on genuine similarity estimation.
| 2016 | Computation and Language |
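A toy sketch of the intersection-over-top-contexts idea behind APSyn: contexts shared by the two words' most associated contexts contribute more when they are highly ranked for both. The exact relevance weighting in the published measure may differ, so treat the rank averaging here as an assumption; the context lists are made up.

```python
def apsyn(contexts_w1, contexts_w2, k=4):
    """APSyn-style score: take the k most associated contexts of each word
    (lists already sorted by association strength) and sum, over the shared
    contexts, the inverse of their average rank."""
    top1, top2 = contexts_w1[:k], contexts_w2[:k]
    score = 0.0
    for c in set(top1) & set(top2):
        avg_rank = (top1.index(c) + 1 + top2.index(c) + 1) / 2.0
        score += 1.0 / avg_rank
    return score

# Toy ranked context lists (most associated first); entries are made up.
cat_contexts = ["purr", "fur", "pet", "tail"]
dog_contexts = ["bark", "pet", "fur", "tail"]
car_contexts = ["engine", "drive", "wheel", "road"]
print(apsyn(cat_contexts, dog_contexts))  # shared contexts -> non-zero score
print(apsyn(cat_contexts, car_contexts))  # no overlap -> 0.0
```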
Hierarchical Attention Model for Improved Machine Comprehension of
Spoken Content | Multimedia or spoken content presents more attractive information than plain
text content, but the former is more difficult to display on a screen and be
selected by a user. As a result, accessing large collections of the former is
much more difficult and time-consuming than the latter for humans. It's
therefore highly attractive to develop machines which can automatically
understand spoken content and summarize the key information for humans to
browse over. In this endeavor, a new task of machine comprehension of spoken
content was proposed recently. The initial goal was defined as the listening
comprehension test of TOEFL, a challenging academic English examination for
English learners whose native languages are not English. An Attention-based
Multi-hop Recurrent Neural Network (AMRNN) architecture was also proposed for
this task, which considered only the sequential relationship within the speech
utterances. In this paper, we propose a new Hierarchical Attention Model (HAM),
which constructs a multi-hop attention mechanism over tree-structured rather
than sequential representations for the utterances. Improved comprehension
performance, robust with respect to ASR errors, was obtained.
| 2017 | Computation and Language |
What to do about non-standard (or non-canonical) language in NLP | Real world data differs radically from the benchmark corpora we use in
natural language processing (NLP). As soon as we apply our technologies to the
real world, performance drops. The reason for this problem is obvious: NLP
models are trained on samples from a limited set of canonical varieties that
are considered standard, most prominently English newswire. However, there are
many dimensions, e.g., socio-demographics, language, genre, sentence type, etc.
on which texts can differ from the standard. The solution is not obvious: we
cannot control for all factors, and it is not clear how to best go beyond the
current practice of training on homogeneous data from a single domain and
language.
In this paper, I review the notion of canonicity, and how it shapes our
community's approach to language. I argue for leveraging what I call fortuitous
data, i.e., non-obvious data that is hitherto neglected, hidden in plain sight,
or raw data that needs to be refined. If we embrace the variety of this
heterogeneous data by combining it with proper algorithms, we will not only
produce more robust models, but will also enable adaptive language technology
capable of addressing natural language variation.
| 2016 | Computation and Language |
Quantitative Analyses of Chinese Poetry of Tang and Song Dynasties:
Using Changing Colors and Innovative Terms as Examples | Tang (618-907 AD) and Song (960-1279) dynasties are two very important
periods in the development of Chinese literature. The most influential forms of
poetry in Tang and Song were Shi and Ci, respectively. Tang Shi and Song Ci
established crucial foundations of Chinese literature, and their influence
on both literary works and the daily lives of Chinese communities lasts until
today.
We can analyze and compare the Complete Tang Shi and the Complete Song Ci
from various viewpoints. In this presentation, we report our findings about the
differences in their vocabularies. Interesting new words that started to appear
in Song Ci and continue to be used in modern Chinese were identified. Colors
are an important ingredient of the imagery in poetry, and we discuss the most
frequent color words that appeared in Tang Shi and Song Ci.
| 2016 | Computation and Language |
Machine Comprehension Using Match-LSTM and Answer Pointer | Machine comprehension of text is an important problem in natural language
processing. A recently released dataset, the Stanford Question Answering
Dataset (SQuAD), offers a large number of real questions and their answers
created by humans through crowdsourcing. SQuAD provides a challenging testbed
for evaluating machine comprehension algorithms, partly because compared with
previous datasets, in SQuAD the answers do not come from a small set of
candidate answers and they have variable lengths. We propose an end-to-end
neural architecture for the task. The architecture is based on match-LSTM, a
model we proposed previously for textual entailment, and Pointer Net, a
sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the
output tokens to be from the input sequences. We propose two ways of using
Pointer Net for our task. Our experiments show that both of our two models
substantially outperform the best results obtained by Rajpurkar et al. (2016)
using logistic regression and manually crafted features.
| 2016 | Computation and Language |
American Sign Language fingerspelling recognition from video: Methods
for unrestricted recognition and signer-independence | In this thesis, we study the problem of recognizing video sequences of
fingerspelled letters in American Sign Language (ASL). Fingerspelling comprises
a significant but relatively understudied part of ASL, and recognizing it is
challenging for a number of reasons: It involves quick, small motions that are
often highly coarticulated; it exhibits significant variation between signers;
and there has been a dearth of continuous fingerspelling data collected. In
this work, we propose several types of recognition approaches, and explore the
signer variation problem. Our best-performing models are segmental
(semi-Markov) conditional random fields using deep neural network-based
features. In the signer-dependent setting, our recognizers achieve up to about
8% letter error rates. The signer-independent setting is much more challenging,
but with neural network adaptation we achieve up to 17% letter error rates.
| 2016 | Computation and Language |
Language Detection For Short Text Messages In Social Media | With the constant growth of the World Wide Web and the number of documents in
different languages accordingly, the need for reliable language detection tools
has increased as well. Platforms such as Twitter with predominantly short texts
are becoming important information resources, which additionally imposes the
need for short texts language detection algorithms. In this paper, we show how
incorporating personalized user-specific information into the language
detection algorithm leads to an important improvement of detection results. To
choose the best algorithm for language detection for short text messages, we
investigate several machine learning approaches. These approaches include the
use of the well-known classifiers such as SVM and logistic regression, a
dictionary based approach, and a probabilistic model based on modified
Kneser-Ney smoothing. Furthermore, the extension of the probabilistic model to
include additional user-specific information such as evidence accumulation per
user and user interface language is explored, with the goal of improving the
classification performance. The proposed approaches are evaluated on randomly
collected Twitter data containing Latin as well as non-Latin alphabet languages
and the quality of the obtained results is compared, followed by the selection
of the best performing algorithm. This algorithm is then evaluated against two
already existing general language detection tools: Chromium Compact Language
Detector 2 (CLD2) and langid, where our method significantly outperforms the
results achieved by both of the mentioned methods. Additionally, a preview of
benefits and possible applications of having a reliable language detection
algorithm is given.
| 2016 | Computation and Language |
A Dictionary-based Approach to Racism Detection in Dutch Social Media | We present a dictionary-based approach to racism detection in Dutch social
media comments, which were retrieved from two public Belgian social media sites
likely to attract racist reactions. These comments were labeled as racist or
non-racist by multiple annotators. For our approach, three discourse
dictionaries were created: first, we created a dictionary by retrieving
possibly racist and more neutral terms from the training data, and then
augmenting these with more general words to remove some bias. A second
dictionary was created through automatic expansion using a word2vec
model trained on a large corpus of general Dutch text. Finally, a third
dictionary was created by manually filtering out incorrect expansions. We
trained multiple Support Vector Machines, using the distribution of words over
the different categories in the dictionaries as features. The best-performing
model used the manually cleaned dictionary and obtained an F-score of 0.46 for
the racist class on a test set consisting of unseen Dutch comments, retrieved
from the same sites used for the training set. The automated expansion of the
dictionary only slightly boosted the model's performance, and this increase in
performance was not statistically significant. The fact that the coverage of
the expanded dictionaries did increase indicates that the words that were
automatically added did occur in the corpus, but were not able to meaningfully
impact performance. The dictionaries, code, and the procedure for requesting
the corpus are available at: https://github.com/clips/hades
| 2016 | Computation and Language |
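To make the feature design above concrete, the sketch below computes the distribution of a comment's tokens over a few discourse-dictionary categories, the kind of feature vector that could feed the SVMs mentioned in the entry. The dictionary contents and category names are placeholders, not the actual resource.

```python
# Hypothetical discourse dictionaries: category -> cue words (placeholders).
dictionaries = {
    "pejorative": {"slur_a", "slur_b"},
    "migration": {"foreigners", "immigrants", "refugees"},
    "neutral": {"people", "city", "work", "school"},
}

def category_distribution(comment, dicts):
    """Relative frequency of each dictionary category among a comment's tokens,
    i.e. the kind of feature vector that could feed an SVM classifier."""
    tokens = comment.lower().split()
    total = max(len(tokens), 1)
    return {cat: sum(tok in words for tok in tokens) / total
            for cat, words in dicts.items()}

print(category_distribution("immigrants should find work in the city", dictionaries))
```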
Demographic Dialectal Variation in Social Media: A Case Study of
African-American English | Though dialectal language is increasingly abundant on social media, few
resources exist for developing NLP tools to handle such language. We conduct a
case study of dialectal language in online conversational text by investigating
African-American English (AAE) on Twitter. We propose a distantly supervised
model to identify AAE-like language from demographics associated with
geo-located messages, and we verify that this language follows well-known AAE
linguistic phenomena. In addition, we analyze the quality of existing language
identification and dependency parsing tools on AAE-like text, demonstrating
that they perform poorly on such text compared to text associated with white
speakers. We also provide an ensemble classifier for language identification
which eliminates this disparity and release a new corpus of tweets containing
AAE-like language.
| 2016 | Computation and Language |
The Generalized Smallest Grammar Problem | The Smallest Grammar Problem -- the problem of finding the smallest
context-free grammar that generates exactly one given sequence -- has never
been successfully applied to grammatical inference. We investigate the reasons
and propose an extended formulation that seeks to minimize non-recursive
grammars, instead of straight-line programs. In addition, we provide very
efficient algorithms that approximate the minimization problem of this class of
grammars. Our empirical evaluation shows that we are able to find smaller
models than the current best approximations to the Smallest Grammar Problem on
standard benchmarks, and that the inferred rules capture much better the
syntactic structure of natural language.
| 2016 | Computation and Language |
Hash2Vec, Feature Hashing for Word Embeddings | In this paper we propose the application of feature hashing to create word
embeddings for natural language processing. Feature hashing has been used
successfully to create document vectors in related tasks like document
classification. In this work we show that feature hashing can be applied to
obtain word embeddings in time linear in the size of the data. The results
show that this algorithm, which does not need training, is able to capture the
semantic meaning of words. We compare the results against GloVe showing that
they are similar. As far as we know this is the first application of feature
hashing to the word embeddings problem and the results indicate this is a
scalable technique with practical results for NLP applications.
| 2016 | Computation and Language |
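A toy sketch of the feature-hashing idea applied to word embeddings: each context word is hashed to a bucket and a sign, and a word's vector is the signed count of its hashed contexts, with no training. The hashing scheme, tiny dimensionality, and example contexts are assumptions rather than the authors' exact construction.

```python
import hashlib

DIM = 8  # dimensionality of the hashed embedding space (toy size)

def bucket_and_sign(feature):
    """Deterministically map a context feature to a bucket index and a +/-1 sign."""
    h = int(hashlib.md5(feature.encode("utf-8")).hexdigest(), 16)
    return h % DIM, 1 if (h >> 4) % 2 == 0 else -1

def hash_embed(context_words):
    """Build a word's embedding by hashing its observed context words into
    DIM signed buckets; no training is involved."""
    vec = [0.0] * DIM
    for c in context_words:
        idx, sign = bucket_and_sign(c)
        vec[idx] += sign
    return vec

# Toy contexts observed around the target word "coffee" in some corpus.
print(hash_embed(["cup", "drink", "morning", "hot", "cup"]))
```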
How Much is 131 Million Dollars? Putting Numbers in Perspective with
Compositional Descriptions | How much is 131 million US dollars? To help readers put such numbers in
context, we propose a new task of automatically generating short descriptions
known as perspectives, e.g. "$131 million is about the cost to employ everyone
in Texas over a lunch period". First, we collect a dataset of numeric mentions
in news articles, where each mention is labeled with a set of rated
perspectives. We then propose a system to generate these descriptions
consisting of two steps: formula construction and description generation. In
construction, we compose formulae from numeric facts in a knowledge base and
rank the resulting formulas based on familiarity, numeric proximity and
semantic compatibility. In generation, we convert a formula into natural
language using a sequence-to-sequence recurrent neural network. Our system
obtains a 15.2% F1 improvement over a non-compositional baseline at formula
construction and a 12.5 BLEU point improvement over a baseline description
generation.
| 2016 | Computation and Language |
All Fingers are not Equal: Intensity of References in Scientific
Articles | Research accomplishment is usually measured by considering all citations with
equal importance, thus ignoring the wide variety of purposes an article is
being cited for. Here, we posit that measuring the intensity of a reference is
crucial not only for a better understanding of research endeavors, but
also for improving the quality of citation-based applications. To this end, we
collect a rich annotated dataset with references labeled by their intensity, and
propose a novel graph-based semi-supervised model, GraLap, to label the
intensity of references. Experiments with AAN datasets show a significant
improvement over the baselines in recovering the true labels of the
references (46% better correlation). Finally, we provide four applications to
demonstrate how knowledge of reference intensity leads to the design of better
real-world applications.
| 2016 | Computation and Language |
Identifying Dogmatism in Social Media: Signals and Models | We explore linguistic and behavioral features of dogmatism in social media
and construct statistical models that can identify dogmatic comments. Our model
is based on a corpus of Reddit posts, collected across a diverse set of
conversational topics and annotated via paid crowdsourcing. We operationalize
key aspects of dogmatism described by existing psychology theories (such as
over-confidence), finding they have predictive power. We also find evidence for
new signals of dogmatism, such as the tendency of dogmatic posts to refrain
from signaling cognitive processes. When we use our predictive model to analyze
millions of other Reddit posts, we find evidence that suggests dogmatism is a
deeper personality trait, present for dogmatic users across many different
domains, and that users who engage with dogmatic comments tend to show increases
in dogmatic posts themselves.
| 2016 | Computation and Language |
Citation Classification for Behavioral Analysis of a Scientific Field | Citations are an important indicator of the state of a scientific field,
reflecting how authors frame their work, and influencing uptake by future
scholars. However, our understanding of citation behavior has been limited to
small-scale manual citation analysis. We perform the largest behavioral study
of citations to date, analyzing how citations are both framed and taken up by
scholars in one entire field: natural language processing. We introduce a new
dataset of nearly 2,000 citations annotated for function and centrality, and
use it to develop a state-of-the-art classifier and label the entire ACL
Reference Corpus. We then study how citations are framed by authors and use
both papers and online traces to track how citations are followed by readers.
We demonstrate that authors are sensitive to discourse structure and
publication venue when citing, that online readers follow temporal links to
previous and future work rather than methodological links, and that how a paper
cites related work is predictive of its citation count. Finally, we use changes
in citation roles to show that the field of NLP is undergoing a significant
increase in consensus.
| 2016 | Computation and Language |
Improving Correlation with Human Judgments by Integrating Semantic
Similarity with Second-Order Vectors | Vector space methods that measure semantic similarity and relatedness often
rely on distributional information such as co-occurrence frequencies or
statistical measures of association to weight the importance of particular
co-occurrences. In this paper, we extend these methods by incorporating a
measure of semantic similarity based on a human-curated taxonomy into a
second-order vector representation. This results in a measure of semantic
relatedness that combines the contextual information available in a
corpus-based vector space representation with the semantic knowledge found in
a biomedical ontology. Our results show that incorporating semantic similarity
into second-order co-occurrence matrices improves correlation with human
judgments for both similarity and relatedness, and that our method compares
favorably to various different word embedding methods that have recently been
evaluated on the same reference standards we have used.
| 2017 | Computation and Language |
Skipping Word: A Character-Sequential Representation based Framework for
Question Answering | Recent works using artificial neural networks based on distributed word
representations greatly boost the performance of various natural language
learning tasks, especially question answering. However, they also carry
some attendant problems, such as corpus selection for embedding learning,
dictionary transformation for different learning tasks, etc. In this paper, we
propose to straightforwardly model sentences by means of character sequences,
and then utilize convolutional neural networks to integrate character embedding
learning together with point-wise answer selection training. Compared with deep
models pre-trained on word embedding (WE) strategy, our character-sequential
representation (CSR) based method shows a much simpler procedure and more
stable performance across different benchmarks. Extensive experiments on two
benchmark answer selection datasets exhibit the competitive performance
compared with the state-of-the-art methods.
| 2016 | Computation and Language |
SynsetRank: Degree-adjusted Random Walk for Relation Identification | In relation extraction, a key process is to obtain good detectors that find
relevant sentences describing the target relation. To minimize the necessity of
labeled data for refining detectors, previous work successfully made use of
BabelNet, a semantic graph structure expressing relationships between synsets,
as side information or prior knowledge. The goal of this paper is to enhance
the use of graph structure in the framework of random walk with a few
adjustable parameters. Actually, a straightforward application of random walk
degrades the performance even after parameter optimization. With the insight
from this unsuccessful trial, we propose SynsetRank, which adjusts the initial
probability so that high-degree nodes influence their neighbors as strongly as
low-degree nodes. In our experiment on 13 relations in the FB15K-237 dataset,
SynsetRank significantly outperforms baselines and the plain random walk
approach.
| 2016 | Computation and Language |
Convolutional Neural Networks for Text Categorization: Shallow
Word-level vs. Deep Character-level | This paper reports the performances of shallow word-level convolutional
neural networks (CNN), our earlier work (2015), on the eight datasets with
relatively large training data that were used for testing the very deep
character-level CNN in Conneau et al. (2016). Our findings are as follows. The
shallow word-level CNNs achieve better error rates than the error rates
reported in Conneau et al., though the results should be interpreted with some
consideration due to the unique pre-processing of Conneau et al. The shallow
word-level CNN uses more parameters and therefore requires more storage than
the deep character-level CNN; however, the shallow word-level CNN computes much
faster.
| 2,016 | Computation and Language |
Towards End-to-End Reinforcement Learning of Dialogue Agents for
Information Access | This paper proposes KB-InfoBot -- a multi-turn dialogue agent which helps
users search Knowledge Bases (KBs) without composing complicated queries. Such
goal-oriented dialogue agents typically need to interact with an external
database to access real-world knowledge. Previous systems achieved this by
issuing a symbolic query to the KB to retrieve entries based on their
attributes. However, such symbolic operations break the differentiability of
the system and prevent end-to-end training of neural dialogue agents. In this
paper, we address this limitation by replacing symbolic queries with an induced
"soft" posterior distribution over the KB that indicates which entities the
user is interested in. Integrating the soft retrieval process with a
reinforcement learner leads to higher task success rate and reward in both
simulations and against real users. We also present a fully neural end-to-end
agent, trained entirely from user feedback, and discuss its application towards
personalized dialogue agents. The source code is available at
https://github.com/MiuLab/KB-InfoBot.
| 2,017 | Computation and Language |
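A toy illustration of the "soft" KB lookup idea sketched in the abstract above: attribute-level match probabilities (hypothetical belief-tracker outputs) are combined into a differentiable posterior over entities. This is a simplification, not the paper's exact formulation.

```python
import torch

# p_match[i, a]: probability that entity i's value for attribute a matches the user's
# constraints (three KB entities and two attributes in this toy example).
p_match = torch.tensor([[0.90, 0.20],
                        [0.40, 0.80],
                        [0.05, 0.10]])

# Combine attribute matches under an independence assumption, then normalise over entities.
log_scores = torch.log(p_match + 1e-9).sum(dim=1)
posterior = torch.softmax(log_scores, dim=0)   # differentiable "soft" retrieval over the KB
print(posterior)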
Bi-Text Alignment of Movie Subtitles for Spoken English-Arabic
Statistical Machine Translation | We describe efforts towards getting better resources for English-Arabic
machine translation of spoken text. In particular, we look at movie subtitles
as a unique, rich resource, as subtitles in one language often get translated
into other languages. Movie subtitles are not new as a resource and have been
explored in previous research; however, here we create a much larger bi-text
(the biggest to date), and we further generate better quality alignment for it.
Given the subtitles for the same movie in different languages, a key problem is
how to align them at the fragment level. Typically, this is done using
length-based alignment, but for movie subtitles, there is also time
information. Here we exploit this information to develop an original algorithm
that outperforms the current best subtitle alignment tool, subalign. The
evaluation results show that adding our bi-text to the IWSLT training bi-text
yields an improvement of over two BLEU points absolute.
| 2,016 | Computation and Language |
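A simplified sketch of time-based fragment alignment in the spirit of the abstract above, assuming each subtitle fragment is a (start_seconds, end_seconds) pair; the paper's actual algorithm is more involved and also uses length-based cues.

```python
def time_overlap(seg_a, seg_b):
    # Temporal overlap (in seconds) between two subtitle fragments given as (start, end) times.
    return max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))

def align_by_time(src_subs, tgt_subs, min_overlap=0.5):
    # Greedy alignment: pair each source fragment with the target fragment that overlaps it
    # most in time, keeping only pairs above a minimal overlap threshold.
    pairs = []
    for i, s in enumerate(src_subs):
        j, best = max(((j, time_overlap(s, t)) for j, t in enumerate(tgt_subs)),
                      key=lambda x: x[1])
        if best >= min_overlap:
            pairs.append((i, j))
    return pairs

print(align_by_time([(0.0, 2.5), (3.0, 5.0)], [(0.2, 2.4), (3.1, 5.2), (6.0, 7.0)]))
```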
PMI Matrix Approximations with Applications to Neural Language Modeling | The negative sampling (NEG) objective function, used in word2vec, is a
simplification of the Noise Contrastive Estimation (NCE) method. NEG was found
to be highly effective in learning continuous word representations. However,
unlike NCE, it was considered inapplicable for the purpose of learning the
parameters of a language model. In this study, we refute this assertion by
providing a principled derivation for NEG-based language modeling, founded on a
novel analysis of a low-dimensional approximation of the matrix of pointwise
mutual information between the contexts and the predicted words. The resulting
language model is closely related to NCE language models but is based on a
simplified objective function. We thus provide a unified formulation for two
main language processing tasks, namely word embedding and language modeling,
based on the NEG objective function. Experimental results on two popular
language modeling benchmarks show comparable perplexity results, with a small
advantage to NEG over NCE.
| 2,016 | Computation and Language |
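For reference, the quantities behind this line of work (a sketch, not the paper's full derivation): the pointwise mutual information of a word-context pair, and the known result that the NEG objective with k negative samples is optimized by embeddings whose inner products approximate a shifted PMI matrix (Levy and Goldberg, 2014).

```latex
\mathrm{PMI}(w, c) = \log \frac{p(w, c)}{p(w)\, p(c)},
\qquad
\vec{w} \cdot \vec{c} \;\approx\; \mathrm{PMI}(w, c) - \log k .
```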
Attention-Based Recurrent Neural Network Models for Joint Intent
Detection and Slot Filling | Attention-based encoder-decoder neural network models have recently shown
promising results in machine translation and speech recognition. In this work,
we propose an attention-based neural network model for joint intent detection
and slot filling, both of which are critical steps for many speech
understanding and dialog systems. Unlike in machine translation and speech
recognition, alignment is explicit in slot filling. We explore different
strategies in incorporating this alignment information to the encoder-decoder
framework. Building on the attention mechanism in the encoder-decoder model, we
further propose introducing attention to the alignment-based RNN models. Such
attention provides additional information for intent classification and slot
label prediction. Our independent task models achieve state-of-the-art intent
detection error rate and slot filling F1 score on the benchmark ATIS task. Our
joint training model further obtains 0.56% absolute (23.8% relative) error
reduction on intent detection and 0.23% absolute gain on slot filling over the
independent task models.
| 2,016 | Computation and Language |
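A compact PyTorch sketch of an attention-augmented alignment-based RNN for joint intent detection and slot filling, in the spirit of the abstract above; the layer sizes and the exact way attention is injected are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class JointIntentSlotRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)               # scores each time step
        self.slot_out = nn.Linear(4 * hidden_dim, num_slots)   # aligned hidden state + context
        self.intent_out = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, tokens):                                 # tokens: (batch, seq_len)
        h, _ = self.encoder(self.embed(tokens))                # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)           # attention over time steps
        context = (weights * h).sum(dim=1, keepdim=True)       # pooled utterance vector
        # Slot label at each position uses its own (aligned) hidden state plus the context.
        slot_logits = self.slot_out(torch.cat([h, context.expand_as(h)], dim=-1))
        # Intent prediction uses the attention-pooled representation.
        intent_logits = self.intent_out(context.squeeze(1))
        return slot_logits, intent_logits
```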
Joint Online Spoken Language Understanding and Language Modeling with
Recurrent Neural Networks | Speaker intent detection and semantic slot filling are two critical tasks in
spoken language understanding (SLU) for dialogue systems. In this paper, we
describe a recurrent neural network (RNN) model that jointly performs intent
detection, slot filling, and language modeling. The neural network model keeps
updating the intent estimation as words in the transcribed utterance arrive and
uses it as contextual features in the joint model. Evaluation of the language
model and online SLU model is made on the ATIS benchmark data set. On the
language modeling task, our joint model achieves an 11.8% relative reduction in
perplexity compared to the independently trained language model. On SLU tasks,
our joint model outperforms the independent task training model by 22.3% on
intent detection error rate, with slight degradation on slot filling F1 score.
The joint model also shows advantageous performance in the realistic ASR
settings with noisy speech input.
| 2,016 | Computation and Language |
Automatically extracting, ranking and visually summarizing the
treatments for a disease | Clinicians are expected to have up-to-date and broad knowledge of disease
treatment options for a patient. Online health knowledge resources contain a
wealth of information. However, because of the time investment needed to
disseminate and rank pertinent information, there is a need to summarize the
information in a more concise format. The aim of our study is to provide
clinicians with a concise overview of popular treatments for a given disease
using information automatically computed from Medline abstracts. We analyzed
the treatments of two disorders - Atrial Fibrillation and Congestive Heart
Failure. We calculated the precision, recall, and f-scores of our two ranking
methods to measure the accuracy of the results. For Atrial Fibrillation
disorder, the maximum F-score for the New Treatments weighting method is 0.611,
which occurs at 60 treatments. For Congestive Heart Failure disorder, the maximum
F-score for the New Treatments weighting method is 0.503, which occurs at 80
treatments.
| 2,016 | Computation and Language |
Using Natural Language Processing to Screen Patients with Active Heart
Failure: An Exploration for Hospital-wide Surveillance | In this paper, we proposed two different approaches, a rule-based approach
and a machine-learning based approach, to identify active heart failure cases
automatically by analyzing electronic health records (EHR). For the rule-based
approach, we extracted cardiovascular data elements from clinical notes and
matched patients to different colors according to their heart failure condition by
using rules provided by experts in heart failure. It achieved 69.4% accuracy
and a 0.729 F1-score. For the machine learning approach, with bigrams of clinical
notes as features, we tried four different models, of which SVM with a linear kernel
achieved the best performance with 87.5% accuracy and a 0.86 F1-score. Also, from
the classification comparison between the four different models, we believe
that linear models fit better for this problem. Once we combine the
machine-learning and rule-based algorithms, we will enable hospital-wide
surveillance of active heart failure through increased accuracy and
interpretability of the outputs.
| 2,016 | Computation and Language |
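A minimal scikit-learn sketch of the machine-learning variant described above (bigram features with a linear-kernel SVM); the clinical notes and labels here are synthetic placeholders, not data from the study.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Synthetic stand-ins for de-identified clinical notes and their heart-failure labels.
notes = [
    "progressive dyspnea with reduced ejection fraction and worsening heart failure",
    "routine follow-up visit, no evidence of heart failure or volume overload",
]
labels = [1, 0]

model = make_pipeline(
    CountVectorizer(ngram_range=(2, 2)),   # bigram features, as in the abstract
    LinearSVC(),                           # SVM with a linear kernel
)
model.fit(notes, labels)
print(model.predict(["patient admitted with worsening heart failure and dyspnea"]))
```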
CRTS: A type system for representing clinical recommendations | Background: Clinical guidelines and recommendations are the driving wheels of
the evidence-based medicine (EBM) paradigm, but these are available primarily
as unstructured text and are generally highly heterogeneous in nature. This
significantly reduces the dissemination and automatic application of these
recommendations at the point of care. A comprehensive structured representation
of these recommendations is highly beneficial in this regard. Objective: The
objective of this paper is to present the Clinical Recommendation Type System (CRTS),
a common type system that can effectively represent a clinical recommendation
in a structured form. Methods: CRTS is built by analyzing 125 recommendations
and 195 research articles corresponding to 6 different diseases available from
UpToDate, a publicly available clinical knowledge system, and from the National
Guideline Clearinghouse, a public resource for evidence-based clinical practice
guidelines. Results: We show that CRTS not only covers the recommendations but
also is flexible to be extended to represent information from primary
literature. We also describe how our developed type system can be applied for
clinical decision support, medical knowledge summarization, and citation
retrieval. Conclusion: We showed that our proposed type system is precise and
comprehensive in representing a large sample of recommendations available for
various disorders. CRTS can now be used to build interoperable information
extraction systems that automatically extract clinical recommendations and
related data elements from clinical evidence resources, guidelines, systematic
reviews and primary publications.
Keywords: guidelines and recommendations, type system, clinical decision
support, evidence-based medicine, information storage and retrieval
| 2,016 | Computation and Language |
An Information Extraction Approach to Prescreen Heart Failure Patients
for Clinical Trials | To reduce the large amount of time spent screening, identifying, and
recruiting patients into clinical trials, we need prescreening systems that are
able to automate the data extraction and decision-making tasks that are
typically relegated to clinical research study coordinators. However, a major
obstacle is the vast amount of patient data available as unstructured free-form
text in electronic health records. Here we propose an information
extraction-based approach that first automatically converts unstructured text
into a structured form. The structured data are then compared against a list of
eligibility criteria using a rule-based system to determine which patients
qualify for enrollment in a heart failure clinical trial. We show that we can
achieve highly accurate results, with recall and precision values of 0.95 and
0.86, respectively. Our system allowed us to significantly reduce the time
needed for prescreening patients from a few weeks to a few minutes. Our
open-source information extraction modules are available for researchers and
could be tested and validated in other cardiovascular trials. An approach such
as the one we demonstrate here may decrease costs and expedite clinical trials,
and could enhance the reproducibility of trials across institutions and
populations.
| 2,016 | Computation and Language |
A Hybrid Citation Retrieval Algorithm for Evidence-based Clinical
Knowledge Summarization: Combining Concept Extraction, Vector Similarity and
Query Expansion for High Precision | Novel information retrieval methods to identify citations relevant to a
clinical topic can overcome the knowledge gap existing between the primary
literature (MEDLINE) and online clinical knowledge resources such as UpToDate.
Searching the MEDLINE database directly or with query expansion methods returns
a large number of citations that are not relevant to the query. The current
study presents a citation retrieval system that retrieves citations for
evidence-based clinical knowledge summarization. This approach combines query
expansion, a concept-based screening algorithm, and concept-based vector
similarity. We also propose an information extraction framework for automated
concept (Population, Intervention, Comparison, and Disease) extraction. We
evaluated our proposed system on all topics (as queries) available from
UpToDate for two diseases, heart failure (HF) and atrial fibrillation (AFib).
The system achieved an overall F-score of 41.2% on HF topics and 42.4% on AFib
topics on a gold standard of citations available in UpToDate. This is
significantly higher than a query-expansion-based baseline (F-score
of 1.3% on HF and 2.2% on AFib) and a system that uses query expansion with
disease hyponyms and journal names, concept-based screening, and term-based
vector similarity system (F-score of 37.5% on HF and 39.5% on AFib). Evaluating
the system on the top K relevant citations, where K is the number of citations in
the gold standard, achieved a much higher overall F-score of 69.9% on HF topics
and 75.1% on AFib topics. In addition, the system retrieved up to 18 new
relevant citations per topic when tested on ten HF and six AFib clinical
topics.
| 2,016 | Computation and Language |
Sentiment Classification of Food Reviews | Sentiment analysis of reviews is a popular task in natural language
processing. In this work, the goal is to predict the score of food reviews on a
scale of 1 to 5 with two recurrent neural networks that are carefully tuned. As
a baseline, we train a simple RNN for classification; we then extend the
baseline to a GRU. In addition, we present two different methods for dealing with
highly skewed data, a common problem for reviews. Models are evaluated
using accuracy.
| 2,016 | Computation and Language |
Using Gaussian Processes for Rumour Stance Classification in Social
Media | Social media tend to be rife with rumours while new reports are released
piecemeal during breaking news. Interestingly, one can mine multiple reactions
expressed by social media users in those situations, exploring their stance
towards rumours, ultimately enabling the flagging of highly disputed rumours as
being potentially false. In this work, we set out to develop an automated,
supervised classifier that uses multi-task learning to classify the stance
expressed in each individual tweet in a rumourous conversation as either
supporting, denying or questioning the rumour. Using a classifier based on
Gaussian Processes, and exploring its effectiveness on two datasets with very
different characteristics and varying distributions of stances, we show that
our approach consistently outperforms competitive baseline classifiers. Our
classifier is especially effective in estimating the distribution of different
types of stance associated with a given rumour, which we set forth as a desired
characteristic for a rumour-tracking system that will warn both ordinary users
of Twitter and professional news practitioners when a rumour is being rebutted.
| 2,016 | Computation and Language |
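A simplified single-task sketch of Gaussian process stance classification (the paper uses a multi-task GP formulation); the features and labels below are random placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 50))      # e.g., averaged word embeddings per tweet (placeholder)
y = rng.integers(0, 3, size=30)        # 0 = supporting, 1 = denying, 2 = questioning

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
clf.fit(X, y)
proba = clf.predict_proba(X[:5])       # per-class probabilities, useful for stance distributions
print(proba.round(2))
```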
The Social Dynamics of Language Change in Online Networks | Language change is a complex social phenomenon, revealing pathways of
communication and sociocultural influence. But, while language change has long
been a topic of study in sociolinguistics, traditional linguistic research
methods rely on circumstantial evidence, estimating the direction of change
from differences between older and younger speakers. In this paper, we use a
data set of several million Twitter users to track language changes in
progress. First, we show that language change can be viewed as a form of social
influence: we observe complex contagion for phonetic spellings and "netspeak"
abbreviations (e.g., lol), but not for older dialect markers from spoken
language. Next, we test whether specific types of social network connections
are more influential than others, using a parametric Hawkes process model. We
find that tie strength plays an important role: densely embedded social ties
are significantly better conduits of linguistic influence. Geographic locality
appears to play a more limited role: we find relatively little evidence to
support the hypothesis that individuals are more influenced by geographically
local social ties, even in their usage of geographical dialect markers.
| 2,016 | Computation and Language |
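For readers unfamiliar with the model class, a generic parametric Hawkes intensity with an exponential kernel (a sketch; the paper's exact parameterization, which ties the influence weights to tie strength and geographic covariates, is not reproduced here). The rate at which user u adopts a linguistic variable at time t is

```latex
\lambda_u(t) \;=\; \mu_u \;+\; \sum_{v \rightarrow u} \; \sum_{t_i^{(v)} < t} \alpha_{v,u}\, \omega\, e^{-\omega\,(t - t_i^{(v)})},
```

where \mu_u is a base rate, the inner sum runs over v's earlier uses of the variable, and \alpha_{v,u} is the influence of v on u.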
Learning Lexical Entries for Robotic Commands using Crowdsourcing | Robotic commands in natural language usually contain various spatial
descriptions that are semantically similar but syntactically different. Mapping
such syntactic variants into semantic concepts that can be understood by robots
is challenging due to the high flexibility of natural language expressions. To
tackle this problem, we collect robotic commands for navigation and
manipulation tasks using crowdsourcing. We further define a robot language and
use a generative machine translation model to translate robotic commands from
natural language to robot language. The main purpose of this paper is to
simulate the interaction process between human and robots using crowdsourcing
platforms, and investigate the possibility of translating natural language to
robot language with paraphrases.
| 2,016 | Computation and Language |
Detecting Singleton Review Spammers Using Semantic Similarity | Online reviews have increasingly become a very important resource for
consumers when making purchases. However, it is becoming more and more difficult
for people to make well-informed buying decisions without being deceived by
fake reviews. Prior works on the opinion spam problem mostly considered
classifying fake reviews using behavioral user patterns. They focused on
prolific users who write more than a couple of reviews, discarding one-time
reviewers. The number of singleton reviewers however is expected to be high for
many review websites. While behavioral patterns are effective when dealing with
elite users, for one-time reviewers, the review text needs to be exploited. In
this paper we tackle the problem of detecting fake reviews written by the same
person using multiple names, posting each review under a different name. We
propose two methods to detect similar reviews and show the results generally
outperform the vectorial similarity measures used in prior works. The first
method extends the semantic similarity between words to the reviews level. The
second method is based on topic modeling and exploits the similarity of the
reviews' topic distributions using two models: bag-of-words and
bag-of-opinion-phrases. The experiments were conducted on reviews from three
different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset
(800 reviews).
| 2,015 | Computation and Language |
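A small sketch of the topic-distribution comparison in the second method above, assuming each review has already been mapped to an LDA topic distribution; Jensen-Shannon distance is used here as one reasonable choice, not necessarily the paper's exact measure.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def topic_similarity(p, q):
    # jensenshannon returns the JS *distance* (in [0, 1] for base 2);
    # 1 - distance gives a similarity score between two topic distributions.
    return 1.0 - jensenshannon(np.asarray(p), np.asarray(q), base=2)

# Flag a review pair as a possible singleton-spammer duplicate above a similarity threshold.
sim = topic_similarity([0.70, 0.20, 0.10], [0.65, 0.25, 0.10])
print(sim, sim > 0.9)
```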
A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis | Opinion mining from customer reviews has become pervasive in recent years.
Sentences in reviews, however, are usually classified independently, even
though they form part of a review's argumentative structure. Intuitively,
sentences in a review build and elaborate upon each other; knowledge of the
review structure and sentential context should thus inform the classification
of each sentence. We demonstrate this hypothesis for the task of aspect-based
sentiment analysis by modeling the interdependencies of sentences in a review
with a hierarchical bidirectional LSTM. We show that the hierarchical model
outperforms two non-hierarchical baselines, obtains results competitive with
the state-of-the-art, and outperforms the state-of-the-art on five
multilingual, multi-domain datasets without any hand-engineered features or
external resources.
| 2,016 | Computation and Language |
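A bare-bones PyTorch sketch of the hierarchical idea above: a sentence-level BiLSTM produces one vector per sentence, and a review-level BiLSTM contextualizes those vectors before per-sentence aspect-sentiment classification. Pooling choices and dimensions are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class HierarchicalReviewModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.sent_lstm = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.review_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, review):                             # review: (num_sents, max_words) token ids
        word_states, _ = self.sent_lstm(self.embed(review))
        sent_vecs = word_states.mean(dim=1).unsqueeze(0)    # one pooled vector per sentence
        sent_states, _ = self.review_lstm(sent_vecs)        # sentences contextualised by the review
        return self.classifier(sent_states.squeeze(0))      # one sentiment logit vector per sentence
```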
INSIGHT-1 at SemEval-2016 Task 4: Convolutional Neural Networks for
Sentiment Classification and Quantification | This paper describes our deep learning-based approach to sentiment analysis
in Twitter as part of SemEval-2016 Task 4. We use a convolutional neural
network to determine sentiment and participate in all subtasks, i.e. two-point,
three-point, and five-point scale sentiment classification and two-point and
five-point scale sentiment quantification. We achieve competitive results for
two-point scale sentiment classification and quantification, ranking fifth and
a close fourth (third and second by alternative metrics) respectively despite
using only pre-trained embeddings that contain no sentiment information. We
achieve good performance on three-point scale sentiment classification, ranking
eighth out of 35, while performing poorly on five-point scale sentiment
classification and quantification. An error analysis reveals that this is due
to low expressiveness of the model to capture negative sentiment as well as an
inability to take into account ordinal information. We propose improvements in
order to address these and other issues.
| 2,016 | Computation and Language |
INSIGHT-1 at SemEval-2016 Task 5: Deep Learning for Multilingual
Aspect-based Sentiment Analysis | This paper describes our deep learning-based approach to multilingual
aspect-based sentiment analysis as part of SemEval 2016 Task 5. We use a
convolutional neural network (CNN) for both aspect extraction and aspect-based
sentiment analysis. We cast aspect extraction as a multi-label classification
problem, outputting probabilities over aspects parameterized by a threshold. To
determine the sentiment towards an aspect, we concatenate an aspect vector with
every word embedding and apply a convolution over it. Our constrained system
(unconstrained for English) achieves competitive results across all languages
and domains, placing first or second in 5 and 7 out of 11 language-domain pairs
for aspect category detection (slot 1) and sentiment polarity (slot 3)
respectively, thereby demonstrating the viability of a deep learning-based
approach for multilingual aspect-based sentiment analysis.
| 2,016 | Computation and Language |
Harassment detection: a benchmark on the #HackHarassment dataset | Online harassment has been a problem to a greater or lesser extent since the
early days of the internet. Previous work has applied anti-spam techniques like
machine-learning based text classification (Reynolds, 2011) to detecting
harassing messages. However, existing public datasets are limited in size, with
labels of varying quality. The #HackHarassment initiative (an alliance of
tech companies and NGOs devoted to fighting bullying on the internet) has begun
to address this issue by creating a new dataset superior to its predecessors in
terms of both size and quality. As we (#HackHarassment) complete further rounds
of labelling, later iterations of this dataset will increase the available
samples by at least an order of magnitude, enabling corresponding improvements
in the quality of machine learning models for harassment detection. In this
paper, we introduce the first models built on the #HackHarassment dataset v1.0
(a new open dataset, which we are delighted to share with any interested
researchers) as a benchmark for future research.
| 2,016 | Computation and Language |
Dialogue manager domain adaptation using Gaussian process reinforcement
learning | Spoken dialogue systems allow humans to interact with machines using natural
speech. As such, they have many benefits. By using speech as the primary
communication medium, a computer interface can facilitate swift, human-like
acquisition of information. In recent years, speech interfaces have become ever
more popular, as is evident from the rise of personal assistants such as Siri,
Google Now, Cortana and Amazon Alexa. Recently, data-driven machine learning
methods have been applied to dialogue modelling and the results achieved for
limited-domain applications are comparable to or outperform traditional
approaches. Methods based on Gaussian processes are particularly effective as
they enable good models to be estimated from limited training data.
Furthermore, they provide an explicit estimate of the uncertainty which is
particularly useful for reinforcement learning. This article explores the
additional steps that are necessary to extend these methods to model multiple
dialogue domains. We show that Gaussian process reinforcement learning is an
elegant framework that naturally supports a range of methods, including prior
knowledge, Bayesian committee machines and multi-agent learning, for
facilitating extensible and adaptable dialogue systems.
| 2,016 | Computation and Language |
A Large Scale Corpus of Gulf Arabic | Most Arabic natural language processing tools and resources are developed to
serve Modern Standard Arabic (MSA), which is the official written language in
the Arab World. Some Dialectal Arabic varieties, notably Egyptian Arabic, have
received some attention lately and have a growing collection of resources that
include annotated corpora and morphological analyzers and taggers. Gulf Arabic,
however, lags behind in that respect. In this paper, we present the Gumar
Corpus, a large-scale corpus of Gulf Arabic consisting of 110 million words
from 1,200 forum novels. We annotate the corpus for sub-dialect information at
the document level. We also present results of a preliminary study in the
morphological annotation of Gulf Arabic which includes developing guidelines
for a conventional orthography. The text of the corpus is publicly browsable
through a web interface we developed for it.
| 2,016 | Computation and Language |
Divide and...conquer? On the limits of algorithmic approaches to
syntactic semantic structure | In computer science, divide and conquer (D&C) is an algorithm design paradigm
based on multi-branched recursion. A D&C algorithm works by recursively and
monotonically breaking down a problem into subproblems of the same (or a
related) type, until these become simple enough to be solved directly. The
solutions to the subproblems are then combined to give a solution to the
original problem. The present work identifies D&C algorithms assumed within
contemporary syntactic theory, and discusses the limits of their applicability
in the realms of the syntax-semantics and syntax-morphophonology interfaces. We
will propose that D&C algorithms, while valid for some processes, fall short on
flexibility given a mixed approach to the structure of linguistic phrase
markers. Arguments in favour of a computationally mixed approach to linguistic
structure will be presented as an alternative that offers advantages to uniform
D&C approaches.
| 2,016 | Computation and Language |
On the Similarities Between Native, Non-native and Translated Texts | We present a computational analysis of three language varieties: native,
advanced non-native, and translation. Our goal is to investigate the
similarities and differences between non-native language productions and
translations, contrasting both with native language. Using a collection of
computational methods we establish three main results: (1) the three types of
texts are easily distinguishable; (2) non-native language and translations are
closer to each other than each of them is to native language; and (3) some of
these characteristics depend on the source or native language, while others do
not, reflecting, perhaps, unified principles that similarly affect translations
and non-native language.
| 2,016 | Computation and Language |
Unsupervised Identification of Translationese | Translated texts are distinctively different from original ones, to the
extent that supervised text classification methods can distinguish between them
with high accuracy. These differences were proven useful for statistical
machine translation. However, it has been suggested that the accuracy of
translation detection deteriorates when the classifier is evaluated outside the
domain it was trained on. We show that this is indeed the case, in a variety of
evaluation scenarios. We then show that unsupervised classification is highly
accurate on this task. We suggest a method for determining the correct labels
of the clustering outcomes, and then use the labels for voting, improving the
accuracy even further. Moreover, we suggest a simple method for clustering in
the challenging case of mixed-domain datasets, in spite of the dominance of
domain-related features over translation-related ones. The result is an
effective, fully-unsupervised method for distinguishing between original and
translated texts that can be applied to new domains with reasonable accuracy.
| 2,016 | Computation and Language |
Modelling Creativity: Identifying Key Components through a Corpus-Based
Approach | Creativity is a complex, multi-faceted concept encompassing a variety of
related aspects, abilities, properties and behaviours. If we wish to study
creativity scientifically, then a tractable and well-articulated model of
creativity is required. Such a model would be of great value to researchers
investigating the nature of creativity and in particular, those concerned with
the evaluation of creative practice. This paper describes a unique approach to
developing a suitable model of how creative behaviour emerges that is based on
the words people use to describe the concept. Using techniques from the field
of statistical natural language processing, we identify a collection of
fourteen key components of creativity through an analysis of a corpus of
academic papers on the topic. Words are identified which appear significantly
often in connection with discussions of the concept. Using a measure of lexical
similarity to help cluster these words, a number of distinct themes emerge,
which collectively contribute to a comprehensive and multi-perspective model of
creativity. The components provide an ontology of creativity: a set of building
blocks which can be used to model creative practice in a variety of domains.
The components have been employed in two case studies to evaluate the
creativity of computational systems and have proven useful in articulating
achievements of this work and directions for further research.
| 2,017 | Computation and Language |
Morphological Constraints for Phrase Pivot Statistical Machine
Translation | The lack of parallel data for many language pairs is an important challenge
to statistical machine translation (SMT). One common solution is to pivot
through a third language for which there exist parallel corpora with the source
and target languages. Although pivoting is a robust technique, it introduces
some low-quality translations, especially when a morphologically poor language is
used as the pivot between morphologically rich languages. In this paper, we examine
the use of synchronous morphology constraint features to improve the quality of
phrase pivot SMT. We compare hand-crafted constraints to those learned from
limited parallel data between source and target languages. The learned
morphology constraints are based on projected alignments between the source
and target phrases in the pivot phrase table. We show positive results on
Hebrew-Arabic SMT (pivoting on English). We get 1.5 BLEU points over a phrase
pivot baseline and 0.8 BLEU points over a system combination baseline with a
direct model built from parallel data.
| 2,015 | Computation and Language |
Read, Tag, and Parse All at Once, or Fully-neural Dependency Parsing | We present a dependency parser implemented as a single deep neural network
that reads orthographic representations of words and directly generates
dependencies and their labels. Unlike typical approaches to parsing, the model
doesn't require part-of-speech (POS) tagging of the sentences. With proper
regularization and additional supervision achieved with multitask learning we
reach state-of-the-art performance on Slavic languages from the Universal
Dependencies treebank: with no linguistic features other than characters, our
parser is as accurate as a transition-based system trained on perfect POS
tags.
| 2,017 | Computation and Language |
The Microsoft 2016 Conversational Speech Recognition System | We describe Microsoft's conversational speech recognition system, in which we
combine recent developments in neural-network-based acoustic and language
modeling to advance the state of the art on the Switchboard recognition task.
Inspired by machine learning ensemble techniques, the system uses a range of
convolutional and recurrent neural networks. I-vector modeling and lattice-free
MMI training provide significant gains for all acoustic model architectures.
Language model rescoring with multiple forward and backward running RNNLMs, and
word posterior-based system combination provide a 20% boost. The best single
system uses a ResNet architecture acoustic model with RNNLM rescoring, and
achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The
combined system has an error rate of 6.2%, representing an improvement over
previously reported results on this benchmark task.
| 2,017 | Computation and Language |
Joint Extraction of Events and Entities within a Document Context | Events and entities are closely related; entities are often actors or
participants in events and events without entities are uncommon. The
interpretation of events and entities is highly contextually dependent.
Existing work in information extraction typically models events separately from
entities, and performs inference at the sentence level, ignoring the rest of
the document. In this paper, we propose a novel approach that models the
dependencies among variables of events, entities, and their relations, and
performs joint inference of these variables across a document. The goal is to
enable access to document-level contextual information and facilitate
context-aware predictions. We demonstrate that our approach substantially
outperforms the state-of-the-art methods for event extraction as well as a
strong baseline for entity extraction.
| 2,016 | Computation and Language |
An Experimental Study of LSTM Encoder-Decoder Model for Text
Simplification | Text simplification (TS) aims to reduce the lexical and structural complexity
of a text, while still retaining the semantic meaning. Current automatic TS
techniques are limited to either lexical-level applications or manually
defining a large amount of rules. Since deep neural networks are powerful
models that have achieved excellent performance over many difficult tasks, in
this paper, we propose to use the Long Short-Term Memory (LSTM) Encoder-Decoder
model for sentence level TS, which makes minimal assumptions about word
sequence. We conduct preliminary experiments to find that the model is able to
learn operation rules such as reversing, sorting and replacing from sequence
pairs, which shows that the model may potentially discover and apply rules such
as modifying sentence structure, substituting words, and removing words for TS.
| 2,016 | Computation and Language |
Multimodal Attention for Neural Machine Translation | The attention mechanism is an important part of the neural machine
translation (NMT) where it was reported to produce richer source representation
compared to fixed-length encoding sequence-to-sequence models. Recently, the
effectiveness of attention has also been explored in the context of image
captioning. In this work, we assess the feasibility of a multimodal attention
mechanism that simultaneously focuses on an image and its natural language
description for generating a description in another language. We train several
variants of our proposed attention mechanism on the Multi30k multilingual image
captioning dataset. We show that a dedicated attention for each modality
achieves up to 1.6 points in BLEU and METEOR compared to a textual NMT
baseline.
| 2,016 | Computation and Language |
Neural Machine Translation with Supervised Attention | The attention mechanism is appealing for neural machine translation, since
it is able to dynamically encode a source sentence by generating an alignment
between a target word and source words. Unfortunately, it has been proved to be
worse than conventional alignment models in alignment accuracy. In this paper,
we analyze and explain this issue from the point of view of reordering, and
propose a supervised attention which is learned with guidance from conventional
alignment models. Experiments on two Chinese-to-English translation tasks show
that the supervised attention mechanism yields better alignments, leading to
substantial gains over the standard attention-based NMT.
| 2,016 | Computation and Language |
Neural Machine Transliteration: Preliminary Results | Machine transliteration is the process of automatically transforming the
script of a word from a source language to a target language, while preserving
pronunciation. Sequence to sequence learning has recently emerged as a new
paradigm in supervised learning. In this paper a character-based
encoder-decoder model has been proposed that consists of two Recurrent Neural
Networks. The encoder is a Bidirectional recurrent neural network that encodes
a sequence of symbols into a fixed-length vector representation, and the
decoder generates the target sequence using an attention-based recurrent neural
network. The encoder, the decoder and the attention mechanism are jointly
trained to maximize the conditional probability of a target sequence given a
source sequence. Our experiments on different datasets show that the proposed
encoder-decoder model is able to achieve significantly higher transliteration
quality over traditional statistical models.
| 2,016 | Computation and Language |
Efficient softmax approximation for GPUs | We propose an approximate strategy to efficiently train neural network based
language models over very large vocabularies. Our approach, called adaptive
softmax, circumvents the linear dependency on the vocabulary size by exploiting
the unbalanced word distribution to form clusters that explicitly minimize the
expectation of computation time. Our approach further reduces the computational
time by exploiting the specificities of modern architectures and matrix-matrix
vector operations, making it particularly suited for graphical processing
units. Our experiments carried out on standard benchmarks, such as EuroParl and
One Billion Word, show that our approach brings a large gain in efficiency over
standard approximations while achieving an accuracy close to that of the full
softmax. The code of our method is available at
https://github.com/facebookresearch/adaptive-softmax.
| 2,017 | Computation and Language |
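PyTorch ships an implementation of this adaptive softmax as torch.nn.AdaptiveLogSoftmaxWithLoss; a minimal usage sketch (the vocabulary size, cutoffs, and hidden size below are arbitrary):

```python
import torch
import torch.nn as nn

vocab_size, hidden_dim = 100_000, 512
# Cutoffs split the vocabulary into a small head of frequent words and progressively larger,
# lower-dimensional tail clusters, minimising expected computation per token.
adaptive = nn.AdaptiveLogSoftmaxWithLoss(hidden_dim, vocab_size,
                                         cutoffs=[2000, 20000], div_value=4.0)

hidden = torch.randn(32, hidden_dim)             # e.g., RNN outputs at 32 prediction positions
targets = torch.randint(0, vocab_size, (32,))    # next-word ids
out = adaptive(hidden, targets)
print(out.loss)                                  # mean negative log-likelihood under the approximation
```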
Transliteration in Any Language with Surrogate Languages | We introduce a method for transliteration generation that can produce
transliterations in every language. Where previous results are only as
multilingual as Wikipedia, we show how to use training data from Wikipedia as
surrogate training for any language. Thus, the problem becomes one of ranking
Wikipedia languages in order of suitability with respect to a target language.
We introduce several task-specific methods for ranking languages, and show that
our approach is comparable to the oracle ceiling, and even outperforms it in
some cases.
| 2,016 | Computation and Language |
An Adaptive Psychoacoustic Model for Automatic Speech Recognition | Compared with automatic speech recognition (ASR), the human auditory system
is more adept at handling noise-adverse situations, including environmental
noise and channel distortion. To mimic this adeptness, auditory models have
been widely incorporated in ASR systems to improve their robustness. This paper
proposes a novel auditory model which incorporates psychoacoustics and
otoacoustic emissions (OAEs) into ASR. In particular, we successfully implement
the frequency-dependent property of psychoacoustic models and effectively
improve resulting system performance. We also present a novel double-transform
spectrum-analysis technique, which can qualitatively predict ASR performance
for different noise types. Detailed theoretical analysis is provided to show
the effectiveness of the proposed algorithm. Experiments are carried out on the
AURORA2 database and show that the word recognition rate using our proposed
feature extraction method is significantly increased over the baseline. Given
models trained with clean speech, our proposed method achieves up to 85.39%
word recognition accuracy on noisy data.
| 2,016 | Computation and Language |
Factored Neural Machine Translation | We present a new approach for neural machine translation (NMT) using the
morphological and grammatical decomposition of the words (factors) in the
output side of the neural network. This architecture addresses two main
problems occurring in MT, namely dealing with a large target language
vocabulary and out-of-vocabulary (OOV) words. By means of factors, we
are able to handle larger vocabulary and reduce the training time (for systems
with equivalent target language vocabulary size). In addition, we can produce
new words that are not in the vocabulary. We use a morphological analyser to
get a factored representation of each word (lemmas, Part of Speech tag, tense,
person, gender and number). We have extended the NMT approach with attention
mechanism in order to have two different outputs, one for the lemmas and the
other for the rest of the factors. The final translation is built using some
\textit{a priori} linguistic information. We compare our extension with a
word-based NMT system. The experiments, performed on the IWSLT'15 dataset
translating from English to French, show that while the performance does not
always increase, the system can manage a much larger vocabulary and
consistently reduce the OOV rate. We observe up to 2% BLEU point improvement in
a simulated out of domain translation setup.
| 2,017 | Computation and Language |
Characterizing the Language of Online Communities and its Relation to
Community Reception | This work investigates style and topic aspects of language in online
communities: looking at both utility as an identifier of the community and
correlation with community reception of content. Style is characterized using a
hybrid word and part-of-speech tag n-gram language model, while topic is
represented using Latent Dirichlet Allocation. Experiments with several Reddit
forums show that style is a better indicator of community identity than topic,
even for communities organized around specific topics. Further, there is a
positive correlation between the community reception to a contribution and the
style similarity to that community, but not so for topic similarity.
| 2,016 | Computation and Language |
Distant Supervision for Relation Extraction beyond the Sentence Boundary | The growing demand for structured knowledge has led to great interest in
relation extraction, especially in cases with limited supervision. However,
existing distance supervision approaches only extract relations expressed in
single sentences. In general, cross-sentence relation extraction is
under-explored, even in the supervised-learning setting. In this paper, we
propose the first approach for applying distant supervision to cross-sentence
relation extraction. At the core of our approach is a graph representation that
can incorporate both standard dependencies and discourse relations, thus
providing a unifying way to model relations within and across sentences. We
extract features from multiple paths in this graph, increasing accuracy and
robustness when confronted with linguistic variation and analysis error.
Experiments on an important extraction task for precision medicine show that
our approach can learn an accurate cross-sentence extractor, using only a small
existing knowledge base and unlabeled text from biomedical research articles.
Compared to the existing distant supervision paradigm, our approach extracted
twice as many relations at similar precision, thus demonstrating the prevalence
of cross-sentence relations and the promise of our approach.
| 2,017 | Computation and Language |
Long-Term Trends in the Public Perception of Artificial Intelligence | Analyses of text corpora over time can reveal trends in beliefs, interest,
and sentiment about a topic. We focus on views expressed about artificial
intelligence (AI) in the New York Times over a 30-year period. General
interest, awareness, and discussion about AI has waxed and waned since the
field was founded in 1956. We present a set of measures that captures levels of
engagement, measures of pessimism and optimism, the prevalence of specific
hopes and concerns, and topics that are linked to discussions about AI over
decades. We find that discussion of AI has increased sharply since 2009, and
that these discussions have been consistently more optimistic than pessimistic.
However, when we examine specific concerns, we find that worries of loss of
control of AI, ethical concerns for AI, and the negative impact of AI on work
have grown in recent years. We also find that hopes for AI in healthcare and
education have increased over time.
| 2,016 | Computation and Language |
An Iterative Transfer Learning Based Ensemble Technique for Automatic
Short Answer Grading | Automatic short answer grading (ASAG) techniques are designed to
automatically assess short answers to questions in natural language, having a
length of a few words to a few sentences. Supervised ASAG techniques have been
demonstrated to be effective but suffer from a couple of key practical
limitations. They are greatly reliant on instructor-provided model answers and
need labeled training data in the form of graded student answers for every
assessment task. To overcome these, in this paper, we introduce an ASAG
technique with two novel features. First, we propose an iterative technique on an
ensemble of (a) a text classifier of student answers and (b) a classifier using
numeric features derived from various similarity measures with respect to model
answers. Second, we employ canonical correlation analysis based transfer
learning on a common feature representation to build the classifier ensemble
for questions having no labelled data. The proposed technique handsomely beats
all winning supervised entries on the SCIENTSBANK dataset from the Student
Response Analysis task of SemEval 2013. Additionally, we demonstrate
generalizability and benefits of the proposed technique through evaluation on
multiple ASAG datasets from different subject topics and standards.
| 2,016 | Computation and Language |
Grammatical Templates: Improving Text Difficulty Evaluation for Language
Learners | Language students are most engaged while reading texts at an appropriate
difficulty level. However, existing methods of evaluating text difficulty focus
mainly on vocabulary and do not prioritize grammatical features, hence they do
not work well for language learners with limited knowledge of grammar. In this
paper, we introduce grammatical templates, the expert-identified units of
grammar that students learn from class, as an important feature of text
difficulty evaluation. Experimental classification results show that
grammatical template features significantly improve text difficulty prediction
accuracy over baseline readability features by 7.4%. Moreover, we build a
simple and human-understandable text difficulty evaluation approach with 87.7%
accuracy, using only 5 grammatical template features.
| 2,016 | Computation and Language |
Interactive Spoken Content Retrieval by Deep Reinforcement Learning | User-machine interaction is important for spoken content retrieval. For text
content retrieval, the user can easily scan through and select from a list of
retrieved items. This is impossible for spoken content retrieval, because the
retrieved items are difficult to show on screen. Besides, due to the high
degree of uncertainty for speech recognition, the retrieval results can be very
noisy. One way to counter such difficulties is through user-machine
interaction. The machine can take different actions to interact with the user
to obtain better retrieval results before showing them to the user. The suitable
actions depend on the retrieval status, for example requesting extra
information from the user or returning a list of topics for the user to select, etc.
In our previous work, some hand-crafted states estimated from the present
retrieval results are used to determine the proper actions. In this paper, we
propose to use Deep-Q-Learning techniques instead to determine the machine
actions for interactive spoken content retrieval. Deep-Q-Learning bypasses the
need for estimation of the hand-crafted states, and directly determine the best
action based on the present retrieval status, even without any human knowledge.
It is shown to achieve significantly better performance compared with the
previous hand-crafted states.
| 2,016 | Computation and Language |
Select-Additive Learning: Improving Generalization in Multimodal
Sentiment Analysis | Multimodal sentiment analysis is drawing an increasing amount of attention
these days. It enables mining of opinions in video reviews which are now
available aplenty on online platforms. However, multimodal sentiment analysis
has only a few high-quality data sets annotated for training machine learning
algorithms. These limited resources restrict the generalizability of models,
where, for example, the unique characteristics of a few speakers (e.g., wearing
glasses) may become a confounding factor for the sentiment classification task.
In this paper, we propose a Select-Additive Learning (SAL) procedure that
improves the generalizability of trained neural networks for multimodal
sentiment analysis. In our experiments, we show that our SAL approach improves
prediction accuracy significantly in all three modalities (verbal, acoustic,
visual), as well as in their fusion. Our results show that SAL, even when
trained on one dataset, achieves good generalization across two new test
datasets.
| 2,017 | Computation and Language |
Multilinear Grammar: Ranks and Interpretations | Multilinear Grammar provides a framework for integrating the many different
syntagmatic structures of language into a coherent semiotically based Rank
Interpretation Architecture, with default linear grammars at each rank. The
architecture defines a Sui Generis Condition on ranks, from discourse through
utterance and phrasal structures to the word, with its sub-ranks of morphology
and phonology. Each rank has unique structures and its own semantic-pragmatic
and prosodic-phonetic interpretation models. Default computational models for
each rank are proposed, based on a Procedural Plausibility Condition:
incremental processing in linear time with finite working memory. We suggest
that the Rank Interpretation Architecture and its multilinear properties
provide systematic design features of human languages, contrasting with
unordered lists of key properties or single structural properties at one rank,
such as recursion, which have previously been put forward as language
design features. The framework provides a realistic background for the gradual
development of complexity in the phylogeny and ontogeny of language, and
clarifies a range of challenges for the evaluation of realistic linguistic
theories and applications. The empirical objective of the paper is to
demonstrate unique multilinear properties at each rank and thereby motivate the
Multilinear Grammar and Rank Interpretation Architecture framework as a
coherent approach to capturing the complexity of human languages in the
simplest possible way.
| 2,017 | Computation and Language |
The MGB-2 Challenge: Arabic Multi-Dialect Broadcast Media Recognition | This paper describes the Arabic Multi-Genre Broadcast (MGB-2) Challenge for
SLT-2016. Unlike last year's English MGB Challenge, which focused on
recognition of diverse TV genres, this year, the challenge has an emphasis on
handling the diversity in dialect in Arabic speech. Audio data comes from 19
distinct programmes from the Aljazeera Arabic TV channel between March 2005 and
December 2015. Programmes are split into three groups: conversations,
interviews, and reports. A total of 1,200 hours have been released with lightly
supervised transcriptions for the acoustic modelling. For language modelling,
we made available over 110M words crawled from the Aljazeera Arabic website
Aljazeera.net covering a 10-year period (2000-2011). Two lexicons have been
provided, one phoneme based and one grapheme based. Finally, two tasks were
proposed for this year's challenge: standard speech transcription, and word
alignment. This paper describes the task data and evaluation process used in
the MGB challenge, and summarises the results obtained.
| 2,019 | Computation and Language |
Multi-view Dimensionality Reduction for Dialect Identification of Arabic
Broadcast Speech | In this work, we present a new Vector Space Model (VSM) of speech utterances
for the task of spoken dialect identification (DID). Generally, DID systems are built
using two sets of features that are extracted from speech utterances; acoustic
and phonetic. The acoustic and phonetic features are used to form vector
representations of speech utterances in an attempt to encode information about
the spoken dialects. The Phonotactic and Acoustic VSMs, thus formed, are used
for the task of DID. The aim of this paper is to construct a single VSM that
encodes information about spoken dialects from both the Phonotactic and
Acoustic VSMs. Given the two views of the data, we make use of a well known
multi-view dimensionality reduction technique known as Canonical Correlation
Analysis (CCA), to form a single vector representation for each speech
utterance that encodes dialect specific discriminative information from both
the phonetic and acoustic representations. We refer to this approach as feature
space combination approach and show that our CCA based feature vector
representation performs better on the Arabic DID task than the phonetic and
acoustic feature representations used alone. We also present the feature space
combination approach as a viable alternative to the model based combination
approach, where two DID systems are built using the two VSMs (Phonotactic and
Acoustic) and the final prediction score is the output score combination from
the two systems.
| 2,016 | Computation and Language |
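A small sketch of the feature-space combination step with scikit-learn's CCA, assuming precomputed phonotactic and acoustic utterance matrices (random placeholders here); the dimensionalities and the concatenation choice are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_phonotactic = rng.standard_normal((200, 120))   # one row per utterance (placeholder VSM)
X_acoustic = rng.standard_normal((200, 80))       # e.g., i-vector style acoustic features

cca = CCA(n_components=50)
U, V = cca.fit_transform(X_phonotactic, X_acoustic)

# Feature-space combination: concatenate the projected views into a single utterance
# representation and train one dialect-ID classifier on it.
combined = np.concatenate([U, V], axis=1)
print(combined.shape)
```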
Advances in All-Neural Speech Recognition | This paper advances the design of CTC-based all-neural (or end-to-end) speech
recognizers. We propose a novel symbol inventory, and a novel iterated-CTC
method in which a second system is used to transform a noisy initial output
into a cleaner version. We present a number of stabilization and initialization
methods we have found useful in training these networks. We evaluate our system
on the commonly used NIST 2000 conversational telephony test set, and
significantly exceed the previously published performance of similar systems,
both with and without the use of an external language model and decoding
technology.
| 2,017 | Computation and Language |
Enhanced LSTM for Natural Language Inference | Reasoning and inference are central to human and artificial intelligence.
Modeling inference in human language is very challenging. With the availability
of large annotated data (Bowman et al., 2015), it has recently become feasible
to train neural network based inference models, which have shown to be very
effective. In this paper, we present a new state-of-the-art result, achieving
the accuracy of 88.6% on the Stanford Natural Language Inference Dataset.
Unlike the previous top models that use very complicated network architectures,
we first demonstrate that carefully designing sequential inference models based
on chain LSTMs can outperform all previous models. Based on this, we further
show that by explicitly considering recursive architectures in both local
inference modeling and inference composition, we achieve additional
improvement. Particularly, incorporating syntactic parsing information
contributes to our best result---it further improves the performance even when
added to the already very strong model.
| 2,020 | Computation and Language |