Titles | Abstracts | Years | Categories |
---|---|---|---|
Towards an Arabic-English Machine-Translation Based on Semantic Web | Communication tools make the world like a small village, and as a consequence
people can contact others who come from different societies or who speak
different languages. This communication cannot happen effectively without
machine translation, because machine-translation systems can be found anytime
and everywhere. A number of studies have developed machine translation between
English and many other languages, but Arabic has not yet been considered.
Therefore, we aim to outline a roadmap for our proposed translation machine,
which provides enhanced Arabic-English translation based on the Semantic Web.
| 2017 | Computation and Language |
Machine-Translation History and Evolution: Survey for Arabic-English
Translations | As a result of the rapid changes in information and communication technology
(ICT), the world has become a small village where people from all over the
world connect with each other in dialogue and communication via the Internet.
Communication has also become a daily routine activity due to globalization,
as companies and even universities become global, residing across country
borders. As a result, translation has become a necessary activity in this
connected world. ICT has made it possible for a student in one country to take
a course, or even a degree, from a different country anytime and anywhere. The
resulting communication still needs language as a means of helping the
receiver understand the contents of the sent message. People need automated
translation applications because human translators are hard to find at all
times, and human translation is very expensive compared to the automated
translation process. Several lines of research describe the electronic process
of machine translation. In this paper, the authors study some of this previous
research and explore some of the tools needed for machine translation. This
research contributes to the machine-translation area by giving future
researchers a summary of machine-translation research groups and by shedding
light on the importance of the translation mechanism.
| 2017 | Computation and Language |
DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language
Understanding | Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely
used on NLP tasks to capture the long-term and local dependencies,
respectively. Attention mechanisms have recently attracted enormous interest
due to their highly parallelizable computation, significantly less training
time, and flexibility in modeling dependencies. We propose a novel attention
mechanism in which the attention between elements from input sequence(s) is
directional and multi-dimensional (i.e., feature-wise). A light-weight neural
net, "Directional Self-Attention Network (DiSAN)", is then proposed to learn
sentence embedding, based solely on the proposed attention without any RNN/CNN
structure. DiSAN is only composed of a directional self-attention with temporal
order encoded, followed by a multi-dimensional attention that compresses the
sequence into a vector representation. Despite its simple form, DiSAN
outperforms complicated RNN models on both prediction quality and time
efficiency. It achieves the best test accuracy among all sentence encoding
methods and improves the most recent best result by 1.02% on the Stanford
Natural Language Inference (SNLI) dataset, and shows state-of-the-art test
accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language
inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK),
Customer Review, MPQA, TREC question-type classification and Subjectivity
(SUBJ) datasets.
| 2017 | Computation and Language |
Synapse at CAp 2017 NER challenge: Fasttext CRF | We present our system for the CAp 2017 NER challenge which is about named
entity recognition on French tweets. Our system leverages unsupervised learning
on a larger dataset of French tweets to learn features feeding a CRF model. It
was ranked first without using any gazetteer or structured external data, with
an F-measure of 58.89%. To the best of our knowledge, it is the first system
to use fasttext embeddings (which include subword representations) and an
embedding-based sentence representation for NER.
| 2017 | Computation and Language |
Self-Attentive Residual Decoder for Neural Machine Translation | Neural sequence-to-sequence networks with attention have achieved remarkable
performance for machine translation. One of the reasons for their effectiveness
is their ability to capture relevant source-side contextual information at each
time-step prediction through an attention mechanism. However, the target-side
context is based solely on the sequence model which, in practice, is prone to
a recency bias and lacks the ability to effectively capture non-sequential
dependencies among words. To address this limitation, we propose a
target-side-attentive residual recurrent network for decoding, where attention
over previous words contributes directly to the prediction of the next word.
The residual learning facilitates the flow of information from the distant past
and is able to emphasize any of the previously translated words, hence it gains
access to a wider context. The proposed model outperforms a neural MT baseline
as well as a memory and self-attention network on three language pairs. The
analysis of the attention learned by the decoder confirms that it emphasizes a
wider context, and that it captures syntactic-like structures.
| 2018 | Computation and Language |
A New Semantic Theory of Natural Language | Formal Semantics and Distributional Semantics are two important semantic
frameworks in Natural Language Processing (NLP). Cognitive Semantics belongs to
the movement of Cognitive Linguistics, which is based on contemporary cognitive
science. Each framework could deal with some meaning phenomena, but none of
them fulfills all requirements proposed by applications. A unified semantic
theory characterizing all important language phenomena has both theoretical and
practical significance; however, although many attempts have been made in
recent years, no existing theory has achieved this goal yet.
This article introduces a new semantic theory that has the potential to
characterize most of the important meaning phenomena of natural language and to
fulfill most of the necessary requirements for philosophical analysis and for
NLP applications. The theory is based on a unified representation of
information, and constructs a kind of mathematical model called cognitive model
to interpret natural language expressions in a compositional manner. It accepts
the empirical assumption of Cognitive Semantics, and overcomes most
shortcomings of Formal Semantics and of Distributional Semantics. The theory,
however, is not a simple combination of existing theories, but an extensive
generalization of classic logic and Formal Semantics. It inherits nearly all
advantages of Formal Semantics, and also provides descriptive contents for
objects and events as fine-grained as possible, descriptive contents which
represent the results of human cognition.
| 2017 | Computation and Language |
Cross-Platform Emoji Interpretation: Analysis, a Solution, and
Applications | Most social media platforms are largely based on text, and users often write
posts to describe where they are, what they are seeing, and how they are
feeling. Because written text lacks the emotional cues of spoken and
face-to-face dialogue, ambiguities are common in written language. This problem
is exacerbated by the short, informal nature of many social media posts. To
bypass this issue, a suite of special characters called "emojis," which are
small pictograms, is embedded within the text. Many emojis are small
depictions of facial expressions designed to help disambiguate the emotional
meaning of the text. However, a new ambiguity arises in the way that emojis are
rendered. Every platform (Windows, Mac, and Android, to name a few) renders
emojis according to their own style. In fact, it has been shown that some
emojis can be rendered so differently that they look "happy" on some platforms,
and "sad" on others. In this work, we use real-world data to verify the
existence of this problem. We verify that the usage of the same emoji can be
significantly different across platforms, with some emojis exhibiting different
sentiment polarities on different platforms. We propose a solution to identify
the intended emoji based on the platform-specific nature of the emoji used by
the author of a social media post. We apply our solution to sentiment analysis,
a task that can benefit from the emoji calibration technique we use in this
work. We conduct experiments to evaluate the effectiveness of the mapping in
this task.
| 2017 | Computation and Language |
WOAH: Preliminaries to Zero-shot Ontology Learning for Conversational
Agents | This paper presents the Weighted Ontology Approximation Heuristic
(WOAH), a novel zero-shot approach to ontology estimation for conversational
agent development environments. This methodology extracts verbs and nouns
separately from data by distilling the dependencies obtained and applying
similarity and sparsity metrics to generate an ontology estimation configurable
in terms of the level of generalization.
| 2017 | Computation and Language |
A Deep Generative Framework for Paraphrase Generation | Paraphrase generation is an important problem in NLP, especially in question
answering, information retrieval, information extraction, and conversation systems,
to name a few. In this paper, we address the problem of generating paraphrases
automatically. Our proposed method is based on a combination of deep generative
models (VAE) with sequence-to-sequence models (LSTM) to generate paraphrases,
given an input sentence. Traditional VAEs when combined with recurrent neural
networks can generate free text but they are not suitable for paraphrase
generation for a given sentence. We address this problem by conditioning both
the encoder and decoder sides of the VAE on the original sentence, so that the
model can generate paraphrases of the given sentence. Unlike most existing
models, our model is simple and modular, and can generate multiple paraphrases
for a given sentence. Quantitative evaluation of the proposed method on a
benchmark paraphrase dataset demonstrates its efficacy and its significant
performance improvement over state-of-the-art methods, whereas qualitative
human evaluation indicates that the generated paraphrases are well-formed,
grammatically correct, and relevant to the input sentence. Furthermore, we
evaluate our method on a newly released question paraphrase dataset, and
establish a new baseline for future research.
| 2017 | Computation and Language |
Unsupervised Aspect Term Extraction with B-LSTM & CRF using
Automatically Labelled Datasets | Aspect Term Extraction (ATE) identifies opinionated aspect terms in texts and
is one of the tasks in the SemEval Aspect Based Sentiment Analysis (ABSA)
contest. The small amount of available datasets for supervised ATE and the
costly human annotation for aspect term labelling give rise to the need for
unsupervised ATE. In this paper, we introduce an architecture that achieves
top-ranking performance for supervised ATE. Moreover, it can be used
efficiently as a feature extractor and classifier for unsupervised ATE. Our
second contribution is a method to automatically construct datasets for ATE. We
train a classifier on our automatically labelled datasets and evaluate it on
the human annotated SemEval ABSA test sets. Compared to a strong rule-based
baseline, we obtain a dramatically higher F-score and attain precision values
above 80%. Our unsupervised method beats the supervised ABSA baseline from
SemEval, while preserving high precision scores.
| 2017 | Computation and Language |
Transcribing Against Time | We investigate the problem of manually correcting errors from an automatic
speech transcript in a cost-sensitive fashion. This is done by specifying a
fixed time budget, and then automatically choosing location and size of
segments for correction such that the number of corrected errors is maximized.
The core components, as suggested by previous research [1], are a utility model
that estimates the number of errors in a particular segment, and a cost model
that estimates annotation effort for the segment. In this work we propose a
dynamic updating framework that allows for the training of cost models during
the ongoing transcription process. This removes the need for transcriber
enrollment prior to the actual transcription, and improves correction
efficiency by allowing highly transcriber-adaptive cost modeling. We first
confirm and analyze the improvements afforded by this method in a simulated
study. We then conduct a realistic user study, observing efficiency
improvements of 15% relative on average, and 42% for the participants who
deviated most strongly from our initial, transcriber-agnostic cost model.
Moreover, we find that our updating framework can capture dynamically changing
factors, such as transcriber fatigue and topic familiarity, which we observe to
have a large influence on the transcriber's working behavior.
| 2017 | Computation and Language |
And That's A Fact: Distinguishing Factual and Emotional Argumentation in
Online Dialogue | We investigate the characteristics of factual and emotional argumentation
styles observed in online debates. Using an annotated set of "factual" and
"feeling" debate forum posts, we extract patterns that are highly correlated
with factual and emotional arguments, and then apply a bootstrapping
methodology to find new patterns in a larger pool of unannotated forum posts.
This process automatically produces a large set of patterns representing
linguistic expressions that are highly correlated with factual and emotional
language. Finally, we analyze the most discriminating patterns to better
understand the defining characteristics of factual and emotional arguments.
| 2017 | Computation and Language |
Are you serious?: Rhetorical Questions and Sarcasm in Social Media
Dialog | Effective models of social dialog must understand a broad range of rhetorical
and figurative devices. Rhetorical questions (RQs) are a type of figurative
language whose aim is to achieve a pragmatic goal, such as structuring an
argument, being persuasive, emphasizing a point, or being ironic. While there
are computational models for other forms of figurative language, rhetorical
questions have received little attention to date. We expand a small dataset
from previous work, presenting a corpus of 10,270 RQs from debate forums and
Twitter that represent different discourse functions. We show that we can
clearly distinguish between RQs and sincere questions (0.76 F1). We then show
that RQs can be used both sarcastically and non-sarcastically, observing that
non-sarcastic (other) uses of RQs are frequently argumentative in forums, and
persuasive in tweets. We present experiments to distinguish between these uses
of RQs using SVM and LSTM models that represent linguistic features and
post-level context, achieving results as high as 0.76 F1 for "sarcastic" and
0.77 F1 for "other" in forums, and 0.83 F1 for both "sarcastic" and "other" in
tweets. We supplement our quantitative experiments with an in-depth
characterization of the linguistic variation in RQs.
| 2017 | Computation and Language |
Harvesting Creative Templates for Generating Stylistically Varied
Restaurant Reviews | Many of the creative and figurative elements that make language exciting are
lost in translation in current natural language generation engines. In this
paper, we explore a method to harvest templates from positive and negative
reviews in the restaurant domain, with the goal of vastly expanding the types
of stylistic variation available to the natural language generator. We learn
hyperbolic adjective patterns that are representative of the strongly-valenced
expressive language commonly used in either positive or negative reviews. We
then identify and delexicalize entities, and use heuristics to extract
generation templates from review sentences. We evaluate the learned templates
against more traditional review templates, using subjective measures of
"convincingness", "interestingness", and "naturalness". Our results show that
the learned templates score highly on these measures. Finally, we analyze the
linguistic categories that characterize the learned positive and negative
templates. We plan to use the learned templates to improve the conversational
style of dialogue systems in the restaurant domain.
| 2017 | Computation and Language |
Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue | The use of irony and sarcasm in social media allows us to study them at scale
for the first time. However, their diversity has made it difficult to construct
a high-quality corpus of sarcasm in dialogue. Here, we describe the process of
creating a large-scale, highly diverse corpus of online debate forums
dialogue, and our novel methods for operationalizing classes of sarcasm in the
form of rhetorical questions and hyperbole. We show that we can use
lexico-syntactic cues to reliably retrieve sarcastic utterances with high
accuracy. To demonstrate the properties and quality of our corpus, we conduct
supervised learning experiments with simple features, and show that we achieve
both higher precision and F than previous work on sarcasm in debate forums
dialogue. We apply a weakly-supervised linguistic pattern learner and
qualitatively analyze the linguistic differences in each class.
| 2017 | Computation and Language |
Combining Search with Structured Data to Create a More Engaging User
Experience in Open Domain Dialogue | The greatest challenges in building sophisticated open-domain conversational
agents arise directly from the potential for ongoing mixed-initiative
multi-turn dialogues, which do not follow a particular plan or pursue a
particular fixed information need. In order to make coherent conversational
contributions in this context, a conversational agent must be able to track the
types and attributes of the entities under discussion in the conversation and
know how they are related. In some cases, the agent can rely on structured
information sources to help identify the relevant semantic relations and
produce a turn, but in other cases, the only content available comes from
search, and it may be unclear which semantic relations hold between the search
results and the discourse context. A further constraint is that the system must
produce its contribution to the ongoing conversation in real-time. This paper
describes our experience building SlugBot for the 2017 Alexa Prize, and
discusses how we leveraged search and structured data from different sources to
help SlugBot produce dialogic turns and carry on conversations whose length
over the semi-finals user evaluation period averaged 8:17 minutes.
| 2017 | Computation and Language |
"How May I Help You?": Modeling Twitter Customer Service Conversations
Using Fine-Grained Dialogue Acts | Given the increasing popularity of customer service dialogue on Twitter,
analysis of conversation data is essential to understand trends in customer and
agent behavior for the purpose of automating customer service interactions. In
this work, we develop a novel taxonomy of fine-grained "dialogue acts"
frequently observed in customer service, showcasing acts that are more suited
to the domain than the more generic existing taxonomies. Using a sequential
SVM-HMM model, we model conversation flow, predicting the dialogue act of a
given turn in real-time. We characterize differences between customer and agent
behavior in Twitter customer service conversations, and investigate the effect
of testing our system on different customer service industries. Finally, we use
a data-driven approach to predict important conversation outcomes: customer
satisfaction, customer frustration, and overall problem resolution. We show
that the type and location of certain dialogue acts in a conversation have a
significant effect on the probability of desirable and undesirable outcomes,
and present actionable rules based on our findings. The patterns and rules we
derive can be used as guidelines for outcome-driven automated customer service
platforms.
| 2017 | Computation and Language |
Acquiring Background Knowledge to Improve Moral Value Prediction | In this paper, we address the problem of detecting expressions of moral
values in tweets using content analysis. This is a particularly challenging
problem because moral values are often only implicitly signaled in language,
and tweets contain little contextual information due to length constraints. To
address these obstacles, we present a novel approach to automatically acquire
background knowledge from an external knowledge base to enrich input texts and
thus improve moral value prediction. By combining basic text features with
background knowledge, our overall context-aware framework achieves performance
comparable to a single human annotator. To the best of our knowledge, this is
the first attempt to incorporate background knowledge for the prediction of
implicit psychological variables in the area of computational social science.
| 2017 | Computation and Language |
Order-Preserving Abstractive Summarization for Spoken Content Based on
Connectionist Temporal Classification | Connectionist temporal classification (CTC) is a powerful approach for
sequence-to-sequence learning, and has been popularly used in speech
recognition. The central ideas of CTC include adding a label "blank" during
training. With this mechanism, CTC eliminates the need of segment alignment,
and hence has been applied to various sequence-to-sequence learning problems.
In this work, we applied CTC to abstractive summarization for spoken content.
The "blank" in this case implies the corresponding input data are less
important or noisy; thus it can be ignored. This approach was shown to
outperform the existing methods in term of ROUGE scores over Chinese Gigaword
and MATBN corpora. This approach also has the nice property that the ordering
of words or characters in the input documents can be better preserved in the
generated summaries.
| 2017 | Computation and Language |
Role of Morphology Injection in Statistical Machine Translation | Phrase-based Statistical models are more commonly used as they perform
optimally in terms of both translation quality and system complexity.
Hindi and in general all Indian languages are morphologically richer than
English. Hence, even though Phrase-based systems perform very well for the less
divergent language pairs, for English to Indian language translation, we need
more linguistic information (such as morphology, parse tree, parts of speech
tags, etc.) on the source side. Factored models seem to be useful in this case,
as they consider a word as a vector of factors. These factors can
contain any information about the surface word and use it while translating.
Hence, the objective of this work is to handle morphological inflections in
Hindi and Marathi using Factored translation models while translating from
English. SMT approaches face the problem of data sparsity while translating
into a morphologically rich language. It is very unlikely for a parallel corpus
to contain all morphological forms of words. We propose a solution to generate
these unseen morphological forms and inject them into original training
corpora. In this paper, we study factored models and the problem of sparseness
in context of translation to morphologically rich languages. We propose a
simple and effective solution which is based on enriching the input with
various morphological forms of words. We observe that morphology injection
improves the quality of translation in terms of both adequacy and fluency. We
verify this with the experiments on two morphologically rich languages: Hindi
and Marathi, while translating from English.
| 2017 | Computation and Language |
AISHELL-1: An Open-Source Mandarin Speech Corpus and A Speech
Recognition Baseline | An open-source Mandarin speech corpus called AISHELL-1 is released. It is by
far the largest corpus suitable for conducting speech recognition research and
building speech recognition systems for Mandarin. The recording procedure,
including the audio capturing devices and environments, is presented in
detail. The preparation of the related resources, including the transcriptions
and lexicon, is described. The corpus is released with a Kaldi recipe.
Experimental results imply that the quality of the audio recordings and
transcriptions is promising.
| 2017 | Computation and Language |
Data Innovation for International Development: An overview of natural
language processing for qualitative data analysis | Availability, collection and access to quantitative data, as well as its
limitations, often make qualitative data the resource upon which development
programs heavily rely. Both traditional interview data and social media
analysis can provide rich contextual information and are essential for
research, appraisal, monitoring and evaluation. These data may be difficult to
process and analyze both systematically and at scale. This, in turn, limits
the ability to make timely, data-driven decisions, which is essential in
fast-evolving complex social systems. In this paper, we discuss the potential of
using natural language processing to systematize analysis of qualitative data,
and to inform quick decision-making in the development context. We illustrate
this with interview data generated in a format of micro-narratives for the UNDP
Fragments of Impact project.
| 2017 | Computation and Language |
Character Distributions of Classical Chinese Literary Texts: Zipf's Law,
Genres, and Epochs | We collect 14 representative corpora for major periods in Chinese history in
this study. These corpora include poetic works produced in several dynasties,
novels of the Ming and Qing dynasties, and essays and news reports written in
modern Chinese. The time span of these corpora ranges between 1046 BCE and 2007
CE. We analyze their character and word distributions from the viewpoint of
Zipf's law, and look for factors that affect the deviations and similarities
between their Zipfian curves. Genres and epochs demonstrated their influence
in our analyses. Specifically, the character distributions of poetic works
produced between 618 CE and 1644 CE exhibit striking similarity. In addition, although
texts of the same dynasty may tend to use the same set of characters, their
character distributions still deviate from each other.
| 2017 | Computation and Language |
Hierarchical Gated Recurrent Neural Tensor Network for Answer Triggering | In this paper, we focus on the problem of answer triggering addressed by
Yang et al. (2015), which is a critical component for a real-world question
answering system. We employ a hierarchical gated recurrent neural tensor
(HGRNT) model to capture both the context information and the deep
interactions between the candidate answers and the question. Our model achieves
an F value of 42.6%, which surpasses the baseline by over 10%.
| 2017 | Computation and Language |
Unwritten Languages Demand Attention Too! Word Discovery with
Encoder-Decoder Models | Word discovery is the task of extracting words from unsegmented text. In this
paper we examine to what extent neural networks can be applied to this task in
a realistic unwritten language scenario, where only small corpora and limited
annotations are available. We investigate two scenarios: one with no
supervision and another with limited supervision with access to the most
frequent words. Obtained results show that it is possible to retrieve at least
27% of the gold standard vocabulary by training an encoder-decoder neural
machine translation system with only 5,157 sentences. This result is close to
those obtained with a task-specific Bayesian nonparametric model. Moreover, our
approach has the advantage of generating translation alignments, which could be
used to create a bilingual lexicon. As a future perspective, this approach is
also well suited to work directly from speech.
| 2017 | Computation and Language |
Flexible Computing Services for Comparisons and Analyses of Classical
Chinese Poetry | We collect nine corpora of representative Chinese poetry spanning the period
from 1046 BCE to 1644 CE for studying the history of Chinese words,
collocations, and patterns. By flexibly integrating our own tools, we are able
to provide new perspectives for approaching our goals. We illustrate the ideas
with two examples. The first example shows a new way to compare the word
preferences of poets, and the second example demonstrates how we can utilize
our corpora in historical studies of Chinese words. We show the viability of
the tools for academic research, and we wish to make them helpful for
enriching existing Chinese dictionaries as well.
| 2017 | Computation and Language |
Word Vector Enrichment of Low Frequency Words in the Bag-of-Words Model
for Short Text Multi-class Classification Problems | The bag-of-words model is a standard representation of text for many linear
classifier learners. In many problem domains, linear classifiers are preferred
over more complex models due to their efficiency, robustness and
interpretability, and the bag-of-words text representation can capture
sufficient information for linear classifiers to make highly accurate
predictions. However, in settings where there is a large vocabulary, large
variance in the frequency of terms in the training corpus, many classes and
very short text (e.g., single sentences or document titles) the bag-of-words
representation becomes extremely sparse, and this can reduce the accuracy of
classifiers. A particular issue in such settings is that short texts tend to
contain infrequently occurring or rare terms which lack class-conditional
evidence. In this work we introduce a method for enriching the bag-of-words
model by complementing such rare term information with related terms from both
general and domain-specific Word Vector models. By reducing sparseness in the
bag-of-words models, our enrichment approach achieves improved classification
over several baseline classifiers in a variety of text classification problems.
Our approach is also efficient because it requires no change to the linear
classifier before or during training, since bag-of-words enrichment applies
only to text being classified.
| 2017 | Computation and Language |
Toward a full-scale neural machine translation in production: the
Booking.com use case | While some remarkable progress has been made in neural machine translation
(NMT) research, there have not been many reports on its development and
evaluation in practice. This paper tries to fill this gap by presenting some of
our findings from building an in-house travel-domain NMT system in a
large-scale e-commerce setting. The three major topics that we cover are optimization
and training (including different optimization strategies and corpus sizes),
handling real-world content and evaluating results.
| 2017 | Computation and Language |
Limitations of Cross-Lingual Learning from Image Search | Cross-lingual representation learning is an important step in making NLP
scale to all the world's languages. Recent work on bilingual lexicon induction
suggests that it is possible to learn cross-lingual representations of words
based on similarities between images associated with these words. However, that
work focused on the translation of selected nouns only. In our work, we
investigate whether the meaning of other parts-of-speech, in particular
adjectives and verbs, can be learned in the same way. We also experiment with
combining the representations learned from visual data with embeddings learned
from textual data. Our experiments across five language pairs indicate that
previous work does not scale to the problem of learning cross-lingual
representations beyond simple nouns.
| 2017 | Computation and Language |
Sequence to Sequence Learning for Event Prediction | This paper presents an approach to the task of predicting an event
description from a preceding sentence in a text. Our approach explores
sequence-to-sequence learning using a bidirectional multi-layer recurrent
neural network. Our approach substantially outperforms previous work in terms
of the BLEU score on two datasets derived from WikiHow and DeScript
respectively. Since the BLEU score is not easy to interpret as a measure of
event prediction, we complement our study with a second evaluation that
exploits the rich linguistic annotation of gold paraphrase sets of events.
| 2017 | Computation and Language |
Iterative Policy Learning in End-to-End Trainable Task-Oriented Neural
Dialog Models | In this paper, we present a deep reinforcement learning (RL) framework for
iterative dialog policy optimization in end-to-end task-oriented dialog
systems. Popular approaches to learning dialog policy with RL include letting a
dialog agent learn against a user simulator. Building a reliable user
simulator, however, is not trivial, often as difficult as building a good
dialog agent. We address this challenge by jointly optimizing the dialog agent
and the user simulator with deep RL by simulating dialogs between the two
agents. We first bootstrap a basic dialog agent and a basic user simulator by
learning directly from dialog corpora with supervised training. We then improve
them further by letting the two agents to conduct task-oriented dialogs and
iteratively optimizing their policies with deep RL. Both the dialog agent and
the user simulator are designed with neural network models that can be trained
end-to-end. Our experiment results show that the proposed method leads to
promising improvements in task success rate and total task reward compared to
supervised training and single-agent RL training baseline models.
| 2017 | Computation and Language |
Paraphrasing verbal metonymy through computational methods | Verbal metonymy has received relatively scarce attention in the field of
computational linguistics despite the fact that a model to accurately
paraphrase metonymy has applications both in academia and the technology
sector. The method described in this paper makes use of data from the British
National Corpus in order to create word vectors, find instances of verbal
metonymy and generate potential paraphrases. Two different ways of creating
word vectors are evaluated in this study: Continuous bag of words and
Skip-grams. Skip-grams are found to outperform the Continuous bag of words
approach. Furthermore, the Skip-gram model is found to operate with
better-than-chance accuracy and there is a strong positive relationship (phi
coefficient = 0.61) between the model's classification and human judgement of
the ranked paraphrases. This study lends credence to the viability of modelling
verbal metonymy through computational methods based on distributional
semantics.
| 2017 | Computation and Language |
Dynamic Oracle for Neural Machine Translation in Decoding Phase | The past several years have witnessed the rapid progress of end-to-end Neural
Machine Translation (NMT). However, there exists a discrepancy between training
and inference in NMT decoding, which may lead to serious problems since
the model might be in a part of the state space it has never seen during
training. To address the issue, Scheduled Sampling has been proposed. However,
there are certain limitations in Scheduled Sampling and we propose two dynamic
oracle-based methods to improve it. We manage to mitigate the discrepancy by
changing the training process towards a less guided scheme and meanwhile
aggregating the oracle's demonstrations. Experimental results show that the
proposed approaches improve translation quality over a standard NMT system.
| 2017 | Computation and Language |
A Fast and Accurate Vietnamese Word Segmenter | We propose a novel approach to Vietnamese word segmentation. Our approach is
based on the Single Classification Ripple Down Rules methodology (Compton and
Jansen, 1990), where rules are stored in an exception structure and new rules
are only added to correct segmentation errors given by existing rules.
Experimental results on the benchmark Vietnamese treebank show that our
approach outperforms previous state-of-the-art approaches JVnSegmenter,
vnTokenizer, DongDu and UETsegmenter in terms of both accuracy and performance
speed. Our code is open-source and available at:
https://github.com/datquocnguyen/RDRsegmenter.
| 2017 | Computation and Language |
Aspect-Based Relational Sentiment Analysis Using a Stacked Neural
Network Architecture | Sentiment analysis can be regarded as a relation extraction problem in which
the sentiment of some opinion holder towards a certain aspect of a product,
theme or event needs to be extracted. We present a novel neural architecture
for sentiment analysis as a relation extraction problem that addresses this
problem by dividing it into three subtasks: i) identification of aspect and
opinion terms, ii) labeling of opinion terms with a sentiment, and iii)
extraction of relations between opinion terms and aspect terms. For each
subtask, we propose a neural network based component and combine all of them
into a complete system for relational sentiment analysis. The component for
aspect and opinion term extraction is a hybrid architecture consisting of a
recurrent neural network stacked on top of a convolutional neural network. This
approach outperforms a standard convolutional deep neural architecture as well
as a recurrent network architecture and performs competitively compared to
other methods on two datasets of annotated customer reviews. To extract
sentiments for individual opinion terms, we propose a recurrent architecture in
combination with word distance features and achieve promising results,
outperforming a majority baseline by 18% accuracy and providing the first
results for the USAGE dataset. Our relation extraction component outperforms
the current state-of-the-art in aspect-opinion relation extraction by 15%
F-Measure.
| 2017 | Computation and Language |
Aspect-Based Sentiment Analysis Using a Two-Step Neural Network
Architecture | The World Wide Web holds a wealth of information in the form of unstructured
texts such as customer reviews for products, events and more. By extracting and
analyzing the expressed opinions in customer reviews in a fine-grained way,
valuable opportunities and insights for customers and businesses can be gained.
We propose a neural network based system to address the task of Aspect-Based
Sentiment Analysis to compete in Task 2 of the ESWC-2016 Challenge on Semantic
Sentiment Analysis. Our proposed architecture divides the task in two subtasks:
aspect term extraction and aspect-specific sentiment extraction. This approach
is flexible in that it allows each subtask to be addressed independently. As a first
step, a recurrent neural network is used to extract aspects from a text by
framing the problem as a sequence labeling task. In a second step, a recurrent
network processes each extracted aspect with respect to its context and
predicts a sentiment label. The system uses pretrained semantic word embedding
features which we experimentally enhance with semantic knowledge extracted from
WordNet. Further features extracted from SenticNet prove to be beneficial for
the extraction of sentiment labels. As the best performing system in its
category, our proposed system proves to be an effective approach for
Aspect-Based Sentiment Analysis.
| 2017 | Computation and Language |
Improving Opinion-Target Extraction with Character-Level Word Embeddings | Fine-grained sentiment analysis is receiving increasing attention in recent
years. Extracting opinion target expressions (OTE) in reviews is often an
important step in fine-grained, aspect-based sentiment analysis. Retrieving
this information from user-generated text, however, can be difficult. Customer
reviews, for instance, are prone to contain misspelled words and are difficult
to process due to their domain-specific language. In this work, we investigate
whether character-level models can improve the performance for the
identification of opinion target expressions. We integrate information about
the character structure of a word into a sequence labeling system using
character-level word embeddings and show their positive impact on the system's
performance. Specifically, we obtain an increase by 3.3 points F1-score with
respect to our baseline model. In further experiments, we reveal encoded
character patterns of the learned embeddings and give a nuanced view of the
performance differences of both models.
| 2017 | Computation and Language |
MetaLDA: a Topic Model that Efficiently Incorporates Meta information | Besides the text content, documents and their associated words usually come
with rich sets of meta information, such as categories of documents and
semantic/syntactic features of words, like those encoded in word embeddings.
Incorporating such meta information directly into the generative process of
topic models can improve modelling accuracy and topic quality, especially in
the case where the word-occurrence information in the training data is
insufficient. In this paper, we present a topic model, called MetaLDA, which is
able to leverage either document or word meta information, or both of them
jointly. With two data augmentation techniques, we can derive an efficient
Gibbs sampling algorithm, which benefits from the fully local conjugacy of the
model. Moreover, the algorithm is favoured by the sparsity of the meta
information. Extensive experiments on several real world datasets demonstrate
that our model achieves comparable or improved performance in terms of both
perplexity and topic quality, particularly in handling sparse texts. In
addition, compared with other models using meta information, our model runs
significantly faster.
| 2017 | Computation and Language |
Neural Networks for Text Correction and Completion in Keyboard Decoding | Despite the ubiquity of mobile and wearable text messaging applications, the
problem of keyboard text decoding is not tackled sufficiently in the light of
the enormous success of the deep learning Recurrent Neural Network (RNN) and
Convolutional Neural Networks (CNN) for natural language understanding. In
particular, the fact that keyboard decoders should operate on devices with
memory and processor resource constraints makes it challenging to deploy
industrial-scale deep neural network (DNN) models. This paper proposes a
sequence-to-sequence neural attention network system for automatic text
correction and completion. Given an erroneous sequence, our model encodes
character level hidden representations and then decodes the revised sequence
thus enabling auto-correction and completion. We achieve this by a combination
of a character-level CNN and gated recurrent unit (GRU) encoder along with a
word-level gated recurrent unit (GRU) attention decoder. Unlike traditional
language models that learn from billions of words, our corpus size is only 12
million words; an order of magnitude smaller. The memory footprint of our
learnt model for inference and prediction is also an order of magnitude smaller
than the conventional language model based text decoders. We report baseline
performance for neural keyboard decoders in such a limited domain. Our models
achieve a word-level accuracy of $90\%$ and a character error rate (CER) of
$2.4\%$ over the Twitter typo dataset. We present a novel dataset of noisy to
corrected mappings by inducing the noise distribution from the Twitter data
over the OpenSubtitles 2009 dataset; on which our model predicts with a word
level accuracy of $98\%$ and sequence accuracy of $68.9\%$. In our user study,
our model achieved an average CER of $2.6\%$, compared with the state-of-the-art
non-neural touch-screen keyboard decoder at a CER of $1.6\%$.
| 2017 | Computation and Language |
Language Modeling with Highway LSTM | Language models (LMs) based on Long Short Term Memory (LSTM) have shown good
gains in many automatic speech recognition tasks. In this paper, we extend an
LSTM by adding highway networks inside an LSTM and use the resulting Highway
LSTM (HW-LSTM) model for language modeling. The added highway networks increase
the depth in the time dimension. Since a typical LSTM has two internal states,
a memory cell and a hidden state, we compare various types of HW-LSTM by adding
highway networks onto the memory cell and/or the hidden state. Experimental
results on English broadcast news and conversational telephone speech
recognition show that the proposed HW-LSTM LM improves speech recognition
accuracy on top of a strong LSTM LM baseline. We report 5.1% and 9.9% on the
Switchboard and CallHome subsets of the Hub5 2000 evaluation, which reaches the
best performance numbers reported on these tasks to date.
| 2017 | Computation and Language |
A Recorded Debating Dataset | This paper describes an English audio and textual dataset of debating
speeches, a unique resource for the growing research field of computational
argumentation and debating technologies. We detail the process of speech
recording by professional debaters, the transcription of the speeches with an
Automatic Speech Recognition (ASR) system, their consequent automatic
processing to produce a text that is more "NLP-friendly", and in parallel --
the manual transcription of the speeches in order to produce gold-standard
"reference" transcripts. We release 60 speeches on various controversial
topics, each in five formats corresponding to the different stages in the
production of the data. The intention is to allow utilizing this resource for
multiple research purposes, be it the addition of in-domain training data for a
debate-specific ASR system, or applying argumentation mining on either noisy or
clean debate transcripts. We intend to make further releases of this data in
the future.
| 2018 | Computation and Language |
Think Globally, Embed Locally --- Locally Linear Meta-embedding of Words | Distributed word embeddings have shown superior performances in numerous
Natural Language Processing (NLP) tasks. However, their performances vary
significantly across different tasks, implying that the word embeddings learnt
by those methods capture complementary aspects of lexical semantics. Therefore,
we believe that it is important to combine the existing word embeddings to
produce more accurate and complete \emph{meta-embeddings} of words. For this
purpose, we propose an unsupervised locally linear meta-embedding learning
method that takes pre-trained word embeddings as the input, and produces more
accurate meta embeddings. Unlike previously proposed meta-embedding learning
methods that learn a global projection over all words in a vocabulary, our
proposed method is sensitive to the differences in local neighbourhoods of the
individual source word embeddings. Moreover, we show that vector concatenation,
a previously proposed highly competitive baseline approach for integrating word
embeddings, can be derived as a special case of the proposed method.
Experimental results on semantic similarity, word analogy, relation
classification, and short-text classification tasks show that our
meta-embeddings significantly outperform prior methods in several benchmark
datasets, establishing a new state of the art for meta-embeddings.
| 2017 | Computation and Language |
Why PairDiff works? -- A Mathematical Analysis of Bilinear Relational
Compositional Operators for Analogy Detection | Representing the semantic relations that exist between two given words (or
entities) is an important first step in a wide-range of NLP applications such
as analogical reasoning, knowledge base completion and relational information
retrieval. A simple, yet surprisingly accurate method for representing a
relation between two words is to compute the vector offset (PairDiff) between
their corresponding word embeddings. Despite the empirical success, it remains
unclear as to whether PairDiff is the best operator for obtaining a relational
representation from word embeddings. We conduct a theoretical analysis of
generalised bilinear operators that can be used to measure the $\ell_{2}$
relational distance between two word-pairs. We show that, if the word
embeddings are standardised and uncorrelated, such an operator will be
independent of bilinear terms, and can be simplified to a linear form, where
PairDiff is a special case. For numerous word embedding types, we empirically
verify the uncorrelation assumption, demonstrating the general applicability of
our theoretical result. Moreover, we experimentally discover PairDiff from the
bilinear relation composition operator on several benchmark analogy datasets.
| 2017 | Computation and Language |
Updating the silent speech challenge benchmark with deep learning | The 2010 Silent Speech Challenge benchmark is updated with new results
obtained in a Deep Learning strategy, using the same input features and
decoding strategy as in the original article. A Word Error Rate of 6.4% is
obtained, compared to the published value of 17.4%. Additional results
comparing new auto-encoder-based features with the original features at reduced
dimensionality, as well as decoding scenarios on two different language models,
are also presented. The Silent Speech Challenge archive has been updated to
contain both the original and the new auto-encoder features, in addition to the
original raw data.
| 2017 | Computation and Language |
De-identification of medical records using conditional random fields and
long short-term memory networks | The CEGS N-GRID 2016 Shared Task 1 in Clinical Natural Language Processing
focuses on the de-identification of psychiatric evaluation records. This paper
describes two participating systems of our team, based on conditional random
fields (CRFs) and long short-term memory networks (LSTMs). A pre-processing
module was introduced for sentence detection and tokenization before
de-identification. For CRFs, manually extracted rich features were utilized to
train the model. For LSTMs, a character-level bi-directional LSTM network was
applied to represent tokens and classify tags for each token, following which a
decoding layer was stacked to decode the most probable protected health
information (PHI) terms. The LSTM-based system attained an i2b2 strict
micro-F1 measure of 89.86%, which was higher than that of the CRF-based
system.
| 2017 | Computation and Language |
Constructing a Hierarchical User Interest Structure based on User
Profiles | The interests of individual internet users fall into a hierarchical structure
which is useful with regard to building personalized searches and
recommendations. Most studies on this subject construct the interest hierarchy
of a single person from the document perspective. In this study, we constructed
the user interest hierarchy via user profiles. We organized 433,397 user
interests, referred to here as "attentions", into a user attention network
(UAN) from 200 million user profiles; we then applied the Louvain algorithm to
detect hierarchical clusters in these attentions. Finally, a 26-level hierarchy
with 34,676 clusters was obtained. We found that these attention clusters were
aggregated according to certain topics as opposed to the hyponymy-relation
based conceptual ontologies. The topics can be entities or concepts, and the
relations were not restrained by hyponymy. The concept relativity encapsulated
in the user's interest can be captured by labeling the attention clusters with
corresponding concepts.
| 2017 | Computation and Language |
On the Use of Machine Translation-Based Approaches for Vietnamese
Diacritic Restoration | This paper presents an empirical study of two machine translation-based
approaches for Vietnamese diacritic restoration problem, including phrase-based
and neural-based machine translation models. This is the first work that
applies neural-based machine translation method to this problem and gives a
thorough comparison to the phrase-based machine translation method which is the
current state-of-the-art method for this problem. On a large dataset, the
phrase-based approach has an accuracy of 97.32% while that of the neural-based
approach is 96.15%. While the neural-based method has a slightly lower
accuracy, it is about twice as fast as the phrase-based method in terms of
inference speed. Moreover, the neural-based machine translation method has
much room for future improvement, such as incorporating pre-trained word embeddings
and collecting more training data.
| 2017 | Computation and Language |
Deconvolutional Latent-Variable Model for Text Sequence Matching | A latent-variable model is introduced for text matching, inferring sentence
representations by jointly optimizing generative and discriminative objectives.
To alleviate typical optimization challenges in latent-variable models for
text, we employ deconvolutional networks as the sequence decoder (generator),
providing learned latent codes with more semantic information and better
generalization. Our model, trained in an unsupervised manner, yields stronger
empirical predictive performance than a decoder based on Long Short-Term Memory
(LSTM), with fewer parameters and considerably faster training. Further, we
apply it to text sequence-matching problems. The proposed model significantly
outperforms several strong sentence-encoding baselines, especially in the
semi-supervised setting.
| 2017 | Computation and Language |
Speech Recognition Challenge in the Wild: Arabic MGB-3 | This paper describes the Arabic MGB-3 Challenge - Arabic Speech Recognition
in the Wild. Unlike last year's Arabic MGB-2 Challenge, for which the
recognition task was based on more than 1,200 hours of broadcast TV news
recordings from Aljazeera Arabic TV programs, MGB-3 emphasises dialectal Arabic
using a multi-genre collection of Egyptian YouTube videos. Seven genres were
used for the data collection: comedy, cooking, family/kids, fashion, drama,
sports, and science (TEDx). A total of 16 hours of videos, split evenly across
the different genres, were divided into adaptation, development and evaluation
data sets. The Arabic MGB-Challenge comprised two tasks: A) Speech
transcription, evaluated on the MGB-3 test set, along with the 10 hour MGB-2
test set to report progress on the MGB-2 evaluation; B) Arabic dialect
identification, introduced this year in order to distinguish between four major
Arabic dialects - Egyptian, Levantine, North African, Gulf, as well as Modern
Standard Arabic. Two hours of audio per dialect were released for development
and a further two hours were used for evaluation. For dialect identification,
both lexical features and i-vector bottleneck features were shared with
participants in addition to the raw audio recordings. Overall, thirteen teams
submitted ten systems to the challenge. We outline the approaches adopted in
each system, and summarise the evaluation results.
| 2017 | Computation and Language |
Retrofitting Concept Vector Representations of Medical Concepts to
Improve Estimates of Semantic Similarity and Relatedness | Estimation of semantic similarity and relatedness between biomedical concepts
has utility for many informatics applications. Automated methods fall into two
categories: methods based on distributional statistics drawn from text corpora,
and methods using the structure of existing knowledge resources. Methods in the
former category disregard taxonomic structure, while those in the latter fail
to consider semantically relevant empirical information. In this paper, we
present a method that retrofits distributional context vector representations
of biomedical concepts using structural information from the UMLS
Metathesaurus, such that the similarity between vector representations of
linked concepts is augmented. We evaluated it on the UMNSRS benchmark. Our
results demonstrate that retrofitting of concept vector representations leads
to better correlation with human raters for both similarity and relatedness,
surpassing the best results reported to date. They also demonstrate a clear
improvement in performance on this reference standard for retrofitted vector
representations, as compared to those without retrofitting.
| 2,017 | Computation and Language |
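As an illustration of the retrofitting idea above, here is a minimal sketch in the spirit of graph-based retrofitting (an assumption; the paper's exact objective and its handling of UMLS relations are not reproduced): vectors of linked concepts are iteratively pulled toward each other while staying close to their original distributional estimates.

```python
import numpy as np

def retrofit(vectors, edges, alpha=1.0, beta=1.0, iterations=10):
    """Pull vectors of linked concepts together (hypothetical illustration).

    vectors: dict concept -> np.ndarray (distributional estimates)
    edges:   dict concept -> list of linked concepts (e.g., ontology relations)
    """
    new_vecs = {c: v.copy() for c, v in vectors.items()}
    for _ in range(iterations):
        for concept, neighbours in edges.items():
            neighbours = [n for n in neighbours if n in new_vecs]
            if concept not in vectors or not neighbours:
                continue
            # Weighted average of the original vector and the neighbours' current vectors.
            numerator = alpha * vectors[concept] + beta * sum(new_vecs[n] for n in neighbours)
            new_vecs[concept] = numerator / (alpha + beta * len(neighbours))
    return new_vecs

# Toy example with 3-dimensional vectors and one linked concept pair.
vecs = {"myocardial_infarction": np.array([1.0, 0.0, 0.0]),
        "heart_attack": np.array([0.0, 1.0, 0.0])}
links = {"myocardial_infarction": ["heart_attack"],
         "heart_attack": ["myocardial_infarction"]}
print(retrofit(vecs, links)["heart_attack"])
```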
Inducing Distant Supervision in Suggestion Mining through Part-of-Speech
Embeddings | Mining suggestion-expressing sentences from a given text is a less
investigated sentence classification task, and therefore lacks hand labeled
benchmark datasets. In this work, we propose and evaluate two approaches for
distant supervision in suggestion mining. The distant supervision is obtained
through a large silver standard dataset, constructed using the text from
wikiHow and Wikipedia. Both approaches use an LSTM-based neural network
architecture to learn a classification model for suggestion mining, but vary in
their method to use the silver standard dataset. The first approach directly
trains the classifier using this dataset, while the second approach only learns
word embeddings from this dataset. In the second approach, we also learn POS
embeddings, which interestingly gives the best classification accuracy.
| 2,017 | Computation and Language |
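A minimal sketch of the second approach as described: word embeddings (which could be pre-trained on the silver-standard corpus) are concatenated with jointly learned POS embeddings and fed to an LSTM classifier. The layer sizes and the PyTorch framing are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SuggestionClassifier(nn.Module):
    """LSTM classifier over concatenated word and POS embeddings (illustrative)."""

    def __init__(self, vocab_size, pos_size, word_dim=100, pos_dim=25, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)  # could be initialised from the silver-standard corpus
        self.pos_emb = nn.Embedding(pos_size, pos_dim)      # POS embeddings learned jointly
        self.lstm = nn.LSTM(word_dim + pos_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                     # suggestion vs. non-suggestion

    def forward(self, word_ids, pos_ids):
        x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])

model = SuggestionClassifier(vocab_size=10000, pos_size=18)
logits = model(torch.randint(0, 10000, (4, 12)), torch.randint(0, 18, (4, 12)))
print(logits.shape)  # torch.Size([4, 2])
```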
Analyzing users' sentiment towards popular consumer industries and
brands on Twitter | Social media serves as a unified platform for users to express their thoughts
on subjects ranging from their daily lives to their opinion on consumer brands
and products. These users wield an enormous influence in shaping the opinions
of other consumers and influence brand perception, brand loyalty and brand
advocacy. In this paper, we analyze the opinion of 19M Twitter users towards 62
popular industries, encompassing 12,898 enterprise and consumer brands, as well
as associated subject matter topics, via sentiment analysis of 330M tweets over
a period spanning a month. We find that users tend to be most positive towards
manufacturing and most negative towards service industries. In addition, they
tend to be more positive or negative when interacting with brands than
generally on Twitter. We also find that sentiment towards brands within an
industry varies greatly and we demonstrate this using two industries as use
cases. In addition, we discover that there is no strong correlation between
topic sentiments of different industries, demonstrating that topic sentiments
are highly dependent on the context of the industry that they are mentioned in.
We demonstrate the value of such an analysis in order to assess the impact of
brands on social media. We hope that this initial study will prove valuable for
both researchers and companies in understanding users' perception of
industries, brands and associated topics and encourage more research in this
field.
| 2,017 | Computation and Language |
Learning Domain-Specific Word Embeddings from Sparse Cybersecurity Texts | Word embedding is a Natural Language Processing (NLP) technique that
automatically maps words from a vocabulary to vectors of real numbers in an
embedding space. It has been widely used in recent years to boost the
performance of a variety of NLP tasks such as Named Entity Recognition,
Syntactic Parsing and Sentiment Analysis. Classic word embedding methods such
as Word2Vec and GloVe work well when they are given a large text corpus. When
the input texts are sparse as in many specialized domains (e.g.,
cybersecurity), these methods often fail to produce high-quality vectors. In
this paper, we describe a novel method to train domain-specific word embeddings
from sparse texts. In addition to domain texts, our method also leverages
diverse types of domain knowledge such as domain vocabulary and semantic
relations. Specifically, we first propose a general framework to encode
diverse types of domain knowledge as text annotations. Then we develop a novel
Word Annotation Embedding (WAE) algorithm to incorporate diverse types of text
annotations in word embedding. We have evaluated our method on two
cybersecurity text corpora: a malware description corpus and a Common
Vulnerability and Exposure (CVE) corpus. Our evaluation results have
demonstrated the effectiveness of our method in learning domain-specific word
embeddings.
| 2,017 | Computation and Language |
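The actual WAE algorithm is more involved, but the idea of encoding domain knowledge as text annotations can be illustrated roughly as follows: interleave knowledge-derived annotation tokens with the original words and train a standard embedding model on the annotated corpus. The knowledge entries, the annotation scheme, and the use of gensim's Word2Vec here are all assumptions for illustration, not the paper's method.

```python
from gensim.models import Word2Vec

# Hypothetical domain knowledge: vocabulary classes and semantic relations.
domain_vocab = {"ransomware": "MALWARE_TYPE", "cve-2017-0144": "VULNERABILITY"}
relations = {("ransomware", "encrypts"): "REL_ACTION"}

def annotate(sentence):
    """Interleave knowledge-derived annotation tokens with the original words."""
    out = []
    for i, w in enumerate(sentence):
        out.append(w)
        if w in domain_vocab:
            out.append("ANN_" + domain_vocab[w])
        if i + 1 < len(sentence) and (w, sentence[i + 1]) in relations:
            out.append("ANN_" + relations[(w, sentence[i + 1])])
    return out

corpus = [["ransomware", "encrypts", "files", "exploiting", "cve-2017-0144"]]
annotated = [annotate(s) for s in corpus]
model = Word2Vec(annotated, vector_size=50, window=5, min_count=1, epochs=50)
print(model.wv["ransomware"][:5])
```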
WERd: Using Social Text Spelling Variants for Evaluating Dialectal
Speech Recognition | We study the problem of evaluating automatic speech recognition (ASR) systems
that target dialectal speech input. A major challenge in this case is that the
orthography of dialects is typically not standardized. From an ASR evaluation
perspective, this means that there is no clear gold standard for the expected
output, and several possible outputs could be considered correct according to
different human annotators, which makes standard word error rate (WER)
inadequate as an evaluation metric. Such a situation is typical for machine
translation (MT), and thus we borrow ideas from an MT evaluation metric, namely
TERp, an extension of translation error rate which is closely-related to WER.
In particular, in the process of comparing a hypothesis to a reference, we make
use of spelling variants for words and phrases, which we mine from Twitter in
an unsupervised fashion. Our experiments with evaluating ASR output for
Egyptian Arabic, and further manual analysis, show that the resulting WERd
(i.e., WER for dialects) metric, a variant of TERp, is more adequate than WER
for evaluating dialectal ASR.
| 2,017 | Computation and Language |
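A simplified sketch of the spirit of WERd (not the TERp-based metric itself): a word-level edit distance in which a hypothesis word matching any mined spelling variant of the reference word counts as correct. The variant table and example words below are hypothetical.

```python
def werd(reference, hypothesis, variants):
    """Word error rate that accepts mined spelling variants as matches (illustrative).

    variants: dict mapping a reference word to a set of acceptable surface forms.
    """
    def match(r, h):
        return h == r or h in variants.get(r, set())

    n, m = len(reference), len(hypothesis)
    # Standard Levenshtein DP over words, with zero-cost substitutions for variants.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if match(reference[i - 1], hypothesis[j - 1]) else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution (free for variants)
    return d[n][m] / max(n, 1)

ref = "qultu lahu".split()
hyp = "2ultu laho".split()
print(werd(ref, hyp, {"qultu": {"2ultu", "kultu"}, "lahu": {"laho"}}))  # 0.0
```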
Improving Language Modelling with Noise-contrastive estimation | Neural language models do not scale well when the vocabulary is large.
Noise-contrastive estimation (NCE) is a sampling-based method that allows for
fast learning with large vocabularies. Although NCE has shown promising
performance in neural machine translation, it was considered to be an
unsuccessful approach for language modelling. A sufficient investigation of the
hyperparameters in the NCE-based neural language models was also missing. In
this paper, we showed that NCE can be a successful approach in neural language
modelling when the hyperparameters of a neural network are tuned appropriately.
We introduced the 'search-then-converge' learning rate schedule for NCE and
designed a heuristic that specifies how to use this schedule. The impact of the
other important hyperparameters, such as the dropout rate and the weight
initialisation range, was also demonstrated. We showed that appropriate tuning
of NCE-based neural language models outperforms the state-of-the-art
single-model methods on a popular benchmark.
| 2,017 | Computation and Language |
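The 'search-then-converge' schedule referred to above is, in its classic form (Darken and Moody), a learning rate that stays roughly flat during an initial search phase and then decays like 1/t. A tiny sketch, with the base rate and switch-over step chosen arbitrarily (the paper's heuristic for setting them is not reproduced here):

```python
def search_then_converge(lr0, tau):
    """Darken & Moody style schedule: flat "search" phase, then ~1/t decay (illustrative)."""
    def lr(t):
        return lr0 / (1.0 + t / tau)
    return lr

schedule = search_then_converge(lr0=1.0, tau=5000)   # tau = step where decay takes over (assumed)
for step in (0, 1000, 5000, 50000):
    print(step, round(schedule(step), 4))
```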
Sentence Correction Based on Large-scale Language Modelling | With the further development of informatization, more and more data is stored
in the form of text. Some text is lost during generation and transmission. The
paper aims to establish a language model based on a large-scale corpus to
complete the restoration of missing text. We introduce a novel measurement to
find the missing words, and a way of establishing a comprehensive candidate
lexicon to insert the correct choice of words. The paper also introduces some
effective optimization methods, which largely improve the efficiency of the
text restoration and shorten the time needed to process 1,000 sentences to 3.6
seconds. Keywords: language model, sentence correction, word imputation,
parallel optimization
| 2,017 | Computation and Language |
Neural Machine Translation | Draft of textbook chapter on neural machine translation. A comprehensive
treatment of the topic, ranging from introduction to neural networks,
computation graphs, description of the currently dominant attentional
sequence-to-sequence model, recent refinements, alternative architectures and
challenges. Written as chapter for the textbook Statistical Machine
Translation. Used in the JHU Fall 2017 class on machine translation.
| 2,017 | Computation and Language |
Attention-based Wav2Text with Feature Transfer Learning | Conventional automatic speech recognition (ASR) typically performs
multi-level pattern recognition tasks that map the acoustic speech waveform
into a hierarchy of speech units. However, it is widely known that information
loss in earlier stages can propagate through later stages. After the
resurgence of deep learning, interest has emerged in the possibility of
developing a purely end-to-end ASR system from the raw waveform to the
transcription without any predefined alignments and hand-engineered models.
However, the successful attempts in end-to-end architecture still used
spectral-based features, while the successful attempts in using raw waveform
were still based on the hybrid deep neural network - Hidden Markov model
(DNN-HMM) framework. In this paper, we construct the first end-to-end
attention-based encoder-decoder model to process directly from raw speech
waveform to the text transcription. We call the model "Attention-based
Wav2Text". To assist the training process of the end-to-end model, we propose
to utilize feature transfer learning. Experimental results also reveal that
the proposed Attention-based Wav2Text model directly with raw waveform could
achieve a better result in comparison with the attentional encoder-decoder
model trained on standard front-end filterbank features.
| 2,017 | Computation and Language |
Challenging Neural Dialogue Models with Natural Data: Memory Networks
Fail on Incremental Phenomena | Natural, spontaneous dialogue proceeds incrementally on a word-by-word basis;
and it contains many sorts of disfluency such as mid-utterance/sentence
hesitations, interruptions, and self-corrections. But training data for machine
learning approaches to dialogue processing is often either cleaned-up or wholly
synthetic in order to avoid such phenomena. The question then arises of how
well systems trained on such clean data generalise to real spontaneous
dialogue, or indeed whether they are trainable at all on naturally occurring
dialogue data. To answer this question, we created a new corpus called bAbI+ by
systematically adding natural spontaneous incremental dialogue phenomena such
as restarts and self-corrections to the Facebook AI Research's bAbI dialogues
dataset. We then explore the performance of a state-of-the-art retrieval model,
MemN2N, on this more natural dataset. Results show that the semantic accuracy
of the MemN2N model drops drastically; and that although it is in principle
able to learn to process the constructions in bAbI+, it needs an impractical
amount of training data to do so. Finally, we go on to show that an
incremental, semantic parser -- DyLan -- shows 100% semantic accuracy on both
bAbI and bAbI+, highlighting the generalisation properties of linguistically
informed dialogue models.
| 2,017 | Computation and Language |
Bootstrapping incremental dialogue systems from minimal data: the
generalisation power of dialogue grammars | We investigate an end-to-end method for automatically inducing task-based
dialogue systems from small amounts of unannotated dialogue data. It combines
an incremental semantic grammar - Dynamic Syntax and Type Theory with Records
(DS-TTR) - with Reinforcement Learning (RL), where language generation and
dialogue management are a joint decision problem. The systems thus produced are
incremental: dialogues are processed word-by-word, shown previously to be
essential in supporting natural, spontaneous dialogue. We hypothesised that the
rich linguistic knowledge within the grammar should enable a combinatorially
large number of dialogue variations to be processed, even when trained on very
few dialogues. Our experiments show that our model can process 74% of the
Facebook AI bAbI dataset even when trained on only 0.13% of the data (5
dialogues). It can in addition process 65% of bAbI+, a corpus we created by
systematically adding incremental dialogue phenomena such as restarts and
self-corrections to bAbI. We compare our model with a state-of-the-art
retrieval model, MemN2N. We find that, in terms of semantic accuracy, MemN2N
shows very poor robustness to the bAbI+ transformations even when trained on
the full bAbI dataset.
| 2,017 | Computation and Language |
Mitigating the Impact of Speech Recognition Errors on Chatbot using
Sequence-to-Sequence Model | We apply a sequence-to-sequence model to mitigate the impact of speech
recognition errors on open-domain end-to-end dialog generation. We cast the
task as a domain adaptation problem where ASR transcriptions and original text
are in two different domains. Our proposed model includes an individual encoder
for each domain and makes their hidden states similar to ensure the decoder
predicts the same dialog text. The method shows that the sequence-to-sequence
model can learn that an ASR transcription and the original text have the same
meaning and thus eliminate the speech recognition errors. Experimental results
on the Cornell movie dialog dataset demonstrate that the domain adaptation
system helps the spoken dialog system generate responses more similar to the
original text answers.
| 2,017 | Computation and Language |
Long Short-Term Memory for Japanese Word Segmentation | This study presents a Long Short-Term Memory (LSTM) neural network approach
to Japanese word segmentation (JWS). Previous studies on Chinese word
segmentation (CWS) succeeded in using recurrent neural networks such as LSTM
and gated recurrent units (GRU). However, in contrast to Chinese, Japanese
includes several character types, such as hiragana, katakana, and kanji, that
produce orthographic variations and increase the difficulty of word
segmentation. Additionally, it is important for JWS tasks to consider a global
context, and yet traditional JWS approaches rely on local features. In order to
address this problem, this study proposes employing an LSTM-based approach to
JWS. The experimental results indicate that the proposed model achieves
state-of-the-art accuracy with respect to various Japanese corpora.
| 2,018 | Computation and Language |
Language Independent Acquisition of Abbreviations | This paper addresses automatic extraction of abbreviations (encompassing
acronyms and initialisms) and corresponding long-form expansions from plain
unstructured text. We create and are going to release a multilingual resource
for abbreviations and their corresponding expansions, built automatically by
exploiting Wikipedia redirect and disambiguation pages, that can be used as a
benchmark for evaluation. We address a shortcoming of previous work where only
the redirect pages were used, and so every abbreviation had only a single
expansion, even though multiple different expansions are possible for many of
the abbreviations. We also develop a principled machine learning based approach
to scoring expansion candidates using different techniques such as indicators
of near synonymy, topical relatedness, and surface similarity. We show improved
performance over seven languages, including two with a non-Latin alphabet,
relative to strong baselines.
| 2,017 | Computation and Language |
Identifying Phrasemes via Interlingual Association Measures -- A
Data-driven Approach on Dependency-parsed and Word-aligned Parallel Corpora | This is a preprint of the article "Identifying Phrasemes via Interlingual
Association Measures" that was presented in February 2016 at the LeKo (Lexical
combinations and typified speech in a multilingual context) conference in
Innsbruck.
| 2,017 | Computation and Language |
Learning Context-Sensitive Convolutional Filters for Text Processing | Convolutional neural networks (CNNs) have recently emerged as a popular
building block for natural language processing (NLP). Despite their success,
most existing CNN models employed in NLP share the same learned (and static)
set of filters for all input sentences. In this paper, we consider an approach
of using a small meta network to learn context-sensitive convolutional filters
for text processing. The role of the meta network is to abstract the contextual
information of a sentence or document into a set of input-aware filters. We
further generalize this framework to model sentence pairs, where a
bidirectional filter generation mechanism is introduced to encapsulate
co-dependent sentence representations. In our benchmarks on four different
tasks, including ontology classification, sentiment analysis, answer sentence
selection, and paraphrase identification, our proposed model, a modified CNN
with context-sensitive filters, consistently outperforms the standard CNN and
attention-based CNN baselines. By visualizing the learned context-sensitive
filters, we further validate and rationalize the effectiveness of proposed
framework.
| 2,018 | Computation and Language |
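A rough sketch of the core idea: a small meta network maps a sentence summary to a bank of convolutional filters, which are then applied to that same sentence. The summary function, filter sizes, and pooling below are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextSensitiveConv(nn.Module):
    """A small meta network generates per-sentence 1D conv filters (illustrative sketch)."""

    def __init__(self, emb_dim=50, n_filters=16, width=3):
        super().__init__()
        self.n_filters, self.width, self.emb_dim = n_filters, width, emb_dim
        # Meta network: sentence summary -> flattened filter bank.
        self.meta = nn.Linear(emb_dim, n_filters * emb_dim * width)

    def forward(self, x):                        # x: (batch, seq_len, emb_dim)
        context = x.mean(dim=1)                  # crude sentence summary
        outputs = []
        for b in range(x.size(0)):               # one filter bank per sentence
            filt = self.meta(context[b]).view(self.n_filters, self.emb_dim, self.width)
            conv = F.conv1d(x[b].t().unsqueeze(0), filt, padding=self.width // 2)
            outputs.append(conv.max(dim=-1).values)   # max-pool over time
        return torch.cat(outputs, dim=0)          # (batch, n_filters)

layer = ContextSensitiveConv()
print(layer(torch.randn(2, 10, 50)).shape)  # torch.Size([2, 16])
```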
Dataset for the First Evaluation on Chinese Machine Reading
Comprehension | Machine Reading Comprehension (MRC) has become enormously popular recently
and has attracted a lot of attention. However, existing reading comprehension
datasets are mostly in English. To add diversity in reading comprehension
datasets, in this paper we propose a new Chinese reading comprehension dataset
for accelerating related research in the community. The proposed dataset
contains two different types: cloze-style reading comprehension and user query
reading comprehension, associated with large-scale training data as well as
human-annotated validation and hidden test set. Along with this dataset, we
also hosted the first Evaluation on Chinese Machine Reading Comprehension
(CMRC-2017) and successfully attracted tens of participants, which suggests the
potential impact of this dataset.
| 2,018 | Computation and Language |
Using objective words in the reviews to improve the colloquial Arabic
sentiment analysis | One of the main difficulties in sentiment analysis of the Arabic language is
the presence of the colloquialism. In this paper, we examine the effect of
using objective words in conjunction with sentimental words on sentiment
classification for the colloquial Arabic reviews, specifically Jordanian
colloquial reviews. The reviews often include both sentimental and objective
words; however, most existing sentiment analysis models ignore the
objective words as they are considered useless. In this work, we created two
lexicons: the first includes the colloquial sentimental words and compound
phrases, while the other contains the objective words associated with values of
sentiment tendency based on a particular estimation method. We used these
lexicons to extract sentiment features that would be training input to the
Support Vector Machines (SVM) to classify the sentiment polarity of the
reviews. The reviews dataset was collected manually from the JEERAN website.
The results of the experiments show that the proposed approach improves the
polarity classification in comparison to two baseline models, with an accuracy
of 95.6%.
| 2,017 | Computation and Language |
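A toy sketch of the described pipeline: aggregate scores from a sentimental lexicon and an objective-word lexicon into a small feature vector and train an SVM on it. The lexicon entries here are English placeholders for the colloquial Arabic lexicons, and the feature set is a deliberate simplification.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical lexicons: sentimental words with polarity, objective words with tendency scores.
sentiment_lex = {"excellent": 1.0, "terrible": -1.0}
objective_lex = {"wait": -0.4, "garden": 0.3}

def features(review_tokens):
    """Aggregate lexicon scores into a small feature vector."""
    sent = [sentiment_lex[t] for t in review_tokens if t in sentiment_lex]
    obj = [objective_lex[t] for t in review_tokens if t in objective_lex]
    return [sum(sent), len(sent), sum(obj), len(obj)]

reviews = [["excellent", "food", "garden"], ["terrible", "wait"],
           ["excellent", "service"], ["terrible", "food", "wait"]]
labels = [1, 0, 1, 0]                      # 1 = positive, 0 = negative
X = np.array([features(r) for r in reviews])
clf = LinearSVC().fit(X, labels)
print(clf.predict(np.array([features(["garden", "excellent"])])))  # expected: [1]
```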
EZLearn: Exploiting Organic Supervision in Large-Scale Data Annotation | Many real-world applications require automated data annotation, such as
identifying tissue origins based on gene expressions and classifying images
into semantic categories. Annotation classes are often numerous and subject to
changes over time, and annotating examples has become the major bottleneck for
supervised learning methods. In science and other high-value domains, large
repositories of data samples are often available, together with two sources of
organic supervision: a lexicon for the annotation classes, and text
descriptions that accompany some data samples. Distant supervision has emerged
as a promising paradigm for exploiting such indirect supervision by
automatically annotating examples where the text description contains a class
mention in the lexicon. However, due to linguistic variations and ambiguities,
such training data is inherently noisy, which limits the accuracy of this
approach. In this paper, we introduce an auxiliary natural language processing
system for the text modality, and incorporate co-training to reduce noise and
augment signal in distant supervision. Without using any manually labeled data,
our EZLearn system learned to accurately annotate data samples in functional
genomics and scientific figure comprehension, substantially outperforming
state-of-the-art supervised methods trained on tens of thousands of annotated
examples.
| 2,018 | Computation and Language |
Long Text Generation via Adversarial Training with Leaked Information | Automatically generating coherent and semantically meaningful text has many
applications in machine translation, dialogue systems, image captioning, etc.
Recently, by combining with policy gradient, Generative Adversarial Nets (GAN)
that use a discriminative model to guide the training of the generative model
as a reinforcement learning policy have shown promising results in text
generation. However, the scalar guiding signal is only available after the
entire text has been generated and lacks intermediate information about text
structure during the generative process. As such, it limits its success when
the length of the generated text samples is long (more than 20 words). In this
paper, we propose a new framework, called LeakGAN, to address the problem for
long text generation. We allow the discriminative net to leak its own
high-level extracted features to the generative net to further help the
guidance. The generator incorporates such informative signals into all
generation steps through an additional Manager module, which takes the
extracted features of current generated words and outputs a latent vector to
guide the Worker module for next-word generation. Our extensive experiments on
synthetic data and various real-world tasks with Turing test demonstrate that
LeakGAN is highly effective in long text generation and also improves the
performance in short text generation scenarios. More importantly, without any
supervision, LeakGAN would be able to implicitly learn sentence structures only
through the interaction between Manager and Worker.
| 2,017 | Computation and Language |
Methodology and Results for the Competition on Semantic Similarity
Evaluation and Entailment Recognition for PROPOR 2016 | In this paper, we present the methodology and the results obtained by our
teams, dubbed Blue Man Group, in the ASSIN (from the Portuguese {\it
Avalia\c{c}\~ao de Similaridade Sem\^antica e Infer\^encia Textual})
competition, held at PROPOR 2016\footnote{International Conference on the
Computational Processing of the Portuguese Language -
http://propor2016.di.fc.ul.pt/}. Our team's strategy consisted of evaluating
methods based on semantic word vectors, following two distinct directions: 1)
to make use of low-dimensional, compact, feature sets, and 2) deep
learning-based strategies dealing with high-dimensional feature vectors.
Evaluation results demonstrated that the first strategy was more promising, so
the results from the second strategy were discarded. As a result, by
considering the best run of each of the six teams, we have been able to achieve
the best accuracy and F1 values in entailment recognition, in the Brazilian
Portuguese set, and the best F1 score overall. In the semantic similarity task,
our team was ranked second in the Brazilian Portuguese set, and third
considering both sets.
| 2,017 | Computation and Language |
Identifying Restaurant Features via Sentiment Analysis on Yelp Reviews | Many people use Yelp to find a good restaurant. Nonetheless, with only an
overall rating for each restaurant, Yelp does not offer enough information for
independently judging its various aspects such as environment, service or
flavor. In this paper, we introduced a machine learning based method to
characterize such aspects for particular types of restaurants. The main
approach used in this paper is to use a support vector machine (SVM) model to
decipher the sentiment tendency of each review from word frequency. Word scores
generated from the SVM models are further processed into a polarity index
indicating the significance of each word for particular types of restaurants.
Customers overall tend to express more sentiment regarding service. As for the
distinction between different cuisines, results that match the common sense are
obtained: Japanese cuisines are usually fresh, some French cuisines are
overpriced, while Italian restaurants are often famous for their pizzas.
| 2,017 | Computation and Language |
DOC: Deep Open Classification of Text Documents | Traditional supervised learning makes the closed-world assumption that the
classes that appear in the test data must have appeared in training. This also
applies to text learning or text classification. As learning is used
increasingly in dynamic open environments where some new/test documents may not
belong to any of the training classes, identifying these novel documents during
classification presents an important problem. This problem is called open-world
classification or open classification. This paper proposes a novel deep
learning based approach. It outperforms existing state-of-the-art techniques
dramatically.
| 2,017 | Computation and Language |
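The open-classification decision can be illustrated with a small sketch: score each known training class independently (e.g., with sigmoid outputs) and reject the document as belonging to a novel class when no score clears a threshold. The DOC paper additionally tightens per-class thresholds, which is omitted here.

```python
import numpy as np

def open_classify(scores, threshold=0.5):
    """1-vs-rest open classification: reject a document if no class is confident enough.

    scores: per-class sigmoid probabilities for one document (illustrative; per-class
    threshold tightening is omitted).
    """
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else "rejected (novel class)"

print(open_classify(np.array([0.1, 0.85, 0.2])))   # class 1
print(open_classify(np.array([0.2, 0.3, 0.25])))   # rejected (novel class)
```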
Generating Sentences by Editing Prototypes | We propose a new generative model of sentences that first samples a prototype
sentence from the training corpus and then edits it into a new sentence.
Compared to traditional models that generate from scratch either left-to-right
or by first sampling a latent sentence vector, our prototype-then-edit model
improves perplexity on language modeling and generates higher quality outputs
according to human evaluation. Furthermore, the model gives rise to a latent
edit vector that captures interpretable semantics such as sentence similarity
and sentence-level analogies.
| 2,018 | Computation and Language |
Improving a Multi-Source Neural Machine Translation Model with Corpus
Extension for Low-Resource Languages | In machine translation, we often try to collect resources to improve
performance. However, most of the language pairs, such as Korean-Arabic and
Korean-Vietnamese, do not have enough resources to train machine translation
systems. In this paper, we propose the use of synthetic methods for extending a
low-resource corpus and apply it to a multi-source neural machine translation
model. We showed the improvement of machine translation performance through
corpus extension using the synthetic method. We specifically focused on how to
create source sentences that can make better target sentences, including the
use of synthetic methods. We found that the corpus extension could also improve
the performance of multi-source neural machine translation. We showed the
corpus extension and multi-source model to be efficient methods for a
low-resource language pair. Furthermore, when both methods were used together,
we found better machine translation performance.
| 2,018 | Computation and Language |
Input-to-Output Gate to Improve RNN Language Models | This paper proposes a reinforcing method that refines the output layers of
existing Recurrent Neural Network (RNN) language models. We refer to our
proposed method as Input-to-Output Gate (IOG). IOG has an extremely simple
structure, and thus, can be easily combined with any RNN language models. Our
experiments on the Penn Treebank and WikiText-2 datasets demonstrate that IOG
consistently boosts the performance of several different types of current
topline RNN language models.
| 2,017 | Computation and Language |
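A rough sketch of the input-to-output gating idea: compute a gate over the vocabulary from the current input word and multiply it into the output logits of an already trained RNN language model. The embedding size and exact parameterization are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class InputToOutputGate(nn.Module):
    """Gate the output logits of an existing RNN LM with the current input word (sketch)."""

    def __init__(self, vocab_size, gate_emb_dim=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, gate_emb_dim)   # separate, small embedding for the gate
        self.proj = nn.Linear(gate_emb_dim, vocab_size)     # maps to a gate over the vocabulary

    def forward(self, logits, input_ids):
        """logits: (batch, vocab) from any trained RNN LM; input_ids: (batch,) current input words."""
        gate = torch.sigmoid(self.proj(self.emb(input_ids)))
        return logits * gate                                 # element-wise refinement of the output layer

vocab = 1000
iog = InputToOutputGate(vocab)
refined = iog(torch.randn(8, vocab), torch.randint(0, vocab, (8,)))
print(refined.shape)  # torch.Size([8, 1000])
```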
Integration of Japanese Papers Into the DBLP Data Set | If someone is looking for a certain publication in the field of computer
science, they are likely to use the DBLP to find the desired
publication. The DBLP data set is continuously extended with new publications,
or rather their metadata, for example the names of involved authors, the title
and the publication date. While the size of the data set is already remarkable,
specific areas can still be improved. The DBLP offers a huge collection of
English papers because most papers concerning computer science are published in
English. Nevertheless, there are official publications in other languages which
are supposed to be added to the data set. One kind of these are Japanese
papers. This diploma thesis will show a way to automatically process
publication lists of Japanese papers and to make them ready for an import into
the DBLP data set. Especially important are the problems along the way of
processing, such as transcription handling and Personal Name Matching with
Japanese names.
| 2,017 | Computation and Language |
Dataset Construction via Attention for Aspect Term Extraction with
Distant Supervision | Aspect Term Extraction (ATE) detects opinionated aspect terms in sentences or
text spans, with the end goal of performing aspect-based sentiment analysis.
The small amount of available datasets for supervised ATE and the fact that
they cover only a few domains raise the need for exploiting other data sources
in new and creative ways. Publicly available review corpora contain a plethora
of opinionated aspect terms and cover a larger domain spectrum. In this paper,
we first propose a method for using such review corpora for creating a new
dataset for ATE. Our method relies on an attention mechanism to select
sentences that have a high likelihood of containing actual opinionated aspects.
We thus improve the quality of the extracted aspects. We then use the
constructed dataset to train a model and perform ATE with distant supervision.
By evaluating on human annotated datasets, we prove that our method achieves a
significantly improved performance over various unsupervised and supervised
baselines. Finally, we prove that sentence selection matters when it comes to
creating new datasets for ATE. Specifically, we show that, using a set of
selected sentences leads to higher ATE performance compared to using the whole
sentence set.
| 2,017 | Computation and Language |
Predicting Disease-Gene Associations using Cross-Document Graph-based
Features | In the context of personalized medicine, text mining methods pose an
interesting option for identifying disease-gene associations, as they can be
used to generate novel links between diseases and genes which may complement
knowledge from structured databases. The most straightforward approach to
extract such links from text is to rely on a simple assumption postulating an
association between all genes and diseases that co-occur within the same
document. However, this approach (i) tends to yield a number of spurious
associations, (ii) does not capture different relevant types of associations,
and (iii) is incapable of aggregating knowledge that is spread across
documents. Thus, we propose an approach in which disease-gene co-occurrences
and gene-gene interactions are represented in an RDF graph. A machine
learning-based classifier is trained that incorporates features extracted from
the graph to separate disease-gene pairs into valid disease-gene associations
and spurious ones. On the manually curated Genetic Testing Registry, our
approach yields a 30-point increase in F1 score over a plain co-occurrence
baseline.
| 2,017 | Computation and Language |
Lexical Disambiguation in Natural Language Questions (NLQs) | Question processing is a fundamental step in a question answering (QA)
application, and its quality impacts the performance of the QA application. The
major challenge in processing questions is how to extract the semantics of
natural language questions (NLQs). Human language is ambiguous. Ambiguity may
occur at two levels: lexical and syntactic. In this paper, we propose a new
approach for resolving lexical ambiguity problem by integrating context
knowledge and concepts knowledge of a domain, into shallow natural language
processing (SNLP) techniques. Concepts knowledge is modeled using ontology,
while context knowledge is obtained from WordNet, and it is determined based on
neighborhood words in a question. The approach will be applied to a university
QA system.
| 2,011 | Computation and Language |
Learning to Explain Non-Standard English Words and Phrases | We describe a data-driven approach for automatically explaining new,
non-standard English expressions in a given sentence, building on a large
dataset that includes 15 years of crowdsourced examples from
UrbanDictionary.com. Unlike prior studies that focus on matching keywords from
a slang dictionary, we investigate the possibility of learning a neural
sequence-to-sequence model that generates explanations of unseen non-standard
English expressions given context. We propose a dual encoder approach---a
word-level encoder learns the representation of context, and a second
character-level encoder learns the hidden representation of the target
non-standard expression. Our model can produce reasonable definitions of new
non-standard English expressions given their context with certain confidence.
| 2,017 | Computation and Language |
Learning of Colors from Color Names: Distribution and Point Estimation | Color names are often made up of multiple words. As a task in natural
language understanding we investigate in depth the capacity of neural networks
based on sums of word embeddings (SOWE), recurrence (LSTM and GRU based RNNs)
and convolution (CNN), to estimate colors from sequences of terms. We consider
both point and distribution estimates of color. We argue that the latter has a
particular value as there is no clear agreement between people as to what a
particular color describes -- different people have a different idea of what it
means to be ``very dark orange'', for example. Surprisingly, despite its
simplicity, the sum of word embeddings generally performs the best on almost
all evaluations.
| 2,020 | Computation and Language |
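A minimal sketch of the point-estimate SOWE variant: sum the word embeddings of the colour description and regress to an RGB value with a small feed-forward network. The dimensions and output parameterization are illustrative assumptions; the distribution-estimate variant is not shown.

```python
import torch
import torch.nn as nn

class SOWEColorEstimator(nn.Module):
    """Point estimate of a colour from a colour-name description via a sum of word embeddings."""

    def __init__(self, vocab_size, emb_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.mlp = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 3), nn.Sigmoid())

    def forward(self, token_ids):                 # (batch, n_words)
        summed = self.emb(token_ids).sum(dim=1)   # order-insensitive sum of word embeddings
        return self.mlp(summed)                   # RGB in [0, 1]

vocab = {"very": 0, "dark": 1, "orange": 2}
model = SOWEColorEstimator(vocab_size=len(vocab))
rgb = model(torch.tensor([[vocab["very"], vocab["dark"], vocab["orange"]]]))
print(rgb)  # untrained output; training would regress toward labelled RGB values
```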
A Bimodal Network Approach to Model Topic Dynamics | This paper presents an intertemporal bimodal network to analyze the evolution
of the semantic content of a scientific field within the framework of topic
modeling, namely using the Latent Dirichlet Allocation (LDA). The main
contribution is the conceptualization of the topic dynamics and its
formalization and codification into an algorithm. To benchmark the
effectiveness of this approach, we propose three indexes which track the
transformation of topics over time, their rate of birth and death, and the
novelty of their content. Applying the LDA, we test the algorithm both on a
controlled experiment and on a corpus of several thousand scientific papers
over a period of more than 100 years, which account for the history of
economic thought.
| 2,020 | Computation and Language |
A Preliminary Study for Building an Arabic Corpus of Pair
Questions-Texts from the Web: AQA-Webcorp | With the development of electronic media and the heterogeneity of Arabic data
on the Web, the idea of building a clean corpus for certain applications of
natural language processing, including machine translation, information
retrieval and question answering, becomes more and more pressing. In this
manuscript, we seek to create and develop our own corpus of question-text
pairs. This corpus will then provide a better basis for our experimentation
step. Thus, we model its construction with a method for Arabic that recovers
texts from the web that could prove to be answers to our factual questions. To
do this, we developed a Java script that extracts a list of HTML pages from a
given query, and then cleans these pages to obtain a database of texts and a
corpus of question-text pairs. In addition, we give preliminary results of our
proposed method. Some investigations for the construction of Arabic corpora
are also presented in this document.
| 2,016 | Computation and Language |
Prosodic Features from Large Corpora of Child-Directed Speech as
Predictors of the Age of Acquisition of Words | The impressive ability of children to acquire language is a widely studied
phenomenon, and the factors influencing the pace and patterns of word learning
remain a subject of active research. Although many models predicting the age
of acquisition of words have been proposed, little emphasis has been directed
to the raw input children receive. In this work we present a comparatively
large-scale multi-modal corpus of prosody-text aligned child-directed speech.
Our corpus contains automatically extracted word-level prosodic features, and
we investigate the utility of this information as predictors of age of
acquisition. We show that prosody features boost predictive power in a
regularized regression, and demonstrate their utility in the context of a
multi-modal factorized language model trained and tested on child-directed
speech.
| 2,017 | Computation and Language |
Replicability Analysis for Natural Language Processing: Testing
Significance with Multiple Datasets | With the ever-growing amounts of textual data from a large variety of
languages, domains, and genres, it has become standard to evaluate NLP
algorithms on multiple datasets in order to ensure consistent performance
across heterogeneous setups. However, such multiple comparisons pose
significant challenges to traditional statistical analysis methods in NLP and
can lead to erroneous conclusions. In this paper, we propose a Replicability
Analysis framework for a statistically sound analysis of multiple comparisons
between algorithms for NLP tasks. We discuss the theoretical advantages of this
framework over the current, statistically unjustified, practice in the NLP
literature, and demonstrate its empirical value across four applications:
multi-domain dependency parsing, multilingual POS tagging, cross-domain
sentiment classification and word similarity prediction.
| 2,017 | Computation and Language |
Multi-Label Classification of Patient Notes: a Case Study on ICD Code
Assignment | In the context of the Electronic Health Record, automated diagnosis coding of
patient notes is a useful task, but a challenging one due to the large number
of codes and the length of patient notes. We investigate four models for
assigning multiple ICD codes to discharge summaries taken from both MIMIC II
and III. We present Hierarchical Attention-GRU (HA-GRU), a hierarchical
approach to tag a document by identifying the sentences relevant for each
label. HA-GRU achieves state-of-the-art results. Furthermore, the learned
sentence-level attention layer highlights the model decision process, allows
easier error analysis, and suggests future directions for improvement.
| 2,017 | Computation and Language |
An attentive neural architecture for joint segmentation and parsing and
its application to real estate ads | In processing human produced text using natural language processing (NLP)
techniques, two fundamental subtasks that arise are (i) segmentation of the
plain text into meaningful subunits (e.g., entities), and (ii) dependency
parsing, to establish relations between subunits. In this paper, we develop a
relatively simple and effective neural joint model that performs both
segmentation and dependency parsing together, instead of one after the other as
in most state-of-the-art works. We will focus in particular on the real estate
ad setting, aiming to convert an ad to a structured description, which we name
property tree, comprising the tasks of (1) identifying important entities of a
property (e.g., rooms) from classifieds and (2) structuring them into a tree
format. In this work, we propose a new joint model that is able to tackle the
two tasks simultaneously and construct the property tree by (i) avoiding the
error propagation that would arise from the subtasks one after the other in a
pipelined fashion, and (ii) exploiting the interactions between the subtasks.
For this purpose, we perform an extensive comparative study of the pipeline
methods and the new proposed joint model, reporting an improvement of over
three percentage points in the overall edge F1 score of the property tree.
Also, we propose attention methods, to encourage our model to focus on salient
tokens during the construction of the property tree. Thus we experimentally
demonstrate the usefulness of attentive neural architectures for the proposed
joint model, showcasing a further improvement of two percentage points in edge
F1 score for our application.
| 2,018 | Computation and Language |
Application of a Hybrid Bi-LSTM-CRF model to the task of Russian Named
Entity Recognition | Named Entity Recognition (NER) is one of the most common tasks of the natural
language processing. The purpose of NER is to find and classify tokens in text
documents into predefined categories called tags, such as person names,
quantity expressions, percentage expressions, names of locations,
organizations, as well as expressions of time, currency and others. Although
a number of approaches have been proposed for this task in the Russian
language, there is still substantial potential for better solutions. In
this work, we studied several deep neural network models starting from vanilla
Bi-directional Long Short-Term Memory (Bi-LSTM) then supplementing it with
Conditional Random Fields (CRF) as well as highway networks and finally adding
external word embeddings. All models were evaluated across three datasets:
Gareev's dataset, Person-1000, FactRuEval-2016. We found that extension of
Bi-LSTM model with CRF significantly increased the quality of predictions.
Encoding input tokens with external word embeddings reduced training time and
allowed us to achieve state-of-the-art results for the Russian NER task.
| 2,017 | Computation and Language |
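A compact sketch of a Bi-LSTM-CRF tagger of the kind described, using the third-party pytorch-crf package (assumed to be available); the highway layers and external word embeddings discussed in the abstract are omitted.

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # third-party "pytorch-crf" package; assumed available

class BiLSTMCRF(nn.Module):
    """BiLSTM emissions with a CRF decoding layer for NER tags (illustrative sketch)."""

    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)    # could be replaced by external pretrained embeddings
        self.lstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(hidden, n_tags)
        self.crf = CRF(n_tags, batch_first=True)

    def loss(self, token_ids, tags):
        out, _ = self.lstm(self.emb(token_ids))
        return -self.crf(self.emissions(out), tags)      # negative log-likelihood

    def predict(self, token_ids):
        out, _ = self.lstm(self.emb(token_ids))
        return self.crf.decode(self.emissions(out))       # best tag sequence per sentence

model = BiLSTMCRF(vocab_size=5000, n_tags=9)              # e.g. BIO tags for a few entity types
tokens = torch.randint(0, 5000, (2, 7))
print(model.predict(tokens))
```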
KeyVec: Key-semantics Preserving Document Representations | Previous studies have demonstrated the empirical success of word embeddings
in various applications. In this paper, we investigate the problem of learning
distributed representations for text documents which many machine learning
algorithms take as input for a number of NLP tasks.
We propose a neural network model, KeyVec, which learns document
representations with the goal of preserving key semantics of the input text. It
enables the learned low-dimensional vectors to retain the topics and important
information from the documents that will flow to downstream tasks. Our
empirical evaluations show the superior quality of KeyVec representations in
two different document understanding tasks.
| 2,017 | Computation and Language |
A Deep Neural Network Approach To Parallel Sentence Extraction | Parallel sentence extraction is a task addressing the data sparsity problem
found in multilingual natural language processing applications. We propose an
end-to-end deep neural network approach to detect translational equivalence
between sentences in two different languages. In contrast to previous
approaches, which typically rely on multiple models and various word alignment
features, by leveraging continuous vector representation of sentences we remove
the need for any domain-specific feature engineering. Using a siamese
bidirectional recurrent neural network, our results against a strong baseline
based on a state-of-the-art parallel sentence extraction system show a
significant improvement in both the quality of the extracted parallel sentences
and the translation performance of statistical machine translation systems. We
believe this study is the first one to investigate deep learning for the
parallel sentence extraction task.
| 2,017 | Computation and Language |
Edina: Building an Open Domain Socialbot with Self-dialogues | We present Edina, the University of Edinburgh's social bot for the Amazon
Alexa Prize competition. Edina is a conversational agent whose responses
utilize data harvested from Amazon Mechanical Turk (AMT) through an innovative
new technique we call self-dialogues. These are conversations in which a single
AMT Worker plays both participants in a dialogue. Such dialogues are
surprisingly natural, efficient to collect and reflective of relevant and/or
trending topics. These self-dialogues provide training data for a generative
neural network as well as a basis for soft rules used by a matching score
component. Each match of a soft rule against a user utterance is associated
with a confidence score which we show is strongly indicative of reply quality,
allowing this component to self-censor and be effectively integrated with other
components. Edina's full architecture features a rule-based system backing off
to a matching score, backing off to a generative neural network. Our hybrid
data-driven methodology thus addresses both coverage limitations of a strictly
rule-based approach and the lack of guarantees of a strictly machine-learning
approach.
| 2,017 | Computation and Language |
Sentiment Classification with Word Attention based on Weakly Supervised
Learning with a Convolutional Neural Network | In order to maximize the applicability of sentiment analysis results, it is
necessary to not only classify the overall sentiment (positive/negative) of a
given document but also to identify the main words that contribute to the
classification. However, most datasets for sentiment analysis only have the
sentiment label for each document or sentence. In other words, there is no
information about which words play an important role in sentiment
classification. In this paper, we propose a method for identifying key words
discriminating positive and negative sentences by using a weakly supervised
learning method based on a convolutional neural network (CNN). In our model,
each word is represented as a continuous-valued vector and each sentence is
represented as a matrix whose rows correspond to the word vector used in the
sentence. Then, the CNN model is trained using these sentence matrices as
inputs and the sentiment labels as the output. Once the CNN model is trained,
we implement the word attention mechanism that identifies high-contributing
words to classification results with a class activation map, using the weights
from the fully connected layer at the end of the learned CNN model. In order to
verify the proposed methodology, we evaluated the classification accuracy and
inclusion rate of polarity words using two movie review datasets. Experimental
results show that the proposed model can not only correctly classify the
sentence polarity but also successfully identify the corresponding words with
high polarity scores.
| 2,017 | Computation and Language |
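The word attention step can be sketched as a class activation map: weight the CNN's pre-pooling feature maps by the fully connected weights of the target class and read off a score per word position. The shapes and normalization below are illustrative assumptions.

```python
import numpy as np

def word_attention_scores(feature_maps, fc_weights, target_class):
    """Class-activation-style word scores from a trained text CNN (illustrative).

    feature_maps: (n_filters, n_words) activations before global pooling.
    fc_weights:   (n_classes, n_filters) weights of the final fully connected layer.
    """
    class_weights = fc_weights[target_class]            # importance of each filter for the class
    cam = class_weights @ feature_maps                   # (n_words,) contribution per word position
    cam = np.maximum(cam, 0)                             # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(0)
maps = rng.random((8, 6))                                # 8 filters over a 6-word sentence
weights = rng.standard_normal((2, 8))                    # 2 classes: negative / positive
print(word_attention_scores(maps, weights, target_class=1).round(2))
```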
Graph Convolutional Networks for Named Entity Recognition | In this paper we investigate the role of the dependency tree in a named
entity recognizer when using a set of graph convolutional networks (GCNs). We perform a comparison among
different NER architectures and show that the grammar of a sentence positively
influences the results. Experiments on the OntoNotes dataset demonstrate
consistent performance improvements, without requiring heavy feature
engineering nor additional language-specific knowledge.
| 2,018 | Computation and Language |
A Web of Hate: Tackling Hateful Speech in Online Social Spaces | Online social platforms are beset with hateful speech - content that
expresses hatred for a person or group of people. Such content can frighten,
intimidate, or silence platform users, and some of it can inspire other users
to commit violence. Despite widespread recognition of the problems posed by
such content, reliable solutions even for detecting hateful speech are lacking.
In the present work, we establish why keyword-based methods are insufficient
for detection. We then propose an approach to detecting hateful speech that
uses content produced by self-identifying hateful communities as training data.
Our approach bypasses the expensive annotation process often required to train
keyword systems and performs well across several established platforms, making
substantial improvements over current state-of-the-art approaches.
| 2,017 | Computation and Language |
Jointly Trained Sequential Labeling and Classification by Sparse
Attention Neural Networks | Sentence-level classification and sequential labeling are two fundamental
tasks in language understanding. While these two tasks are usually modeled
separately, in reality, they are often correlated, for example in intent
classification and slot filling, or in topic classification and named-entity
recognition. In order to utilize the potential benefits from their
correlations, we propose a jointly trained model for learning the two tasks
simultaneously via Long Short-Term Memory (LSTM) networks. This model predicts
the sentence-level category and the word-level label sequence from the stepwise
output hidden representations of LSTM. We also introduce a novel mechanism of
"sparse attention" to weigh words differently based on their semantic relevance
to sentence-level classification. The proposed method outperforms baseline
models on ATIS and TREC datasets.
| 2,017 | Computation and Language |
A Neural Comprehensive Ranker (NCR) for Open-Domain Question Answering | This paper proposes a novel neural machine reading model for open-domain
question answering at scale. Existing machine comprehension models typically
assume that a short piece of relevant text containing answers is already
identified and given to the models, from which the models are designed to
extract answers. This assumption, however, is not realistic for building a
large-scale open-domain question answering system which requires both deep text
understanding and identifying relevant text from corpus simultaneously.
In this paper, we introduce Neural Comprehensive Ranker (NCR) that integrates
both passage ranking and answer extraction in one single framework. A Q&A
system based on this framework allows users to issue an open-domain question
without needing to provide a piece of text that must contain the answer.
Experiments show that the unified NCR model is able to outperform the
state-of-the-art in both retrieval of relevant text and answer extraction.
| 2,017 | Computation and Language |
The First Evaluation of Chinese Human-Computer Dialogue Technology | In this paper, we introduce the first evaluation of Chinese human-computer
dialogue technology. We detail the evaluation scheme, tasks, metrics and how to
collect and annotate the data for training, development and testing. The evaluation
includes two tasks, namely user intent classification and online testing of
task-oriented dialogue. To consider the different sources of the data for
training and development, the first task can also be divided into two subtasks.
Both tasks come from real problems encountered when using the
applications developed by industry. The evaluation data is provided by the
iFLYTEK Corporation. Meanwhile, in this paper, we publish the evaluation
results to present the current performance of the participants in the two tasks
of Chinese human-computer dialogue technology. Moreover, we analyze the
existing problems of human-computer dialogue as well as the evaluation scheme
itself.
| 2,019 | Computation and Language |
Structured Embedding Models for Grouped Data | Word embeddings are a powerful approach for analyzing language, and
exponential family embeddings (EFE) extend them to other types of data. Here we
develop structured exponential family embeddings (S-EFE), a method for
discovering embeddings that vary across related groups of data. We study how
the word usage of U.S. Congressional speeches varies across states and party
affiliation, how words are used differently across sections of the ArXiv, and
how the co-purchase patterns of groceries can vary across seasons. Key to the
success of our method is that the groups share statistical information. We
develop two sharing strategies: hierarchical modeling and amortization. We
demonstrate the benefits of this approach in empirical studies of speeches,
abstracts, and shopping baskets. We show how S-EFE enables group-specific
interpretation of word usage, and outperforms EFE in predicting held-out data.
| 2,017 | Computation and Language |
Towards Universal Semantic Tagging | The paper proposes the task of universal semantic tagging---tagging word
tokens with language-neutral, semantically informative tags. We argue that the
task, with its independent nature, contributes to better semantic analysis for
wide-coverage multilingual text. We present the initial version of the semantic
tagset and show that (a) the tags provide semantically fine-grained
information, and (b) they are suitable for cross-lingual semantic parsing. An
application of the semantic tagging in the Parallel Meaning Bank supports both
of these points as the tags contribute to formal lexical semantics and their
cross-lingual projection. As a part of the application, we annotate a small
corpus with the semantic tags and present new baseline result for universal
semantic tagging.
| 2,017 | Computation and Language |
Learning how to learn: an adaptive dialogue agent for incrementally
learning visually grounded word meanings | We present an optimised multi-modal dialogue agent for interactive learning
of visually grounded word meanings from a human tutor, trained on real
human-human tutoring data. Within a life-long interactive learning period, the
agent, trained using Reinforcement Learning (RL), must be able to handle
natural conversations with human users and achieve good learning performance
(accuracy) while minimising human effort in the learning process. We train and
evaluate this system in interaction with a simulated human tutor, which is
built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual
learning task. The results show that: 1) The learned policy can coherently
interact with the simulated user to achieve the goal of the task (i.e. learning
visual attributes of objects, e.g. colour and shape); and 2) it finds a better
trade-off between classifier accuracy and tutoring costs than hand-crafted
rule-based policies, including ones with dynamic policies.
| 2,017 | Computation and Language |