Titles | Abstracts | Years | Categories
---|---|---|---
Improving Neural Sequence Labelling using Additional Linguistic
Information | Sequence labelling is the task of assigning categorical labels to a data
sequence. In Natural Language Processing, sequence labelling can be applied to
various fundamental problems, such as Part of Speech (POS) tagging, Named
Entity Recognition (NER), and Chunking. In this study, we propose a method to
add various linguistic features to the neural sequence framework to improve
sequence labelling. Besides word level knowledge, sense embeddings are added to
provide semantic information. Additionally, selective readings of character
embeddings are added to capture contextual as well as morphological features
for each word in a sentence. Compared to previous methods, these added
linguistic features allow us to design a more concise model and perform more
efficient training. Our proposed architecture achieves state-of-the-art results
on the benchmark datasets of POS, NER, and chunking. Moreover, the convergence
rate of our model is significantly better than that of the previous
state-of-the-art models.
| 2018 | Computation and Language |
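The entry above describes concatenating word, sense, and character-level features before a neural tagger. A minimal PyTorch sketch of that feature concatenation, assuming a BiLSTM encoder and arbitrary layer sizes (the paper's actual architecture may differ):

```python
import torch
import torch.nn as nn

class FeatureConcatTagger(nn.Module):
    """Tag each token from the concatenation of word, sense, and
    character-derived embeddings, as the abstract describes."""
    def __init__(self, n_words, n_senses, n_tags,
                 word_dim=100, sense_dim=50, char_feat_dim=50, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.sense_emb = nn.Embedding(n_senses, sense_dim)  # semantic information
        self.encoder = nn.LSTM(word_dim + sense_dim + char_feat_dim, hidden,
                               bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_ids, sense_ids, char_feats):
        # char_feats: precomputed per-word character-level representations
        x = torch.cat([self.word_emb(word_ids),
                       self.sense_emb(sense_ids), char_feats], dim=-1)
        h, _ = self.encoder(x)
        return self.out(h)  # per-token tag scores
```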
A Survey of the Usages of Deep Learning in Natural Language Processing | Over the last several years, the field of natural language processing has
been propelled forward by an explosion in the use of deep learning models. This
survey provides a brief introduction to the field and a quick overview of deep
learning architectures and methods. It then sifts through the plethora of
recent studies and summarizes a large assortment of relevant contributions.
Analyzed research areas include several core linguistic processing issues in
addition to a number of applications of computational linguistics. A discussion
of the current state of the art is then provided along with recommendations for
future research in the field.
| 2019 | Computation and Language |
Back-Translation-Style Data Augmentation for End-to-End ASR | In this paper we propose a novel data augmentation method for attention-based
end-to-end automatic speech recognition (E2E-ASR), utilizing a large amount of
text which is not paired with speech signals. Inspired by the back-translation
technique proposed in the field of machine translation, we build a neural
text-to-encoder model which predicts a sequence of hidden states extracted by a
pre-trained E2E-ASR encoder from a sequence of characters. By using hidden
states as targets instead of acoustic features, it is possible to achieve
faster attention learning and reduced computational cost, thanks to the
sub-sampling in the E2E-ASR encoder; moreover, using the hidden states avoids
modeling speaker dependencies, unlike acoustic features. After training, the
text-to-encoder model generates hidden states from a large amount of unpaired
text, and the E2E-ASR decoder is then retrained using the generated hidden
states as additional training data. Experimental evaluation using the
LibriSpeech dataset demonstrates that our proposed method improves ASR
performance and reduces the number of unknown words without the need for
paired data.
| 2018 | Computation and Language |
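A hedged sketch of the text-to-encoder (TTE) idea from the entry above: regress from characters onto the hidden states of a frozen, pre-trained E2E-ASR encoder. The real model is an attention-based seq2seq; this sketch assumes the character sequence is already aligned to the (sub-sampled) encoder-state sequence, and all sizes are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToEncoder(nn.Module):
    """Map characters to encoder hidden states so unpaired text can later be
    turned into extra training targets for the E2E-ASR decoder."""
    def __init__(self, n_chars, hidden=320, enc_dim=320):
        super().__init__()
        self.emb = nn.Embedding(n_chars, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, enc_dim)

    def forward(self, char_ids):
        h, _ = self.rnn(self.emb(char_ids))
        return self.proj(h)

def tte_loss(tte, char_ids, encoder_states):
    # Regress onto encoder states; targets are shorter and cheaper than
    # acoustic features thanks to the encoder's sub-sampling.
    return F.mse_loss(tte(char_ids), encoder_states)
```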
Acoustic and Textual Data Augmentation for Improved ASR of
Code-Switching Speech | In this paper, we describe several techniques for improving the acoustic and
language model of an automatic speech recognition (ASR) system operating on
code-switching (CS) speech. We focus on the recognition of Frisian-Dutch radio
broadcasts where one of the mixed languages, namely Frisian, is an
under-resourced language. In previous work, we have proposed several automatic
transcription strategies for CS speech to increase the amount of available
training speech data. In this work, we explore how the acoustic modeling (AM)
can benefit from monolingual speech data belonging to the high-resourced mixed
language. For this purpose, we train state-of-the-art AMs, which were
ineffective due to lack of training data, on a significantly increased amount
of CS speech and monolingual Dutch speech. Moreover, we improve the language
model (LM) by creating code-switching text, which is in practice almost
non-existent, by (1) generating text using recurrent LMs trained on the
transcriptions of the training CS speech data, (2) adding the transcriptions of
the automatically transcribed CS speech data, and (3) translating Dutch text
extracted from the transcriptions of a large Dutch speech corpus. We report
significantly improved CS ASR performance due to the increase in the acoustic
and textual training data.
| 2018 | Computation and Language |
Articulatory Features for ASR of Pathological Speech | In this work, we investigate the joint use of articulatory and acoustic
features for automatic speech recognition (ASR) of pathological speech. Despite
long-lasting efforts to build speaker- and text-independent ASR systems for
people with dysarthria, the performance of state-of-the-art systems is still
considerably lower on this type of speech than on normal speech. The most
prominent reason for the inferior performance is the high variability in
pathological speech that is characterized by the spectrotemporal deviations
caused by articulatory impairments due to various etiologies. To cope with this
high variation, we propose to use speech representations which utilize
articulatory information together with the acoustic properties. A designated
acoustic model, namely a fused-feature-map convolutional neural network (fCNN),
which performs frequency convolution on acoustic features and time convolution
on articulatory features is trained and tested on a Dutch and a Flemish
pathological speech corpus. The ASR performance of the fCNN-based ASR system
using joint features is compared to that of other neural network architectures,
such as conventional CNNs and time-frequency convolutional networks (TFCNNs),
in several training scenarios.
| 2018 | Computation and Language |
Building a Unified Code-Switching ASR System for South African Languages | We present our first efforts towards building a single multilingual automatic
speech recognition (ASR) system that can process code-switching (CS) speech in
five languages spoken within the same population. This contrasts with related
prior work which focuses on the recognition of CS speech in bilingual
scenarios. Recently, we have compiled a small five-language corpus of South
African soap opera speech which contains examples of CS between five languages
occurring in various contexts, such as using English as the matrix language and
switching to other indigenous languages. The ASR system presented in this work
is trained on four corpora containing English-isiZulu, English-isiXhosa,
English-Setswana and English-Sesotho CS speech. The interpolation of multiple
language models trained on these language pairs enables the ASR system to
hypothesize mixed word sequences from these five languages. We evaluate various
state-of-the-art acoustic models trained on this 5-lingual training data and
report ASR accuracy and language recognition performance on the development and
test sets of the South African multilingual soap opera corpus.
| 2018 | Computation and Language |
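The LM interpolation mentioned above can be sketched as a weighted mixture of per-language-pair models; `lm.logprob` is an assumed interface and the weights (summing to 1) would be tuned on held-out data:

```python
import math

def interpolated_logprob(word, history, lms, weights):
    """Linear interpolation of language models trained on different
    code-switch pairs. Illustrative only: the LM object interface is assumed."""
    p = sum(w * math.exp(lm.logprob(word, history))
            for lm, w in zip(lms, weights))
    return math.log(p)
```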
Ontology-Grounded Topic Modeling for Climate Science Research | In scientific disciplines where research findings have a strong impact on
society, reducing the amount of time it takes to understand, synthesize and
exploit the research is invaluable. Topic modeling is an effective technique
for summarizing a collection of documents to find the main themes among them
and to classify other documents that have a similar mixture of co-occurring
words. We show how grounding a topic model with an ontology, extracted from a
glossary of important domain phrases, improves the topics generated and makes
them easier to understand. We apply and evaluate this method in the climate
science domain. The result improves the topics generated and supports faster
research understanding, discovery of social networks among researchers, and
automatic ontology generation.
| 2018 | Computation and Language |
Domain Robust Feature Extraction for Rapid Low Resource ASR Development | Developing a practical speech recognizer for a low resource language is
challenging, not only because of the (potentially unknown) properties of the
language, but also because test data may not be from the same domain as the
available training data. In this paper, we focus on the latter challenge, i.e.
domain mismatch, for systems trained using a sequence-based criterion. We
demonstrate the effectiveness of using a pre-trained English recognizer, which
is robust to such mismatched conditions, as a domain normalizing feature
extractor on a low resource language. In our example, we use Turkish
Conversational Speech and Broadcast News data. This enables rapid development
of speech recognizers for new languages which can easily adapt to any domain.
Testing in various cross-domain scenarios, we achieve relative improvements of
around 25% in phoneme error rate, with improvements being around 50% for some
domains.
| 2018 | Computation and Language |
NMT-based Cross-lingual Document Embeddings | This paper investigates a cross-lingual document embedding method that
improves the current Neural Machine Translation framework based Document Vector
(NTDV or simply NV). NV is developed with a self-attention mechanism under the
neural machine translation (NMT) framework. In NV, each pair of parallel
documents in different languages is projected to the same shared layer in the
model. However, the pair of NV embeddings are not guaranteed to be similar.
This paper further adds a distance constraint to the training objective
function of NV so that the two embeddings of a parallel document are required
to be as close as possible. The new method will be called constrained NV (cNV).
In a cross-lingual document classification task, the new cNV performs as well
as NV and outperforms other published studies that require forward-pass
decoding. Compared with the previous NV, cNV does not need a translator during
testing, and so the method is lighter and more flexible.
| 2020 | Computation and Language |
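The distance constraint described above amounts to adding a penalty that pulls the two document embeddings of a parallel pair together. A sketch, assuming a squared Euclidean distance and a weight `lam` (the paper's exact formulation may differ):

```python
import torch

def cnv_loss(nmt_loss, src_doc_emb, tgt_doc_emb, lam=1.0):
    """Constrained-NV-style objective: the usual NMT loss plus a distance
    penalty between the source and target document embeddings."""
    distance = torch.sum((src_doc_emb - tgt_doc_emb) ** 2, dim=-1).mean()
    return nmt_loss + lam * distance
```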
Convolutional Gated Recurrent Units for Medical Relation Classification | Convolutional neural network (CNN) and recurrent neural network (RNN) models
have become the mainstream methods for relation classification. We propose a
unified architecture, which exploits the advantages of CNN and RNN
simultaneously, to identify medical relations in clinical records, with only
word embedding features. Our model learns phrase-level features through a CNN
layer, and these feature representations are directly fed into a bidirectional
gated recurrent unit (GRU) layer to capture long-term feature dependencies. We
evaluate our model on two clinical datasets, and experiments demonstrate that
our model performs significantly better than previous single-model methods on
both datasets.
| 2018 | Computation and Language |
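A minimal PyTorch sketch of the CNN-into-BiGRU pipeline described above, with illustrative sizes and a last-state readout as assumptions:

```python
import torch
import torch.nn as nn

class ConvGRUClassifier(nn.Module):
    """CNN layer for phrase-level features feeding a bidirectional GRU,
    as the abstract outlines; dimensions and pooling are assumptions."""
    def __init__(self, vocab, emb_dim=100, n_filters=128, hidden=128, n_rel=8):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.gru = nn.GRU(n_filters, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_rel)

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)              # (B, emb, T) for Conv1d
        phrases = torch.relu(self.conv(x)).transpose(1, 2)   # back to (B, T, F)
        h, _ = self.gru(phrases)                             # long-term dependencies
        return self.out(h[:, -1])                            # relation scores
```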
Microsoft Dialogue Challenge: Building End-to-End Task-Completion
Dialogue Systems | This proposal introduces a Dialogue Challenge for building end-to-end
task-completion dialogue systems, with the goal of encouraging the dialogue
research community to collaborate and benchmark on standard datasets and a
unified experimental environment. In this special session, we will release
human-annotated conversational data in three domains (movie-ticket booking,
restaurant reservation, and taxi booking), as well as an experiment platform
with built-in simulators in each domain, for training and evaluation purposes.
The final submitted systems will be evaluated both in a simulated setting and
by human judges.
| 2018 | Computation and Language |
Leveraging Medical Sentiment to Understand Patients Health on Social
Media | The unprecedented growth of Internet users in recent years has resulted in an
abundance of unstructured information in the form of social media text. A large
percentage of this population is actively engaged in health social networks to
share health-related information. In this paper, we address an important and
timely topic by analyzing users' sentiments and emotions w.r.t. their
medical conditions. Towards this, we examine users on popular medical forums
(Patient.info, dailystrength.org), where they post on important topics such as
asthma, allergy, depression, and anxiety. First, we provide a benchmark setup
for the task by crawling the data, and further define the sentiment-specific
fine-grained medical conditions (Recovered, Exist, Deteriorate, and Other). We
propose an effective architecture that uses a Convolutional Neural Network
(CNN) as a data-driven feature extractor and a Support Vector Machine (SVM) as
a classifier. We further develop a sentiment feature which is sensitive to the
medical context. Here, we show that using the medical sentiment feature along
with the features extracted by the CNN improves model performance. In addition to
our dataset, we also evaluate our approach on the benchmark "CLEF eHealth 2014"
corpora and show that our model outperforms the state-of-the-art techniques.
| 2018 | Computation and Language |
Training Neural Machine Translation using Word Embedding-based Loss | In neural machine translation (NMT), the computational cost at the output
layer increases with the size of the target-side vocabulary. Using a
limited-size vocabulary instead may cause a significant decrease in translation
quality. This trade-off is derived from a softmax-based loss function that
handles in-dictionary words independently, in which word similarity is not
considered. In this paper, we propose a novel NMT loss function that includes
word similarity in forms of distances in a word embedding space. The proposed
loss function encourages an NMT decoder to generate words close to their
references in the embedding space; this helps the decoder to choose similar
acceptable words when the actual best candidates are not included in the
vocabulary due to its size limitation. In experiments using ASPEC
Japanese-to-English and IWSLT17 English-to-French data sets, the proposed
method showed improvements over a standard NMT baseline on both datasets;
especially with IWSLT17 En-Fr, it achieved up to +1.72 in BLEU and +1.99 in
METEOR. When the target-side vocabulary was limited to as few as 1,000 words, the
proposed method demonstrated a substantial gain, +1.72 in METEOR with ASPEC
Ja-En.
| 2018 | Computation and Language |
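One plausible instantiation of the embedding-space loss described above penalizes the distance between the expected output embedding under the model's softmax and the reference word's embedding; the exact formula in the paper may differ:

```python
import torch

def embedding_distance_loss(logits, ref_ids, emb_table):
    """Sketch of a word-embedding-based NMT loss. logits: (B, T, V);
    ref_ids: (B, T) reference word ids; emb_table: (V, D) embeddings."""
    probs = torch.softmax(logits, dim=-1)   # (B, T, V)
    expected = probs @ emb_table            # (B, T, D) expected output embedding
    ref = emb_table[ref_ids]                # (B, T, D) reference embedding
    return torch.sum((expected - ref) ** 2, dim=-1).mean()
```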
YouTube AV 50K: An Annotated Corpus for Comments in Autonomous Vehicles | With one billion monthly viewers, and millions of users discussing and
sharing opinions, comments below YouTube videos are rich sources of data for
opinion mining and sentiment analysis. We introduce the YouTube AV 50K dataset,
a freely available collection of more than 50,000 YouTube comments and
metadata below autonomous vehicle (AV)-related videos. We describe its creation
process, its content and data format, and discuss its possible usages.
In particular, we present a case study of the first self-driving car fatality
to evaluate the dataset, and show how we can use this dataset to better understand
public attitudes toward self-driving cars and public reactions to the accident.
Future developments of the dataset are also discussed.
| 2019 | Computation and Language |
Active Learning for Interactive Neural Machine Translation of Data
Streams | We study the application of active learning techniques to the translation of
unbounded data streams via interactive neural machine translation. The main
idea is to select, from an unbounded stream of source sentences, those worth
being supervised by a human agent. The user will interactively translate those
samples. Once validated, these data are used to adapt the neural machine
translation model.
We propose two novel methods for selecting the samples to be validated. We
exploit the information from the attention mechanism of a neural machine
translation system. Our experiments show that the inclusion of active learning
techniques into this pipeline allows us to reduce the effort required during
the process while increasing the quality of the translation system. It also
makes it possible to balance the human effort against the desired translation
quality. Moreover, our neural system outperforms classical approaches by a
large margin.
| 2018 | Computation and Language |
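A hedged sketch of an attention-based selection criterion in the spirit of the entry above: rank source sentences by the average entropy of their attention rows and send the most uncertain ones to the human translator (the paper's actual criteria may differ):

```python
import math

def attention_entropy(attention_matrix):
    """Average entropy of the attention rows produced while translating a
    sentence; diffuse attention is taken as a sign of model uncertainty."""
    rows = list(attention_matrix)
    ent = sum(-sum(p * math.log(p) for p in row if p > 0) for row in rows)
    return ent / max(len(rows), 1)

def select_for_supervision(block, budget):
    # block: list of (sentence, attention_matrix) pairs from the stream;
    # keep the `budget` most uncertain sentences for human translation.
    ranked = sorted(block, key=lambda sa: attention_entropy(sa[1]), reverse=True)
    return [sentence for sentence, _ in ranked[:budget]]
```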
Graphene: Semantically-Linked Propositions in Open Information
Extraction | We present an Open Information Extraction (IE) approach that uses a
two-layered transformation stage consisting of a clausal disembedding layer and
a phrasal disembedding layer, together with rhetorical relation identification.
In that way, we convert sentences that present a complex linguistic structure
into simplified, syntactically sound sentences, from which we can extract
propositions that are represented in a two-layered hierarchy in the form of
core relational tuples and accompanying contextual information which are
semantically linked via rhetorical relations. In a comparative evaluation, we
demonstrate that our reference implementation Graphene outperforms
state-of-the-art Open IE systems in the construction of correct n-ary
predicate-argument structures. Moreover, we show that existing Open IE
approaches can benefit from the transformation process of our framework.
| 2018 | Computation and Language |
News Article Teaser Tweets and How to Generate Them | In this work, we define the task of teaser generation and provide an
evaluation benchmark and baseline systems for the process of generating
teasers. A teaser is a short reading suggestion for an article that is
illustrative and includes curiosity-arousing elements to entice potential
readers to read particular news items. Teasers are one of the main vehicles for
transmitting news to social media users. We compile a novel dataset of teasers
by systematically accumulating tweets and selecting those that conform to the
teaser definition. We have compared a number of neural abstractive
architectures on the task of teaser generation; the overall best-performing
system is the seq2seq model with pointer network of See et al. (2017).
| 2019 | Computation and Language |
Neural Sentence Embedding using Only In-domain Sentences for
Out-of-domain Sentence Detection in Dialog Systems | To ensure satisfactory user experience, dialog systems must be able to
determine whether an input sentence is in-domain (ID) or out-of-domain (OOD).
We assume that only ID sentences are available as training data because
collecting enough OOD sentences in an unbiased way is a laborious and
time-consuming job. This paper proposes a novel neural sentence embedding
method that represents sentences in a low-dimensional continuous vector space
that emphasizes aspects that distinguish ID cases from OOD cases. We first used
a large set of unlabeled text to pre-train word representations that are used
to initialize neural sentence embedding. Then we used domain-category analysis
as an auxiliary task to train neural sentence embedding for OOD sentence
detection. After the sentence representations were learned, we used them to
train an autoencoder aimed at OOD sentence detection. We evaluated our method
by experimentally comparing it to the state-of-the-art methods in an
eight-domain dialog system; our proposed method achieved the highest accuracy
in all tests.
| 2018 | Computation and Language |
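The final stage described above can be sketched as thresholding the autoencoder's reconstruction error, since an autoencoder trained only on in-domain embeddings should reconstruct OOD inputs poorly; `autoencoder.reconstruct` is an assumed interface:

```python
import numpy as np

def ood_scores(autoencoder, sentence_vecs):
    """Reconstruction error as an OOD score over sentence embeddings
    (rows of sentence_vecs)."""
    recon = autoencoder.reconstruct(sentence_vecs)
    return np.sum((sentence_vecs - recon) ** 2, axis=1)

def is_ood(scores, threshold):
    # Threshold chosen on held-out in-domain data (e.g., a high percentile
    # of in-domain reconstruction errors).
    return scores > threshold
```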
A Hierarchical Approach to Neural Context-Aware Modeling | We present a new recurrent neural network topology to enhance
state-of-the-art machine learning systems by incorporating a broader context.
Our approach overcomes recent limitations with extended narratives through a
multi-layered computational approach to generate an abstract context
representation. Therefore, the developed system captures the narrative at the
word, sentence, and context levels. Through the hierarchical set-up,
our proposed model summarizes the most salient information on each level and
creates an abstract representation of the extended context. We subsequently use
this representation to enhance neural language processing systems on the task
of semantic error detection. To show the potential of the newly introduced
topology, we compare the approach against a context-agnostic set-up including a
standard neural language model and a supervised binary classification network.
The performance measures on the error detection task show the advantage of the
hierarchical context-aware topologies, improving the baseline by 12.75%
relative for unsupervised models and 20.37% relative for supervised models.
| 2018 | Computation and Language |
UH-PRHLT at SemEval-2016 Task 3: Combining Lexical and Semantic-based
Features for Community Question Answering | In this work we describe the system built for the three English subtasks of
the SemEval 2016 Task 3 by the Department of Computer Science of the University
of Houston (UH) and the Pattern Recognition and Human Language Technology
(PRHLT) research center - Universitat Politècnica de València: UH-PRHLT. Our
system represents instances by using both lexical and semantic-based similarity
measures between text pairs. Our semantic features include the use of
distributed representations of words, knowledge graphs generated with the
BabelNet multilingual semantic network, and the FrameNet lexical database.
Experimental results outperform the random and Google search engine baselines
in the three English subtasks. Our approach obtained the highest results in
subtask B among all task participants.
| 2018 | Computation and Language |
Doubly Attentive Transformer Machine Translation | In this paper, a doubly attentive transformer machine translation model
(DATNMT) is presented, in which a doubly-attentive transformer decoder
incorporates spatial visual features obtained via pretrained convolutional
neural networks, bridging the gap between image captioning and translation. In
this framework, the transformer decoder learns to attend to source-language
words and parts of an image independently by means of two separate attention
components in an Enhanced Multi-Head Attention Layer of the doubly attentive
transformer, as it generates words in the target language. We find
that the proposed model can effectively exploit not just the scarce multimodal
machine translation data, but also large general-domain text-only machine
translation corpora, or image-text image captioning corpora. The experimental
results show that the proposed doubly-attentive transformer-decoder performs
better than a single-decoder transformer model, and gives the state-of-the-art
results in the English-German multimodal machine translation task.
| 2018 | Computation and Language |
An Enhanced Latent Semantic Analysis Approach for Arabic Document
Summarization | The fast-growing amount of information on the Internet makes the research in
automatic document summarization very urgent. It is an effective solution for
information overload. Many approaches have been proposed based on different
strategies, such as latent semantic analysis (LSA). However, LSA, when applied
to document summarization, has some limitations which diminish its performance.
In this work, we try to overcome these limitations by applying statistical and
linear-algebraic approaches combined with syntactic and semantic processing of
text. First, a part-of-speech tagger is utilized to reduce the dimension of
LSA. Then, the weight of the term in four adjacent sentences is added to the
weighting schemes while calculating the input matrix to take into account the
word order and the syntactic relations. In addition, a new LSA-based sentence
selection algorithm is proposed, in which the term description is combined with
sentence description for each topic which in turn makes the generated summary
more informative and diverse. To ensure the effectiveness of the proposed
LSA-based sentence selection algorithm, extensive experiments on Arabic and
English are conducted. Four datasets are used to evaluate the new model: the
Linguistic Data Consortium (LDC) Arabic Newswire-a corpus, the Essex Arabic
Summaries Corpus (EASC), DUC2002, and the Multilingual MSS 2015 dataset.
Experimental results on the four datasets show the effectiveness of the
proposed model on Arabic and English, performing comprehensively better than
the state-of-the-art methods.
| 2018 | Computation and Language |
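For context, the standard LSA selection baseline the entry builds on can be sketched as an SVD of the weighted term-sentence matrix followed by picking the top sentence per leading topic; the paper's algorithm additionally combines term and sentence descriptions:

```python
import numpy as np

def lsa_select(term_sentence, n_topics, n_sentences):
    """Minimal LSA sentence selection: SVD of the (terms x sentences) weight
    matrix, then for each leading topic pick the highest-weighted sentence."""
    U, s, Vt = np.linalg.svd(term_sentence, full_matrices=False)
    chosen = []
    for k in range(min(n_topics, Vt.shape[0])):
        order = np.argsort(-np.abs(Vt[k]))   # sentences ranked for topic k
        for idx in order:
            if idx not in chosen:
                chosen.append(int(idx))
                break
        if len(chosen) >= n_sentences:
            break
    return sorted(chosen)
```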
RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification | This paper presents our system for "TRAC 2018 Shared Task on Aggression
Identification". Our best systems for the English dataset use a combination of
lexical and semantic features. However, for Hindi data using only lexical
features gave us the best results. We obtained weighted F1-measures of 0.5921
for the English Facebook task (ranked 12th), 0.5663 for the English Social
Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and
0.4853 for the Hindi Social Media task (ranked 2nd).
| 2018 | Computation and Language |
Gender Bias in Neural Natural Language Processing | We examine whether neural natural language processing (NLP) systems reflect
historical biases in training data. We define a general benchmark to quantify
gender bias in a variety of neural NLP tasks. Our empirical evaluation with
state-of-the-art neural coreference resolution and textbook RNN-based language
models trained on benchmark datasets finds significant gender bias in how
models view occupations. We then mitigate bias with CDA: a generic methodology
for corpus augmentation via causal interventions that breaks associations
between gendered and gender-neutral words. We empirically show that CDA
effectively decreases gender bias while preserving accuracy. We also explore
the space of mitigation strategies with CDA, a prior approach to word embedding
debiasing (WED), and their compositions. We show that CDA outperforms WED,
drastically so when word embeddings are trained. For pre-trained embeddings,
the two methods can be effectively composed. We also find that as training
proceeds on the original data set with gradient descent, the gender bias grows
as the loss decreases, indicating that the optimization encourages bias; CDA
mitigates this behavior.
| 2019 | Computation and Language |
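An illustrative sketch of the corpus-augmentation step behind CDA: duplicate sentences with gendered word pairs swapped. The pair list here is a tiny assumed sample; the paper uses a curated intervention list and handles grammar-sensitive cases (e.g., the ambiguous "her") more carefully:

```python
# Counterfactual data augmentation (CDA), minimal form: duplicate the corpus
# with gendered word pairs swapped, leaving gender-neutral words untouched.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man"}

def cda_augment(sentences):
    """Return the original sentences plus their gender-swapped counterfactuals.
    Assumes lowercase, whitespace-tokenizable input for simplicity."""
    augmented = list(sentences)
    for s in sentences:
        swapped = " ".join(SWAPS.get(tok, tok) for tok in s.split())
        augmented.append(swapped)
    return augmented
```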
Effective Parallel Corpus Mining using Bilingual Sentence Embeddings | This paper presents an effective approach for parallel corpus mining using
bilingual sentence embeddings. Our embedding models are trained to produce
similar representations exclusively for bilingual sentence pairs that are
translations of each other. This is achieved using a novel training method that
introduces hard negatives consisting of sentences that are not translations but
that have some degree of semantic similarity. The quality of the resulting
embeddings is evaluated on parallel corpus reconstruction and by assessing
machine translation systems trained on gold vs. mined sentence pairs. We find
that the sentence embeddings can be used to reconstruct the United Nations
Parallel Corpus at the sentence level with a precision of 48.9% for en-fr and
54.9% for en-es. When adapted to document level matching, we achieve a parallel
document matching accuracy that is comparable to the significantly more
computationally intensive approach of [Jakob 2010]. Using reconstructed
parallel data, we are able to train NMT models that perform nearly as well as
models trained on the original data (within 1-2 BLEU).
| 2018 | Computation and Language |
Modeling Task Effects in Human Reading with Neural Network-based
Attention | Research on human reading has long documented that reading behavior shows
task-specific effects, but it has been challenging to build general models
predicting what reading behavior humans will show in a given task. We introduce
NEAT, a computational model of the allocation of attention in human reading,
based on the hypothesis that human reading optimizes a tradeoff between economy
of attention and success at a task. Our model is implemented using contemporary
neural network modeling techniques, and makes explicit and testable predictions
about how the allocation of attention varies across different tasks. We test
this in an eyetracking study comparing two versions of a reading comprehension
task, finding that our model successfully accounts for reading behavior across
the tasks. Our work thus provides evidence that task effects can be modeled as
optimal adaptation to task demands.
| 2022 | Computation and Language |
Monolingual and Cross-lingual Zero-shot Style Transfer | We introduce the task of zero-shot style transfer between different
languages. Our training data includes multilingual parallel corpora, but does
not contain any parallel sentences between styles, similarly to recent
previous work. We propose a unified multilingual multi-style machine
translation system design that can perform zero-shot style conversions at
inference time, both monolingually and cross-lingually. Our model can increase
the presence of dissimilar styles in a corpus by up to three times, easily
learns to operate with various contractions, and provides reasonable lexicon
swaps, as we see from manual evaluation.
| 2018 | Computation and Language |
Low-Latency Neural Speech Translation | Through the development of neural machine translation, the quality of machine
translation systems has been improved significantly. By exploiting advancements
in deep learning, systems are now able to better approximate the complex
mapping from source sentences to target sentences. But with this ability, new
challenges also arise. An example is the translation of partial sentences in
low-latency speech translation. Since the model has only seen complete
sentences in training, it will always try to generate a complete sentence,
though the input may only be a partial sentence. We show that NMT systems can
be adapted to scenarios where no task-specific training data is available.
Furthermore, this is possible without losing performance on the original
training data. We achieve this by creating artificial data and by using
multi-task learning. After adaptation, we are able to reduce the number of
corrections displayed during incremental output construction by 45%, without a
decrease in translation quality.
| 2018 | Computation and Language |
Code-Switching Detection with Data-Augmented Acoustic and Language
Models | In this paper, we investigate the code-switching detection performance of a
code-switching (CS) automatic speech recognition (ASR) system with
data-augmented acoustic and language models. We focus on the recognition of
Frisian-Dutch radio broadcasts where one of the mixed languages, namely
Frisian, is under-resourced. Recently, we have explored how the acoustic
modeling (AM) can benefit from monolingual speech data belonging to the
high-resourced mixed language. For this purpose, we have trained
state-of-the-art AMs on a significantly increased amount of CS speech by
applying automatic transcription and monolingual Dutch speech. Moreover, we
have improved the language model (LM) by creating CS text in various ways
including text generation using recurrent LMs trained on existing CS text.
Motivated by the significantly improved CS ASR performance, we delve into the
CS detection performance of the same ASR system in this work by reporting CS
detection accuracies together with a detailed detection error analysis.
| 2018 | Computation and Language |
Data Augmentation for Robust Keyword Spotting under Playback
Interference | Accurate on-device keyword spotting (KWS) with low false accept and false
reject rate is crucial to customer experience for far-field voice control of
conversational agents. It is particularly challenging to maintain low false
reject rate in real world conditions where there is (a) ambient noise from
external sources such as TV, household appliances, or other speech that is not
directed at the device, and (b) imperfect cancellation of the audio playback from
the device, resulting in residual echo, after being processed by the Acoustic
Echo Cancellation (AEC) system. In this paper, we propose a data augmentation
strategy to improve keyword spotting performance under these challenging
conditions. The training set audio is artificially corrupted by mixing in music
and TV/movie audio, at different signal-to-interference ratios. Our results
show that we get around 30-45% relative reduction in false reject rates, at a
range of false alarm rates, under audio playback from such devices.
| 2018 | Computation and Language |
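The mixing step described above follows directly from the definition of the signal-to-interference ratio; a sketch assuming equal-length float waveforms:

```python
import numpy as np

def mix_at_sir(speech, interference, sir_db):
    """Mix interference (music, TV/movie audio) into a training utterance at a
    target signal-to-interference ratio in dB."""
    p_s = np.mean(speech ** 2)
    p_i = np.mean(interference ** 2) + 1e-12
    # Choose scale so that 10*log10(p_s / (scale**2 * p_i)) == sir_db.
    scale = np.sqrt(p_s / (p_i * 10 ** (sir_db / 10.0)))
    return speech + scale * interference
```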
Sequence Discriminative Training for Deep Learning based Acoustic
Keyword Spotting | Speech recognition is a sequence prediction problem. Besides employing
various deep learning approaches for frame-level classification, sequence-level
discriminative training has proved indispensable for achieving the
state-of-the-art performance in large vocabulary continuous speech recognition
(LVCSR). However, keyword spotting (KWS), as one of the most common speech
recognition tasks, almost only benefits from frame-level deep learning due to
the difficulty of getting competing sequence hypotheses. The few studies on
sequence discriminative training for KWS are limited to fixed-vocabulary or
LVCSR-based methods and have not been compared to the state-of-the-art deep
learning based KWS approaches. In this paper, a sequence discriminative
training framework is proposed for both fixed vocabulary and unrestricted
acoustic KWS. Sequence discriminative training for both sequence-level
generative and discriminative models is systematically investigated. By
introducing word-independent phone lattices or non-keyword blank symbols to
construct competing hypotheses, feasible and efficient sequence discriminative
training approaches are proposed for acoustic KWS. Experiments showed that the
proposed approaches obtained consistent and significant improvement in both
fixed vocabulary and unrestricted KWS tasks, compared to previous frame-level
deep learning based acoustic KWS methods.
| 2018 | Computation and Language |
Linguistic Search Optimization for Deep Learning Based LVCSR | Recent advances in deep learning based large vocabulary continuous speech
recognition (LVCSR) invoke growing demands in large scale speech transcription.
The inference process of a speech recognizer is to find a sequence of labels
whose corresponding acoustic and language models best match the input feature
[1]. The main computation includes two stages: acoustic model (AM) inference
and linguistic search (weighted finite-state transducer, WFST). Large
computational overheads of both stages hamper the wide application of LVCSR.
Benefiting from stronger classifiers, deep learning, and more powerful
computing devices, we propose general ideas and some initial trials to solve
these fundamental problems.
| 2018 | Computation and Language |
OntoSenseNet: A Verb-Centric Ontological Resource for Indian Languages | Following approaches for understanding lexical meaning developed by Yaska,
Patanjali and Bhartrihari from Indian linguistic traditions and extending
approaches developed by Leibniz and Brentano in the modern times, a framework
of formal ontology of language was developed. This framework proposes that
meanings of words are in-formed by intrinsic and extrinsic ontological
structures. The paper aims to capture such intrinsic and extrinsic meanings of
words for two major Indian languages, namely, Hindi and Telugu. Parts-of-speech
have been rendered into sense-types and sense-classes. Using them, we have
developed a gold-standard annotated lexical resource to support semantic
understanding of a language. The resource is a collection of Hindi and Telugu
lexicons that have been manually annotated by native speakers of the languages
following our annotation guidelines. Further, the resource was utilised to
derive the adverbial sense-class distribution of verbs and the karaka-verb
sense-type distribution. Different corpora (news, novels) were compared using
verb sense-type distributions. Word embeddings were used as an aid for the
enrichment of the resource. This is a work in progress that aims at extensive
lexical coverage of the languages.
| 2018 | Computation and Language |
Cyberbullying Detection -- Technical Report 2/2018, Department of
Computer Science AGH, University of Science and Technology | The research described in this paper concerns automatic cyberbullying
detection in social media. There are two goals to achieve: building a gold
standard cyberbullying detection dataset and measuring the performance of the
Samurai cyberbullying detection system. The Formspring dataset provided in a
Kaggle competition was re-annotated as a part of the research. The annotation
procedure is described in detail and, unlike many other recent data annotation
initiatives, does not use Mechanical Turk for finding people willing to perform
the annotation. The new annotation compared to the old one seems to be more
coherent since all tested cyberbullying detection system performed better on
the former. The performance of the Samurai system is compared with 5 commercial
systems and one well-known machine learning algorithm, used for classifying
textual content, namely Fasttext. It turns out that Samurai scores the best in
all measures (accuracy, precision and recall), while Fasttext is the
second-best performing algorithm.
| 2018 | Computation and Language |
Efficient Purely Convolutional Text Encoding | In this work, we focus on a lightweight convolutional architecture that
creates fixed-size vector embeddings of sentences. Such representations are
useful for building NLP systems, including conversational agents. Our work
derives from a recently proposed recursive convolutional architecture for
auto-encoding text paragraphs at byte level. We propose alterations that
significantly reduce training time and the number of parameters, and improve
auto-encoding accuracy. Finally, we evaluate the representations created by our
model on tasks from the SentEval benchmark suite, and show that it can serve as
a better, yet fairly low-resource, alternative to popular bag-of-words
embeddings.
| 2018 | Computation and Language |
Content-driven, unsupervised clustering of news articles through
multiscale graph partitioning | The explosion in the amount of news and journalistic content being generated
across the globe, coupled with extended and instantaneous access to information
through online media, makes it difficult and time-consuming to monitor news
developments and opinion formation in real time. There is an increasing need
for tools that can pre-process, analyse and classify raw text to extract
interpretable content; specifically, identifying topics and content-driven
groupings of articles. We present here such a methodology that brings together
powerful vector embeddings from Natural Language Processing with tools from
Graph Theory that exploit diffusive dynamics on graphs to reveal natural
partitions across scales. Our framework uses a recent deep neural network text
analysis methodology (Doc2vec) to represent text in vector form and then
applies a multi-scale community detection method (Markov Stability) to
partition a similarity graph of document vectors. The method allows us to
obtain clusters of documents with similar content, at different levels of
resolution, in an unsupervised manner. We showcase our approach with the
analysis of a corpus of 9,000 news articles published by Vox Media over one
year. Our results show consistent groupings of documents according to content
without a priori assumptions about the number or type of clusters to be found.
The multilevel clustering reveals a quasi-hierarchy of topics and subtopics
with increased intelligibility and improved topic coherence as compared to
external taxonomy services and standard topic detection methods.
| 2018 | Computation and Language |
A Multi-task Ensemble Framework for Emotion, Sentiment and Intensity
Prediction | In this paper, through a multi-task ensemble framework, we address three
problems of emotion and sentiment analysis, i.e. "emotion classification &
intensity", "valence, arousal & dominance for emotion" and "valence & arousal
for sentiment". The underlying problems cover two granularities (i.e.
coarse-grained and fine-grained) and a diverse range of domains (i.e. tweets,
Facebook posts, news headlines, blogs, letters etc.). The ensemble model aims
to leverage the learned representations of three deep learning models (i.e.
CNN, LSTM and GRU) and a hand-crafted feature representation for the
predictions. Experimental results on the benchmark datasets show the efficacy
of our proposed multi-task ensemble frameworks. We obtain a performance
improvement of 2-3 points on average over single-task systems for most of
the problems and domains.
| 2018 | Computation and Language |
Predicting Expressive Speaking Style From Text In End-To-End Speech
Synthesis | Global Style Tokens (GSTs) are a recently-proposed method to learn latent
disentangled representations of high-dimensional data. GSTs can be used within
Tacotron, a state-of-the-art end-to-end text-to-speech synthesis system, to
uncover expressive factors of variation in speaking style. In this work, we
introduce the Text-Predicted Global Style Token (TP-GST) architecture, which
treats GST combination weights or style embeddings as "virtual" speaking style
labels within Tacotron. TP-GST learns to predict stylistic renderings from text
alone, requiring neither explicit labels during training nor auxiliary inputs
for inference. We show that, when trained on a dataset of expressive speech,
our system generates audio with more pitch and energy variation than two
state-of-the-art baseline models. We further demonstrate that TP-GSTs can
synthesize speech with background noise removed, and corroborate these analyses
with positive results on human-rated listener preference audiobook tasks.
Finally, we demonstrate that multi-speaker TP-GST models successfully factorize
speaker identity and speaking style. We provide a website with audio samples
for each of our findings.
| 2018 | Computation and Language |
Abstractive Summarization Improved by WordNet-based Extractive Sentences | Recently, the seq2seq abstractive summarization models have achieved good
results on the CNN/Daily Mail dataset. Still, how to improve abstractive
methods with extractive methods is a promising research direction, since
extractive methods have the potential to exploit various efficient features for
extracting important sentences from a text. In this paper, in order to improve
the semantic relevance of abstractive summaries, we adopt a WordNet-based
sentence ranking algorithm to extract the sentences which are most semantically
related to the text. Then, we design a dual attentional seq2seq framework to generate
summaries with consideration of the extracted information. At the same time, we
combine pointer-generator and coverage mechanisms to solve the problems of
out-of-vocabulary (OOV) words and duplicate words which exist in the
abstractive models. Experiments on the CNN/Daily Mail dataset show that our
models achieve competitive performance with the state-of-the-art ROUGE scores.
Human evaluations also show that the summaries generated by our models have
high semantic relevance to the original text.
| 2018 | Computation and Language |
LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse
Semantic Accumulation and Example to Pattern Transformation | Recurrent neural networks (RNNs) are temporal and cumulative in nature, and
have shown promising results in various natural language processing
tasks. Despite their success, it still remains a challenge to understand their
hidden behavior. In this work, we analyze and interpret the cumulative nature
of RNNs via a proposed technique named Layer-wIse-Semantic-Accumulation
(LISA) for explaining decisions and detecting the most likely (i.e., saliency)
patterns that the network relies on while making decisions. We demonstrate (1)
LISA: "How an RNN accumulates or builds semantics during its sequential
processing for a given text example and expected response" and (2)
Example2pattern: "What the saliency patterns look like for each category in the
data according to the network in decision making". We analyse the sensitivity
of RNNs to different inputs to check the increase or decrease in prediction
scores and further extract the saliency patterns learned by the network. We employ two
relation classification datasets: SemEval 10 Task 8 and TAC KBP Slot Filling to
explain RNN predictions via the LISA and example2pattern.
| 2018 | Computation and Language |
Instantiation | In computational linguistics, a large body of work exists on distributed
modeling of lexical relations, focussing largely on lexical relations such as
hypernymy (scientist -- person) that hold between two categories, as expressed
by common nouns. In contrast, computational linguistics has paid little
attention to entities denoted by proper nouns (Marie Curie, Mumbai, ...). These
have been investigated in detail by the Knowledge Representation and Semantic Web
communities, but generally not with regard to their linguistic properties.
Our paper closes this gap by investigating and modeling the lexical relation
of instantiation, which holds between an entity-denoting and a
category-denoting expression (Marie Curie -- scientist or Mumbai -- city). We
present a new, principled dataset for the task of instantiation detection as
well as experiments and analyses on this dataset. We obtain the following
results: (a) entities belonging to one category form a region in
distributional space, but the embedding for the category word is typically
located outside this subspace; (b) it is easy to learn to distinguish entities
from categories from distributional evidence, but due to (a), instantiation
proper is much harder to learn when using common nouns as representations of
categories; (c) this problem can be alleviated by using category
representations based on entity rather than category word embeddings.
| 2021 | Computation and Language |
Using Linguistic Cues for Analyzing Social Movements | With the growth of social media usage, social activists try to leverage these
platforms to raise awareness of social issues and engage the public worldwide.
The broad use of social media platforms in recent years has made it easier for
people to stay up-to-date on the news related to regional and worldwide events.
While social media, namely Twitter, assists social movements
to connect with more people and mobilize the movement, traditional media such
as news articles help in spreading the news related to the events in a broader
aspect. In this study, we analyze linguistic features and cues, such as
individualism vs. pluralism, sentiment and emotion to examine the relationship
between the medium and discourse over time. We conduct this work in a specific
application context, the "Black Lives Matter" (BLM) movement, and compare
discussions related to this event in social media vs. news articles.
| 2018 | Computation and Language |
Residual Memory Networks: Feed-forward approach to learn long temporal
dependencies | Training deep recurrent neural network (RNN) architectures is complicated due
to the increased network complexity. This disrupts the learning of higher order
abstracts using deep RNNs. In the case of feed-forward networks, training deep
structures is simple and faster, while learning long-term temporal information
is not possible. In this paper we propose a residual memory neural network
(RMN) architecture to model short-time dependencies using deep feed-forward
layers having residual and time delayed connections. The residual connection
paves way to construct deeper networks by enabling unhindered flow of gradients
and the time delay units capture temporal information with shared weights. The
number of layers in RMN signifies both the hierarchical processing depth and
temporal depth. The computational complexity in training RMN is significantly
less when compared to deep recurrent networks. RMN is further extended as
bi-directional RMN (BRMN) to capture both past and future information.
Experimental analysis is done on AMI corpus to substantiate the capability of
RMN in learning long-term information and hierarchical information. Recognition
performance of RMN trained with 300 hours of Switchboard corpus is compared
with various state-of-the-art LVCSR systems. The results indicate that RMN and
BRMN gain 6% and 3.8% relative improvements over LSTM and BLSTM networks,
respectively.
| 2018 | Computation and Language |
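A hedged PyTorch sketch of one residual-memory-style layer in the spirit of the entry above: a feed-forward transform of the current frame concatenated with a time-delayed frame, plus a residual connection (delay handling and sizes are simplified assumptions about the paper's RMN):

```python
import torch
import torch.nn as nn

class RMNLayer(nn.Module):
    """Feed-forward layer over the current frame and a time-delayed frame,
    with a residual connection enabling deeper stacks."""
    def __init__(self, dim, delay):
        super().__init__()
        self.delay = delay
        self.ff = nn.Linear(2 * dim, dim)

    def forward(self, x):                                    # x: (batch, time, dim)
        pad = x.new_zeros(x.size(0), self.delay, x.size(2))  # no context before t=0
        delayed = torch.cat([pad, x[:, :-self.delay]], dim=1)  # shared-weight delay
        h = torch.relu(self.ff(torch.cat([x, delayed], dim=-1)))
        return x + h                                         # residual connection
```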
Principles for Developing a Knowledge Graph of Interlinked Events from
News Headlines on Twitter | The ever-growing datasets published on Linked Open Data mainly contain
encyclopedic information. However, there is a lack of quality structured and
semantically annotated datasets extracted from unstructured real-time sources.
In this paper, we present principles for developing a knowledge graph of
interlinked events using the case study of news headlines published on Twitter
which is a real-time and eventful source of fresh information. We present the
essential pipeline containing the required tasks, ranging from choosing the
background data model, event annotation (i.e., event recognition and
classification), entity annotation and eventually interlinking events. The
state-of-the-art is limited to domain-specific scenarios for recognizing and
classifying events, whereas this paper plays the role of a domain-agnostic
road-map for developing a knowledge graph of interlinked events.
| 2018 | Computation and Language |
Did you take the pill? - Detecting Personal Intake of Medicine from
Twitter | Mining social media messages such as tweets, articles, and Facebook posts for
health and drug related information has received significant interest in
pharmacovigilance research. Social media sites (e.g., Twitter), have been used
for monitoring drug abuse, adverse reactions of drug usage and analyzing
expression of sentiments related to drugs. Most of these studies are based on
aggregated results from a large population rather than specific sets of
individuals. In order to conduct studies at an individual level or specific
cohorts, identifying posts mentioning intake of medicine by the user is
necessary. Towards this objective, we develop a classifier for identifying
mentions of personal intake of medicine in tweets. We train a stacked ensemble
of shallow convolutional neural network (CNN) models on an annotated dataset.
We use random search for tuning the hyper-parameters of the CNN models and
present an ensemble of the best models for the prediction task. Our system
produces a state-of-the-art result, with a micro-averaged F-score of 0.693. We
believe
that the developed classifier has direct uses in the areas of psychology,
health informatics, pharmacovigilance and affective computing for tracking
moods, emotions and sentiments of patients expressing intake of medicine in
social media.
| 2018 | Computation and Language |
Dialog-context aware end-to-end speech recognition | Existing speech recognition systems are typically built at the sentence
level, although it is known that dialog context, e.g. higher-level knowledge
that spans across sentences or speakers, can help the processing of long
conversations. The recent progress in end-to-end speech recognition systems
promises to integrate all available information (e.g. acoustic, language
resources) into a single model, which is then jointly optimized. It seems
natural that such dialog context information should thus also be integrated
into the end-to-end models to further improve recognition accuracy. In this
work, we present a dialog-context aware speech recognition model, which
explicitly uses context information beyond sentence-level information, in an
end-to-end fashion. Our dialog-context model captures a history of
sentence-level context so that the whole system can be trained with
dialog-context information in an end-to-end manner. We evaluate our proposed
approach on the Switchboard conversational speech corpus and show that our
system outperforms a comparable sentence-level end-to-end speech recognition
system.
| 2018 | Computation and Language |
Segmental Audio Word2Vec: Representing Utterances as Sequences of
Vectors with Applications in Spoken Term Detection | While Word2Vec represents words (in text) as vectors carrying semantic
information, audio Word2Vec was shown to be able to represent signal segments
of spoken words as vectors carrying phonetic structure information. Audio
Word2Vec can be trained in an unsupervised way from an unlabeled corpus, except
that word boundaries are needed. In this paper, we extend audio Word2Vec from
word-level to utterance-level by proposing a new segmental audio Word2Vec, in
which unsupervised spoken word boundary segmentation and audio Word2Vec are
jointly learned and mutually enhanced, so an utterance can be directly
represented as a sequence of vectors carrying phonetic structure information.
This is achieved by a segmental sequence-to-sequence autoencoder (SSAE), in
which a segmentation gate trained with reinforcement learning is inserted in
the encoder. Experiments on English, Czech, French and German show very good
performance in both unsupervised spoken word segmentation and spoken term
detection applications (significantly better than frame-based DTW).
| 2018 | Computation and Language |
ODSQA: Open-domain Spoken Question Answering Dataset | Reading comprehension by machine has been widely studied, but machine
comprehension of spoken content is still a less investigated problem. In this
paper, we release the Open-Domain Spoken Question Answering Dataset (ODSQA),
with more than three thousand questions. To the best of our knowledge, this is
the largest real SQA dataset. On this dataset, we found that ASR errors have a
catastrophic impact on SQA. To mitigate the effect of ASR errors, subword units
are employed, which brings consistent improvements over all the models. We
further found that data augmentation on text-based QA training examples can
improve SQA.
| 2018 | Computation and Language |
How did the discussion go: Discourse act classification in social media
conversations | We propose a novel attention based hierarchical LSTM model to classify
discourse act sequences in social media conversations, aimed at mining data
from online discussion using textual meanings beyond sentence level. The very
uniqueness of the task is the complete categorization of possible pragmatic
roles in informal textual discussions, contrary to extraction of
question-answers, stance detection or sarcasm identification, which are very
much role-specific tasks. An early attempt was made on a Reddit discussion
dataset. We train our model on the same data, and present test results on two
different datasets, one from Reddit and one from Facebook. Our proposed model
outperformed the previous one in terms of domain independence; without using
platform-dependent structural features, our hierarchical LSTM with word
relevance attention mechanism achieved F1-scores of 71% and 66%, respectively,
to predict discourse roles of comments in Reddit and Facebook discussions.
The efficiency of recurrent and convolutional architectures in learning
discursive representations on the same task has been presented and analyzed,
with different word and comment embedding schemes. Our attention mechanism
enables us to inquire into relevance ordering of text segments according to
their roles in discourse. We present a human annotator experiment to unveil
important observations about modeling and data annotation. Equipped with our
text-based discourse identification model, we inquire into how heterogeneous
non-textual features like location, time, leaning of information, etc. play
their roles in characterizing online discussions on Facebook.
| 2019 | Computation and Language |
Word-Level Loss Extensions for Neural Temporal Relation Classification | Unsupervised pre-trained word embeddings are used effectively for many tasks
in natural language processing to leverage unlabeled textual data. Often these
embeddings are either used as initializations or as fixed word representations
for task-specific classification models. In this work, we extend our
classification model's task loss with an unsupervised auxiliary loss on the
word-embedding level of the model. This is to ensure that the learned word
representations contain both task-specific features, learned from the
supervised loss component, and more general features learned from the
unsupervised loss component. We evaluate our approach on the task of temporal
relation extraction, in particular, narrative containment relation extraction
from clinical records, and show that continued training of the embeddings on
the unsupervised objective together with the task objective gives better
task-specific embeddings, and results in an improvement over the state of the
art on the THYME dataset, using only a general-domain part-of-speech tagger as
a linguistic resource.
| 2,018 | Computation and Language |
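As a hedged illustration of the training setup this abstract describes, the sketch below extends a classifier's task loss with an unsupervised, skip-gram-style auxiliary loss on the shared embedding layer. PyTorch is assumed; all module names, the choice of auxiliary objective, and the weighting factor are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingAuxModel(nn.Module):
    """Classifier whose embedding layer is shared with an auxiliary objective."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)      # shared word embeddings
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.clf = nn.Linear(hidden_dim, num_classes)
        self.ctx = nn.Embedding(vocab_size, emb_dim)      # context table for the aux loss

    def task_logits(self, token_ids):
        hidden, _ = self.encoder(self.emb(token_ids))
        return self.clf(hidden[:, -1])                    # last state -> class scores

    def aux_loss(self, centers, contexts, negatives):
        # Skip-gram with negative sampling over the same embedding table,
        # keeping the embeddings close to their distributional behaviour.
        c = self.emb(centers)                             # (batch, dim)
        pos = (c * self.ctx(contexts)).sum(-1)            # (batch,)
        neg = torch.bmm(self.ctx(negatives), c.unsqueeze(-1)).squeeze(-1)
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())

def training_step(model, batch, aux_weight=0.1):
    # Joint objective: supervised task loss + weighted unsupervised aux loss.
    loss_task = F.cross_entropy(model.task_logits(batch["tokens"]), batch["labels"])
    loss_aux = model.aux_loss(batch["centers"], batch["contexts"], batch["negatives"])
    return loss_task + aux_weight * loss_aux
```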
Device-directed Utterance Detection | In this work, we propose a classifier for distinguishing device-directed
queries from background speech in the context of interactions with voice
assistants. Applications include rejection of false wake-ups or unintended
interactions as well as enabling wake-word free follow-up queries. Consider the
example interaction: $"Computer,~play~music", "Computer,~reduce~the~volume"$.
In this interaction, the user needs to repeat the wake-word ($Computer$) for
the second query. To allow for more natural interactions, the device could
immediately re-enter listening state after the first query (without wake-word
repetition) and accept or reject a potential follow-up as device-directed or
background speech. The proposed model consists of two long short-term memory
(LSTM) neural networks trained on acoustic features and automatic speech
recognition (ASR) 1-best hypotheses, respectively. A feed-forward deep neural
network (DNN) is then trained to combine the acoustic and 1-best embeddings,
derived from the LSTMs, with features from the ASR decoder. Experimental
results show that the ASR decoder features, acoustic embeddings, and 1-best
embeddings yield equal-error-rates (EER) of $9.3~\%$, $10.9~\%$ and $20.1~\%$,
respectively. Combining the features resulted in a $44~\%$ relative improvement and a
final EER of $5.2~\%$.
| 2,018 | Computation and Language |
Design Challenges in Named Entity Transliteration | We analyze some of the fundamental design challenges that impact the
development of a multilingual state-of-the-art named entity transliteration
system, including curating bilingual named entity datasets and evaluation of
multiple transliteration methods. We empirically evaluate the transliteration
task using traditional weighted finite state transducer (WFST) approach against
two neural approaches: the encoder-decoder recurrent neural network method and
the recent, non-sequential Transformer method. In order to improve the
availability of bilingual named entity transliteration datasets, we release
personal name bilingual dictionaries mined from Wikidata for English to Russian, Hebrew,
Arabic and Japanese Katakana. Our code and dictionaries are publicly available.
| 2,018 | Computation and Language |
Adversarial Domain Adaptation for Variational Neural Language Generation
in Dialogue Systems | Domain Adaptation arises when we aim at learning from a source domain a model
that can perform acceptably well on a different target domain. It is
especially crucial for Natural Language Generation (NLG) in Spoken Dialogue
Systems when there are sufficient annotated data in the source domain, but
only limited labeled data in the target domain. How to effectively utilize as
much existing knowledge from the source domain as possible is a crucial issue in
domain adaptation. In this paper, we propose an adversarial training procedure
to train a Variational encoder-decoder based language generator via multiple
adaptation steps. In this procedure, a model is first trained on a source
domain data and then fine-tuned on a small set of target domain utterances
under the guidance of two proposed critics. Experimental results show that the
proposed method can effectively leverage the existing knowledge in the source
domain to adapt to another related domain by using only a small amount of
in-domain data.
| 2,018 | Computation and Language |
End-to-end Speech Recognition with Word-based RNN Language Models | This paper investigates the impact of word-based RNN language models
(RNN-LMs) on the performance of end-to-end automatic speech recognition (ASR).
In our prior work, we have proposed a multi-level LM, in which character-based
and word-based RNN-LMs are combined in hybrid CTC/attention-based ASR. Although
this multi-level approach achieves significant error reduction in the Wall
Street Journal (WSJ) task, two different LMs need to be trained and used for
decoding, which increases the computational cost and memory usage. In this
paper, we further propose a novel word-based RNN-LM, which allows us to decode
with only the word-based LM: it provides look-ahead word probabilities to
predict the next characters instead of the character-based LM, leading to
competitive accuracy with less computation compared to the multi-level LM. We demonstrate
the efficacy of the word-based RNN-LMs using a larger corpus, LibriSpeech, in
addition to the WSJ corpus used in our prior work. Furthermore, we show that the
proposed model achieves 5.1% WER on the WSJ Eval'92 test set when the vocabulary
size is increased, which is the best WER reported for end-to-end ASR systems on
this benchmark.
| 2,018 | Computation and Language |
Learning to Write Notes in Electronic Health Records | Clinicians spend a significant amount of time inputting free-form textual
notes into Electronic Health Records (EHR) systems. Much of this documentation
work is seen as a burden, reducing time spent with patients and contributing to
clinician burnout. With the aspiration of AI-assisted note-writing, we propose
a new language modeling task predicting the content of notes conditioned on
past data from a patient's medical record, including patient demographics,
labs, medications, and past notes. We train generative models using the public,
de-identified MIMIC-III dataset and compare generated notes with those in the
dataset on multiple measures. We find that much of the content can be
predicted, and that many common templates found in notes can be learned. We
discuss how such models can be useful in supporting assistive note-writing
features such as error-detection and auto-complete.
| 2,018 | Computation and Language |
Learning to Focus when Ranking Answers | One of the main challenges in ranking is embedding the query and document
pairs into a joint feature space, which can then be fed to a learning-to-rank
algorithm. To achieve this representation, the conventional state of the art
approaches perform extensive feature engineering that encode the similarity of
the query-answer pair. Recently, deep-learning solutions have shown that it is
possible to achieve comparable performance, in some settings, by learning the
similarity representation directly from data. Unfortunately, previous models
perform poorly on longer texts, or on texts with significant portion of
irrelevant information, or which are grammatically incorrect. To overcome these
limitations, we propose a novel ranking algorithm for question answering,
QARAT, which uses an attention mechanism to learn on which words and phrases to
focus when building the mutual representation. We demonstrate superior ranking
performance on several real-world question-answer ranking datasets, and provide
visualizations of the attention mechanism to offer more insights into how our
models of attention could benefit ranking for difficult question answering
challenges.
| 2,018 | Computation and Language |
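A minimal sketch of the kind of word-level attention pooling the QARAT abstract describes, which learns which words to focus on when building the joint representation. PyTorch is assumed; the single-layer scorer and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Pool word states into one vector using learned focus weights."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, word_states, mask):
        # word_states: (batch, seq_len, dim); mask: (batch, seq_len), 1 for real tokens
        scores = self.scorer(word_states).squeeze(-1)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)            # which words to focus on
        pooled = (weights.unsqueeze(-1) * word_states).sum(dim=1)
        return pooled, weights                             # weights can be visualized
```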
Debugging Neural Machine Translations | In this paper, we describe a tool for debugging the output and attention
weights of neural machine translation (NMT) systems and for improved
estimations of confidence about the output based on the attention. The purpose
of the tool is to help researchers and developers find weak and faulty example
translations that their NMT systems produce without the need for reference
translations. Our tool also includes an option to directly compare translation
outputs from two different NMT engines or experiments. In addition, we present
a demo website of our tool with examples of good and bad translations:
http://attention.lielakeda.lv
| 2,018 | Computation and Language |
Natural Language Generation by Hierarchical Decoding with Linguistic
Patterns | Natural language generation (NLG) is a critical component in spoken dialogue
systems. Classic NLG can be divided into two phases: (1) sentence planning:
deciding on the overall sentence structure, (2) surface realization:
determining specific word forms and flattening the sentence structure into a
string. Many simple NLG models are based on recurrent neural networks (RNN) and
the sequence-to-sequence (seq2seq) model, which basically contains an
encoder-decoder structure; these NLG models generate sentences from scratch by
jointly optimizing sentence planning and surface realization using a simple
cross-entropy loss training criterion. However, the simple encoder-decoder
architecture usually struggles to generate complex and long sentences,
because the decoder has to learn all grammar and diction knowledge. This paper
introduces a hierarchical decoding NLG model based on linguistic patterns in
different levels, and shows that the proposed method outperforms the
traditional one with a smaller model size. Furthermore, the design of the
hierarchical decoding is flexible and easily extensible in various NLG systems.
| 2,018 | Computation and Language |
Effective Character-augmented Word Embedding for Machine Reading
Comprehension | Machine reading comprehension is a task of modeling the relationship between a
passage and a query. In terms of the deep learning framework, most state-of-the-art
models simply concatenate word- and character-level representations, which has been
shown to be suboptimal for the concerned task. In this paper, we empirically explore
different integration strategies of word and character embeddings and propose a
character-augmented reader which attends character-level representation to
augment word embedding with a short list to improve word representations,
especially for rare words. Experimental results show that the proposed approach
helps the baseline model significantly outperform state-of-the-art baselines on
various public benchmarks.
| 2,021 | Computation and Language |
Debunking Fake News One Feature at a Time | Identifying the stance of a news article body with respect to a certain
headline is the first step to automated fake news detection. In this paper, we
introduce a 2-stage ensemble model to solve the stance detection task. By using
only hand-crafted features as input to a gradient boosting classifier, we are
able to achieve a score of 9161.5 out of 11651.25 (78.63%) on the official Fake
News Challenge (Stage 1) dataset. We identify the most useful features for
detecting fake news and discuss how sampling techniques can be used to improve
recall accuracy on a highly imbalanced dataset.
| 2,018 | Computation and Language |
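A hedged sketch of the described setup: hand-crafted headline/body features fed to a gradient boosting classifier in scikit-learn. The feature function, toy data, and hyperparameters below are stand-ins; the paper's actual features are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def handcrafted_features(headline: str, body: str) -> np.ndarray:
    """Toy stand-ins for hand-crafted stance features:
    word overlap and simple length statistics."""
    h, b = headline.lower().split(), body.lower().split()
    return np.array([len(set(h) & set(b)), len(h), len(b)], dtype=float)

pairs = [("markets rally", "stocks rose sharply as the markets rally continued"),
         ("markets rally", "the mayor opened a new bridge downtown"),
         ("storm hits coast", "a powerful storm hits the coast tonight"),
         ("storm hits coast", "the local team wins the championship final")]
labels = ["related", "unrelated", "related", "unrelated"]

X = np.stack([handcrafted_features(h, b) for h, b in pairs])
clf = GradientBoostingClassifier(n_estimators=100).fit(X, labels)
print(clf.predict([handcrafted_features("markets rally", "markets rally again")]))
```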
Exploiting Effective Representations for Chinese Sentiment Analysis
Using a Multi-Channel Convolutional Neural Network | Effective representation of a text is critical for various natural language
processing tasks. For the particular task of Chinese sentiment analysis, it is
important to understand and choose an effective representation of a text from
different forms of Chinese representations such as word, character and pinyin.
This paper presents a systematic study of the effect of these representations
for Chinese sentiment analysis by proposing a multi-channel convolutional
neural network (MCCNN), where each channel corresponds to a representation.
Experimental results show that: (1) word wins on datasets with a low OOV rate,
while character wins otherwise; (2) using these representations in combination
generally improves the performance; (3) the representations based on MCCNN
outperform conventional n-gram features using SVM; (4) the proposed MCCNN model
achieves competitive performance against the state-of-the-art model
fastText for Chinese sentiment analysis.
| 2,018 | Computation and Language |
Sentimental Content Analysis and Knowledge Extraction from News Articles | In the web era, technology has revolutionized human life, and plenty of data
and information are published on the Internet each day. For instance, news
agencies publish news on their websites all over the world. These raw data
could be an important resource for knowledge extraction. These shared data
contain emotions (i.e., positive, neutral or negative) toward various topics;
therefore, sentimental content extraction could be a beneficial task in many
aspects. Extracting the sentiment of news illustrates highly valuable
information about the events over a period of time and the viewpoint of a media
outlet or news agency toward these events. In this paper an attempt is made to
propose an approach for analyzing news and extracting useful knowledge from it.
Firstly, we attempt to extract a noise-robust sentiment of news documents; to
this end, the news associated with six countries: the United States, the United
Kingdom, Germany, Canada, France and Australia in 5 different news categories:
Politics, Sports, Business, Entertainment and Technology is downloaded. We then
compare the condition of different countries in each of the 5 news topics based on the
extracted sentiments and emotional contents in news documents. Moreover, we
propose an approach to reduce the bulky news data and extract the hottest topics
and news titles as knowledge. Eventually, we generate a word model to map
each word to a fixed-size vector by Word2Vec in order to understand the
relations between words in our collected news database.
| 2,018 | Computation and Language |
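The final step mentioned above, mapping each word to a fixed-size vector with Word2Vec, might look like the following gensim sketch; corpus contents and hyperparameters are illustrative assumptions.

```python
from gensim.models import Word2Vec

# Tokenized news documents (toy examples standing in for the collected corpus).
sentences = [["government", "announces", "new", "economic", "policy"],
             ["team", "wins", "championship", "final"],
             ["parliament", "debates", "economic", "policy"]]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
vector = model.wv["policy"]                          # fixed-size vector for a word
similar = model.wv.most_similar("policy", topn=3)    # related words by cosine similarity
```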
Arithmetic Word Problem Solver using Frame Identification | Automatic word problem solving has always posed a great challenge for the NLP
community. Usually a word problem is a narrative comprising a few sentences,
and a question is asked about a quantity referred to in the sentences. Solving
a word problem involves reasoning across sentences, identifying operations,
their order and relevant quantities, and discarding irrelevant quantities. In this
paper, we present a novel approach for automatic arithmetic word problem
solving. Our approach starts with frame identification. Each frame can either
be classified as a state or an action frame. The frame identification is
dependent on the verb in a sentence. Every frame is unique and is identified by
its slots. The slots are filled using dependency parsed output of a sentence.
The slots are entity holder, entity, quantity of the entity, recipient, and
additional information such as place and time. The slots and frames help to
identify the type of question asked and the entity referred to. Action frames
act on state frame(s), causing a change in the quantities of the state frames.
The frames are then used to build a graph where any change in quantities can be
propagated to the neighboring nodes. Most of the current solvers can only
answer questions related to the quantity, while our system can answer different
kinds of questions like `who' and `what', in addition to the quantity-related
question `how many'.
The major contributions of this paper are threefold: (1) a frame-annotated
corpus (with a frame annotation tool), (2) a frame identification module, and
(3) a new, easily understandable framework for word problem solving.
| 2,018 | Computation and Language |
A Survey on Sentiment and Emotion Analysis for Computational Literary
Studies | Emotions are a crucial part of compelling narratives: literature tells us
about people with goals, desires, passions, and intentions. Emotion analysis is
part of the broader and larger field of sentiment analysis, and receives
increasing attention in literary studies. In the past, the affective dimension
of literature was mainly studied in the context of literary hermeneutics.
However, with the emergence of the research field known as Digital Humanities
(DH), some studies of emotions in a literary context have taken a computational
turn. Given that DH is still being formed as a field, this direction
of research can be considered relatively new. In this survey, we offer an
overview of the existing body of research on emotion analysis as applied to
literature. The research under review deals with a variety of topics including
tracking dramatic changes in plot development, network analysis of a literary
text, and understanding the emotionality of texts, among other topics.
| 2,022 | Computation and Language |
Building a Kannada POS Tagger Using Machine Learning and Neural Network
Models | POS Tagging serves as a preliminary task for many NLP applications. Kannada
is a relatively resource-poor Indian language with a very limited number of
quality NLP tools available for use. An accurate and reliable POS Tagger is essential for
many NLP tasks like shallow parsing, dependency parsing, sentiment analysis,
named entity recognition. We present a statistical POS tagger for Kannada using
different machine learning and neural network models. Our Kannada POS tagger
outperforms the state-of-the-art Kannada POS tagger by 6%. Our contribution in
this paper is threefold: building a generic POS Tagger, comparing the
performances of different modeling techniques, and exploring the use of character
and word embeddings together for Kannada POS Tagging.
| 2,018 | Computation and Language |
Code-Mixed Sentiment Analysis Using Machine Learning and Neural Network
Approaches | Sentiment Analysis for Indian Languages (SAIL)-Code Mixed tools contest aimed
at identifying the sentence level sentiment polarity of the code-mixed dataset
of Indian language pairs (Hi-En, Ben-Hi-En). The Hi-En dataset is henceforth
referred to as HI-EN and the Ben-Hi-En dataset as BN-EN. For this task, we
submitted four models for sentiment analysis of code-mixed HI-EN and BN-EN
datasets. The first model was an ensemble voting classifier consisting of three
classifiers - linear SVM, logistic regression and random forests while the
second one was a linear SVM. Both the models used TF-IDF feature vectors of
character n-grams where n ranged from 2 to 6. We used scikit-learn (sklearn)
machine learning library for implementing both the approaches. Run1 was
obtained from the voting classifier and Run2 used the linear SVM model for
producing the results. Out of the four submitted outputs Run2 outperformed Run1
in both the datasets. We finished first in the contest for both HI-EN with an
F-score of 0.569 and BN-EN with an F-score of 0.526.
| 2,018 | Computation and Language |
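Since the abstract names scikit-learn, a reconstruction sketch of the described pipeline is given below: TF-IDF vectors of character n-grams (n from 2 to 6) feeding a voting ensemble of linear SVM, logistic regression, and random forests. The toy texts, labels, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline

# TF-IDF over character n-grams with n in [2, 6], as the abstract states.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 6))
# Run1-style ensemble: hard voting over linear SVM, logistic regression,
# and random forest (hard voting because LinearSVC has no probabilities).
ensemble = VotingClassifier(estimators=[
    ("svm", LinearSVC()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=100)),
], voting="hard")
pipeline = make_pipeline(vectorizer, ensemble)

texts = ["movie bahut accha tha", "match bekar tha yaar",
         "kya mast gaana hai", "service bilkul kharab thi"]   # toy Hi-En examples
labels = ["positive", "negative", "positive", "negative"]
pipeline.fit(texts, labels)
print(pipeline.predict(["film accha hai"]))
```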
Efficient human-like semantic representations via the Information
Bottleneck principle | Maintaining efficient semantic representations of the environment is a major
challenge both for humans and for machines. While human languages represent
useful solutions to this problem, it is not yet clear what computational
principle could give rise to similar solutions in machines. In this work we
propose an answer to this open question. We suggest that languages compress
percepts into words by optimizing the Information Bottleneck (IB) tradeoff
between the complexity and accuracy of their lexicons. We present empirical
evidence that this principle may give rise to human-like semantic
representations, by exploring how human languages categorize colors. We show
that color naming systems across languages are near-optimal in the IB sense,
and that these natural systems are similar to artificial IB color naming
systems with a single tradeoff parameter controlling the cross-language
variability. In addition, the IB systems evolve through a sequence of
structural phase transitions, demonstrating a possible adaptation process. This
work thus identifies a computational principle that characterizes human
semantic systems, and that could usefully inform semantic representations in
machines.
| 2,017 | Computation and Language |
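For reference, the Information Bottleneck tradeoff invoked above can be written, in the standard Tishby-style formulation, as the objective below; the variable names are generic rather than the paper's exact notation, with an encoder q(w|x) mapping percepts X to words W while preserving information about a relevant variable Y.

```latex
% IB objective: minimize lexicon complexity I(X;W) while retaining
% accuracy I(W;Y); beta is the single tradeoff parameter mentioned above.
\min_{q(w \mid x)} \; I(X; W) \;-\; \beta \, I(W; Y)
```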
Lingke: A Fine-grained Multi-turn Chatbot for Customer Service | Traditional chatbots usually need a mass of human dialogue data, especially
when using supervised machine learning methods. Though they can easily deal with
single-turn question answering, their performance on multi-turn conversations is
usually unsatisfactory. In this paper, we present Lingke, an information
retrieval augmented chatbot which is able to answer questions based on a given
product introduction document and deal with multi-turn conversations. We introduce
fine-grained pipeline processing to distill responses based on unstructured
documents, and attentive sequential context-response matching for multi-turn
conversations.
| 2,018 | Computation and Language |
Hybrid approach for transliteration of Algerian arabizi: a primary study | In this paper, we present a hybrid approach for the transliteration of
Algerian Arabizi. We define a set of rules that enable the conversion from
Arabizi to Arabic. Through these rules, we generate a set of candidates for the
transliteration of each Arabizi word into Arabic and then extract the best
candidate. This approach was evaluated on three test corpora, and the
obtained results show a precision score of up to 75.11% for the best result.
These results allow us to verify that our approach is very competitive compared
to other works on Arabizi transliteration in general.
Keywords: Arabizi, Algerian Dialect, Algerian Arabizi, Transliteration.
| 2,018 | Computation and Language |
TwoWingOS: A Two-Wing Optimization Strategy for Evidential Claim
Verification | Determining whether a given claim is supported by evidence is a fundamental
NLP problem that is best modeled as Textual Entailment. However, given a large
collection of text, finding evidence that could support or refute a given claim
is a challenge in itself, amplified by the fact that different evidence might
be needed to support or refute a claim. Nevertheless, most prior work decouples
evidence identification from determining the truth value of the claim given the
evidence.
We propose to consider these two aspects jointly. We develop TwoWingOS
(two-wing optimization strategy), a system that, while identifying appropriate
evidence for a claim, also determines whether or not the claim is supported by
the evidence. Given the claim, TwoWingOS attempts to identify a subset of the
evidence candidates; given the predicted evidence, it then attempts to
determine the truth value of the corresponding claim. We treat this challenge
as coupled optimization problems, training a joint model for it. TwoWingOS
offers two advantages: (i) unlike pipeline systems, it facilitates
flexible-size evidence sets, and (ii) joint training improves both the claim
entailment and the evidence identification. Experiments on a benchmark dataset
show state-of-the-art performance. Code: https://github.com/yinwenpeng/FEVER
| 2,018 | Computation and Language |
Making effective use of healthcare data using data-to-text technology | Healthcare organizations are in a continuous effort to improve health
outcomes, reduce costs and enhance patient experience of care. Data is
essential to measure and help achieve these improvements in healthcare
delivery. Consequently, a data influx from various clinical, financial and
operational sources is now overtaking healthcare organizations and their
patients. The effective use of this data, however, is a major challenge.
Clearly, text is an important medium to make data accessible. Financial reports
are produced to assess healthcare organizations on some key performance
indicators to steer their healthcare delivery. Similarly, at a clinical level,
data on patient status is conveyed by means of textual descriptions to
facilitate patient review, shift handover and care transitions. Likewise,
patients are informed about data on their health status and treatments via
text, in the form of reports or via ehealth platforms by their doctors.
Unfortunately, such text is the outcome of a highly labour-intensive process if
it is done by healthcare professionals. It is also prone to incompleteness and
subjectivity, and is hard to scale up to different domains, wider audiences and
varying communication purposes. Data-to-text is a recent breakthrough
technology in artificial intelligence which automatically generates natural
language in the form of text or speech from data. This chapter provides a
survey of data-to-text technology, with a focus on how it can be deployed in a
healthcare setting. It will (1) give an up-to-date synthesis of data-to-text
approaches, (2) give a categorized overview of use cases in healthcare, (3)
seek to make a strong case for evaluating and implementing data-to-text in a
healthcare setting, and (4) highlight recent research challenges.
| 2,018 | Computation and Language |
Densely Connected Convolutional Networks for Speech Recognition | This paper presents our latest investigation on Densely Connected
Convolutional Networks (DenseNets) for acoustic modelling (AM) in automatic
speech recognition. DenseNets are very deep, compact convolutional neural
networks which have demonstrated remarkable improvements over
state-of-the-art results on several data sets in computer vision. Our
experimental results show that DenseNets can be used for AM, significantly
outperforming other neural-based models such as DNNs, CNNs, and VGGs. Furthermore,
results on the Wall Street Journal corpus revealed that with only half of the
training data, DenseNet was able to outperform other models trained with the
full data set by a large margin.
| 2,018 | Computation and Language |
LemmaTag: Jointly Tagging and Lemmatizing for Morphologically-Rich
Languages with BRNNs | We present LemmaTag, a featureless neural network architecture that jointly
generates part-of-speech tags and lemmas for sentences by using bidirectional
RNNs with character-level and word-level embeddings. We demonstrate that both
tasks benefit from sharing the encoding part of the network, predicting tag
subcategories, and using the tagger output as an input to the lemmatizer. We
evaluate our model across several languages with complex morphology, and it
surpasses state-of-the-art accuracy in both part-of-speech tagging and
lemmatization in Czech, German, and Arabic.
| 2,018 | Computation and Language |
Unsupervised Keyphrase Extraction from Scientific Publications | We propose a novel unsupervised keyphrase extraction approach that filters
candidate keywords using outlier detection. It starts by training word
embeddings on the target document to capture semantic regularities among the
words. It then uses the minimum covariance determinant estimator to model the
distribution of non-keyphrase word vectors, under the assumption that these
vectors come from the same distribution, indicative of their irrelevance to the
semantics expressed by the dimensions of the learned vector representation.
Candidate keyphrases only consist of words that are detected as outliers of
this dominant distribution. Empirical results show that our approach
outperforms state-of-the-art and recent unsupervised keyphrase extraction
methods.
| 2,020 | Computation and Language |
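A sketch of the outlier-detection step in scikit-learn: fit a minimum covariance determinant estimator to the word vectors and treat the most distant words as keyphrase candidates. The random vectors and the 90th-percentile threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(200, 16))   # stand-in for embeddings trained on the document
words = [f"word{i}" for i in range(200)]

mcd = MinCovDet(random_state=0).fit(word_vectors)
dists = mcd.mahalanobis(word_vectors)       # distance from the dominant (non-keyphrase) distribution
threshold = np.percentile(dists, 90)        # keep the most "outlying" 10% of words
candidates = [w for w, d in zip(words, dists) if d > threshold]
```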
Learning to Represent Bilingual Dictionaries | Bilingual word embeddings have been widely used to capture the similarity of
lexical semantics in different human languages. However, many applications,
such as cross-lingual semantic search and question answering, can benefit
greatly from the cross-lingual correspondence between sentences and lexicons.
To bridge this gap, we propose a neural embedding model that leverages
bilingual dictionaries. The proposed model is trained to map the literal word
definitions to the cross-lingual target words, for which we explore
different sentence encoding techniques. To enhance the learning process on
limited resources, our model adopts several critical learning strategies,
including multi-task learning on different bridges of languages, and joint
learning of the dictionary model with a bilingual word embedding model.
Experimental evaluation focuses on two applications. The results of the
cross-lingual reverse dictionary retrieval task show our model's promising
ability to comprehend bilingual concepts based on descriptions, and
highlight the effectiveness of proposed learning strategies in improving
performance. Meanwhile, our model effectively addresses the bilingual
paraphrase identification problem and significantly outperforms previous
approaches.
| 2,019 | Computation and Language |
Hierarchical Attention: What Really Counts in Various NLP Tasks | Attention mechanisms in sequence-to-sequence models have shown great ability
and strong performance in various natural language processing (NLP) tasks,
such as sentence embedding, text generation, machine translation, machine
reading comprehension, etc. Unfortunately, existing attention mechanisms only
learn either high-level or low-level features. In this paper, we argue that the
lack of hierarchical mechanisms is a bottleneck in improving the performance of
the attention mechanisms, and propose a novel Hierarchical Attention Mechanism
(Ham) based on the weighted sum of different layers of a multi-level attention.
Ham achieves a state-of-the-art BLEU score of 0.26 on the Chinese poem generation
task and an averaged improvement of nearly 6.5% compared with existing machine
reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our
experiments and theorems reveal that Ham has greater generalization and
representation ability than existing attention mechanisms.
| 2,018 | Computation and Language |
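A minimal sketch of the Ham idea as the abstract states it: the final representation is a learned weighted sum over the outputs of stacked attention layers, so low- and high-level attention features both contribute. Standard multi-head self-attention stands in for the paper's attention layers; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    def __init__(self, dim, num_layers=3, num_heads=4):   # dim must be divisible by num_heads
        super().__init__()
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers))
        self.mix = nn.Parameter(torch.zeros(num_layers))  # one learned weight per level

    def forward(self, x):                                 # x: (batch, seq, dim)
        outputs = []
        for attn in self.layers:
            x, _ = attn(x, x, x)                          # self-attention at this level
            outputs.append(x)
        w = torch.softmax(self.mix, dim=0)                # normalized level weights
        return sum(wi * oi for wi, oi in zip(w, outputs)) # weighted sum over levels
```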
From POS tagging to dependency parsing for biomedical event extraction | Background: Given the importance of relation or event extraction from
biomedical research publications to support knowledge capture and synthesis,
and the strong dependency of approaches to this information extraction task on
syntactic information, it is valuable to understand which approaches to
syntactic processing of biomedical text have the highest performance. Results:
We perform an empirical study comparing state-of-the-art traditional
feature-based and neural network-based models for two core natural language
processing tasks of part-of-speech (POS) tagging and dependency parsing on two
benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge,
there is no recent work making such comparisons in the biomedical context;
specifically no detailed analysis of neural models on this data is available.
Experimental results show that in general, the neural models outperform the
feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We
also perform a task-oriented evaluation to investigate the influences of these
models in a downstream application on biomedical event extraction, and show
that better intrinsic parsing performance does not always imply better
extrinsic event extraction performance. Conclusion: We have presented a
detailed empirical study comparing traditional feature-based and neural
network-based models for POS tagging and dependency parsing in the biomedical
context, and also investigated the influence of parser selection for a
biomedical event extraction downstream task. Availability of data and material:
We make the retrained models available at
https://github.com/datquocnguyen/BioPosDep
| 2,019 | Computation and Language |
Familia: A Configurable Topic Modeling Framework for Industrial Text
Engineering | In the last decade, a variety of topic models have been proposed for text
engineering. However, except for Probabilistic Latent Semantic Analysis (PLSA) and
Latent Dirichlet Allocation (LDA), most existing topic models are seldom
applied or considered in industrial scenarios. This phenomenon is caused by the
fact that very few convenient tools support these topic models so
far. Intimidated by the demanding expertise and labor of designing and
implementing parameter inference algorithms, software engineers are prone to
simply resort to PLSA/LDA without considering whether it is appropriate for the
problem at hand. In this paper, we propose a configurable topic modeling
framework named Familia, in order to bridge the huge gap between academic
research fruits and current industrial practice. Familia supports an important
line of topic models that are widely applicable in text engineering scenarios.
In order to relieve the burden on software engineers without knowledge of Bayesian
networks, Familia is able to conduct automatic parameter inference for a
variety of topic models. Simply through changing the data organization of
Familia, software engineers are able to easily explore a broad spectrum of
existing topic models or even design their own topic models, and find the one
that best suits the problem at hand. With its superior extendability, Familia
has a novel sampling mechanism that strikes a balance between effectiveness and
efficiency of parameter inference. Furthermore, Familia is essentially a big
topic modeling framework that supports parallel parameter inference and
distributed parameter storage. The utility and necessity of Familia are
demonstrated in real-life industrial applications. Familia would significantly
enlarge software engineers' arsenal of topic models and pave the way for
utilizing highly customized topic models in real-life problems.
| 2,018 | Computation and Language |
Ancient-Modern Chinese Translation with a Large Training Dataset | Ancient Chinese texts carry the wisdom and spiritual culture of the Chinese nation.
Automatic translation from ancient Chinese to modern Chinese helps to inherit
and carry forward the quintessence of the ancients. However, the lack of
large-scale parallel corpus limits the study of machine translation in
Ancient-Modern Chinese. In this paper, we propose an Ancient-Modern Chinese
clause alignment approach based on the characteristics of these two languages.
This method combines both lexical-based information and statistical-based
information, and achieves a 94.2 F1-score on our manually annotated test set. We
use this method to create a new large-scale Ancient-Modern Chinese parallel
corpus which contains 1.24M bilingual pairs. To the best of our knowledge, this is the
first large high-quality Ancient-Modern Chinese dataset. Furthermore, we
analyzed and compared the performance of the SMT and various NMT models on this
dataset and provided a strong baseline for this task.
| 2,019 | Computation and Language |
Dropout during inference as a model for neurological degeneration in an
image captioning network | We replicate a variation of the image captioning architecture by Vinyals et
al. (2015), then introduce dropout during inference mode to simulate the
effects of neurodegenerative diseases like Alzheimer's disease (AD) and
Wernicke's aphasia (WA). We evaluate the effects of dropout on language
production by measuring the KL-divergence of word frequency distributions and
other linguistic metrics as dropout is added. We find that the generated
sentences most closely approximate the word frequency distribution of the
training corpus when using a moderate dropout of 0.4 during inference.
| 2,018 | Computation and Language |
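A sketch of how dropout can be kept active at inference time, as the abstract describes: functional dropout with training=True bypasses model.eval(), so each forward pass stochastically silences units. The function name is hypothetical; p = 0.4 is the moderate rate the abstract reports works best.

```python
import torch
import torch.nn.functional as F

def degraded_state(decoder_state: torch.Tensor, p: float = 0.4) -> torch.Tensor:
    # Called inside the captioning decoder during generation: training=True
    # forces dropout even though the rest of the model is in eval mode.
    return F.dropout(decoder_state, p=p, training=True)
```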
Knowledge Graph Embedding with Entity Neighbors and Deep Memory Network | Knowledge Graph Embedding (KGE) aims to represent entities and relations of
knowledge graph in a low-dimensional continuous vector space. Recent works
focus on incorporating structural knowledge with additional information, such
as entity descriptions, relation paths and so on. However, commonly used
additional information usually contains plenty of noise, which makes it hard to
learn valuable representations. In this paper, we propose a new kind of
additional information, called entity neighbors, which contain both semantic
and topological features about a given entity. We then develop a deep memory
network model to encode information from neighbors. Employing a gating
mechanism, representations of structure and neighbors are integrated into a
joint representation. The experimental results show that our model outperforms
existing KGE methods utilizing entity descriptions and achieves
state-of-the-art metrics on 4 datasets.
| 2,018 | Computation and Language |
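A sketch of the gating mechanism described above: a learned gate blends the structure-based entity embedding with the neighbor-based encoding into one joint representation. PyTorch is assumed and the names are illustrative.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, structure_emb, neighbor_emb):
        g = torch.sigmoid(self.gate(torch.cat([structure_emb, neighbor_emb], dim=-1)))
        return g * structure_emb + (1 - g) * neighbor_emb  # joint entity representation
```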
The Impact of Automatic Pre-annotation in Clinical Note Data Element
Extraction - the CLEAN Tool | Objective. Annotation is expensive but essential for clinical note review and
clinical natural language processing (cNLP). However, the extent to which
computer-generated pre-annotation is beneficial to human annotation is still an
open question. Our study introduces CLEAN (CLinical note rEview and
ANnotation), a pre-annotation-based cNLP annotation system to improve clinical
note annotation of data elements, and comprehensively compares CLEAN with the
widely-used annotation system Brat Rapid Annotation Tool (BRAT).
Materials and Methods. CLEAN includes an ensemble pipeline (CLEAN-EP) with a
newly developed annotation tool (CLEAN-AT). A domain expert and a novice
user/annotator participated in a comparative usability test by tagging 87 data
elements related to Congestive Heart Failure (CHF) and Kawasaki Disease (KD)
cohorts in 84 public notes.
Results. CLEAN achieved a higher note-level F1-score (0.896) than BRAT (0.820),
with a significant difference in correctness (P-value < 0.001), the most
related factor being system/software (P-value < 0.001). No significant
difference (P-value 0.188) in annotation time was observed between CLEAN (7.262
minutes/note) and BRAT (8.286 minutes/note). The difference was mostly
associated with note length (P-value < 0.001) and system/software (P-value
0.013). The expert reported CLEAN to be useful/satisfactory, while the novice
reported slight improvements.
Discussion. CLEAN improves the correctness of annotation and increases
usefulness/satisfaction with the same level of efficiency. Limitations include
the untested impact of the pre-annotation correctness rate, a small sample size,
a small user size, and a gold standard with limited validation.
Conclusion. CLEAN with pre-annotation can be beneficial for an expert to deal
with complex annotation tasks involving numerous and diverse target data
elements.
| 2,018 | Computation and Language |
A Full End-to-End Semantic Role Labeler, Syntax-agnostic Over
Syntax-aware? | Semantic role labeling (SRL) is the task of recognizing the predicate-argument
structure of a sentence, including the subtasks of predicate disambiguation and
argument labeling. Previous studies usually formulate the entire SRL problem
into two or more subtasks. For the first time, this paper introduces an
end-to-end neural model which tackles predicate disambiguation and argument
labeling jointly in one shot. Using a biaffine scorer, our model directly predicts all
semantic role labels for all given word pairs in the sentence without relying
on any syntactic parse information. Specifically, we augment the BiLSTM encoder
with a non-linear transformation to further distinguish the predicate and the
argument in a given sentence, and model the semantic role labeling process as a
word pair classification task by employing the biaffine attentional mechanism.
Though the proposed model is syntax-agnostic with a local decoder, it outperforms
the state-of-the-art syntax-aware SRL systems on the CoNLL-2008, 2009
benchmarks for both English and Chinese. To the best of our knowledge, we report the
first syntax-agnostic SRL model that surpasses all known syntax-aware models.
| 2,018 | Computation and Language |
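A minimal sketch of a biaffine scorer over word pairs, the scoring module the abstract names: encoder states are first specialized by nonlinear transformations for the predicate and argument roles, then every word pair is scored per label with a biaffine form. Dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    def __init__(self, hidden, mlp_dim, num_labels):
        super().__init__()
        self.pred_mlp = nn.Sequential(nn.Linear(hidden, mlp_dim), nn.ReLU())
        self.arg_mlp = nn.Sequential(nn.Linear(hidden, mlp_dim), nn.ReLU())
        # One (d+1)x(d+1) bilinear form per semantic role label; the appended
        # 1s give the linear and bias terms of the biaffine transformation.
        self.U = nn.Parameter(torch.randn(num_labels, mlp_dim + 1, mlp_dim + 1) * 0.01)

    def forward(self, states):                            # states: (batch, seq, hidden)
        ones = states.new_ones(*states.shape[:2], 1)
        p = torch.cat([self.pred_mlp(states), ones], dim=-1)
        a = torch.cat([self.arg_mlp(states), ones], dim=-1)
        # scores[b, l, i, j]: label-l score for predicate word i and argument word j
        return torch.einsum("bid,ldk,bjk->blij", p, self.U, a)
```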
Fake Sentence Detection as a Training Task for Sentence Encoding | Sentence encoders are typically trained on language modeling tasks with large
unlabeled datasets. While these encoders achieve state-of-the-art results on
many sentence-level tasks, they are expensive to train, requiring long training
cycles. We introduce fake sentence detection as a new training task for
learning sentence encoders. We automatically generate fake sentences by
corrupting original sentences from a source collection and train the encoders
to produce representations that are effective at detecting fake sentences. This
binary classification task turns out to be quite efficient for training sentence
encoders. We compare a basic BiLSTM encoder trained on this task with strong
sentence encoding models (Skipthought and FastSent) trained on a language
modeling task. We find that the BiLSTM trains much faster on fake sentence
detection (20 hours instead of weeks) using smaller amounts of data (1M instead
of 64M sentences). Further analysis shows the learned representations capture
many syntactic and semantic properties expected from good sentence
representations.
| 2,018 | Computation and Language |
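A sketch of the fake-sentence generation step: corrupt a real sentence by shuffling its word order or dropping a random word. The exact corruption operations in the paper may differ; these are common choices.

```python
import random

def make_fake(sentence: str, rng: random.Random) -> str:
    words = sentence.split()
    if len(words) < 2:
        return sentence                      # too short to corrupt meaningfully
    if rng.random() < 0.5:
        rng.shuffle(words)                   # corruption 1: shuffle word order
    else:
        words.pop(rng.randrange(len(words))) # corruption 2: drop a random word
    return " ".join(words)

rng = random.Random(13)
real = "the cat sat on the mat"
examples = [(real, 1), (make_fake(real, rng), 0)]  # real vs. fake training pairs
```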
Pervasive Attention: 2D Convolutional Neural Networks for
Sequence-to-Sequence Prediction | Current state-of-the-art machine translation systems are based on
encoder-decoder architectures that first encode the input sequence and then
generate an output sequence based on the input encoding. Both are interfaced
with an attention mechanism that recombines a fixed encoding of the source
tokens based on the decoder state. We propose an alternative approach which
instead relies on a single 2D convolutional neural network across both
sequences. Each layer of our network re-codes source tokens on the basis of the
output sequence produced so far. Attention-like properties are therefore
pervasive throughout the network. Our model yields excellent results,
outperforming state-of-the-art encoder-decoder systems, while being
conceptually simpler and having fewer parameters.
| 2,018 | Computation and Language |
Interpreting Recurrent and Attention-Based Neural Models: a Case Study
on Natural Language Inference | Deep learning models have achieved remarkable success in natural language
inference (NLI) tasks. While these models are widely explored, they are hard to
interpret and it is often unclear how and why they actually work. In this
paper, we take a step toward explaining such deep learning based models through
a case study on a popular neural model for NLI. In particular, we propose to
interpret the intermediate layers of NLI models by visualizing the saliency of
attention and LSTM gating signals. We present several examples for which our
methods are able to reveal interesting insights and identify the critical
information contributing to the model decisions.
| 2,018 | Computation and Language |
Addressee and Response Selection for Multilingual Conversation | Developing conversational systems that can converse in many languages is an
interesting challenge for natural language processing. In this paper, we
introduce multilingual addressee and response selection. In this task, a
conversational system predicts an appropriate addressee and response for an
input message in multiple languages. A key to developing such multilingual
responding systems is how to utilize high-resource language data to compensate
for low-resource language data. We present several knowledge transfer methods
for conversational systems. To evaluate our methods, we create a new
multilingual conversation dataset. Experiments on the dataset demonstrate the
effectiveness of our methods.
| 2,018 | Computation and Language |
Sequence Labeling: A Practical Approach | We take a practical approach to solving the sequence labeling problem, assuming
unavailability of domain expertise and scarcity of informational and
computational resources. To this end, we utilize a universal end-to-end
Bi-LSTM-based neural sequence labeling model applicable to a wide range of NLP
tasks and languages. The model combines morphological, semantic, and structural
cues extracted from data to arrive at informed predictions. The model's
performance is evaluated on eight benchmark datasets (covering three tasks:
POS-tagging, NER, and Chunking, and four languages: English, German, Dutch, and
Spanish). We observe state-of-the-art results on four of them: CoNLL-2012
(English NER), CoNLL-2002 (Dutch NER), GermEval 2014 (German NER), Tiger Corpus
(German POS-tagging), and competitive performance on the rest.
| 2,018 | Computation and Language |
Augmenting word2vec with latent Dirichlet allocation within a clinical
application | This paper presents three hybrid models that directly combine latent
Dirichlet allocation and word embedding for distinguishing between speakers
with and without Alzheimer's disease from transcripts of picture descriptions.
Two of our models achieve F-scores above the current state of the art using
automatic methods on the DementiaBank dataset.
| 2,018 | Computation and Language |
Text Classification using Capsules | This paper presents an empirical exploration of the use of capsule networks
for text classification. While it has been shown that capsule networks are
effective for image classification, their validity in the domain of text has
not been explored. In this paper, we show that capsule networks indeed have the
potential for text classification and that they have several advantages over
convolutional neural networks. We further suggest a simple routing method that
effectively reduces the computational complexity of dynamic routing. We
utilized seven benchmark datasets to demonstrate that capsule networks, along
with the proposed routing method, provide comparable results.
| 2,018 | Computation and Language |
Multimodal Differential Network for Visual Question Generation | Generating natural questions from an image is a semantic task that requires
using visual and language modalities to learn multimodal representations. Images
can have multiple visual and language contexts that are relevant for generating
questions namely places, captions, and tags. In this paper, we propose the use
of exemplars for obtaining the relevant context. We obtain this by using a
Multimodal Differential Network to produce natural and engaging questions. The
generated questions show a remarkable similarity to the natural questions as
validated by a human study. Further, we observe that the proposed approach
substantially improves over state-of-the-art benchmarks on the quantitative
metrics (BLEU, METEOR, ROUGE, and CIDEr).
| 2,019 | Computation and Language |
Confidence penalty, annealing Gaussian noise and zoneout for biLSTM-CRF
networks for named entity recognition | Named entity recognition (NER) is used to identify relevant entities in text.
A bidirectional LSTM (long short-term memory) encoder with a neural conditional
random field (CRF) decoder (biLSTM-CRF) is the state-of-the-art methodology.
In this work, we analyze several methods intended to
optimize the performance of networks based on this architecture, some of which
encourage the avoidance of overfitting. These methods target exploration of the
parameter space, regularization of LSTMs and penalization of confident output
distributions. Results show that the optimization methods improve the
performance of the biLSTM-CRF NER baseline system, setting a new state of the
art performance for the CoNLL-2003 Spanish set with an F1 of 87.18.
| 2,018 | Computation and Language |
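A sketch of the confidence penalty on output distributions mentioned above: the entropy of the model's predictive distribution is subtracted from the loss (weighted by a coefficient beta), so peaked, over-confident outputs are penalized. Names and the beta value are illustrative.

```python
import torch
import torch.nn.functional as F

def loss_with_confidence_penalty(logits, targets, beta=0.1):
    nll = F.cross_entropy(logits, targets)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return nll - beta * entropy      # low entropy (high confidence) raises the loss
```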
Regularizing Neural Machine Translation by Target-bidirectional
Agreement | Although Neural Machine Translation (NMT) has achieved remarkable progress in
the past several years, most NMT systems still suffer from a fundamental
shortcoming as in other sequence generation tasks: errors made early in
generation process are fed as inputs to the model and can be quickly amplified,
harming subsequent sequence generation. To address this issue, we propose a
novel model regularization method for NMT training, which aims to improve the
agreement between translations generated by left-to-right (L2R) and
right-to-left (R2L) NMT decoders. This goal is achieved by introducing two
Kullback-Leibler divergence regularization terms into the NMT training
objective to reduce the mismatch between output probabilities of L2R and R2L
models. In addition, we also employ a joint training strategy to allow L2R and
R2L models to improve each other in an interactive update process. Experimental
results show that our proposed method significantly outperforms
state-of-the-art baselines on Chinese-English and English-German translation
tasks.
| 2,018 | Computation and Language |
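A hedged sketch of the agreement regularizer described: two Kullback-Leibler divergence terms pull the per-step output distributions of the L2R and R2L decoders toward each other. It assumes the R2L logits have already been re-aligned to left-to-right target order; names and the weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def agreement_loss(l2r_logits, r2l_logits, alpha=1.0):
    # Both logits: (batch, target_len, vocab), on the same target positions.
    log_p = F.log_softmax(l2r_logits, dim=-1)   # L2R distribution
    log_q = F.log_softmax(r2l_logits, dim=-1)   # R2L distribution
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(q || p)
    return alpha * (kl_pq + kl_qp)   # added to the NMT training objective
```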
Language Style Transfer from Sentences with Arbitrary Unknown Styles | Language style transfer is the problem of migrating the content of a source
sentence to a target style. In many of its applications, parallel training data
are not available and source sentences to be transferred may have arbitrary and
unknown styles. In our model, each sentence is first encoded into its content and style
latent representations. Then, by recombining the content with the target style,
we decode a sentence aligned in the target domain. To adequately constrain the
encoding and decoding functions, we couple them with two loss functions. The
first is a style discrepancy loss, enforcing that the style representation
accurately encodes the style information guided by the discrepancy between the
sentence style and the target style. The second is a cycle consistency loss,
which ensures that the transferred sentence should preserve the content of the
original sentence disentangled from its style. We validate the effectiveness of
our model in three tasks: sentiment modification of restaurant reviews, dialog
response revision with a romantic style, and sentence rewriting with a
Shakespearean style.
| 2,018 | Computation and Language |
Live Video Comment Generation Based on Surrounding Frames and Live
Comments | In this paper, we propose the task of live comment generation. Live comments
are a new form of comments on videos, which can be regarded as a mixture of
comments and chats. A high-quality live comment should be not only relevant to
the video, but also interactive with other users. In this work, we first
construct a new dataset for live comment generation. Then, we propose a novel
end-to-end model to generate the human-like live comments by referring to the
video and the other users' comments. Finally, we evaluate our model on the
constructed dataset. Experimental results show that our method can
significantly outperform the baselines.
| 2,018 | Computation and Language |
Towards Audio to Scene Image Synthesis using Generative Adversarial
Network | Humans can imagine a scene from a sound. We want machines to do so by using
conditional generative adversarial networks (GANs). By applying the techniques
including spectral norm, projection discriminator and auxiliary classifier,
compared with naive conditional GAN, the model can generate images with better
quality in terms of both subjective and objective evaluations. Almost
three-fourths of the surveyed people agree that our model has the ability to
generate images related to sounds. By inputting different volumes of the same
sound, our model outputs changes of different scales depending on the volume,
showing that it truly knows the relationship between sounds and images to some extent.
| 2,018 | Computation and Language |
A Capsule Network-based Embedding Model for Knowledge Graph Completion
and Search Personalization | In this paper, we introduce an embedding model, named CapsE, exploring a
capsule network to model relationship triples (subject, relation, object). Our
CapsE represents each triple as a 3-column matrix where each column vector
represents the embedding of an element in the triple. This 3-column matrix is
then fed to a convolution layer where multiple filters are operated to generate
different feature maps. These feature maps are reconstructed into corresponding
capsules which are then routed to another capsule to produce a continuous
vector. The length of this vector is used to measure the plausibility score of
the triple. Our proposed CapsE obtains better performance than previous
state-of-the-art embedding models for knowledge graph completion on two
benchmark datasets WN18RR and FB15k-237, and outperforms strong search
personalization baselines on SEARCH17.
| 2,019 | Computation and Language |
Modeling Semantics with Gated Graph Neural Networks for Knowledge Base
Question Answering | Most approaches to Knowledge Base Question Answering are based on
semantic parsing. In this paper, we address the problem of learning vector
representations for complex semantic parses that consist of multiple entities
and relations. Previous work largely focused on selecting the correct semantic
relations for a question and disregarded the structure of the semantic parse:
the connections between entities and the directions of the relations. We
propose to use Gated Graph Neural Networks to encode the graph structure of the
semantic parse. We show on two data sets that the graph networks outperform all
baseline models that do not explicitly model the structure. The error analysis
confirms that our approach can successfully process complex semantic parses.
| 2,018 | Computation and Language |
Learning Explanations from Language Data | PatternAttribution is a recent method, introduced in the vision domain, that
explains classifications of deep neural networks. We demonstrate that it also
generates meaningful interpretations in the language domain.
| 2,018 | Computation and Language |
Multi-Task Learning for Sequence Tagging: An Empirical Study | We study three general multi-task learning (MTL) approaches on 11 sequence
tagging tasks. Our extensive empirical results show that in about 50% of the
cases, jointly learning all 11 tasks improves upon either independent or
pairwise learning of the tasks. We also show that pairwise MTL can inform us
what tasks can benefit others or what tasks can be benefited if they are
learned jointly. In particular, we identify tasks that can always benefit
others as well as tasks that can always be harmed by others. Interestingly, one
of our MTL approaches yields embeddings of the tasks that reveal the natural
clustering of semantic and syntactic tasks. Our inquiries have opened the doors
to further utilization of MTL in NLP.
| 2,018 | Computation and Language |