Titles | Abstracts | Years | Categories |
---|---|---|---|
One for All: Neural Joint Modeling of Entities and Events | The previous work for event extraction has mainly focused on the predictions
for event triggers and argument roles, treating entity mentions as being
provided by human annotators. This is unrealistic as entity mentions are
usually predicted by some existing toolkits whose errors might be propagated to
the event trigger and argument role recognition. Little recent work has
addressed this problem by jointly predicting entity mentions, event triggers
and arguments. However, such work is limited to using discrete, hand-engineered
features to represent contextual information for the individual tasks and their
interactions. In this work, we propose a novel model to jointly perform
predictions for entity mentions, event triggers and arguments based on the
shared hidden representations from deep learning. The experiments demonstrate
the benefits of the proposed method, leading to the state-of-the-art
performance for event extraction.
| 2018 | Computation and Language |
A Survey of Fake News: Fundamental Theories, Detection Methods, and
Opportunities | The explosive growth in fake news and its erosion of democracy, justice, and
public trust has increased the demand for fake news detection and intervention.
This survey reviews and evaluates methods that can detect fake news from four
perspectives: (1) the false knowledge it carries, (2) its writing style, (3)
its propagation patterns, and (4) the credibility of its source. The survey
also highlights some potential research tasks based on the review. In
particular, we identify and detail related fundamental theories across various
disciplines to encourage interdisciplinary research on fake news. We hope this
survey can facilitate collaborative efforts among experts in computer and
information sciences, social sciences, political science, and journalism to
research fake news, where such efforts can lead to fake news detection that is
not only efficient but more importantly, explainable.
| 2020 | Computation and Language |
A Study on Dialogue Reward Prediction for Open-Ended Conversational
Agents | The amount of dialogue history to include in a conversational agent is often
underestimated and/or set in an empirical and thus possibly naive way. This
suggests that principled investigations into optimal context windows are
urgently needed given that the amount of dialogue history and corresponding
representations can play an important role in the overall performance of a
conversational system. This paper studies the amount of history required by
conversational agents for reliably predicting dialogue rewards. The task of
dialogue reward prediction is chosen for investigating the effects of varying
amounts of dialogue history and their impact on system performance.
Experimental results using a dataset of 18K human-human dialogues report that
lengthy dialogue histories of at least 10 sentences are preferred (25 sentences
being the best in our experiments) over short ones, and that lengthy histories
are useful for training dialogue reward predictors with strong positive
correlations between target dialogue rewards and predicted ones.
| 2018 | Computation and Language |
Clinical Document Classification Using Labeled and Unlabeled Data Across
Hospitals | Reviewing radiology reports in emergency departments is an essential but
laborious task. Timely follow-up of patients with abnormal cases in their
radiology reports may dramatically affect the patient's outcome, especially if
they have been discharged with a different initial diagnosis. Machine learning
approaches have been devised to expedite the process and detect the cases that
demand instant follow up. However, these approaches require a large amount of
labeled data to train reliable predictive models. Preparing such a large
dataset, which needs to be manually annotated by health professionals, is
costly and time-consuming. This paper investigates a semi-supervised learning
framework for radiology report classification across three hospitals. The main
goal is to leverage clinical unlabeled data in order to augment the learning
process where limited labeled data is available. To further improve the
classification performance, we also integrate a transfer learning technique
into the semi-supervised learning pipeline. Our experimental findings show
that (1) convolutional neural networks (CNNs), while being independent of any
problem-specific feature engineering, achieve significantly higher
effectiveness compared to conventional supervised learning approaches, (2)
leveraging unlabeled data in training a CNN-based classifier reduces the
dependency on labeled data by more than 50% to reach the same performance of a
fully supervised CNN, and (3) transferring the knowledge gained from available
labeled data in an external source hospital significantly improves the
performance of a semi-supervised CNN model over its fully supervised
counterpart in a target hospital.
| 2018 | Computation and Language |
Building Sequential Inference Models for End-to-End Response Selection | This paper presents an end-to-end response selection model for Track 1 of the
7th Dialogue System Technology Challenges (DSTC7). This task focuses on
selecting the correct next utterance from a set of candidates given a partial
conversation. We propose an end-to-end neural network based on the enhanced
sequential inference model (ESIM) for this task. Our proposed model differs
from the original ESIM model in the following four aspects. First, a new word
representation method which combines the general pre-trained word embeddings
with those estimated on the task-specific training set is adopted in order to
address the challenge of out-of-vocabulary (OOV) words. Second, an attentive
hierarchical recurrent encoder (AHRE) is designed, which is capable of encoding
sentences hierarchically and generating more descriptive representations by
aggregation. Third, a new pooling method which combines multi-dimensional
pooling and last-state pooling is used instead of the simple combination of max
pooling and average pooling in the original ESIM. Last, a modification layer is
added before the softmax layer to emphasize the importance of the last
utterance in the context for response selection. In the released evaluation
results of DSTC7, our proposed method ranked second on the Ubuntu dataset and
third on the Advising dataset in subtask 1 of Track 1.
| 2019 | Computation and Language |
The RGNLP Machine Translation Systems for WAT 2018 | This paper presents the system description of Machine Translation (MT)
system(s) for Indic Languages Multilingual Task for the 2018 edition of the WAT
Shared Task. In our experiments, we (the RGNLP team) explore both statistical
and neural methods across all language pairs. We further present an extensive
comparison of language-related problems for both approaches in the context
of low-resource settings. Our PBSMT models achieved the highest scores on all
automatic evaluation metrics in the English-to-Telugu, Hindi, Bengali, and Tamil
portions of the shared task.
| 2018 | Computation and Language |
Comparing Neural- and N-Gram-Based Language Models for Word Segmentation | Word segmentation is the task of inserting or deleting word boundary
characters in order to separate character sequences that correspond to words in
some language. In this article we propose an approach based on a beam search
algorithm and a language model working at the byte/character level, the latter
component implemented either as an n-gram model or a recurrent neural network.
The resulting system analyzes the text input with no word boundaries one token
at a time, which can be a character or a byte, and uses the information
gathered by the language model to determine if a boundary must be placed in the
current position or not. Our aim is to use this system in a preprocessing step
for a microtext normalization system. This means that it needs to effectively
cope with the data sparsity present in this kind of text. We also strove to
surpass the performance of two readily available word segmentation systems: The
well-known and accessible Word Breaker by Microsoft, and the Python module
WordSegment by Grant Jenks. The results show that we have met our objectives,
and we hope to continue to improve both the precision and the efficiency of our
system in the future.
| 2018 | Computation and Language |
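To make the segmentation procedure above concrete, here is a minimal sketch, under our own assumptions (toy corpus, character trigrams, add-one smoothing), of beam-search word segmentation driven by a character-level n-gram language model; `CharNgram` and `segment` are hypothetical names, not the paper's code.

```python
# Minimal sketch of beam-search word segmentation with a character n-gram
# LM; the space character plays the role of the word boundary.
from collections import defaultdict
import math

class CharNgram:
    def __init__(self, corpus, n=3):
        self.n = n
        self.counts, self.context = defaultdict(int), defaultdict(int)
        for text in corpus:
            padded = " " * (n - 1) + text
            for i in range(len(text)):
                gram = padded[i:i + n]
                self.counts[gram] += 1
                self.context[gram[:-1]] += 1

    def logprob(self, history, char):
        gram = (" " * self.n + history + char)[-self.n:]
        return math.log((self.counts[gram] + 1) /
                        (self.context[gram[:-1]] + 256))  # add-one smoothing

def segment(text, lm, beam_width=8):
    beams = [(0.0, "")]                        # (log-prob, output so far)
    for ch in text:
        nxt = []
        for score, out in beams:
            for cand in (ch, " " + ch):        # no boundary / boundary
                s, o = score, out
                for c in cand:                 # score each emitted character
                    s += lm.logprob(o, c)
                    o += c
                nxt.append((s, o))
        beams = sorted(nxt, reverse=True)[:beam_width]
    return beams[0][1].strip()

lm = CharNgram(["the cat sat", "the cats sat", "the dog sat"])
print(segment("thecatsat", lm))                # ideally: "the cat sat"
```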
Toward Scalable Neural Dialogue State Tracking Model | The latency of current neural-based dialogue state tracking models
prohibits them from being deployed efficiently in production
systems, despite their highly accurate performance. This paper proposes a new
scalable and accurate neural dialogue state tracking model, based on the
recently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et
al. which uses global modules to share parameters between estimators for
different types (called slots) of dialogue states, and uses local modules to
learn slot-specific features. By using only one recurrent network with global
conditioning, compared to the (1 + \# slots) recurrent networks with global and
local conditioning used in the GLAD model, our proposed model reduces the
latency of training and inference by $35\%$ on average, while preserving
belief state tracking performance, achieving $97.38\%$ on turn request and
$88.51\%$ on joint goal accuracy. Evaluation on the multi-domain dataset
(Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform
and joint goal accuracy.
| 2018 | Computation and Language |
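A rough illustration of the shared-encoder idea described in the abstract above: one GRU, conditioned on a learned slot embedding, replaces per-slot recurrent networks. This is our sketch of the concept, not the paper's implementation; all module names and shapes are assumptions.

```python
# Hedged sketch: a single shared GRU with global (slot-embedding)
# conditioning, scoring candidate slot values by dot product.
import torch
import torch.nn as nn

class GlobalConditionedTracker(nn.Module):
    def __init__(self, vocab_size, n_slots, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.slot_embed = nn.Embedding(n_slots, dim)        # global conditioning
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # one shared GRU
        self.score = nn.Linear(dim, dim)

    def forward(self, utterance_ids, slot_id, value_ids):
        # Condition the shared encoder by adding the slot embedding
        # to every token embedding of the utterance.
        x = self.embed(utterance_ids) + self.slot_embed(slot_id)[:, None, :]
        _, h = self.encoder(x)                  # h: (1, batch, dim)
        v = self.embed(value_ids).mean(dim=1)   # crude candidate-value encoding
        return (self.score(h[0]) * v).sum(-1)   # one score per example

tracker = GlobalConditionedTracker(vocab_size=1000, n_slots=4)
utt = torch.randint(0, 1000, (2, 7))            # batch of 2 utterances
slot = torch.tensor([1, 3])                     # one slot id per example
val = torch.randint(0, 1000, (2, 3))            # candidate value tokens
print(tracker(utt, slot, val).shape)            # torch.Size([2])
```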
A Survey on Semantic Parsing | A significant amount of information in today's world is stored in structured
and semi-structured knowledge bases. Efficient and simple methods to query them
are essential and must not be restricted to only those who have expertise in
formal query languages. The field of semantic parsing deals with converting
natural language utterances to logical forms that can be easily executed on a
knowledge base. In this survey, we examine the various components of a semantic
parsing system and discuss prominent work ranging from the initial rule based
methods to the current neural approaches to program synthesis. We also discuss
methods that operate using varying levels of supervision and highlight the key
challenges involved in the learning of such systems.
| 2019 | Computation and Language |
A System for Automated Image Editing from Natural Language Commands | This work presents the task of modifying images in an image editing program
using natural language written commands. We utilize a corpus of over 6000 image
edit text requests to alter real world images collected via crowdsourcing. A
novel framework composed of actions and entities to map a user's natural
language request to executable commands in an image editing program is
described. We resolve previously labeled annotator disagreement through a
voting process and complete annotation of the corpus. We experimented with
different machine learning models and found that the LSTM, the SVM, and the
bidirectional LSTM-CRF joint models perform best at detecting image editing
actions and associated entities in a given utterance.
| 2018 | Computation and Language |
e-SNLI: Natural Language Inference with Natural Language Explanations | In order for machine learning to garner widespread public adoption, models
must be able to provide interpretable and robust explanations for their
decisions, as well as learn from human-provided explanations at train time. In
this work, we extend the Stanford Natural Language Inference dataset with an
additional layer of human-annotated natural language explanations of the
entailment relations. We further implement models that incorporate these
explanations into their training process and output them at test time. We show
how our corpus of explanations, which we call e-SNLI, can be used for various
goals, such as obtaining full sentence justifications of a model's decisions,
improving universal sentence representations and transferring to out-of-domain
NLI datasets. Our dataset thus opens up a range of research directions for
using natural language explanations, both for improving models and for
assessing their trustworthiness.
| 2018 | Computation and Language |
Practical Text Classification With Large Pre-Trained Language Models | Multi-emotion sentiment classification is a natural language processing (NLP)
problem with valuable use cases on real-world data. We demonstrate that
large-scale unsupervised language modeling combined with finetuning offers a
practical solution to this task on difficult datasets, including those with
label class imbalance and domain-specific context. By training an
attention-based Transformer network (Vaswani et al. 2017) on 40GB of text
(Amazon reviews) (McAuley et al. 2015) and fine-tuning on the training set, our
model achieves a 0.69 F1 score on the SemEval Task 1:E-c multi-dimensional
emotion classification problem (Mohammad et al. 2018), based on the Plutchik
wheel of emotions (Plutchik 1979). These results are competitive with state of
the art models, including strong F1 scores on difficult (emotion) categories
such as Fear (0.73), Disgust (0.77) and Anger (0.78), as well as competitive
results on rare categories such as Anticipation (0.42) and Surprise (0.37).
Furthermore, we demonstrate our application on a real world text classification
task. We create a narrowly collected text dataset of real tweets on several
topics, and show that our finetuned model outperforms general purpose
commercially available APIs for sentiment and multidimensional emotion
classification on this dataset by a significant margin. We also perform a
variety of additional studies, investigating properties of deep learning
architectures, datasets and algorithms for achieving practical multidimensional
sentiment classification. Overall, we find that unsupervised language modeling
and finetuning is a simple framework for achieving high quality results on
real-world sentiment classification.
| 2018 | Computation and Language |
Transferable Natural Language Interface to Structured Queries aided by
Adversarial Generation | A natural language interface (NLI) to structured queries is intriguing due to
its wide industrial applications and high economic value. In this work, we
tackle the problem of domain adaptation for NLI with limited data on target
domain. Two important approaches are considered: (a) effective
general-knowledge-learning on source domain semantic parsing, and (b) data
augmentation on target domain. We present a Structured Query Inference Network
(SQIN) to enhance learning for domain adaptation, by separating schema
information from NL and decoding SQL in a more structural-aware manner; we also
propose a GAN-based augmentation technique (AugmentGAN) to mitigate the issue
of lacking target domain data. We report solid results on GeoQuery, Overnight,
and WikiSQL to demonstrate state-of-the-art performance for both in-domain and
domain-transfer tasks.
| 2018 | Computation and Language |
Quantification and Analysis of Scientific Language Variation Across
Research Fields | Quantifying differences in terminologies from various academic domains has
been a longstanding problem yet to be solved. We propose a computational
approach for analyzing linguistic variation among scientific research fields by
capturing the semantic change of terms based on a neural language model. The
model is trained on a large collection of literature in five computer science
research fields, for which we obtain field-specific vector representations for
key terms, and global vector representations for other words. Several
quantitative approaches are introduced to identify the terms whose semantics
have drastically changed, or remain unchanged across different research fields.
We also propose a metric to quantify the overall linguistic variation of
research fields. After quantitative evaluation on human annotated data and
qualitative comparison with other methods, we show that our model can improve
cross-disciplinary data collaboration by identifying terms that potentially
induce confusion during interdisciplinary studies.
| 2018 | Computation and Language |
Tartan: A retrieval-based socialbot powered by a dynamic finite-state
machine architecture | This paper describes the Tartan conversational agent built for the 2018 Alexa
Prize Competition. Tartan is a non-goal-oriented socialbot focused on
providing users with an engaging and fluent casual conversation. Tartan's key
features include an emphasis on structured conversation based on flexible
finite-state models and an approach focused on understanding and using
conversational acts. To provide engaging conversations, Tartan blends
script-like yet dynamic responses with data-based generative and retrieval
models. Unique to Tartan is that our dialog manager is modeled as a dynamic
Finite State Machine. To our knowledge, no other conversational agent
implementation has followed this specific structure.
| 2018 | Computation and Language |
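As an illustration of a dialog manager modeled as a dynamic finite-state machine, the toy sketch below adds states and transitions at runtime; it is our own minimal example, not Tartan's code, and all state names are invented.

```python
# Toy dynamic FSM dialog manager: states are added/rewired at runtime
# based on the detected conversational act.
class DynamicFSM:
    def __init__(self):
        self.state = "greeting"
        self.transitions = {}           # (state, act) -> next state
        self.handlers = {}              # state -> response function

    def add_state(self, name, handler):
        self.handlers[name] = handler

    def add_transition(self, state, act, next_state):
        self.transitions[(state, act)] = next_state

    def step(self, user_act, utterance):
        # Dynamic part: an unseen act spawns a fallback state on the fly.
        if (self.state, user_act) not in self.transitions:
            self.add_state("fallback", lambda u: f"Tell me more about {u!r}.")
            self.add_transition(self.state, user_act, "fallback")
        self.state = self.transitions[(self.state, user_act)]
        return self.handlers[self.state](utterance)

fsm = DynamicFSM()
fsm.add_state("greeting", lambda u: "Hi! What would you like to chat about?")
fsm.add_state("movies", lambda u: "What film did you last enjoy?")
fsm.add_transition("greeting", "topic:movies", "movies")
print(fsm.step("topic:movies", "let's talk movies"))
```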
Modeling natural language emergence with integral transform theory and
reinforcement learning | Zipf's law predicts a power-law relationship between word rank and frequency
in language communication systems and has been widely reported in a variety of
natural language processing applications. However, the emergence of natural
language is often modeled as a function of bias between speaker and listener
interests, which lacks a direct way of relating information-theoretic bias to
Zipfian rank. A function of bias also serves as an unintuitive interpretation
of the communicative effort exchanged between a speaker and a listener. We
counter these shortcomings by proposing a novel integral transform and kernel
for mapping communicative bias functions to corresponding word frequency-rank
representations at any arbitrary phase transition point, resulting in a direct
way to link communicative effort (modeled by speaker/listener bias) to specific
vocabulary used (represented by word rank). We demonstrate the practical
utility of our integral transform by showing how a change from bias to rank
results in greater accuracy and performance at an image classification task for
assigning word labels to images randomly subsampled from CIFAR10. We model this
task as a reinforcement learning game between a speaker and listener and
compare the relative impact of bias and Zipfian word rank on communicative
performance (and accuracy) between the two agents.
| 2018 | Computation and Language |
Leveraging Multi-grained Sentiment Lexicon Information for Neural
Sequence Models | Neural sequence models have achieved great success in sentence-level
sentiment classification. However, some models are exceptionally complex or
based on expensive features. Other models recognize the value of existing
linguistic resources but utilize them insufficiently. This paper proposes a novel
and general method to incorporate lexicon information, including sentiment
lexicons (+/-), negation words and intensifiers. Words are annotated with
fine-grained and coarse-grained labels. The proposed method first encodes the
fine-grained labels into a sentiment embedding and concatenates it with the word
embedding. Second, the coarse-grained labels are utilized to enhance the
attention mechanism so that it gives larger weights to sentiment-related words.
Experimental results show that our method can increase classification accuracy
for neural sequence models on both the SST-5 and MR datasets. Specifically, the
enhanced Bi-LSTM model is even comparable to a Tree-LSTM, which uses expensive
phrase-level annotations. Further analysis shows that in most cases the lexicon
resource can offer the right annotations. Besides, the proposed method is
capable of overcoming the effect of inevitable annotation errors.
| 2019 | Computation and Language |
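The two lexicon mechanisms described above lend themselves to a compact sketch: concatenating a fine-grained sentiment-label embedding to each word embedding, and adding a coarse-grained boost to the attention scores. The PyTorch module below is a hedged approximation; dimensions, label counts, and the additive form of the boost are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LexiconBiLSTM(nn.Module):
    def __init__(self, vocab, n_fine_labels=5, dim=100, label_dim=16):
        super().__init__()
        self.word_embed = nn.Embedding(vocab, dim)
        self.label_embed = nn.Embedding(n_fine_labels, label_dim)  # fine-grained
        self.lstm = nn.LSTM(dim + label_dim, dim, bidirectional=True,
                            batch_first=True)
        self.attn = nn.Linear(2 * dim, 1)
        self.out = nn.Linear(2 * dim, 5)        # e.g. SST-5 classes

    def forward(self, words, fine_labels, coarse_mask):
        # coarse_mask: 1.0 for sentiment-related words, else 0.0
        x = torch.cat([self.word_embed(words),
                       self.label_embed(fine_labels)], dim=-1)
        h, _ = self.lstm(x)
        scores = self.attn(h).squeeze(-1) + coarse_mask  # additive boost
        alpha = F.softmax(scores, dim=-1)
        return self.out((alpha.unsqueeze(-1) * h).sum(1))

m = LexiconBiLSTM(vocab=5000)
w = torch.randint(0, 5000, (2, 12))
f = torch.randint(0, 5, (2, 12))                # fine-grained lexicon labels
mask = (f > 2).float()                          # toy coarse-grained flags
print(m(w, f, mask).shape)                      # torch.Size([2, 5])
```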
Playing Text-Adventure Games with Graph-Based Deep Reinforcement
Learning | Text-based adventure games provide a platform on which to explore
reinforcement learning in the context of a combinatorial action space, such as
natural language. We present a deep reinforcement learning architecture that
represents the game state as a knowledge graph which is learned during
exploration. This graph is used to prune the action space, enabling more
efficient exploration. The question of which action to take can be reduced to a
question-answering task, a form of transfer learning that pre-trains certain
parts of our architecture. In experiments using the TextWorld framework, we
show that our proposed technique can learn a control policy faster than
baseline alternatives. We have also open-sourced our code at
https://github.com/rajammanabrolu/KG-DQN.
| 2019 | Computation and Language |
Impact of Sentiment Detection to Recognize Toxic and Subversive Online
Comments | The presence of toxic content has become a major problem for many online
communities. Moderators try to limit this problem by implementing more and more
refined comment filters, but toxic users are constantly finding new ways to
circumvent them. Our hypothesis is that while modifying toxic content and
keywords to fool filters can be easy, hiding sentiment is harder. In this
paper, we explore various aspects of sentiment detection and their correlation
to toxicity, and use our results to implement a toxicity detection tool. We
then test how adding the sentiment information helps detect toxicity in three
different real-world datasets, and incorporate subversion into these datasets to
simulate a user trying to circumvent the system. Our results show sentiment
information has a positive impact on toxicity detection against a subversive
user.
| 2018 | Computation and Language |
Graph based Question Answering System | In today's digital age, in the dawning era of big data analytics, it is not the
information but the linking of information through entities and actions which
defines the discourse. Any textual data, whether available on the Internet or
offline (like newspaper data, Wikipedia dumps, etc.), is essentially connected
information which cannot be treated in isolation without losing its full semantics. There
is a need for an automated retrieval process with proper information extraction
to structure the data for relevant and fast text analytics. The first big
challenge is the conversion of unstructured textual data to structured data.
Unlike other databases, graph databases handle relationships and connections
elegantly. Our project aims at developing a graph-based information extraction
and retrieval system.
| 2018 | Computation and Language |
Attention Boosted Sequential Inference Model | Attention mechanisms have proven effective in natural language processing.
This paper proposes an attention boosted natural language inference model named
aESIM by adding word attention and adaptive direction-oriented attention
mechanisms to the traditional Bi-LSTM layer of natural language inference
models, e.g. ESIM. This gives the inference model aESIM the ability to
effectively learn the representation of words and model the local subsentential
inference between pairs of premise and hypothesis. The empirical studies on the
SNLI, MultiNLI and Quora benchmarks manifest that aESIM is superior to the
original ESIM model.
| 2018 | Computation and Language |
An enhanced computational feature selection method for medical synonym
identification via bilingualism and multi-corpus training | Medical synonym identification has been an important part of medical natural
language processing (NLP). However, in the field of Chinese medical synonym
identification, there are problems such as low precision and recall rates. To
address them, in this paper we propose a method for identifying Chinese
medical synonyms. We first selected 13 features including Chinese and English
features. Then we studied the synonym identification results of each feature
alone and different combinations of the features. Through the comparison among
identification results, we present an optimal combination of features for
Chinese medical synonym identification. Experiments show that our selected
features achieve a precision of 97.37%, a recall of 96.00%, and an F1 score of
97.33%.
| 2018 | Computation and Language |
MedSim: A Novel Semantic Similarity Measure in Bio-medical Knowledge
Graphs | We present MedSim, a novel semantic SIMilarity method based on public
well-established bio-MEDical knowledge graphs (KGs) and a large-scale corpus, to
study the therapeutic substitution of antibiotics. Besides the hierarchy and corpus
of KGs, MedSim further interprets medicine characteristics by constructing
multi-dimensional medicine-specific feature vectors. A dataset of 528 antibiotic
pairs scored by doctors is used for evaluation, and MedSim produces a
statistically significant improvement over other semantic similarity methods.
Furthermore, some promising applications of MedSim in drug substitution and
drug abuse prevention are presented in a case study.
| 2018 | Computation and Language |
Improving Medical Short Text Classification with Semantic Expansion
Using Word-Cluster Embedding | Automatic text classification (TC) research can be used for real-world
problems such as the classification of in-patient discharge summaries and
medical text reports, which is beneficial to make medical documents more
understandable to doctors. However, in electronic medical records (EMR), the
sentences are shorter than those in the general domain, which leads
to a lack of semantic features and to semantic ambiguity. To tackle this
challenge, we propose to add word-cluster embedding to deep neural network for
improving short text classification. Concretely, we first use hierarchical
agglomerative clustering to cluster the word vectors in the semantic space.
Then we calculate the cluster center vector which represents the implicit topic
information of words in the cluster. Finally, we expand word vector with
cluster center vector, and implement classifiers using CNN and LSTM
respectively. To evaluate the performance of our proposed method, we conduct
experiments on the public TREC dataset and on a medical short-sentence dataset
that we constructed and released. The experimental results demonstrate
that our proposed method outperforms state-of-the-art baselines in short
sentence classification in both the medical and general domains.
| 2018 | Computation and Language |
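A minimal sketch of the word-cluster expansion step, assuming hierarchical agglomerative clustering from scikit-learn and concatenation of each word vector with its cluster centroid; the function name and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def expand_with_cluster_centers(vectors, n_clusters=10):
    """vectors: (n_words, dim) array of word embeddings."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(vectors)
    centers = np.stack([vectors[labels == c].mean(axis=0)
                        for c in range(n_clusters)])
    # Each row becomes [word vector ; centroid of its cluster] -> (n, 2*dim),
    # so the classifier sees the word plus its implicit topic.
    return np.hstack([vectors, centers[labels]]), labels

emb = np.random.rand(100, 50).astype(np.float32)   # toy embeddings
expanded, labels = expand_with_cluster_centers(emb)
print(expanded.shape)                              # (100, 100)
```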
Approach for Semi-automatic Construction of Anti-infective Drug Ontology
Based on Entity Linking | Ontology can be used for the interpretation of natural language. To construct
an anti-infective drug ontology, one needs to design and deploy a
methodological step to carry out the entity discovery and linking. Medical
synonym resources have been an important part of medical natural language
processing (NLP). However, there are problems such as low precision and low
recall rate. In this study, an NLP approach is adopted to generate candidate
entities. An open ontology is analyzed to extract semantic relations. Six
word-vector features and word-level features are selected to perform entity
linking. The extraction results of synonyms with a single feature and with different
combinations of features are studied. Experiments show that our selected
features have achieved a precision rate of 86.77%, a recall rate of 89.03% and
an F1 score of 87.89%. This paper finally presents the structure of the
proposed ontology and its relevant statistical data.
| 2017 | Computation and Language |
A Knowledge Graph Based Solution for Entity Discovery and Linking in
Open-Domain Questions | Named entity discovery and linking is the fundamental and core component of
question answering. In Question Entity Discovery and Linking (QEDL) problem,
traditional methods are challenged because multiple entities in one short
question are difficult to be discovered entirely and the incomplete information
in short text makes entity linking hard to implement. To overcome these
difficulties, we propose a knowledge graph based solution for QEDL and
develop a system consisting of a Question Entity Discovery (QED) module and an
Entity Linking (EL) module. The QED module is an ensemble of two
complementary methods. One is based on knowledge graph retrieval, which
extracts more entities from questions and guarantees the recall rate; the
other is based on a Conditional Random Field (CRF), which improves the
precision rate. The EL module is treated as a ranking problem, and a Learning to
Rank (LTR) method with features such as semantic similarity, text similarity
and entity popularity is utilized to extract and make full use of the
information in short texts. On the official dataset of a shared QEDL evaluation
task, our approach obtains a 64.44% F1 score for QED and 64.86% accuracy for
EL, ranking 2nd place and indicating its practical use for the QEDL problem.
| 2017 | Computation and Language |
Inflection-Tolerant Ontology-Based Named Entity Recognition for
Real-Time Applications | A growing number of the applications users interact with daily have to operate in
(near) real-time: chatbots, digital companions, knowledge work support systems
-- just to name a few. To perform the services desired by the user, these
systems have to analyze user activity logs or explicit user input extremely
fast. In particular, text content (e.g. in form of text snippets) needs to be
processed in an information extraction task. Regarding the aforementioned
temporal requirements, this has to be accomplished in just a few milliseconds,
which limits the number of methods that can be applied. Practically, only very
fast methods remain, which on the other hand deliver worse results than slower
but more sophisticated Natural Language Processing (NLP) pipelines. In this
paper, we investigate and propose methods for real-time capable Named Entity
Recognition (NER). As a first improvement step, we address word variations
induced by inflection, for example present in the German language. Our approach
is ontology-based and makes use of several language information sources like
Wiktionary. We evaluated it using the German Wikipedia (about 9.4B characters),
for which the whole NER process took considerably less than an hour. Since
precision and recall are higher than with comparably fast methods, we conclude
that the quality gap between high speed methods and sophisticated NLP pipelines
can be narrowed a bit more without losing too much runtime performance.
| 2019 | Computation and Language |
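The core lookup idea, as we read it, can be sketched in a few lines: a precomputed inflected-form-to-lemma table (e.g. derived from Wiktionary) reduces NER to constant-time dictionary lookups per token. The German forms and the `schema:House` IRI below are invented examples.

```python
# Toy sketch of inflection-tolerant ontology lookup.
INFLECTIONS = {  # inflected form -> lemma (would come from e.g. Wiktionary)
    "Hauses": "Haus", "Häuser": "Haus", "Häusern": "Haus", "Haus": "Haus",
}
ONTOLOGY = {"Haus": "schema:House"}  # lemma -> entity (hypothetical IRI)

def fast_ner(tokens):
    """One dictionary lookup per token: fast enough for (near) real time."""
    entities = []
    for i, tok in enumerate(tokens):
        lemma = INFLECTIONS.get(tok)
        if lemma and lemma in ONTOLOGY:
            entities.append((i, tok, ONTOLOGY[lemma]))
    return entities

print(fast_ner("in den Häusern der Stadt".split()))
# [(2, 'Häusern', 'schema:House')]
```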
End-to-end contextual speech recognition using class language models and
a token passing decoder | End-to-end modeling (E2E) of automatic speech recognition (ASR) blends all
the components of a traditional speech recognition system into a unified model.
Although it simplifies training and decoding pipelines, the unified model is
hard to adapt when mismatch exists between training and test data. In this
work, we focus on contextual speech recognition, which is particularly
challenging for E2E models because it introduces significant mismatch between
training and test data. To improve the performance in the presence of complex
contextual information, we propose to use class-based language models (CLM) that
can populate the classes with context-dependent information in real time. To
enable this approach to scale to a large number of class members and minimize
search errors, we propose a token passing decoder with efficient token
recombination for E2E systems for the first time. We evaluate the proposed
system on general and contextual ASR, and achieve a relative 62% Word Error
Rate (WER) reduction for contextual ASR without hurting performance for general
ASR. We show that the proposed method performs well without modification of the
decoding hyper-parameters across tasks, making it a general solution for E2E
ASR.
| 2018 | Computation and Language |
Are you tough enough? Framework for Robustness Validation of Machine
Comprehension Systems | Deep Learning NLP domain lacks procedures for the analysis of model
robustness. In this paper we propose a framework which validates robustness of
any Question Answering model through model explainers. We propose that a robust
model should transgress the initial notion of semantic similarity induced by
word embeddings to learn a more human-like understanding of meaning. We test
this property by manipulating questions in two ways: swapping an important
question word 1) for a semantically correct synonym and 2) for a word whose vector
is close in embedding space. We estimate the importance of words in asked
questions with the Local Interpretable Model-Agnostic Explanations (LIME) method.
With these two steps we compare state-of-the-art Q&A models. We show that
although accuracy of state-of-the-art models is high, they are very fragile to
changes in the input. Moreover, we propose two adversarial training scenarios
which raise model sensitivity to true synonyms by up to 7% in accuracy.
Our findings help to understand which models are more stable and how they can
be improved. In addition, we have created and published a new dataset that may
be used for validation of robustness of a Q&A model.
| 2018 | Computation and Language |
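The second perturbation (nearest neighbor in embedding space) can be sketched with plain numpy; the embeddings below are random stand-ins and the word list is invented, so this only illustrates the mechanics, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["who", "wrote", "authored", "novel", "book", "pizza"]
E = rng.normal(size=(len(vocab), 50))
E[2] = E[1] + 0.01 * rng.normal(size=50)   # make "authored" ~ "wrote"

def nearest_neighbor(word):
    i = vocab.index(word)
    sims = E @ E[i] / (np.linalg.norm(E, axis=1) * np.linalg.norm(E[i]))
    sims[i] = -np.inf                      # exclude the word itself
    return vocab[int(np.argmax(sims))]

question = "who wrote the novel".split()
important = "wrote"                        # e.g. as ranked by LIME
perturbed = [nearest_neighbor(w) if w == important else w for w in question]
print(" ".join(perturbed))                 # "who authored the novel"
```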
Weighted Global Normalization for Multiple Choice Reading Comprehension
over Long Documents | Motivated by recent evidence pointing out the fragility of high-performing
span prediction models, we direct our attention to multiple choice reading
comprehension. In particular, this work introduces a novel method for improving
answer selection on long documents through weighted global normalization of
predictions over portions of the documents. We show that applying our method to
a span prediction model adapted for answer selection helps model performance on
long summaries from NarrativeQA, a challenging reading comprehension dataset
with an answer selection task, and we strongly improve on the task baseline
performance by +36.2 Mean Reciprocal Rank.
| 2021 | Computation and Language |
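A small numpy sketch of what weighted global normalization might look like: candidate-answer scores from each document portion are joined into a single softmax, with a per-portion weight acting as a bias. The additive weight form and all values are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_global_softmax(chunk_scores, chunk_weights):
    """chunk_scores: list of 1-D arrays (one per document portion).
    chunk_weights: 1-D array, one weight per portion."""
    logits = np.concatenate([w + s                    # weight acts as a bias
                             for w, s in zip(chunk_weights, chunk_scores)])
    e = np.exp(logits - logits.max())                 # stable softmax
    return e / e.sum()

scores = [np.array([1.0, 0.2]), np.array([0.5, 2.0, 0.1])]
probs = weighted_global_softmax(scores, np.array([0.3, -0.1]))
print(probs, probs.sum())  # one distribution over all candidates, sums to 1
```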
Neural Abstractive Text Summarization with Sequence-to-Sequence Models | In the past few years, neural abstractive text summarization with
sequence-to-sequence (seq2seq) models has gained a lot of popularity. Many
interesting techniques have been proposed to improve seq2seq models, making
them capable of handling different challenges, such as saliency, fluency and
human readability, and of generating high-quality summaries. Generally speaking,
most of these techniques differ in one of these three categories: network
structure, parameter inference, and decoding/generation. There are also other
concerns, such as efficiency and parallelism for training a model. In this
paper, we provide a comprehensive literature survey on different seq2seq models
for abstractive text summarization from the viewpoint of network structures,
training strategies, and summary generation algorithms. Several models were
first proposed for language modeling and generation tasks, such as machine
translation, and later applied to abstractive text summarization. Hence, we
also provide a brief review of these models. As part of this survey, we also
develop an open source library, namely, Neural Abstractive Text Summarizer
(NATS) toolkit, for abstractive text summarization. An extensive set of
experiments have been conducted on the widely used CNN/Daily Mail dataset to
examine the effectiveness of several different neural network components.
Finally, we benchmark two models implemented in NATS on the two recently
released datasets, namely, Newsroom and Bytecup.
| 2020 | Computation and Language |
EvoMSA: A Multilingual Evolutionary Approach for Sentiment Analysis | Sentiment analysis (SA) is a task related to understanding people's feelings
in written text; the starting point would be to identify the polarity level
(positive, neutral or negative) of a given text, moving on to identify emotions
or whether a text is humorous or not. This task has been the subject of several
research competitions in a number of languages, e.g., English, Spanish, and
Arabic, among others. In this contribution, we propose an SA system, namely
EvoMSA, that unifies our participating systems in various SA competitions,
making it domain independent and multilingual by processing text using only
language-independent techniques. EvoMSA is a classifier, based on Genetic
Programming, that works by combining the output of different text classifiers
and text models to produce the final prediction. We analyze EvoMSA on different
SA competitions to provide a global overview of its performance, and as the
results show, EvoMSA is competitive obtaining top rankings in several SA
competitions. Furthermore, we performed an analysis of EvoMSA's components to
measure their contribution to the performance; the idea is to facilitate a
practitioner or newcomer to implement a competitive SA classifier. Finally, it
is worth mentioning that EvoMSA is available as open-source software.
| 2020 | Computation and Language |
On the Inductive Bias of Word-Character-Level Multi-Task Learning for
Speech Recognition | End-to-end automatic speech recognition (ASR) commonly transcribes audio
signals into sequences of characters while its performance is evaluated by
measuring the word-error rate (WER). This suggests that predicting sequences of
words directly may be helpful instead. However, training with word-level
supervision can be more difficult due to the sparsity of examples per label
class. In this paper we analyze an end-to-end ASR model that combines a
word-and-character representation in a multi-task learning (MTL) framework. We
show that it improves on the WER and study how the word-level model can benefit
from character-level supervision by analyzing the learned inductive preference
bias of each model component empirically. We find that by adding
character-level supervision, the MTL model interpolates between recognizing
more frequent words (preferred by the word-level model) and shorter words
(preferred by the character-level model).
| 2018 | Computation and Language |
The MeSH-gram Neural Network Model: Extending Word Embedding Vectors
with MeSH Concepts for UMLS Semantic Similarity and Relatedness in the
Biomedical Domain | Eliciting semantic similarity between concepts in the biomedical domain
remains a challenging task. Recent approaches founded on embedding vectors have
gained in popularity as they have risen to efficiently capture semantic
relationships. The underlying idea is that two words with close meanings
share similar contexts. In this study, we propose a new neural network model
named MeSH-gram, which relies on a straightforward approach that extends the
skip-gram neural network model by considering MeSH (Medical Subject Headings)
descriptors instead of words. Trained on the publicly available PubMed MEDLINE corpus,
MeSH-gram is evaluated on reference standards manually annotated for semantic
similarity. MeSH-gram is first compared to skip-gram with vectors of size 300
and several context window sizes. A deeper comparison is performed with twenty
existing models. All the obtained Spearman's rank correlations
between human scores and computed similarities show that MeSH-gram outperforms
the skip-gram model, and is comparable to the best methods, which however need more
computation and external resources.
| 2018 | Computation and Language |
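The MeSH-gram idea, as described, amounts to running an ordinary skip-gram model over text in which words are replaced by MeSH descriptor IDs where possible. A hedged sketch with gensim follows; the two-sentence corpus is invented (real input would be PubMed MEDLINE), while D009369 (Neoplasms) and D000900 (Anti-Bacterial Agents) are real descriptor IDs.

```python
from gensim.models import Word2Vec

sentences = [  # words already replaced by MeSH descriptor IDs where possible
    ["D009369", "patients", "received", "D000900"],
    ["D000900", "resistance", "in", "D009369"],
]
model = Word2Vec(sentences, vector_size=300, window=5, sg=1,  # sg=1: skip-gram
                 min_count=1, epochs=50)
print(model.wv.similarity("D009369", "D000900"))
```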
Adpositional Supersenses for Mandarin Chinese | This study adapts Semantic Network of Adposition and Case Supersenses (SNACS)
annotation to Mandarin Chinese and demonstrates that the same supersense
categories are appropriate for Chinese adposition semantics. We annotated 15
chapters of The Little Prince, with high interannotator agreement. The parallel
corpus gives insight into differences in construal between the two languages'
adpositions, namely a number of construals that are frequent in Chinese but
rare or unattested in the English corpus. The annotated corpus can further
support automatic disambiguation of adpositions in Chinese, and the common
inventory of supersenses between the two languages can potentially serve
cross-linguistic tasks such as machine translation.
| 2019 | Computation and Language |
Multi-Task Learning with Multi-View Attention for Answer Selection and
Knowledge Base Question Answering | Answer selection and knowledge base question answering (KBQA) are two
important tasks of question answering (QA) systems. Existing methods solve
these two tasks separately, which requires a large amount of repetitive work and
neglects the rich correlation information between tasks. In this paper, we
tackle answer selection and KBQA tasks simultaneously via multi-task learning
(MTL), motivated by the following observations. First, both answer selection and
KBQA can be regarded as ranking problems, one at the text level and the
other at the knowledge level. Second, these two tasks can benefit each other:
answer selection can incorporate the external knowledge from knowledge base
(KB), while KBQA can be improved by learning contextual information from answer
selection. To fulfill the goal of jointly learning these two tasks, we propose
a novel multi-task learning scheme that utilizes multi-view attention learned
from various perspectives to enable these tasks to interact with each other as
well as learn more comprehensive sentence representations. The experiments
conducted on several real-world datasets demonstrate the effectiveness of the
proposed method, and the performance of answer selection and KBQA is improved.
Also, the multi-view attention scheme proves effective in assembling
attentive information from different representational perspectives.
| 2018 | Computation and Language |
Exploring the importance of context and embeddings in neural NER models
for task-oriented dialogue systems | Named Entity Recognition (NER), a classic sequence labelling task, is an
essential component of natural language understanding (NLU) systems in
task-oriented dialog systems for slot filling. For well over a decade,
different methods, from lookup using gazetteers and domain ontologies and classifiers
over handcrafted features to end-to-end systems involving neural network
architectures, have been evaluated mostly in language-independent
non-conversational settings. In this paper, we evaluate a modified version of
the recent state of the art neural architecture in a conversational setting
where messages are often short and noisy. We perform an array of experiments
with different combinations of including the previous utterance in the dialogue
as a source of additional features and using word and character level
embeddings trained on a larger external corpus. All methods are evaluated on a
combined dataset formed from two public English task-oriented conversational
datasets belonging to travel and restaurant domains respectively. For
additional evaluation, we also repeat some of our experiments after adding
automatically translated and transliterated (from translated) versions to the
English only dataset.
| 2018 | Computation and Language |
The USTC-NEL Speech Translation system at IWSLT 2018 | This paper describes the USTC-NEL system for the speech translation task of
the IWSLT Evaluation 2018. The system is a conventional pipeline system which
contains 3 modules: speech recognition, post-processing and machine
translation. We train a group of hybrid-HMM models for our speech recognition,
and for machine translation we train transformer based neural machine
translation models with speech recognition output style text as input.
Experiments conducted on the IWSLT 2018 task indicate that, compared to the
baseline system from KIT, our system achieved a 14.9 BLEU improvement.
| 2018 | Computation and Language |
Evaluating Architectural Choices for Deep Learning Approaches for
Question Answering over Knowledge Bases | The task of answering natural language questions over knowledge bases has
received wide attention in recent years. Various deep learning architectures
have been proposed for this task. However, architectural design choices are
typically not systematically compared nor evaluated under the same conditions.
In this paper, we contribute to a better understanding of the impact of
architectural design choices by evaluating four different architectures under
the same conditions. We address the task of answering simple questions,
which consists in predicting the subject and predicate of a triple given a
question. In order to provide a fair comparison of different architectures, we
evaluate them under the same strategy for inferring the subject, and compare
different architectures for inferring the predicate. The architecture for
inferring the subject is based on a standard LSTM model trained to recognize
the span of the subject in the question and on a linking component that links
the subject span to an entity in the knowledge base. The architectures for
predicate inference are based on i) a standard softmax classifier ranging over
all predicates as output, ii) a model that predicts a low-dimensional encoding
of the property given the entity representation and question, iii) a model that
learns to score a pair of subject and predicate given the question as well as
iv) a model based on the well-known FastText model. The comparison of
architectures shows that FastText provides better results than other
architectures.
| 2018 | Computation and Language |
Relevant Word Order Vectorization for Improved Natural Language
Processing in Electronic Healthcare Records | Objective: Electronic health records (EHR) represent a rich resource for
conducting observational studies, supporting clinical trials, and more.
However, much of the relevant information is stored in an unstructured format
that makes it difficult to use. Natural language processing approaches that
attempt to automatically classify the data depend on vectorization algorithms
that impose structure on the text, but these algorithms were not designed for
the unique characteristics of EHR. Here, we propose a new algorithm for
structuring so-called free-text that may help researchers make better use of
EHR. We call this method Relevant Word Order Vectorization (RWOV).
Materials and Methods: As a proof-of-concept, we attempted to classify the
hormone receptor status of breast cancer patients treated at the University of
Kansas Medical Center during a recent year, from the unstructured text of
pathology reports. Our approach attempts to account for the semi-structured way
that healthcare providers often enter information. We compared this approach to
the n-gram and word2vec methods.
Results: Our approach resulted in the most consistently high accuracy, as
measured by F1 score and area under the receiver operating characteristic curve
(AUC).
Discussion: Our results suggest that methods of structuring free text that
take into account its context may show better performance, and that our
approach is promising.
Conclusion: By using a method that accounts for the fact that healthcare
providers tend to use certain key words repetitively and that the order of
these key words is important, we showed improved performance over methods that
do not.
| 2018 | Computation and Language |
Feature Analysis for Assessing the Quality of Wikipedia Articles through
Supervised Classification | Nowadays, thanks to Web 2.0 technologies, people have the possibility to
generate and spread contents on different social media in a very easy way. In
this context, the evaluation of the quality of the information that is
available online is becoming more and more a crucial issue. In fact, a constant
flow of contents is generated every day by often unknown sources, which are not
certified by traditional authoritative entities. This requires the development
of appropriate methodologies that can evaluate in a systematic way these
contents, based on `objective' aspects connected with them. This would help
individuals, who nowadays tend to increasingly form their opinions based on
what they read online and on social media, to come into contact with
information that is actually useful and verified. Wikipedia is nowadays one of
the biggest online resources on which users rely as a source of information.
The amount of collaboratively generated content that is sent to the online
encyclopedia every day can lead to the creation of low-quality articles
(and, consequently, misinformation) if not properly monitored and revised. For
this reason, in this paper, the problem of automatically assessing the quality
of Wikipedia articles is considered. In particular, the focus is on the
analysis of hand-crafted features that can be employed by supervised machine
learning techniques to perform the classification of Wikipedia articles on
qualitative bases. With respect to prior literature, a wider set of
characteristics connected to Wikipedia articles are taken into account and
illustrated in detail. Evaluations are performed by considering a labeled
dataset provided in a prior work, and different supervised machine learning
algorithms, which produced encouraging results with respect to the considered
features.
| 2018 | Computation and Language |
Generation of Synthetic Electronic Medical Record Text | Machine learning (ML) and Natural Language Processing (NLP) have achieved
remarkable success in many fields and have brought new opportunities and high
expectation in the analyses of medical data. The most common type of medical
data is the massive free-text electronic medical records (EMR). It is widely
regarded that mining such massive data can bring up important information for
improving medical practices as well as for possible new discoveries on complex
diseases. However, free EMR texts lack consistent standards, are rich in
private information, and are limited in availability. Also, as they are accumulated
from everyday practices, it is often hard to have a balanced number of samples
for the types of diseases under study. These problems hinder the development of
ML and NLP methods for EMR data analysis. To tackle these problems, we
developed a model to generate synthetic text of EMRs called Medical Text
Generative Adversarial Network or mtGAN. It is based on the GAN framework and
is trained by the REINFORCE algorithm. It takes disease features as inputs and
generates synthetic texts as EMRs for the corresponding diseases. We evaluate
the model from micro-level, macro-level and application-level on a Chinese EMR
text dataset. The results show that the method has a good capacity to fit real
data and can generate realistic and diverse EMR samples. This provides a novel
way to avoid potential leakage of patient privacy while still supplying sufficient
well-controlled cohort data for developing downstream ML and NLP methods. It
can also be used as a data augmentation method to assist studies based on real
EMR data.
| 2018 | Computation and Language |
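A minimal sketch of the REINFORCE step such a text GAN uses: the discriminator's score on a sampled sequence serves as the reward for the generator's log-probabilities. The baseline value and the toy "generator" below are placeholders, not mtGAN's code.

```python
import torch

def reinforce_loss(log_probs, reward, baseline=0.0):
    """log_probs: (T,) log-probs of sampled tokens (from the generator,
    so they carry gradients). reward: discriminator score, a plain float."""
    return -((reward - baseline) * log_probs.sum())

# Toy usage with a fake 'generator' parameter:
logits = torch.zeros(3, 5, requires_grad=True)       # T=3 steps, vocab=5
dist = torch.distributions.Categorical(logits=logits)
tokens = dist.sample()                               # sampled "EMR text"
loss = reinforce_loss(dist.log_prob(tokens), reward=0.9, baseline=0.5)
loss.backward()                                      # policy-gradient step
```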
End-to-End Streaming Keyword Spotting | We present a system for keyword spotting that, except for a frontend
component for feature generation, is entirely contained in a deep neural
network (DNN) model trained "end-to-end" to predict the presence of the keyword
in a stream of audio. The main contributions of this work are, first, an
efficient memoized neural network topology that aims at making better use of
the parameters and associated computations in the DNN by holding a memory of
previous activations distributed over the depth of the DNN. The second
contribution is a method to train the DNN, end-to-end, to produce the keyword
spotting score. This system significantly outperforms previous approaches both
in terms of quality of detection as well as size and computation.
| 2019 | Computation and Language |
Attending to Mathematical Language with Transformers | Mathematical expressions were generated, evaluated and used to train neural
network models based on the transformer architecture. The expressions and their
targets were analyzed as a character-level sequence transduction task in which
the encoder and decoder are built on attention mechanisms. Three models were
trained to understand and evaluate symbolic variables and expressions in
mathematics: (1) the self-attentive and feed-forward transformer without
recurrence or convolution, (2) the universal transformer with recurrence, and
(3) the adaptive universal transformer with recurrence and adaptive computation
time. The models respectively achieved test accuracies as high as 76.1%, 78.8%
and 84.9% in evaluating the expressions to match the target values. For the
cases inferred incorrectly, the results differed from the targets by only one
or two characters. The models notably learned to add, subtract and multiply
both positive and negative decimal numbers of variable digits assigned to
symbolic variables.
| 2019 | Computation and Language |
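A sketch of how character-level training pairs for this kind of arithmetic task might be generated (our guess at the setup, not the authors' generator): expressions over symbolic variables, with the target string computed by evaluation.

```python
import random

def make_example(n_vars=2, low=-99, high=99):
    names = random.sample("xyzw", n_vars)
    values = {v: random.randint(low, high) for v in names}
    expr = f"{names[0]}{random.choice('+-*')}{names[1]}"
    bindings = ",".join(f"{k}={v}" for k, v in values.items())
    source = f"{bindings};{expr}"          # e.g. "x=13,y=-7;x*y"
    target = str(eval(expr, {}, values))   # e.g. "-91"
    return source, target                  # character-level seq2seq pair

random.seed(0)
for _ in range(3):
    print(make_example())
```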
Intent Detection for code-mix utterances in task oriented dialogue
systems | Intent detection is an essential component of task oriented dialogue systems.
Over the years, extensive research has been conducted resulting in many state
of the art models directed towards resolving users' intents in dialogue. A
variety of vector representations for user utterances have been explored for the
same. However, these models and vectorization approaches have mostly been
evaluated in a single-language environment. Dialogue systems generally have to
deal with queries in different languages. We thus conduct experiments across
combinations of models and various vector representations for Code Mix as well
as multi-language utterances and evaluate how these models scale to a
multi-language environment. Our aim is to find the most suitable combination of
vector representation and models for the process of intent detection for Code
Mix utterances. We evaluate the experiments on two different datasets, one
consisting of only Code Mix utterances and the other consisting of
English, Hindi and Code Mix English Hindi utterances.
| 2018 | Computation and Language |
Improving Retrieval-Based Question Answering with Deep Inference Models | Question answering is one of the most important and difficult applications at
the border of information retrieval and natural language processing, especially
when we talk about complex science questions which require some form of
inference to determine the correct answer. In this paper, we present a two-step
method that combines information retrieval techniques optimized for question
answering with deep learning models for natural language inference in order to
tackle the multi-choice question answering in the science domain. For each
question-answer pair, we use standard retrieval-based models to find relevant
candidate contexts and decompose the main problem into two different
sub-problems. First, we assign correctness scores to each candidate answer based
on the context using retrieval models from Lucene. Second, we use deep learning
architectures to compute if a candidate answer can be inferred from some
well-chosen context consisting of sentences retrieved from the knowledge base.
In the end, all these solvers are combined using a simple neural network to
predict the correct answer. This proposed two-step model outperforms the best
retrieval-based solver by over 3% in absolute accuracy.
| 2019 | Computation and Language |
An Unsupervised Approach for Aspect Category Detection Using Soft Cosine
Similarity Measure | Aspect category detection is one of the important and challenging subtasks of
aspect-based sentiment analysis. Given a set of pre-defined categories, this
task aims to detect categories which are indicated implicitly or explicitly in
a given review sentence. Supervised machine learning approaches perform well to
accomplish this subtask. Note that the performance of these methods depends on
the availability of labeled train data, which is often difficult and costly to
obtain. Besides, most of these supervised methods require feature engineering
to perform well. In this paper, we propose an unsupervised method to address
the aspect category detection task without the need for any feature engineering.
Our method utilizes clusters of unlabeled reviews and soft cosine similarity
measure to accomplish the aspect category detection task. Experimental results on
the SemEval-2014 restaurant dataset show that the proposed unsupervised approach
outperforms several baselines by a substantial margin.
| 2019 | Computation and Language |
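The soft cosine measure at the heart of the method above has a compact closed form: it is cosine similarity computed through a term-term similarity matrix $S$, so related-but-distinct words can still match. A numpy sketch with an invented three-word vocabulary:

```python
import numpy as np

def soft_cosine(x, y, S):
    """x, y: bag-of-words count vectors; S: term-term similarity matrix."""
    num = x @ S @ y
    den = np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y)
    return num / den

# Toy vocabulary: ["food", "meal", "service"]; "food" and "meal" are similar.
S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
sent = np.array([1, 0, 0])       # review sentence mentions "food"
category = np.array([0, 1, 0])   # category seed word "meal"
print(soft_cosine(sent, category, S))  # 0.8: matched despite no shared term
```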
Dialogue Generation: From Imitation Learning to Inverse Reinforcement
Learning | The performance of adversarial dialogue generation models relies on the
quality of the reward signal produced by the discriminator. The reward signal
from a poor discriminator can be very sparse and unstable, which may lead the
generator to fall into a local optimum or to produce nonsense replies. To
alleviate the first problem, we first extend a recently proposed adversarial
dialogue generation method to an adversarial imitation learning solution. Then,
in the framework of adversarial inverse reinforcement learning, we propose a
new reward model for dialogue generation that can provide a more accurate and
precise reward signal for generator training. We evaluate the performance of
the resulting model with automatic metrics and human evaluations in two
annotation settings. Our experimental results demonstrate that our model can
generate more high-quality responses and achieve higher overall performance
than the state-of-the-art.
| 2,018 | Computation and Language |
SDNet: Contextualized Attention-based Deep Network for Conversational
Question Answering | Conversational question answering (CQA) is a novel QA task that requires
understanding of dialogue context. Different from traditional single-turn
machine reading comprehension (MRC) tasks, CQA includes passage comprehension,
coreference resolution, and contextual understanding. In this paper, we propose
an innovated contextualized attention-based deep neural network, SDNet, to fuse
context into traditional MRC models. Our model leverages both inter-attention
and self-attention to comprehend conversation context and extract relevant
information from the passage. Furthermore, we demonstrate a novel method to
integrate the latest BERT contextual model. Empirical results show the
effectiveness of our model, which sets a new state-of-the-art result on the CoQA
leaderboard, outperforming the previous best model by 1.6% F1. Our ensemble
model further improves the result by 2.7% F1.
| 2,019 | Computation and Language |
Chat-crowd: A Dialog-based Platform for Visual Layout Composition | In this paper we introduce Chat-crowd, an interactive environment for visual
layout composition via conversational interactions. Chat-crowd supports
multiple agents with two conversational roles: agents who play the role of a
designer are in charge of placing objects in an editable canvas according to
instructions or commands issued by agents with a director role. The system can
be integrated with crowdsourcing platforms for both synchronous and
asynchronous data collection and is equipped with comprehensive quality
controls on the performance of both types of agents. We expect that this system
will be useful to build multimodal goal-oriented dialog tasks that require
spatial and geometric reasoning.
| 2,019 | Computation and Language |
Delta Embedding Learning | Unsupervised word embeddings have become a popular approach of word
representation in NLP tasks. However there are limitations to the semantics
represented by unsupervised embeddings, and inadequate fine-tuning of
embeddings can lead to suboptimal performance. We propose a novel learning
technique called Delta Embedding Learning, which can be applied to general NLP
tasks to improve performance by optimized tuning of the word embeddings. A
structured regularization is applied to the embeddings to ensure they are tuned
in an incremental way. As a result, the tuned word embeddings become better
word representations by absorbing semantic information from supervision without
"forgetting." We apply the method to various NLP tasks and see a consistent
improvement in performance. Evaluation also confirms the tuned word embeddings
have better semantic properties.
| 2,019 | Computation and Language |
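To make the idea concrete, here is a hedged PyTorch sketch of the scheme described above: pre-trained vectors stay frozen and only an additive delta is trained, with a norm penalty keeping the tuning incremental. The row-wise L2 penalty and the initialization scale are our assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class DeltaEmbedding(nn.Module):
    """Pre-trained embeddings plus a trainable, regularized delta."""
    def __init__(self, pretrained: torch.Tensor):
        super().__init__()
        # Frozen base keeps the unsupervised semantics intact.
        self.base = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.delta = nn.Embedding(*pretrained.shape)
        # Start (almost) at the pre-trained point; tiny noise avoids
        # a non-differentiable norm at exactly zero.
        nn.init.normal_(self.delta.weight, std=1e-4)

    def forward(self, token_ids):
        return self.base(token_ids) + self.delta(token_ids)

    def reg_loss(self):
        # Row-wise L2 penalty (an L2,1-style structured regularizer,
        # our assumption): each word pays for moving at all, keeping
        # the tuning incremental and mostly sparse.
        return self.delta.weight.norm(dim=1).sum()
```

During training one would add `lam * model.reg_loss()` to the task loss, where `lam` controls how far words may drift from their unsupervised positions.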
Predicting the Effects of News Sentiments on the Stock Market | Stock market forecasting is very important in the planning of business
activities. Stock price prediction has attracted many researchers in multiple
disciplines including computer science, statistics, economics, finance, and
operations research. Recent studies have shown that the vast amount of online
information in the public domain such as Wikipedia usage pattern, news stories
from the mainstream media, and social media discussions can have an observable
effect on investors' opinions towards financial markets. The reliability of the
computational models on stock market prediction is important as it is very
sensitive to the economy and can directly lead to financial loss. In this
paper, we retrieved, extracted, and analyzed the effects of news sentiments on
the stock market. Our main contributions include the development of a sentiment
analysis dictionary for the financial sector, the development of a
dictionary-based sentiment analysis model, and the evaluation of the model for
gauging the effects of news sentiments on stocks for the pharmaceutical market.
Using only news sentiments, we achieved a directional accuracy of 70.59% in
predicting the trends in short-term stock price movement.
| 2,019 | Computation and Language |
Machine Translation : From Statistical to modern Deep-learning practices | Machine translation (MT) is an area of study in Natural Language processing
which deals with the automatic translation of human language, from one language
to another by the computer. Having a rich research history spanning nearly
three decades, machine translation is one of the most sought-after areas of
research in the linguistics and computational community. In this paper, we
investigate models based on deep learning that have achieved substantial
progress in recent years and are becoming the prominent method in MT. We shall
discuss the two main deep-learning based Machine Translation methods, one at
component or domain level which leverages deep learning models to enhance the
efficacy of Statistical Machine Translation (SMT) and end-to-end deep learning
models in MT which uses neural networks to find correspondence between the
source and target languages using the encoder-decoder architecture. We conclude
this paper by providing a timeline of the major research problems solved by
the researchers and also provide a comprehensive overview of present areas of
research in Neural Machine Translation.
| 2,018 | Computation and Language |
Learning latent representations for style control and transfer in
end-to-end speech synthesis | In this paper, we introduce the Variational Autoencoder (VAE) to an
end-to-end speech synthesis model, to learn the latent representation of
speaking styles in an unsupervised manner. The style representation learned
through VAE shows good properties such as disentangling, scaling, and
combination, which makes it easy for style control. Style transfer can be
achieved in this framework by first inferring style representation through the
recognition network of VAE, then feeding it into TTS network to guide the style
in synthesizing speech. To avoid Kullback-Leibler (KL) divergence collapse in
training, several techniques are adopted. Finally, the proposed model shows
good performance of style control and outperforms Global Style Token (GST)
model in ABX preference tests on style transfer.
| 2,019 | Computation and Language |
RESIDE: Improving Distantly-Supervised Neural Relation Extraction using
Side Information | Distantly-supervised Relation Extraction (RE) methods train an extractor by
automatically aligning relation instances in a Knowledge Base (KB) with
unstructured text. In addition to relation instances, KBs often contain other
relevant side information, such as aliases of relations (e.g., founded and
co-founded are aliases for the relation founderOfCompany). RE models usually
ignore such readily available side information. In this paper, we propose
RESIDE, a distantly-supervised neural relation extraction method which utilizes
additional side information from KBs for improved relation extraction. It uses
entity type and relation alias information for imposing soft constraints while
predicting relations. RESIDE employs Graph Convolution Networks (GCN) to encode
syntactic information from text and improves performance even when limited side
information is available. Through extensive experiments on benchmark datasets,
we demonstrate RESIDE's effectiveness. We have made RESIDE's source code
available to encourage reproducible research.
| 2,018 | Computation and Language |
Conditional Variational Autoencoder for Neural Machine Translation | We explore the performance of latent variable models for conditional text
generation in the context of neural machine translation (NMT). Similar to Zhang
et al., we augment the encoder-decoder NMT paradigm by introducing a continuous
latent variable to model features of the translation process. We extend this
model with a co-attention mechanism motivated by Parikh et al. in the inference
network. Compared to the vision domain, latent variable models for text face
additional challenges due to the discrete nature of language, namely posterior
collapse. We experiment with different approaches to mitigate this issue. We
show that our conditional variational model improves upon both discriminative
attention-based translation and the variational baseline presented in Zhang et
al. Finally, we present some exploration of the learned latent space to
illustrate what the latent variable is capable of capturing. This is the first
reported conditional variational model for text that meaningfully utilizes the
latent variable without weakening the translation model.
| 2,018 | Computation and Language |
Von Mises-Fisher Loss for Training Sequence to Sequence Models with
Continuous Outputs | The Softmax function is used in the final layer of nearly all existing
sequence-to-sequence models for language generation. However, it is usually the
slowest layer to compute, which limits the vocabulary size to a subset of the most
frequent types; and it has a large memory footprint. We propose a general
technique for replacing the softmax layer with a continuous embedding layer.
Our primary innovations are a novel probabilistic loss, and a training and
inference procedure in which we generate a probability distribution over
pre-trained word embeddings, instead of a multinomial distribution over the
vocabulary obtained via softmax. We evaluate this new class of
sequence-to-sequence models with continuous outputs on the task of neural
machine translation. We show that our models obtain up to a 2.5x speed-up in
training time while performing on par with the state-of-the-art models in terms
of translation quality. These models are capable of handling very large
vocabularies without compromising on translation quality. They also produce
more meaningful errors than in the softmax-based models, as these errors
typically lie in a subspace of the vector space of the reference translations.
| 2,019 | Computation and Language |
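A rough PyTorch sketch of the continuous-output idea: the decoder's final layer emits a vector that is trained toward the pre-trained embedding of the gold token, and inference picks the nearest embedding. For simplicity this uses plain cosine distance as a stand-in for the von Mises-Fisher negative log-likelihood, which additionally involves a concentration-dependent normalizer; the loss shown is an illustrative surrogate, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def continuous_output_loss(hidden, target_emb):
    """Pull the decoder's output vector toward the gold token's
    pre-trained embedding. Cosine distance is used here as an
    illustrative surrogate for the vMF negative log-likelihood."""
    return (1 - F.cosine_similarity(hidden, target_emb, dim=-1)).mean()

def decode_step(hidden, emb_table):
    """Inference: emit the vocabulary item whose pre-trained
    embedding is nearest (by cosine) to the predicted vector."""
    sims = F.cosine_similarity(hidden.unsqueeze(0), emb_table, dim=-1)
    return sims.argmax()

emb_table = torch.randn(1000, 300)        # pre-trained embeddings (toy)
hidden = torch.randn(8, 300)              # 8 decoder output vectors
gold = emb_table[torch.randint(0, 1000, (8,))]
print(continuous_output_loss(hidden, gold))
print(decode_step(hidden[0], emb_table))  # predicted token id
```

Note that no softmax over the vocabulary is ever computed, which is where the reported speed and memory savings come from.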
Scalable language model adaptation for spoken dialogue systems | Language models (LM) for interactive speech recognition systems are trained
on large amounts of data and the model parameters are optimized on past user
data. New application intents and interaction types are released for these
systems over time, imposing challenges to adapt the LMs since the existing
training data is no longer sufficient to model the future user interactions. It
is unclear how to adapt LMs to new application intents without degrading the
performance on existing applications. In this paper, we propose a solution to
(a) estimate n-gram counts directly from the hand-written grammar for training
LMs and (b) use constrained optimization to optimize the system parameters for
future use cases, while not degrading the performance on past usage. We
evaluated our approach on new application intents for a personal assistant
system and found that the adaptation improves the word error rate by up to 15%
on new applications even when there is no adaptation data available for an
application.
| 2,018 | Computation and Language |
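Step (a) above can be illustrated by expanding a tiny hand-written grammar into utterances and counting n-grams over the expansions, giving LM training counts before any real usage data exists. The grammar format below is a deliberate simplification (a single template with independent slots); production grammars are richer, and the slot values shown are invented.

```python
import itertools
from collections import Counter

def grammar_ngram_counts(slots, template, n=2):
    """Estimate n-gram counts directly from a hand-written grammar:
    expand the template over all slot-value combinations and count
    the n-grams in the resulting utterances."""
    counts = Counter()
    for values in itertools.product(*slots.values()):
        words = template.format(**dict(zip(slots, values))).split()
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return counts

slots = {"artist": ["adele", "queen"], "device": ["kitchen", "bedroom"]}
print(grammar_ngram_counts(slots, "play {artist} in the {device}"))
```

In practice such counts would be interpolated with counts from past usage data, with the interpolation weights set by the constrained optimization described in step (b).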
Unsupervised domain-agnostic identification of product names in social
media posts | Product name recognition is a significant practical problem, spurred by the
greater availability of platforms for discussing products such as social media
and product review functionalities of online marketplaces. Customers, product
manufacturers and online marketplaces may want to identify product names in
unstructured text to extract important insights, such as sentiment, surrounding
a product. Much extant research on product name identification has been
domain-specific (e.g., identifying mobile phone models) and used supervised or
semi-supervised methods. With massive numbers of new products released to the
market every year such methods may require retraining on updated labeled data
to stay relevant, and may transfer poorly across domains. This research
addresses this challenge and develops a domain-agnostic, unsupervised algorithm
for identifying product names based on Facebook posts. The algorithm consists
of two general steps: (a) candidate product name identification using an
off-the-shelf pretrained conditional random fields (CRF) model, part-of-speech
tagging and a set of simple patterns; and (b) filtering of candidate names to
remove spurious entries using clustering and word embeddings generated from the
data.
| 2,018 | Computation and Language |
Text Data Augmentation Made Simple By Leveraging NLP Cloud APIs | In practice, it is common to find oneself with far too little text data to
train a deep neural network. This "Big Data Wall" represents a challenge for
minority language communities on the Internet, organizations, laboratories and
companies that compete with GAFAM (Google, Amazon, Facebook, Apple, Microsoft).
While most of the research effort in text data augmentation aims at the
long-term goal of finding end-to-end learning solutions, which is equivalent to
"using neural networks to feed neural networks", this engineering work focuses
on the use of practical, robust, scalable and easy-to-implement data
augmentation pre-processing techniques similar to those that are successful in
computer vision. Several text augmentation techniques have been experimented with.
Some existing ones have been tested for comparison purposes such as noise
injection or the use of regular expressions. Others are modified or improved
techniques like lexical replacement. Finally more innovative ones, such as the
generation of paraphrases using back-translation or by the transformation of
syntactic trees, are based on robust, scalable, and easy-to-use NLP Cloud APIs.
All the text augmentation techniques studied, with an amplification factor of
only 5, increased the accuracy of the results in a range of 4.3% to 21.6%, with
significant statistical fluctuations, on a standardized task of text polarity
prediction. Some standard deep neural network architectures were tested: the
multilayer perceptron (MLP), the long short-term memory recurrent network
(LSTM) and the bidirectional LSTM (biLSTM). The classical XGBoost algorithm has
also been tested, with improvements of up to 2.5%.
| 2,018 | Computation and Language |
Context is Key: New Approaches to Neural Coherence Modeling | We formulate coherence modeling as a regression task and propose two novel
methods to combine techniques from our setup with pairwise approaches. The
first of our methods is a model that we call "first-next," which operates
similarly to selection sorting but conditions decision-making on information
about already-sorted sentences. The second consists of a technique for adding
context to regression-based models by concatenating sentence-level
representations with an encoding of its corresponding out-of-order paragraph.
This latter model achieves Kendall-tau distance and positional accuracy scores
that match or exceed the current state-of-the-art on these metrics. Our results
suggest that many of the gains that come from more complex, machine-translation
inspired approaches can be achieved with simpler, more efficient models.
| 2,018 | Computation and Language |
Sentence-wise Smooth Regularization for Sequence to Sequence Learning | Maximum-likelihood estimation (MLE) is widely used in sequence to sequence
tasks for model training. It uniformly treats the generation/prediction of each
target token as multi-class classification, and yields non-smooth prediction
probabilities: in a target sequence, some tokens are predicted with small
probabilities while other tokens are with large probabilities. According to our
empirical study, we find that the non-smoothness of the probabilities results
in low quality of generated sequences. In this paper, we propose a
sentence-wise regularization method which aims to output smooth prediction
probabilities for all the tokens in the target sequence. Our proposed method
can automatically adjust the weights and gradients of each token in one
sentence to ensure that the predictions in a sequence are uniformly good. Experiments on
three neural machine translation tasks and one text summarization task show
that our method outperforms conventional MLE loss on all these tasks and
achieves promising BLEU scores on WMT14 English-German and WMT17
Chinese-English translation tasks.
| 2,018 | Computation and Language |
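One plausible reading of the method, as a PyTorch sketch: take the per-token log-probabilities the model assigns to the gold target sequence, keep the usual MLE term, and add a penalty on how unevenly those log-probabilities are spread. The variance form of the penalty and the weight `lam` are our assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch

def smooth_regularized_loss(token_log_probs, lam=0.1):
    """MLE plus a smoothness penalty over one target sentence.

    `token_log_probs` holds the log-probability assigned to each
    gold token. The variance term discourages sentences where some
    tokens are predicted confidently and others barely at all."""
    mle = -token_log_probs.mean()
    smooth = token_log_probs.var()
    return mle + lam * smooth

# A non-smooth prediction is penalized more than a uniform one
# with the same average log-probability.
uneven = torch.tensor([-0.1, -4.0, -0.1, -3.8])
even = torch.tensor([-2.0, -2.0, -2.0, -2.0])
print(smooth_regularized_loss(uneven), smooth_regularized_loss(even))
```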
Towards Understanding Language through Perception in Situated
Human-Robot Interaction: From Word Grounding to Grammar Induction | Robots are widely collaborating with human users in different tasks that
require high-level cognitive functions to make them able to discover the
surrounding environment. A difficult challenge that we briefly highlight in this
short paper is inferring the latent grammatical structure of language, which
includes grounding parts of speech (e.g., verbs, nouns, adjectives, and
prepositions) through visual perception, and induction of Combinatory
Categorial Grammar (CCG) for phrases. This paves the way towards grounding
phrases so as to make a robot able to understand human instructions
appropriately during interaction.
| 2,020 | Computation and Language |
A Multimodal LSTM for Predicting Listener Empathic Responses Over Time | People naturally understand the emotions of-and often also empathize
with-those around them. In this paper, we predict the emotional valence of an
empathic listener over time as they listen to a speaker narrating a life story.
We use the dataset provided by the OMG-Empathy Prediction Challenge, a workshop
held in conjunction with IEEE FG 2019. We present a multimodal LSTM model with
feature-level fusion and local attention that predicts empathic responses from
audio, text, and visual features. Our best-performing model, which used only
the audio and text features, achieved a concordance correlation coefficient
(CCC) of 0.29 and 0.32 on the Validation set for the Generalized and
Personalized track respectively, and achieved a CCC of 0.14 and 0.14 on the
held-out Test set. We discuss the difficulties faced and the lessons learnt
tackling this challenge.
| 2,019 | Computation and Language |
SMT vs NMT: A Comparison over Hindi & Bengali Simple Sentences | In the present article, we identified the qualitative differences between
Statistical Machine Translation (SMT) and Neural Machine Translation (NMT)
outputs. We have tried to answer two important questions: (1) does NMT perform
as well as SMT, and (2) does it add extra value in improving the quality of MT
output by employing simple sentences as training units? In order to obtain
insights, we have developed three core models, viz., an
SMT model based on Moses toolkit, followed by character and word level NMT
models. All of the systems use English-Hindi and English-Bengali language pairs
containing simple sentences as well as sentences of other complexity. In order
to preserve the translation semantics with respect to the target words of a
sentence, we have employed soft attention in our word-level NMT model. We
have further evaluated all the systems with respect to the scenarios where they
succeed and fail. Finally, the quality of translation has been validated using
BLEU and TER metrics along with manual parameters like fluency, adequacy etc.
We observed that NMT outperforms SMT on simple sentences, whereas SMT
outperforms NMT when sentences of all complexity types are considered.
| 2,018 | Computation and Language |
Temporal Analysis of Entity Relatedness and its Evolution using
Wikipedia and DBpedia | Many researchers have made use of the Wikipedia network for relatedness and
similarity tasks. However, most approaches use only the most recent information
and not historical changes in the network. We provide an analysis of entity
relatedness using temporal graph-based approaches over different versions of
the Wikipedia article link network and DBpedia, which is an open-source
knowledge base extracted from Wikipedia. We consider creating the Wikipedia
article link network as both a union and intersection of edges over multiple
time points and present a novel variation of the Jaccard index to weight edges
based on their transience. We evaluate our results against the KORE dataset,
which was created in 2010, and show that using the 2010 Wikipedia article link
network produces the strongest result, suggesting that semantic similarity is
time sensitive. We then show that integrating multiple time frames in our
methods can give a better overall similarity demonstrating that temporal
evolution can have an important effect on entity relatedness.
| 2,018 | Computation and Language |
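The flavor of the edge-weighting idea can be sketched in plain Python: build each entity's neighbour set over several yearly snapshots, weight each neighbour by how persistently its edge appears, and compute a weighted Jaccard. The exact weighting below (fraction of snapshots containing the edge) is a plausible reading of "weight edges based on their transience", not the paper's precise formula, and the toy network is invented.

```python
def transience_weighted_jaccard(neigh_by_year, u, v):
    """Weighted Jaccard between two entities' neighbour sets, where
    each neighbour counts by the fraction of yearly snapshots in
    which its link exists (persistent edges outweigh transient ones)."""
    years = list(neigh_by_year)

    def weights(node):
        w = {}
        for y in years:
            for n in neigh_by_year[y].get(node, set()):
                w[n] = w.get(n, 0) + 1 / len(years)
        return w

    wu, wv = weights(u), weights(v)
    inter = sum(min(wu[n], wv[n]) for n in wu.keys() & wv.keys())
    union = sum(max(wu.get(n, 0), wv.get(n, 0)) for n in wu.keys() | wv.keys())
    return inter / union if union else 0.0

# Two snapshots of a toy article link network.
snapshots = {
    2009: {"A": {"X", "Y"}, "B": {"Y"}},
    2010: {"A": {"Y"}, "B": {"Y", "Z"}},
}
print(transience_weighted_jaccard(snapshots, "A", "B"))  # 0.5
```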
Structured Neural Topic Models for Reviews | We present Variational Aspect-based Latent Topic Allocation (VALTA), a family
of autoencoding topic models that learn aspect-based representations of
reviews. VALTA defines a user-item encoder that maps bag-of-words vectors for
combined reviews associated with each paired user and item onto structured
embeddings, which in turn define per-aspect topic weights. We model individual
reviews in a structured manner by inferring an aspect assignment for each
sentence in a given review, where the per-aspect topic weights obtained by the
user-item encoder serve to define a mixture over topics, conditioned on the
aspect. The result is an autoencoding neural topic model for reviews, which can
be trained in a fully unsupervised manner to learn topics that are structured
into aspects. Experimental evaluation on a large number of datasets demonstrates
that aspects are interpretable, yield higher coherence scores than
non-structured autoencoding topic model variants, and can be utilized to
perform aspect-based comparison and genre discovery.
| 2,019 | Computation and Language |
Recurrent Neural Networks with Pre-trained Language Model Embedding for
Slot Filling Task | In recent years, Recurrent Neural Networks (RNNs) based models have been
applied to the Slot Filling problem of Spoken Language Understanding and
achieved the state-of-the-art performances. In this paper, we investigate the
effect of incorporating pre-trained language models into RNN based Slot Filling
models. Our evaluation on the Airline Travel Information System (ATIS) data
corpus shows that we can significantly reduce the size of labeled training data
and achieve the same level of Slot Filling performance by incorporating extra
word embedding and language model embedding layers pre-trained on unlabeled
corpora.
| 2,018 | Computation and Language |
Joint Entity Extraction and Assertion Detection for Clinical Text | Negative medical findings are prevalent in clinical reports, yet
discriminating them from positive findings remains a challenging task for
information extraction. Most of the existing systems treat this task as a
pipeline of two separate tasks, i.e., named entity recognition (NER) and
rule-based negation detection. We consider this as a multi-task problem and
present a novel end-to-end neural model to jointly extract entities and
negations. We extend a standard hierarchical encoder-decoder NER model and
first adopt a shared encoder followed by separate decoders for the two tasks.
This architecture performs considerably better than the previous rule-based and
machine learning-based systems. To overcome the problem of increased parameter
size especially for low-resource settings, we propose the Conditional Softmax
Shared Decoder architecture, which achieves state-of-the-art results for NER and
negation detection on the 2010 i2b2/VA challenge dataset and a proprietary
de-identified clinical dataset.
| 2,019 | Computation and Language |
Towards a General-Purpose Linguistic Annotation Backend | Language documentation is inherently a time-intensive process; transcription,
glossing, and corpus management consume a significant portion of documentary
linguists' work. Advances in natural language processing can help to accelerate
this work, using the linguists' past decisions as training material, but
questions remain about how to prioritize human involvement. In this extended
abstract, we describe the beginnings of a new project that will attempt to ease
this language documentation process through the use of natural language
processing (NLP) technology. It is based on (1) methods to adapt NLP tools to
new languages, based on recent advances in massively multilingual neural
networks, and (2) backend APIs and interfaces that allow linguists to upload
their data. We then describe our current progress on two fronts: automatic
phoneme transcription, and glossing. Finally, we briefly describe our future
directions.
| 2,018 | Computation and Language |
Dynamic Feature Generation Network for Answer Selection | Extracting appropriate features to represent a corpus is an important task
for textual mining. Previous attention-based work usually enhances features at
the lexical level but lacks exploration of feature augmentation at the
sentence level. In this paper, we exploit a Dynamic Feature Generation Network
(DFGN) to solve this problem. Specifically, DFGN generates features based on a
variety of attention mechanisms and attaches features to sentence
representation. Then a thresholder is designed to filter the mined features
automatically. DFGN extracts the most significant characteristics from datasets
to keep its practicability and robustness. Experimental results on multiple
well-known answer selection datasets show that our proposed approach
significantly outperforms state-of-the-art baselines. We give a detailed
analysis of the experiments to illustrate why DFGN provides excellent retrieval
and interpretative ability.
| 2,018 | Computation and Language |
Abstractive Text Summarization by Incorporating Reader Comments | In neural abstractive summarization field, conventional sequence-to-sequence
based models often suffer from summarizing the wrong aspect of the document
with respect to the main aspect. To tackle this problem, we propose the task of
reader-aware abstractive summary generation, which utilizes the reader comments
to help the model produce better summary about the main aspect. Unlike
traditional abstractive summarization task, reader-aware summarization
confronts two main challenges: (1) Comments are informal and noisy; (2) jointly
modeling the news document and the reader comments is challenging. To tackle
the above challenges, we design an adversarial learning model named
reader-aware summary generator (RASG), which consists of four components: (1) a
sequence-to-sequence based summary generator; (2) a reader attention module
capturing the reader focused aspects; (3) a supervisor modeling the semantic
gap between the generated summary and reader focused aspects; (4) a goal
tracker producing the goal for each generation step. The supervisor and the
goal tracker are used to guide the training of our framework in an adversarial
manner. Extensive experiments are conducted on our large-scale real-world text
summarization dataset, and the results show that RASG achieves the
state-of-the-art performance in terms of both automatic metrics and human
evaluations. The experimental results also demonstrate the effectiveness of
each module in our framework. We release our large-scale dataset for further
research.
| 2,018 | Computation and Language |
Find a Reasonable Ending for Stories: Does Logic Relation Help the Story
Cloze Test? | Natural language understanding is a challenging problem that covers a wide
range of tasks. While previous methods generally train each task separately, we
consider combining the cross-task features to enhance the task performance. In
this paper, we incorporate the logic information with the help of the Natural
Language Inference (NLI) task to the Story Cloze Test (SCT). Previous work on
SCT considered various semantic information, such as sentiment and topic, but
lacked the logic information between sentences, which is an essential element of
stories. Thus we propose to extract the logic information during the course of
the story to improve the understanding of the whole story. The logic
information is modeled with the help of the NLI task. Experimental results
prove the strength of the logic information.
| 2,018 | Computation and Language |
Don't Classify, Translate: Multi-Level E-Commerce Product Categorization
Via Machine Translation | E-commerce platforms categorize their products into a multi-level taxonomy
tree with thousands of leaf categories. Conventional methods for product
categorization are typically based on machine learning classification
algorithms. These algorithms take product information as input (e.g., titles
and descriptions) to classify a product into a leaf category. In this paper, we
propose a new paradigm based on machine translation. In our approach, we
translate a product's natural language description into a sequence of tokens
representing a root-to-leaf path in a product taxonomy. In our experiments on
two large real-world datasets, we show that our approach achieves better
predictive accuracy than a state-of-the-art classification system for product
categorization. In addition, we demonstrate that our machine translation models
can propose meaningful new paths between previously unconnected nodes in a
taxonomy tree, thereby transforming the taxonomy into a directed acyclic graph
(DAG). We discuss how the resultant taxonomy DAG promotes user-friendly
navigation, and how it is more adaptable to new products.
| 2,018 | Computation and Language |
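The paradigm shift is easiest to see in the data preparation: instead of a single class label, the training target is the root-to-leaf category path serialized as a token sequence, which any standard seq2seq model can then learn to emit. A small sketch, with invented product and category names:

```python
def make_translation_pair(title, taxonomy_path):
    """Source = product text tokens; target = the root-to-leaf
    category path serialized as one token per taxonomy level."""
    src = title.lower().split()
    tgt = [f"<cat:{c}>" for c in taxonomy_path]
    return src, tgt

src, tgt = make_translation_pair(
    "Stainless Steel Chef Knife 8 Inch",
    ["Home & Kitchen", "Kitchen & Dining", "Cutlery", "Chef's Knives"])
print(src)  # ['stainless', 'steel', 'chef', 'knife', '8', 'inch']
print(tgt)  # ['<cat:Home & Kitchen>', ..., "<cat:Chef's Knives>"]
```

Because the decoder emits the path token by token, it can in principle combine a prefix from one branch with a suffix from another, which is how the new DAG edges mentioned above can arise.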
A corpus of precise natural textual entailment problems | In this paper, we present a new corpus of entailment problems. This corpus
combines the following characteristics: 1. it is precise (does not leave out
implicit hypotheses) 2. it is based on "real-world" texts (i.e. most of the
premises were written for purposes other than testing textual entailment). 3.
its size is 150. The corpus was constructed by taking problems from the Real
Text Entailment and discovering missing hypotheses using a crowd of experts. We
believe that this corpus constitutes a first step towards wide-coverage testing
of precise natural-language inference systems.
| 2,018 | Computation and Language |
Detecting Reliable Novel Word Senses: A Network-Centric Approach | In this era of Big Data, due to expeditious exchange of information on the
web, words are being used to denote newer meanings, causing linguistic shift.
With the recent availability of large amounts of digitized texts, an automated
analysis of the evolution of language has become possible. Our study mainly
focuses on improving the detection of new word senses. This paper presents a
unique proposal based on network features to improve the precision of new word
sense detection. For a candidate word where a new sense (birth) has been
detected by comparing the sense clusters induced at two different time points,
we further compare the network properties of the subgraphs induced from novel
sense cluster across these two time points. Using the mean fractional change in
edge density, structural similarity and average path length as features in an
SVM classifier, manual evaluation gives precision values of 0.86 and 0.74 for
the task of new sense detection, when tested on 2 distinct time-point pairs, in
comparison to the precision values in the range of 0.23-0.32, when the proposed
scheme is not used. The outlined method can therefore be used as a new post-hoc
step to improve the precision of novel word sense detection in a robust and
reliable way where the underlying framework uses a graph structure. Another
important observation is that even though our proposal is a post-hoc step, it
can be used in isolation and that itself results in a very decent performance
achieving a precision of 0.54-0.62. Finally, we show that our method is able to
detect the well-known historical shifts in 80% cases.
| 2,018 | Computation and Language |
Measuring Similarity: Computationally Reproducing the Scholar's
Interests | Computerized document classification already orders the news articles that
Apple's "News" app or Google's "personalized search" feature groups together to
match a reader's interests. The invisible and therefore illegible decisions
that go into these tailored searches have been the subject of a critique by
scholars who emphasize that our intelligence about documents is only as good as
our ability to understand the criteria of search. This article will attempt to
unpack the procedures used in computational classification of texts,
translating them into terms legible to humanists, and examining opportunities to
render the computational text classification process subject to expert critique
and improvement.
| 2,018 | Computation and Language |
A Neural Multi-Task Learning Framework to Jointly Model Medical Named
Entity Recognition and Normalization | State-of-the-art studies have demonstrated the superiority of joint modelling
over pipeline implementation for medical named entity recognition and
normalization due to the mutual benefits between the two processes. To exploit
these benefits in a more sophisticated way, we propose a novel deep neural
multi-task learning framework with explicit feedback strategies to jointly
model recognition and normalization. On one hand, our method benefits from the
general representations of both tasks provided by multi-task learning. On the
other hand, our method successfully converts hierarchical tasks into a parallel
multi-task setting while maintaining the mutual supports between tasks. Both of
these aspects improve the model performance. Experimental results demonstrate
that our method performs significantly better than state-of-the-art approaches
on two publicly available medical literature datasets.
| 2,018 | Computation and Language |
Coupled Representation Learning for Domains, Intents and Slots in Spoken
Language Understanding | Representation learning is an essential problem in a wide range of
applications and it is important for performing downstream tasks successfully.
In this paper, we propose a new model that learns coupled representations of
domains, intents, and slots by taking advantage of their hierarchical
dependency in a Spoken Language Understanding system. Our proposed model learns
the vector representation of intents based on the slots tied to these intents
by aggregating the representations of the slots. Similarly, the vector
representation of a domain is learned by aggregating the representations of the
intents tied to a specific domain. To the best of our knowledge, it is the
first approach to jointly learning the representations of domains, intents, and
slots using their hierarchical relationships. The experimental results
demonstrate the effectiveness of the representations learned by our model, as
evidenced by improved performance on the contextual cross-domain reranking
task.
| 2,018 | Computation and Language |
Few-shot classification in Named Entity Recognition Task | For many natural language processing (NLP) tasks the amount of annotated data
is limited. This urges a need to apply semi-supervised learning techniques,
such as transfer learning or meta-learning. In this work we tackle Named Entity
Recognition (NER) task using Prototypical Network - a metric learning
technique. It learns intermediate representations of words which cluster well
into named entity classes. This property of the model allows classifying words
with an extremely limited number of training examples, and can potentially be used
as a zero-shot learning method. By coupling this technique with transfer
learning we achieve well-performing classifiers trained on only 20 instances of
a target class.
| 2,018 | Computation and Language |
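A minimal PyTorch sketch of the Prototypical Network step described above: average the embeddings of the few support examples of each entity class into a prototype, then label query words by their nearest prototype. The word encoder is abstracted away here (random vectors stand in for its output), and the Euclidean-distance choice follows the original Prototypical Networks formulation rather than anything stated in this abstract.

```python
import torch

def prototypes(support_emb, support_labels, n_classes):
    """Mean embedding per entity class from a handful of support
    examples -- the class 'prototype'."""
    return torch.stack([support_emb[support_labels == c].mean(0)
                        for c in range(n_classes)])

def classify(query_emb, protos):
    """Label each query word by its nearest prototype (Euclidean
    distance; using squared distance changes nothing for argmin)."""
    return torch.cdist(query_emb, protos).argmin(dim=1)

# Toy run: 6 support words in 2 classes, 3 query words, dim 5.
# Random vectors stand in for the output of a trained word encoder.
support = torch.randn(6, 5)
labels = torch.tensor([0, 0, 0, 1, 1, 1])
print(classify(torch.randn(3, 5), prototypes(support, labels, 2)))
```

With as few as 20 instances of a target class, the support set above is the entire training data for that class, which is what makes the approach attractive in low-resource NER.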
Inter-sentence Relation Extraction for Associating Biological Context
with Events in Biomedical Texts | We present an analysis of the problem of identifying biological context and
associating it with biochemical events in biomedical texts. This constitutes a
non-trivial, inter-sentential relation extraction task. We focus on biological
context as descriptions of the species, tissue type and cell type that are
associated with biochemical events. We describe the properties of an annotated
corpus of context-event relations and present and evaluate several classifiers
for context-event association trained on syntactic, distance and frequency
features.
| 2,018 | Computation and Language |
Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the
Embeddings of Words and Entities from Wikipedia | The embeddings of entities in a large knowledge base (e.g., Wikipedia) are
highly beneficial for solving various natural language tasks that involve real
world knowledge. In this paper, we present Wikipedia2Vec, a Python-based
open-source tool for learning the embeddings of words and entities from
Wikipedia. The proposed tool enables users to learn the embeddings efficiently
by issuing a single command with a Wikipedia dump file as an argument. We also
introduce a web-based demonstration of our tool that allows users to visualize
and explore the learned embeddings. In our experiments, our tool achieved a
state-of-the-art result on the KORE entity relatedness dataset, and competitive
results on various standard benchmark datasets. Furthermore, our tool has been
used as a key component in various recent studies. We publicize the source
code, demonstration, and the pretrained embeddings for 12 languages at
https://wikipedia2vec.github.io.
| 2,020 | Computation and Language |
Siamese Networks for Semantic Pattern Similarity | Semantic Pattern Similarity is an interesting, though not often encountered
NLP task where two sentences are compared not by their specific meaning, but by
their more abstract semantic pattern (e.g., preposition or frame). We utilize
Siamese Networks to model this task, and show its usefulness in determining SQL
patterns for unseen questions in a database-backed question answering scenario.
Our approach achieves high accuracy and contains a built-in proxy for
confidence, which can be used to keep precision arbitrarily high.
| 2,018 | Computation and Language |
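The Siamese setup amounts to one shared encoder applied to both sentences, with a similarity score over the two encodings. A hedged PyTorch sketch, using a bag-of-embeddings encoder as a stand-in for whatever encoder the paper actually uses:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamesePatternNet(nn.Module):
    """One shared encoder applied to both sentences; the similarity
    of the two encodings scores whether they share a semantic
    pattern. The bag-of-embeddings + MLP encoder is a placeholder."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)   # mean of token embeddings
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))

    def encode(self, token_ids):
        return self.proj(self.emb(token_ids))

    def forward(self, sent_a, sent_b):
        return F.cosine_similarity(self.encode(sent_a),
                                    self.encode(sent_b))

net = SiamesePatternNet(vocab=1000)
a = torch.randint(0, 1000, (4, 7))   # 4 sentence pairs, 7 tokens each
b = torch.randint(0, 1000, (4, 7))
print(net(a, b))                     # similarity scores in [-1, 1]
```

The cosine score doubles as the built-in confidence proxy mentioned above: thresholding it trades recall for arbitrarily high precision.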
Conditional BERT Contextual Augmentation | We propose a novel data augmentation method for labeled sentences called
conditional BERT contextual augmentation. Data augmentation methods are often
applied to prevent overfitting and improve generalization of deep neural
network models. Recently proposed contextual augmentation augments labeled
sentences by randomly replacing words with more varied substitutions predicted
by language model. BERT demonstrates that a deep bidirectional language model
is more powerful than either a unidirectional language model or the shallow
concatenation of a forward and backward model. We retrofit BERT to conditional
BERT by introducing a new conditional masked language model\footnote{The term
"conditional masked language model" appeared once in original BERT paper, which
indicates context-conditional, is equivalent to term "masked language model".
In our paper, "conditional masked language model" indicates we apply extra
label-conditional constraint to the "masked language model".} task. The well
trained conditional BERT can be applied to enhance contextual augmentation.
Experiments on six different text classification tasks show that our method can
be easily applied to both convolutional and recurrent neural network classifiers
to obtain clear improvements.
| 2,018 | Computation and Language |
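The core modification is at the input layer: the sentence's label is embedded and added where BERT's segment embedding would normally go, so the masked language model reconstructs tokens conditioned on the class. A simplified sketch of that input layer follows; the dimensions and module structure are illustrative, and a real implementation would patch an existing BERT codebase rather than rebuild it.

```python
import torch
import torch.nn as nn

class ConditionalMLMInputs(nn.Module):
    """Input layer for a conditional masked LM: the segment-embedding
    slot is repurposed as a label embedding, so masked tokens are
    predicted compatibly with the sentence's class."""
    def __init__(self, vocab, n_labels, dim=768, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_len, dim)
        self.label = nn.Embedding(n_labels, dim)  # replaces segment emb

    def forward(self, token_ids, label_id):
        pos_ids = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.tok(token_ids)
                + self.pos(pos_ids)
                + self.label(label_id).unsqueeze(1))

layer = ConditionalMLMInputs(vocab=30522, n_labels=2)
tokens = torch.randint(0, 30522, (4, 16))
labels = torch.tensor([0, 1, 1, 0])
print(layer(tokens, labels).shape)  # torch.Size([4, 16, 768])
```

Augmentation then proceeds as in contextual augmentation: mask words in a labeled sentence and sample label-compatible substitutes from the fine-tuned conditional model.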
A Tutorial on Deep Latent Variable Models of Natural Language | There has been much recent, exciting work on combining the complementary
strengths of latent variable models and deep learning. Latent variable modeling
makes it easy to explicitly specify model constraints through conditional
independence properties, while deep learning makes it possible to parameterize
these conditional likelihoods with powerful function approximators. While these
"deep latent variable" models provide a rich, flexible framework for modeling
many real-world phenomena, difficulties exist: deep parameterizations of
conditional likelihoods usually make posterior inference intractable, and
latent variable objectives often complicate backpropagation by introducing
points of non-differentiability. This tutorial explores these issues in depth
through the lens of variational inference.
| 2,019 | Computation and Language |
Fully Convolutional Speech Recognition | Current state-of-the-art speech recognition systems build on recurrent neural
networks for acoustic and/or language modeling, and rely on feature extraction
pipelines to extract mel-filterbanks or cepstral coefficients. In this paper we
present an alternative approach based solely on convolutional neural networks,
leveraging recent advances in acoustic models from the raw waveform and
language modeling. This fully convolutional approach is trained end-to-end to
predict characters from the raw waveform, removing the feature extraction step
altogether. An external convolutional language model is used to decode words.
On Wall Street Journal, our model matches the current state-of-the-art. On
Librispeech, we report state-of-the-art performance among end-to-end models,
including Deep Speech 2 trained with 12 times more acoustic data and
significantly more linguistic data.
| 2,019 | Computation and Language |
Multi-task learning to improve natural language understanding | Recently advancements in sequence-to-sequence neural network architectures
have led to an improved natural language understanding. When building a neural
network-based Natural Language Understanding component, one main challenge is
to collect enough training data. The generation of a synthetic dataset is an
inexpensive and quick way to collect data. Since this data often has less
variety than real natural language, neural networks often have problems to
generalize to unseen utterances during testing. In this work, we address this
challenge by using multi-task learning. We train on out-of-domain real data
alongside in-domain synthetic data to improve natural language understanding.
We evaluate this approach in the domain of airline travel information with two
synthetic datasets. As out-of-domain real data, we test two datasets based on
the subtitles of movies and series. By using an attention-based encoder-decoder
model, we were able to improve the F1-score over strong baselines from 80.76%
to 84.98% on the smaller synthetic dataset.
| 2,019 | Computation and Language |
From FiLM to Video: Multi-turn Question Answering with Multi-modal
Context | Understanding audio-visual content and the ability to have an informative
conversation about it have both been challenging areas for intelligent systems.
The Audio Visual Scene-aware Dialog (AVSD) challenge, organized as a track of
the Dialog System Technology Challenge 7 (DSTC7), proposes a combined task,
where a system has to answer questions pertaining to a video given a dialogue
with previous question-answer pairs and the video itself. We propose for this
task a hierarchical encoder-decoder model which computes a multi-modal
embedding of the dialogue context. It first embeds the dialogue history using
two LSTMs. We extract video and audio frames at regular intervals and compute
semantic features using pre-trained I3D and VGGish models, respectively. Before
summarizing both modalities into fixed-length vectors using LSTMs, we use FiLM
blocks to condition them on the embeddings of the current question, which
allows us to reduce the dimensionality considerably. Finally, we use an LSTM
decoder that we train with scheduled sampling and evaluate using beam search.
Compared to the modality-fusing baseline model released by the AVSD challenge
organizers, our model achieves relative improvements of more than 16% on BLEU-4
(scoring 0.36) and more than 33% on CIDEr (scoring 0.997).
| 2,018 | Computation and Language |
Learning Private Neural Language Modeling with Attentive Aggregation | Mobile keyboard suggestion is typically regarded as a word-level language
modeling problem. Centralized machine learning techniques require massive
amounts of collected user data to train on, which may impose privacy concerns for sensitive
personal typing data of users. Federated learning (FL) provides a promising
approach to learning private language modeling for intelligent personalized
keyboard suggestion by training models in distributed clients rather than
training in a central server. To obtain a global model for prediction, existing
FL algorithms simply average the client models and ignore the importance of
each client during model aggregation. Furthermore, there is no optimization for
learning a well-generalized global model on the central server. To solve these
problems, we propose a novel model aggregation with the attention mechanism
considering the contribution of client models to the global model, together
with an optimization technique during server aggregation. Our proposed
attentive aggregation method minimizes the weighted distance between the server
model and the client models through iterative parameter updates, attending to
the distance between the server model and each client model. Through experiments on two
popular language modeling datasets and a social media dataset, our proposed
method outperforms its counterparts in terms of perplexity and communication
cost in most settings of comparison.
| 2,020 | Computation and Language |
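A sketch of one aggregation round for a single parameter tensor, under our reading of the abstract: attention weights come from a softmax over the negated distances between the server's weights and each client's, and the server steps along the attention-weighted differences. The softmax-over-distances form and the step size `eps` are illustrative assumptions.

```python
import torch

def attentive_aggregate(server_w, client_ws, eps=1.0):
    """Update one server parameter tensor from client copies.

    Clients whose weights are closer to the server's receive larger
    attention; the server then moves toward the attention-weighted
    average of the clients."""
    dists = torch.stack([(server_w - w).norm() for w in client_ws])
    att = torch.softmax(-dists, dim=0)
    step = sum(a * (server_w - w) for a, w in zip(att, client_ws))
    return server_w - eps * step

server = torch.zeros(5)
clients = [torch.randn(5) for _ in range(3)]
print(attentive_aggregate(server, clients))
```

Contrast this with plain federated averaging, which would weight every client equally (or by data size) regardless of how its model relates to the global one.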
Multiple topic identification in human/human conversations | The paper deals with the automatic analysis of real-life telephone
conversations between an agent and a customer of a customer care service (ccs).
The application domain is the public transportation system in Paris and the
purpose is to collect statistics about customer problems in order to monitor
the service and decide priorities on the intervention for improving user
satisfaction. Of primary importance for the analysis is the detection of themes
that are the object of customer problems. Themes are defined in the application
requirements and are part of the application ontology that is implicit in the
ccs documentation. Due to variety of customer population, the structure of
conversations with an agent is unpredictable. A conversation may be about one
or more themes. Theme mentions can be interleaved with mentions of facts that
are irrelevant for the application purpose. Furthermore, in certain
conversations theme mentions are localized in specific conversation segments
while in other conversations mentions cannot be localized. As a consequence,
approaches to feature extraction with and without mention localization are
considered. Application domain relevant themes identified by an automatic
procedure are expressed by specific sentences whose words are hypothesized by
an automatic speech recognition (asr) system. The asr system is error prone.
The word error rates can be very high for many reasons. Among them it is worth
mentioning unpredictable background noise, speaker accent, and various types of
speech disfluencies. As the application task requires the composition of
proportions of theme mentions, a sequential decision strategy is introduced in
this paper for performing a survey of the large amount of conversations made
available in a given time period. The strategy has to sample the conversations
to form a survey containing enough data analyzed with high accuracy so that
proportions can be estimated with sufficient accuracy. Due to the unpredictable
type of theme mentions, it is appropriate to consider methods for theme
hypothesization based on global as well as local feature extraction. Two
systems based on each type of feature extraction will be considered by the
strategy. One of the four methods is novel. It is based on a new definition of
density of theme mentions and on the localization of high density zones whose
boundaries do not need to be precisely detected. The sequential decision
strategy starts by grouping theme hypotheses into sets of different expected
accuracy and coverage levels. For those sets for which accuracy can be improved
with a consequent increase of coverage a new system with new features is
introduced. Its execution is triggered only when specific preconditions are met
on the hypotheses generated by the basic four systems. Experimental results are
provided on a corpus collected in the call center of the Paris transportation
system known as ratp. The results show that surveys with high accuracy and
coverage can be composed with the proposed strategy and systems. This makes it
possible to apply a previously published proportion estimation approach that
takes into account hypothesization errors.
| 2,015 | Computation and Language |
Attend, Copy, Parse -- End-to-end information extraction from documents | Document information extraction tasks performed by humans create data
consisting of a PDF or document image input, and extracted string outputs. This
end-to-end data is naturally consumed and produced when performing the task
because it is valuable in and of itself. It is naturally available, at no
additional cost. Unfortunately, state-of-the-art word classification methods
for information extraction cannot use this data, instead requiring word-level
labels which are expensive to create and consequently not available for many
real life tasks. In this paper we propose the Attend, Copy, Parse architecture,
a deep neural network model that can be trained directly on end-to-end data,
bypassing the need for word-level labels. We evaluate the proposed architecture
on a large diverse set of invoices, and outperform a state-of-the-art
production system based on word classification. We believe our proposed
architecture can be used on many real life information extraction tasks where
word classification cannot be used due to a lack of the required word-level
labels.
| 2,019 | Computation and Language |
Predicting user intent from search queries using both CNNs and RNNs | Predicting user behaviour on a website is a difficult task, which requires
the integration of multiple sources of information, such as geo-location, user
profile or web surfing history. In this paper we tackle the problem of
predicting the user intent, based on the queries that were used to access a
certain webpage. We make no additional assumptions, such as domain detection,
device used or location, and only use the word information embedded in the
given query. In order to build competitive classifiers, we label a small
fraction of the EDI query intent prediction dataset
\cite{edi-challenge-dataset}, which is used as ground truth. Then, using
various rule-based approaches, we automatically label the rest of the dataset,
train the classifiers and evaluate the quality of the automatic labeling on the
ground truth dataset. We used both recurrent and convolutional networks as the
models, while representing the words in the query with multiple embedding
methods.
| 2,018 | Computation and Language |
Supervised Domain Enablement Attention for Personalized Domain
Classification | In large-scale domain classification for natural language understanding,
leveraging each user's domain enablement information, which refers to the
preferred or authenticated domains by the user, with attention mechanism has
been shown to improve the overall domain classification performance. In this
paper, we propose a supervised enablement attention mechanism, which utilizes
sigmoid activation for the attention weighting so that the attention can be
computed with more expressive power without the weight sum constraint of
softmax attention. The attention weights are explicitly encouraged to be
similar to the corresponding elements of the ground-truth's one-hot vector by
supervised attention, and the attention information of the other enabled
domains is leveraged through self-distillation. By evaluating on the actual
utterances from a large-scale IPDA, we show that our approach significantly
improves domain classification performance.
| 2,018 | Computation and Language |
wav2letter++: The Fastest Open-source Speech Recognition System | This paper introduces wav2letter++, the fastest open-source deep learning
speech recognition framework. wav2letter++ is written entirely in C++, and uses
the ArrayFire tensor library for maximum efficiency. Here we explain the
architecture and design of the wav2letter++ system and compare it to other
major open-source speech recognition systems. In some cases wav2letter++ is
more than 2x faster than other optimized frameworks for training end-to-end
neural networks for speech recognition. We also show that wav2letter++'s
training times scale linearly to 64 GPUs, the highest we tested, for models
with 100 million parameters. High-performance frameworks enable fast iteration,
which is often a crucial factor in successful research and model tuning on new
datasets and tasks.
| 2,020 | Computation and Language |
Streaming Voice Query Recognition using Causal Convolutional Recurrent
Neural Networks | Voice-enabled commercial products are ubiquitous, typically enabled by
lightweight on-device keyword spotting (KWS) and full automatic speech
recognition (ASR) in the cloud. ASR systems require significant computational
resources in training and for inference, not to mention copious amounts of
annotated speech data. KWS systems, on the other hand, are less
resource-intensive but have limited capabilities. On the Comcast Xfinity X1
entertainment platform, we explore a middle ground between ASR and KWS: We
introduce a novel, resource-efficient neural network for voice query
recognition that is much more accurate than state-of-the-art CNNs for KWS, yet
can be easily trained and deployed with limited resources. On an evaluation
dataset representing the top 200 voice queries, we achieve a low false alarm
rate of 1% and a query error rate of 6%. Our model performs inference 8.24x
faster than the current ASR system.
| 2,018 | Computation and Language |
DTMT: A Novel Deep Transition Architecture for Neural Machine
Translation | Past years have witnessed rapid developments in Neural Machine Translation
(NMT). Most recently, with advanced modeling and training techniques, the
RNN-based NMT (RNMT) has shown its potential strength, even compared with the
well-known Transformer (self-attentional) model. Although the RNMT model can
possess very deep architectures through stacking layers, the transition depth
between consecutive hidden states along the sequential axis is still shallow.
In this paper, we further enhance the RNN-based NMT through increasing the
transition depth between consecutive hidden states and build a novel Deep
Transition RNN-based Architecture for Neural Machine Translation, named DTMT.
This model enhances the hidden-to-hidden transition with multiple non-linear
transformations, as well as maintains a linear transformation path throughout
this deep transition by the well-designed linear transformation mechanism to
alleviate the gradient vanishing problem. Experiments show that with the
specially designed deep transition modules, our DTMT can achieve remarkable
improvements on translation quality. Experimental results on Chinese->English
translation task show that DTMT can outperform the Transformer model by +2.09
BLEU points and achieve the best results ever reported in the same dataset. On
WMT14 English->German and English->French translation tasks, DTMT shows
superior quality to the state-of-the-art NMT systems, including the Transformer
and the RNMT+.
| 2,019 | Computation and Language |
Self-Attention: A Better Building Block for Sentiment Analysis Neural
Network Classifiers | Sentiment Analysis has seen much progress in the past two decades. For the
past few years, neural network approaches, primarily RNNs and CNNs, have been
the most successful for this task. Recently, a new category of neural networks,
self-attention networks (SANs), have been created which utilizes the attention
mechanism as the basic building block. Self-attention networks have been shown
to be effective for sequence modeling tasks, while having no recurrence or
convolutions. In this work we explore the effectiveness of the SANs for
sentiment analysis. We demonstrate that SANs are superior in performance to
their RNN and CNN counterparts by comparing their classification accuracy on
six datasets as well as their model characteristics such as training speed and
memory consumption. Finally, we explore the effects of various SAN
modifications such as multi-head attention as well as two methods of
incorporating sequence position information into SANs.
| 2,018 | Computation and Language |
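The basic building block these SANs stack is scaled dot-product attention, shown below in PyTorch with queries, keys and values all taken from the same review representation (hence "self"-attention). This is the standard formulation; the paper's full models add multi-head attention and position information on top of it.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Each position attends to every other position, so sequence
    modeling needs no recurrence or convolution.
    Shapes: (batch, seq, dim)."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(2, 10, 32)                    # embedded review tokens
out = scaled_dot_product_attention(x, x, x)   # self-attention: q = k = v
print(out.shape)                              # torch.Size([2, 10, 32])
```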
Switch-LSTMs for Multi-Criteria Chinese Word Segmentation | Multi-criteria Chinese word segmentation is a promising but challenging task,
which exploits several different segmentation criteria and mines their common
underlying knowledge. In this paper, we propose a flexible multi-criteria
learning for Chinese word segmentation. Usually, a segmentation criterion could
be decomposed into multiple sub-criteria, which are shareable with other
segmentation criteria. The process of word segmentation is a routing among
these sub-criteria. From this perspective, we present Switch-LSTMs to segment
words, which consist of several long short-term memory neural networks (LSTM),
and a switcher to automatically switch the routing among these LSTMs. With
these auto-switched LSTMs, our model provides a more flexible solution for
multi-criteria CWS, which is also easy to transfer the learned knowledge to new
criteria. Experiments show that our model obtains significant improvements on
eight corpora with heterogeneous segmentation criteria, compared to the
previous method and single-criterion learning.
| 2,018 | Computation and Language |
Semantic Frame Parsing for Information Extraction : the CALOR corpus | This paper presents a publicly available corpus of French encyclopedic
history texts annotated according to the Berkeley FrameNet formalism. The main
difference in our approach compared to previous works on semantic parsing with
FrameNet is that we are not interested here in full text parsing but rather on
partial parsing. The goal is to select from the FrameNet resources the minimal
set of frames that are going to be useful for the applicative framework
targeted, in our case Information Extraction from encyclopedic documents. Such
an approach leverages the manual annotation of larger corpora than those
obtained through full text parsing and therefore opens the door to alternative
methods for Frame parsing than those used so far on the FrameNet 1.5 benchmark
corpus. The approaches compared in this study rely on an integrated sequence
labeling model which jointly optimizes frame identification and semantic role
segmentation and identification. The models compared are CRFs and multitask
bi-LSTMs.
| 2,018 | Computation and Language |
FrameNet automatic analysis : a study on a French corpus of encyclopedic
texts | This article presents an automatic frame analysis system evaluated on a
corpus of French encyclopedic history texts annotated according to the FrameNet
formalism. The chosen approach relies on an integrated sequence labeling model
which jointly optimizes frame identification and semantic role segmentation and
identification. The purpose of this study is to analyze the task complexity
from several dimensions. Hence we provide detailed evaluations from a feature
selection point of view and from the data point of view.
| 2,017 | Computation and Language |