Titles (stringlengths 6–220) | Abstracts (stringlengths 37–3.26k) | Years (int64, 1.99k–2.02k) | Categories (stringclasses, 1 value)
---|---|---|---|
A Hierarchical Multi-task Approach for Learning Embeddings from Semantic
Tasks | Much effort has been devoted to evaluating whether multi-task learning can be
leveraged to learn rich representations that can be used in various Natural
Language Processing (NLP) down-stream applications. However, there is still a
lack of understanding of the settings in which multi-task learning has a
significant effect. In this work, we introduce a hierarchical model trained in
a multi-task learning setup on a set of carefully selected semantic tasks. The
model is trained in a hierarchical fashion to introduce an inductive bias by
supervising a set of low-level tasks at the bottom layers of the model and more
complex tasks at the top layers of the model. This model achieves
state-of-the-art results on a number of tasks, namely Named Entity Recognition,
Entity Mention Detection and Relation Extraction without hand-engineered
features or external NLP tools like syntactic parsers. The hierarchical
training supervision induces a set of shared semantic representations at lower
layers of the model. We show that as we move from the bottom to the top layers
of the model, the hidden states of the layers tend to represent more complex
semantic information.
| 2018 | Computation and Language |
Automatic Grammar Augmentation for Robust Voice Command Recognition | This paper proposes a novel pipeline for automatic grammar augmentation that
provides a significant improvement in the voice command recognition accuracy
for systems with a small-footprint acoustic model (AM). The improvement is
achieved by augmenting the user-defined voice command set, also called grammar
set, with alternate grammar expressions. For a given grammar set, a set of
potential grammar expressions (candidate set) for augmentation is constructed
from an AM-specific statistical pronunciation dictionary that captures the
consistent patterns and errors in the decoding of AM induced by variations in
pronunciation, pitch, tempo, accent, ambiguous spellings, and noise conditions.
Using this candidate set, greedy optimization based and cross-entropy-method
(CEM) based algorithms are considered to search for an augmented grammar set
with improved recognition accuracy utilizing a command-specific dataset. Our
experiments show that the proposed pipeline, along with the algorithms considered in
this paper, significantly reduces the mis-detection and mis-classification rates
without increasing the false-alarm rate. Experiments also demonstrate the
consistently superior performance of the CEM-based method over greedy-based algorithms.
| 2018 | Computation and Language |
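To make the greedy search in the entry above concrete, here is a hedged Python sketch of a candidate-selection loop of that kind; `evaluate` is a hypothetical callback (e.g., command recognition accuracy of the small-footprint AM on a command-specific dataset), not the authors' implementation.

```python
# Hypothetical sketch of greedy grammar augmentation; `evaluate` stands in for
# recognition accuracy on a command-specific dataset and is not the authors' code.
def greedy_augment(base_grammar, candidates, evaluate, max_additions=5):
    grammar = list(base_grammar)
    best_score = evaluate(grammar)
    for _ in range(max_additions):
        gains = [(evaluate(grammar + [c]), c) for c in candidates if c not in grammar]
        if not gains:
            break
        score, cand = max(gains, key=lambda g: g[0])
        if score <= best_score:
            break                      # stop once no candidate expression improves accuracy
        grammar.append(cand)
        best_score = score
    return grammar
```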
Exploiting Sentence Embedding for Medical Question Answering | Despite the great success of word embedding, sentence embedding remains a
not-well-solved problem. In this paper, we present a supervised learning
framework to exploit sentence embedding for the medical question answering
task. The learning framework consists of two main parts: 1) a sentence
embedding producing module, and 2) a scoring module. The former is developed
with contextual self-attention and multi-scale techniques to encode a sentence
into an embedding tensor. This module is referred to as Contextual
self-Attention Multi-scale Sentence Embedding (CAMSE). The latter employs two
scoring strategies: Semantic Matching Scoring (SMS) and Semantic Association
Scoring (SAS). SMS measures similarity while SAS captures association between
sentence pairs: a medical question concatenated with a candidate choice, and a
piece of corresponding supportive evidence. The proposed framework is evaluated
on two Medical Question Answering (MedicalQA) datasets which are collected from
real-world applications: medical exam and clinical diagnosis based on
electronic medical records (EMR). The comparison results show that our proposed
framework achieves significant improvements compared to competitive baseline
approaches. Additionally, a series of controlled experiments are also conducted
to illustrate that the multi-scale strategy and the contextual self-attention
layer play important roles for producing effective sentence embedding, and the
two kinds of scoring strategies are highly complementary to each other for
question answering problems.
| 2018 | Computation and Language |
Implementing a Portable Clinical NLP System with a Common Data Model - a
Lisp Perspective | This paper presents a Lisp architecture for a portable NLP system, termed
LAPNLP, for processing clinical notes. LAPNLP integrates multiple standard,
customized and in-house developed NLP tools. Our system facilitates portability
across different institutions and data systems by incorporating an enriched
Common Data Model (CDM) to standardize necessary data elements. It utilizes
UMLS to perform domain adaptation when integrating generic domain NLP tools. It
also features stand-off annotations that are specified by positional reference
to the original document. We built an interval tree based search engine to
efficiently query and retrieve the stand-off annotations by specifying
positional requirements. We also developed a utility to convert an inline
annotation format to stand-off annotations to enable the reuse of clinical text
datasets with inline annotations. We experimented with our system on several
NLP facilitated tasks including computational phenotyping for lymphoma patients
and semantic relation extraction for clinical notes. These experiments
showcased the broader applicability and utility of LAPNLP.
| 2018 | Computation and Language |
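A minimal Python sketch (not LAPNLP's Lisp code) of the positional lookup described in the entry above: stand-off annotations sorted by start offset and queried for overlap with a character span. The annotation tuples and labels are invented placeholders.

```python
# Hypothetical stand-in for an interval-tree-style query over stand-off annotations.
from bisect import bisect_right
from typing import List, Tuple

Annotation = Tuple[int, int, str]  # (start, end, label), end exclusive

class StandoffIndex:
    def __init__(self, annotations: List[Annotation]):
        self.anns = sorted(annotations)              # sort by start offset
        self.starts = [a[0] for a in self.anns]

    def overlapping(self, qstart: int, qend: int) -> List[Annotation]:
        """Return annotations whose [start, end) span overlaps [qstart, qend)."""
        hi = bisect_right(self.starts, qend - 1)     # starts beyond qend cannot overlap
        return [a for a in self.anns[:hi] if a[1] > qstart]

idx = StandoffIndex([(0, 7, "Patient"), (12, 20, "Diagnosis"), (25, 33, "Drug")])
print(idx.overlapping(10, 26))  # -> [(12, 20, 'Diagnosis'), (25, 33, 'Drug')]
```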
Characterizing Design Patterns of EHR-Driven Phenotype Extraction
Algorithms | The automatic development of phenotype algorithms from Electronic Health
Record data with machine learning (ML) techniques is of great interest given
that current practice is very time-consuming and resource-intensive. The
extraction of design patterns from phenotype algorithms is essential to
understand their rationale and standard, with great potential to automate the
development process. In this pilot study, we perform network visualization on
the design patterns and their associations with phenotypes and sites. We
classify design patterns using the fragments from previously annotated
phenotype algorithms as the ground truth. The classification performance is
used as a proxy for coherence at the attribution level. The bag-of-words
representation with knowledge-based features generated a good performance in
the classification task (0.79 macro-F1 score). Good classification accuracy
with simple features demonstrated the attribution coherence and the feasibility
of automatic identification of design patterns. Our results point to both the
feasibility and challenges of automatic identification of phenotyping design
patterns, which would power the automatic development of phenotype algorithms.
| 2018 | Computation and Language |
Combining Axiom Injection and Knowledge Base Completion for Efficient
Natural Language Inference | In logic-based approaches to reasoning tasks such as Recognizing Textual
Entailment (RTE), it is important for a system to have a large amount of
knowledge data. However, there is a tradeoff between adding more knowledge data
for improved RTE performance and maintaining an efficient RTE system, as such a
big database is problematic in terms of the memory usage and computational
complexity. In this work, we show the processing time of a state-of-the-art
logic-based RTE system can be significantly reduced by replacing its
search-based axiom injection (abduction) mechanism with one based on Knowledge
Base Completion (KBC). We integrate this mechanism in a Coq plugin that
provides a proof automation tactic for natural language inference.
Additionally, we show empirically that adding new knowledge data contributes to
better RTE performance while not harming the processing speed in this
framework.
| 2018 | Computation and Language |
Survey of Computational Approaches to Lexical Semantic Change | Our languages are in constant flux driven by external factors such as
cultural, societal and technological changes, as well as by only partially
understood internal motivations. Words acquire new meanings and lose old
senses, new words are coined or borrowed from other languages and obsolete
words slide into obscurity. Understanding the characteristics of shifts in the
meaning and in the use of words is useful for those who work with the content
of historical texts and for the interested general public, but is also valuable in and of itself.
The findings from automatic lexical semantic change detection, and the models
of diachronic conceptual change are currently being incorporated in approaches
for measuring document across-time similarity, information retrieval from
long-term document archives, the design of OCR algorithms, and so on. In recent
years we have seen a surge in interest in the academic community in
computational methods and tools supporting inquiry into diachronic conceptual
change and lexical replacement. This article is an extract of a survey of
recent computational techniques to tackle lexical semantic change currently
under review. In this article we focus on diachronic conceptual change as an
extension of semantic change.
| 2019 | Computation and Language |
End-to-End Learning for Answering Structured Queries Directly over Text | Structured queries expressed in languages (such as SQL, SPARQL, or XQuery)
offer a convenient and explicit way for users to express their information
needs for a number of tasks. In this work, we present an approach to answer
such queries directly over text data without storing results in a database. We
specifically look at the case of knowledge bases where queries are over
entities and the relations between them. Our approach combines distributed
query answering (e.g. Triple Pattern Fragments) with models built for
extractive question answering. Importantly, by applying distributed query
answering we are able to simplify the model learning problem. We train models
for a large portion (572) of the relations within Wikidata and achieve an
average 0.70 F1 measure across all models. We also present a systematic method
to construct the necessary training data for this task from knowledge graphs
and describe a prototype implementation.
| 2019 | Computation and Language |
Effect of data reduction on sequence-to-sequence neural TTS | Recent speech synthesis systems based on sampling from autoregressive neural
network models can generate speech almost indistinguishable from human
recordings. However, these models require large amounts of data. This paper
shows that the lack of data from one speaker can be compensated with data from
other speakers. The naturalness of Tacotron2-like models trained on a blend of
5k utterances from 7 speakers is better than that of speaker-dependent models
trained on 15k utterances, and the multi-speaker models are also consistently
more stable. We also demonstrate that models mixing only 1250 utterances
from a target speaker with 5k utterances from another 6 speakers can produce
significantly better quality than state-of-the-art DNN-guided unit selection
systems trained on more than 10 times the data from the target speaker.
| 2018 | Computation and Language |
On Generality and Knowledge Transferability in Cross-Domain Duplicate
Question Detection for Heterogeneous Community Question Answering | Duplicate question detection is an ongoing challenge in community question
answering because semantically equivalent questions can have significantly
different words and structures. In addition, the identification of duplicate
questions can reduce the resources required for retrieval, when the same
questions are not repeated. This study compares the performance of deep neural
networks and gradient tree boosting, and explores the possibility of domain
adaptation with transfer learning to improve the under-performing target
domains for the text-pair duplicates classification task, using three
heterogeneous datasets: general-purpose Quora, technical Ask Ubuntu, and
academic English Stack Exchange. Ultimately, our study exposes the alternative
hypothesis that the meaning of a "duplicate" is not inherently general-purpose,
but rather is dependent on the domain of learning, hence reducing the chance of
transfer learning through adapting to the domain.
| 2018 | Computation and Language |
Streaming End-to-end Speech Recognition For Mobile Devices | End-to-end (E2E) models, which directly predict output character sequences
given input speech, are good candidates for on-device speech recognition. E2E
models, however, present numerous challenges: In order to be truly useful, such
models must decode speech utterances in a streaming fashion, in real time; they
must be robust to the long tail of use cases; they must be able to leverage
user-specific context (e.g., contact lists); and above all, they must be
extremely accurate. In this work, we describe our efforts at building an E2E
speech recognizer using a recurrent neural network transducer. In experimental
evaluations, we find that the proposed approach can outperform a conventional
CTC-based model in terms of both latency and accuracy in a number of evaluation
categories.
| 2018 | Computation and Language |
Nudging Neural Conversational Model with Domain Knowledge | Neural conversation models are attractive because one can train a model
directly on dialog examples with minimal labeling. With a small amount of data,
however, they often fail to generalize over test data since they tend to
capture spurious features instead of semantically meaningful domain knowledge.
To address this issue, we propose a novel approach that allows human
teachers to transfer their domain knowledge to the conversation model in the
form of natural language rules. We tested our method with three different
dialog datasets. The improved performance across all domains demonstrates the
efficacy of our proposed method.
| 2018 | Computation and Language |
Investigating the Effects of Word Substitution Errors on Sentence
Embeddings | A key initial step in several natural language processing (NLP) tasks
involves embedding phrases of text to vectors of real numbers that preserve
semantic meaning. To that end, several methods have been recently proposed with
impressive results on semantic similarity tasks. However, all of these
approaches assume that perfect transcripts are available when generating the
embeddings. While this is a reasonable assumption for analysis of written text,
it is limiting for analysis of transcribed text. In this paper we investigate
the effects of word substitution errors, such as those coming from automatic
speech recognition (ASR) errors, on several state-of-the-art sentence embedding
methods. To do this, we propose a new simulator that allows the experimenter to
induce ASR-plausible word substitution errors in a corpus at a desired word
error rate. We use this simulator to evaluate the robustness of several
sentence embedding methods. Our results show that pre-trained neural sentence
encoders are both robust to ASR errors and perform well on textual similarity
tasks after errors are introduced. Meanwhile, unweighted averages of word
vectors perform well with perfect transcriptions, but their performance
degrades rapidly on textual similarity tasks for text with word substitution
errors.
| 2019 | Computation and Language |
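For illustration only, a hedged sketch of a word-substitution simulator of the kind the entry above describes; the confusion table and the per-word probability scheme are invented placeholders, not the authors' ASR-derived confusions.

```python
# Hypothetical ASR-plausible substitution simulator; `target_wer` acts here as a
# per-word substitution probability, whereas the authors' tool calibrates to corpus WER.
import random

CONFUSIONS = {"there": ["their", "they're"], "two": ["to", "too"], "meet": ["meat"]}

def corrupt(tokens, target_wer, rng=random.Random(0)):
    out = []
    for tok in tokens:
        if tok in CONFUSIONS and rng.random() < target_wer:
            out.append(rng.choice(CONFUSIONS[tok]))   # induce a substitution error
        else:
            out.append(tok)
    return out

print(corrupt("we will meet there at two".split(), target_wer=0.5))
```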
Mining Entity Synonyms with Efficient Neural Set Generation | Mining entity synonym sets (i.e., sets of terms referring to the same entity)
is an important task for many entity-leveraging applications. Previous work
either ranks terms based on their similarity to a given query term, or treats
the problem as a two-phase task (i.e., detecting synonymy pairs, followed by
organizing these pairs into synonym sets). However, these approaches fail to
model the holistic semantics of a set and suffer from the error propagation
issue. Here we propose a new framework, named SynSetMine, that efficiently
generates entity synonym sets from a given vocabulary, using example sets from
external knowledge bases as distant supervision. SynSetMine consists of two
novel modules: (1) a set-instance classifier that jointly learns how to
represent a permutation invariant synonym set and whether to include a new
instance (i.e., a term) into the set, and (2) a set generation algorithm that
enumerates the vocabulary only once and applies the learned set-instance
classifier to detect all entity synonym sets in it. Experiments on three real
datasets from different domains demonstrate both effectiveness and efficiency
of SynSetMine for mining entity synonym sets.
| 2018 | Computation and Language |
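A hedged sketch of the single-pass set generation idea in the entry above: each vocabulary term either joins the best-matching existing synonym set or starts a new one. `accepts` is a hypothetical stand-in for the learned set-instance classifier, not SynSetMine's actual model.

```python
# Illustrative single-pass synonym-set generation; `accepts(syn_set, term)` is assumed
# to return the set-instance classifier's membership probability for `term`.
def generate_synonym_sets(vocabulary, accepts, threshold=0.5):
    sets = []
    for term in vocabulary:                          # the vocabulary is enumerated only once
        scores = [(accepts(s, term), s) for s in sets]
        best_score, best_set = max(scores, key=lambda x: x[0], default=(0.0, None))
        if best_set is not None and best_score > threshold:
            best_set.add(term)                       # join the best-matching existing set
        else:
            sets.append({term})                      # otherwise start a new singleton set
    return sets
```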
Analyzing Compositionality-Sensitivity of NLI Models | Success in natural language inference (NLI) should require a model to
understand both lexical and compositional semantics. However, through
adversarial evaluation, we find that several state-of-the-art models with
diverse architectures are over-relying on the former and fail to use the
latter. Further, this compositionality unawareness is not reflected via
standard evaluation on current datasets. We show that removing RNNs in existing
models or shuffling input words during training does not induce large
performance loss despite the explicit removal of compositional information.
Therefore, we propose a compositionality-sensitivity testing setup that
analyzes models on natural examples from existing datasets that cannot be
solved via lexical features alone (i.e., on which a bag-of-words model gives a
high probability to one wrong label), hence revealing the models' actual
compositionality awareness. We show that this setup not only highlights the
limited compositional ability of current NLI models, but also differentiates
model performance based on design, e.g., separating shallow bag-of-words models
from deeper, linguistically-grounded tree-based models. Our evaluation setup is
an important analysis tool: complementing currently existing adversarial and
linguistically driven diagnostic evaluations, and exposing opportunities for
future work on evaluating models' compositional understanding.
| 2018 | Computation and Language |
Combining Fact Extraction and Verification with Neural Semantic Matching
Networks | The increasing concern with misinformation has stimulated research efforts on
automatic fact checking. The recently-released FEVER dataset introduced a
benchmark fact-verification task in which a system is asked to verify a claim
using evidential sentences from Wikipedia documents. In this paper, we present
a connected system consisting of three homogeneous neural semantic matching
models that conduct document retrieval, sentence selection, and claim
verification jointly for fact extraction and verification. For evidence
retrieval (document retrieval and sentence selection), unlike traditional
vector space IR models in which queries and sources are matched in some
pre-designed term vector space, we develop neural models to perform deep
semantic matching from raw textual input, assuming no intermediate term
representation and no access to structured external knowledge bases. We also
show that Pageview frequency can further improve the performance of evidence
retrieval, and the retrieved results can later be matched by our neural semantic
matching network. For claim verification, unlike previous approaches that
simply feed upstream retrieved evidence and the claim to a natural language
inference (NLI) model, we further enhance the NLI model by providing it with
internal semantic relatedness scores (hence integrating it with the evidence
retrieval modules) and ontological WordNet features. Experiments on the FEVER
dataset indicate that (1) our neural semantic matching method outperforms
popular TF-IDF and encoder models, by significant margins on all evidence
retrieval metrics, (2) the additional relatedness score and WordNet features
improve the NLI model via better semantic awareness, and (3) by formalizing all
three subtasks as a similar semantic matching problem and improving on all
three stages, the complete model is able to achieve the state-of-the-art
results on the FEVER test set.
| 2018 | Computation and Language |
Using Sentiment Induction to Understand Variation in Gendered Online
Communities | We analyze gendered communities defined in three different ways: text, users,
and sentiment. Differences across these representations reveal facets of
communities' distinctive identities, such as social group, topic, and
attitudes. Two communities may have high text similarity but not user
similarity or vice versa, and word usage also does not vary according to a
clear-cut, binary perspective of gender. Community-specific sentiment lexicons
demonstrate that sentiment can be a useful indicator of words' social meaning
and community values, especially in the context of discussion content and user
demographics. Our results show that social platforms such as Reddit are active
settings for different constructions of gender.
| 2018 | Computation and Language |
Detecting Incongruity Between News Headline and Body Text via a Deep
Hierarchical Encoder | Some news headlines mislead readers with overrated or false information, and
identifying them in advance will better assist readers in choosing proper news
stories to consume. This research introduces million-scale pairs of news
headline and body text dataset with incongruity label, which can uniquely be
utilized for detecting news stories with misleading headlines. On this dataset,
we develop two neural networks with hierarchical architectures that model a
complex textual representation of news articles and measure the incongruity
between the headline and the body text. We also present a data augmentation
method that dramatically reduces the text input size a model handles by
independently investigating each paragraph of news stories, which further
boosts the performance. Our experiments and qualitative evaluations demonstrate
that the proposed methods outperform existing approaches and efficiently detect
news stories with misleading headlines in the real world.
| 2019 | Computation and Language |
An Affect-Rich Neural Conversational Model with Biased Attention and
Weighted Cross-Entropy Loss | Affect conveys important implicit information in human communication. Having
the capability to correctly express affect during human-machine conversations
is one of the major milestones in artificial intelligence. In recent years,
extensive research on open-domain neural conversational models has been
conducted. However, embedding affect into such models is still underexplored.
In this paper, we propose an end-to-end affect-rich open-domain neural
conversational model that produces responses not only appropriate in syntax and
semantics, but also with rich affect. Our model extends the Seq2Seq model and
adopts VAD (Valence, Arousal and Dominance) affective notations to embed each
word with affects. In addition, our model considers the effect of negators and
intensifiers via a novel affective attention mechanism, which biases attention
towards affect-rich words in input sentences. Lastly, we train our model with
an affect-incorporated objective function to encourage the generation of
affect-rich words in the output responses. Evaluations based on both perplexity
and human evaluations show that our model outperforms the state-of-the-art
baseline model of comparable size in producing natural and affect-rich
responses.
| 2018 | Computation and Language |
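A minimal numpy sketch of the biasing idea in the entry above, under stated assumptions: VAD ratings on a 1-9 scale with a neutral point around (5, 1, 5) and an additive bias of strength `beta`. The paper's actual affective attention mechanism is more elaborate.

```python
# Hedged sketch of affect-biased attention: attention logits are shifted by an affect
# strength derived from per-word VAD (Valence, Arousal, Dominance) norms.
import numpy as np

def affect_biased_attention(scores, vad, neutral=(5.0, 1.0, 5.0), beta=0.5):
    """scores: base attention logits, shape (T,); vad: (T, 3) VAD ratings per word."""
    affect_strength = np.linalg.norm(vad - np.asarray(neutral), axis=1)
    biased = scores + beta * affect_strength          # bias toward affect-rich words
    weights = np.exp(biased - biased.max())
    return weights / weights.sum()
```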
Bilingual Dictionary Induction for Bantu Languages | We present a method for learning bilingual translation dictionaries between
English and Bantu languages. We show that exploiting the grammatical structure
common to Bantu languages enables bilingual dictionary induction for languages
where training data is unavailable.
| 2019 | Computation and Language |
Unnamed Entity Recognition of Sense Mentions | We consider the problem of recognizing mentions of human senses in text. Our
contribution is a method for acquiring labeled data, and a learning method that
is trained on this data. Experiments show the effectiveness of our proposed
data labeling approach and our learning model on the task of sense recognition
in text.
| 2019 | Computation and Language |
Sense Perception Common Sense Relationships | Often missing in existing knowledge bases of facts are relationships that
encode common sense knowledge about unnamed entities. In this paper, we propose
to extract novel, common sense relationships pertaining to sense perception
concepts such as sound and smell.
| 2019 | Computation and Language |
Robust cross-domain disfluency detection with pattern match networks | In this paper we introduce a novel pattern match neural network architecture
that uses neighbor similarity scores as features, eliminating the need for
feature engineering in a disfluency detection task. We evaluate the approach in
disfluency detection for four different speech genres, showing that the
approach is as effective as hand-engineered pattern match features when used on
in-domain data and achieves superior performance in cross-domain scenarios.
| 2018 | Computation and Language |
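As an illustration of neighbor-similarity features of the kind mentioned in the entry above (not the authors' architecture), the sketch below computes cosine similarities between each token embedding and its neighbors at a few fixed offsets.

```python
# Hedged sketch: cosine similarity of each token to neighbors at several offsets.
import numpy as np

def neighbor_similarity_features(emb, offsets=(-3, -2, -1, 1, 2, 3)):
    """emb: (T, d) token embeddings. Returns (T, len(offsets)) similarity features."""
    T = emb.shape[0]
    norm = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
    feats = np.zeros((T, len(offsets)))
    for j, off in enumerate(offsets):
        for t in range(T):
            if 0 <= t + off < T:
                feats[t, j] = norm[t] @ norm[t + off]
    return feats
```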
Quantifying Uncertainties in Natural Language Processing Tasks | Reliable uncertainty quantification is a first step towards building
explainable, transparent, and accountable artificial intelligent systems.
Recent progress in Bayesian deep learning has made such quantification
realizable. In this paper, we propose novel methods to study the benefits of
characterizing model and data uncertainties for natural language processing
(NLP) tasks. With empirical experiments on sentiment analysis, named entity
recognition, and language modeling using convolutional and recurrent neural
network models, we show that explicitly modeling uncertainties is not only
necessary for measuring output confidence levels, but also useful for enhancing
model performance in various NLP tasks.
| 2018 | Computation and Language |
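One common Bayesian deep learning recipe consistent with the entry above is Monte Carlo sampling (e.g., MC dropout) with an entropy-based split into data and model uncertainty; the sketch below assumes a hypothetical `stochastic_predict` forward pass and may differ from the paper's exact estimators.

```python
# Hedged sketch of predictive-uncertainty decomposition from stochastic forward passes.
import numpy as np

def predictive_uncertainty(stochastic_predict, x, n_samples=50):
    probs = np.stack([stochastic_predict(x) for _ in range(n_samples)])  # (n_samples, n_classes)
    mean = probs.mean(axis=0)
    total = -(mean * np.log(mean + 1e-12)).sum()                      # predictive entropy
    aleatoric = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()   # expected entropy (data)
    epistemic = total - aleatoric                                     # mutual information (model)
    return mean, epistemic, aleatoric
```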
Neural Multi-Task Learning for Citation Function and Provenance | Citation function and provenance are two cornerstone tasks in citation
analysis. Given a citation, the former task determines its rhetorical role,
while the latter locates the text in the cited paper that contains the relevant
cited information. We hypothesize that these two tasks are synergistically
related, and build a model that validates this claim. For both tasks, we show
that a single-layer convolutional neural network (CNN) outperforms existing
state-of-the-art baselines. More importantly, we show that the two tasks are
indeed synergistic: by jointly training both of the tasks in a multi-task
learning setup, we demonstrate additional performance gains. Altogether, our
models improve the current state of the art by up to 2%, with statistical
significance for both citation function and provenance prediction tasks.
| 2019 | Computation and Language |
Understanding and Measuring Psychological Stress using Social Media | A body of literature has demonstrated that users' mental health conditions,
such as depression and anxiety, can be predicted from their social media
language. There is still a gap in the scientific understanding of how
psychological stress is expressed on social media. Stress is one of the primary
underlying causes and correlates of chronic physical illnesses and mental
health conditions. In this paper, we explore the language of psychological
stress with a dataset of 601 social media users, who answered the Perceived
Stress Scale questionnaire and also consented to share their Facebook and
Twitter data. Firstly, we find that stressed users post about exhaustion,
losing control, increased self-focus and physical pain as compared to posts
about breakfast, family-time, and travel by users who are not stressed.
Secondly, we find that Facebook language is more predictive of stress than
Twitter language. Thirdly, we demonstrate how the language-based models thus
developed can be adapted and scaled to measure county-level trends. Since
county-level language is easily available on Twitter using the Streaming API,
we explore multiple domain adaptation algorithms to adapt user-level Facebook
models to Twitter language. We find that domain-adapted and scaled social
media-based measurements of stress outperform sociodemographic variables (age,
gender, race, education, and income), against ground-truth survey-based stress
measurements, both at the user- and the county-level in the U.S. Twitter
language that scores higher in stress is also predictive of poorer health, less
access to facilities and lower socioeconomic status in counties. We conclude
with a discussion of the implications of using social media as a new tool for
monitoring stress levels of both individuals and counties.
| 2019 | Computation and Language |
A Comparative Analysis of Content-based Geolocation in Blogs and Tweets | The geolocation of online information is an essential component in any
geospatial application. While most of the previous work on geolocation has
focused on Twitter, in this paper we quantify and compare the performance of
text-based geolocation methods on social media data drawn from both Blogger and
Twitter. We introduce a novel set of location specific features that are both
highly informative and easily interpretable, and show that we can achieve error
rate reductions of up to 12.5% with respect to the best previously proposed
geolocation features. We also show that despite posting longer text, Blogger
users are significantly harder to geolocate than Twitter users. Additionally,
we investigate the effect of training and testing on different media
(cross-media predictions), or combining multiple social media sources
(multi-media predictions). Finally, we explore the geolocability of social
media in relation to three user dimensions: state, gender, and industry.
| 2018 | Computation and Language |
Switch-based Active Deep Dyna-Q: Efficient Adaptive Planning for
Task-Completion Dialogue Policy Learning | Training task-completion dialogue agents with reinforcement learning usually
requires a large number of real user experiences. The Dyna-Q algorithm extends
Q-learning by integrating a world model, and thus can effectively boost
training efficiency using simulated experiences generated by the world model.
The effectiveness of Dyna-Q, however, depends on the quality of the world model
- or implicitly, the pre-specified ratio of real vs. simulated experiences used
for Q-learning. To this end, we extend the recently proposed Deep Dyna-Q (DDQ)
framework by integrating a switcher that automatically determines whether to
use a real or simulated experience for Q-learning. Furthermore, we explore the
use of active learning for improving sample efficiency, by encouraging the
world model to generate simulated experiences in the state-action space where
the agent has not (fully) explored. Our results show that by combining the switcher
and active learning, the new framework, named Switch-based Active Deep Dyna-Q
(Switch-DDQ), leads to significant improvements over DDQ and Q-learning
baselines in both simulation and human evaluations.
| 2018 | Computation and Language |
Chat More If You Like: Dynamic Cue Words Planning to Flow Longer
Conversations | Building an open-domain multi-turn conversation system is one of the most
interesting and challenging tasks in Artificial Intelligence. Many research
efforts have been dedicated to building such dialogue systems, yet few shed
light on modeling the conversation flow in an ongoing dialogue. Besides, it is
common for people to talk about highly relevant aspects during a conversation.
And the topics are coherent and drift naturally, which demonstrates the
necessity of dialogue flow modeling. To this end, we present the multi-turn
cue-words driven conversation system with reinforcement learning method (RLCw),
which strives to select an adaptive cue word with the greatest future credit,
and therefore improve the quality of generated responses. We introduce a new
reward to measure the quality of cue words in terms of effectiveness and
relevance. To further optimize the model for long-term conversations, a
reinforcement approach is adopted in this paper. Experiments on a real-life
dataset demonstrate that our model consistently outperforms a set of
competitive baselines in terms of simulated turns, diversity and human
evaluation.
| 2018 | Computation and Language |
Beam Search Decoding using Manner of Articulation Detection Knowledge
Derived from Connectionist Temporal Classification | Manner of articulation detection using deep neural networks requires a priori
knowledge of attribute-discriminative features or decent phoneme
alignments. However, generating an appropriate phoneme alignment is complex and
its performance depends on the choice of optimal number of senones, Gaussians,
etc. In the first part of our work, we exploit the manner of articulation
detection using connectionist temporal classification (CTC) which doesn't need
any phoneme alignment. Later we modify the state-of-the-art character based
posteriors generated by CTC using the manner of articulation CTC detector. Beam
search decoding is performed on the modified posteriors and its impact on open
source datasets such as AN4 and LibriSpeech is observed.
| 2018 | Computation and Language |
The Mafiascum Dataset: A Large Text Corpus for Deception Detection | Detecting deception in natural language has a wide variety of applications,
but because of its hidden nature there are currently no public, large-scale
sources of labeled deceptive text. This work introduces the Mafiascum dataset
[1], a collection of over 700 games of Mafia, in which players are randomly
assigned either deceptive or non-deceptive roles and then interact via forum
postings. Over 9000 documents were compiled from the dataset, each of which
contained all messages written by a single player in a single game. This corpus
was used to construct a set of hand-picked linguistic features based on prior
deception research, as well as a set of average word vectors enriched with
subword information. A logistic regression classifier fit on a combination of
these feature sets achieved an average precision of 0.39 (chance = 0.26) and an
AUROC of 0.68 on 5000+ word documents. On 50+ word documents, an average
precision of 0.29 (chance = 0.23) and an AUROC of 0.59 was achieved.
[1] https://bitbucket.org/bopjesvla/thesis/src
| 2019 | Computation and Language |
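A hedged sketch of the evaluation recipe in the entry above: concatenate hand-picked linguistic features with averaged subword-enriched word vectors and score a logistic regression with average precision and AUROC. The feature matrices below are random placeholders, not the Mafiascum data.

```python
# Illustrative pipeline only; the real features come from deception research and fastText-style vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
linguistic = rng.normal(size=(1000, 20))      # hand-picked stylometric features (placeholder)
word_vecs = rng.normal(size=(1000, 300))      # averaged subword-enriched word vectors (placeholder)
X = np.hstack([linguistic, word_vecs])
y = rng.integers(0, 2, size=1000)             # 1 = deceptive role, 0 = non-deceptive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AP:", average_precision_score(y_te, scores), "AUROC:", roc_auc_score(y_te, scores))
```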
Unsupervised Pseudo-Labeling for Extractive Summarization on Electronic
Health Records | Extractive summarization is very useful for physicians to better manage and
digest Electronic Health Records (EHRs). However, the training of a supervised
model requires disease-specific medical background and is thus very expensive.
We studied how to utilize the intrinsic correlation between multiple EHRs to
generate pseudo-labels and train a supervised model with no external
annotation. Experiments on real-patient data validate that our model is
effective in summarizing crucial disease-specific information for patients.
| 2018 | Computation and Language |
QuaRel: A Dataset and Models for Answering Questions about Qualitative
Relationships | Many natural language questions require recognizing and reasoning with
qualitative relationships (e.g., in science, economics, and medicine), but are
challenging to answer with corpus-based methods. Qualitative modeling provides
tools that support such reasoning, but the semantic parsing task of mapping
questions into those models has formidable challenges. We present QuaRel, a
dataset of diverse story questions involving qualitative relationships that
characterize these challenges, and techniques that begin to address them. The
dataset has 2771 questions relating 19 different types of quantities. For
example, "Jenny observes that the robot vacuum cleaner moves slower on the
living room carpet than on the bedroom carpet. Which carpet has more friction?"
We contribute (1) a simple and flexible conceptual framework for representing
these kinds of questions; (2) the QuaRel dataset, including logical forms,
exemplifying the parsing challenges; and (3) two novel models for this task,
built as extensions of type-constrained semantic parsing. The first of these
models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel.
The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to
handle new qualitative relationships without requiring additional training
data, something not possible with previous models. This work thus makes inroads
into answering complex, qualitative questions that require reasoning, and
scaling to new relationships at low cost. The dataset and models are available
at http://data.allenai.org/quarel.
| 2018 | Computation and Language |
An empirical evaluation of AMR parsing for legal documents | Many approaches have been proposed to tackle the problem of Abstract Meaning
Representation (AMR) parsing, which has recently helped solve various natural
language processing issues. In our paper, we provide an overview of different methods in
AMR parsing and their performance when analyzing legal documents. We conduct
experiments with different AMR parsers on our annotated dataset extracted from
the English version of the Japanese Civil Code. Our results show the limitations of
current parsing techniques and leave room for improvement when they are
applied in this complicated domain.
| 2018 | Computation and Language |
Another Diversity-Promoting Objective Function for Neural Dialogue
Generation | Although generation-based dialogue systems have been widely researched, the
response generations by most existing systems have very low diversities. The
most likely reason for this problem is Maximum Likelihood Estimation (MLE) with
Softmax Cross-Entropy (SCE) loss. MLE trains models to generate the most
frequent responses from enormous generation candidates, although in actual
dialogues there are various responses based on the context. In this paper, we
propose a new objective function called Inverse Token Frequency (ITF) loss,
which individually scales smaller loss for frequent token classes and larger
loss for rare token classes. This function encourages the model to generate
rare tokens rather than frequent tokens. It does not complicate the model and
its training is stable because we only replace the objective function. On the
OpenSubtitles dialogue dataset, our loss model establishes a state-of-the-art
DIST-1 of 7.56, which is the unigram diversity score, while maintaining a good
BLEU-1 score. On a Japanese Twitter replies dataset, our loss model achieves a
DIST-1 score comparable to the ground truth.
| 2018 | Computation and Language |
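A minimal PyTorch sketch of an inverse-token-frequency-style weighting in the spirit of the entry above: rare token classes receive larger per-class weights in the cross-entropy loss. The exact scaling and smoothing used in the paper may differ.

```python
# Hedged sketch of an Inverse Token Frequency (ITF) style loss, not the authors' exact formula.
import torch
import torch.nn.functional as F

def itf_weights(token_counts, smoothing=1.0):
    counts = torch.as_tensor(token_counts, dtype=torch.float)
    return counts.sum() / (counts + smoothing)        # rarer classes get larger weights

def itf_cross_entropy(logits, targets, token_counts):
    """logits: (N, V) decoder outputs; targets: (N,) gold token ids; token_counts: length-V corpus counts."""
    return F.cross_entropy(logits, targets, weight=itf_weights(token_counts))
```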
DeepZip: Lossless Data Compression using Recurrent Neural Networks | Sequential data is being generated at an unprecedented pace in various forms,
including text and genomic data. This creates the need for efficient
compression mechanisms to enable better storage, transmission and processing of
such data. To solve this problem, many of the existing compressors attempt to
learn models for the data and perform prediction-based compression. Since
neural networks are known as universal function approximators with the
capability to learn arbitrarily complex mappings, and in practice show
excellent performance in prediction tasks, we explore and devise methods to
compress sequential data using neural network predictors. We combine recurrent
neural network predictors with an arithmetic coder and losslessly compress a
variety of synthetic, text and genomic datasets. The proposed compressor
outperforms Gzip on the real datasets and achieves near-optimal compression for
the synthetic datasets. The results also help understand why and where neural
networks are good alternatives to traditional finite context models.
| 2018 | Computation and Language |
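To make the predictor-plus-arithmetic-coder idea in the entry above concrete: an arithmetic coder driven by a probabilistic model spends roughly -log2 p(symbol) bits per symbol, so the achievable size equals the model's cumulative cross-entropy. The sketch below uses a unigram stand-in for the RNN predictor so it runs end to end.

```python
# Hedged illustration of prediction-based compression; `predict_next` would be the RNN in DeepZip.
import math
from collections import Counter

def ideal_compressed_bits(sequence, predict_next):
    return sum(-math.log2(predict_next(sequence[:i], s)) for i, s in enumerate(sequence))

data = "abracadabra"
counts = Counter(data)
unigram = lambda context, sym: counts[sym] / len(data)   # stand-in for p(sym | context)
print(f"{ideal_compressed_bits(data, unigram):.1f} bits vs {8 * len(data)} raw bits")
```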
Fading of collective attention shapes the evolution of linguistic
variants | Language change involves the competition between alternative linguistic forms
(1). The spontaneous evolution of these forms typically results in monotonic
growth or decay (2, 3), as in winner-take-all attractor behaviors. In the
case of the Spanish past subjunctive, the spontaneous evolution of its two
competing forms (ended in -ra and -se) was perturbed by the appearance of the
Royal Spanish Academy in 1713, which enforced the spelling of both forms as
perfectly interchangeable variants (4), at a moment in which the -ra form was
dominant (5). Time series extracted from a massive corpus of books (6) reveal
that this regulation in fact produced a transient renewed interest for the old
form -se which, once faded, left the -ra again as the dominant form up to the
present day. We show that time series are successfully explained by a
two-dimensional linear model that integrates an imitative and a novelty
component. The model reveals that the temporal scale over which collective
attention fades is in inverse proportion to the verb frequency. The integration
of the two basic mechanisms of imitation and attention to novelty allows us to
understand diverse competing objects, with lifetimes that range from hours for
memes and news (7, 8) to decades for verbs, suggesting the existence of a
general mechanism underlying cultural evolution.
| 2019 | Computation and Language |
Neural Machine Translation with Adequacy-Oriented Learning | Although Neural Machine Translation (NMT) models have advanced
state-of-the-art performance in machine translation, they face problems such as
inadequate translation. We attribute this to the fact that standard Maximum
Likelihood Estimation (MLE) cannot judge the real translation quality due to
several of its limitations. In this work, we propose an adequacy-oriented learning
mechanism for NMT by casting translation as a stochastic policy in
Reinforcement Learning (RL), where the reward is estimated by explicitly
measuring translation adequacy. Benefiting from the sequence-level training of
RL strategy and a more accurate reward designed specifically for translation,
our model outperforms multiple strong baselines, including (1) standard and
coverage-augmented attention models with MLE-based training, and (2) advanced
reinforcement and adversarial training strategies with rewards based on both
word-level BLEU and character-level chrF3. Quantitative and qualitative
analyses on different language pairs and NMT architectures demonstrate the
effectiveness and universality of the proposed approach.
| 2018 | Computation and Language |
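A hedged PyTorch sketch of the sequence-level policy-gradient form such training typically takes (a REINFORCE-style update with a baseline); the paper's adequacy reward, baseline, and variance-reduction details are not reproduced here.

```python
# Illustrative policy-gradient loss for a sampled translation scored by an adequacy reward.
import torch

def adequacy_policy_gradient_loss(log_probs, sampled_tokens, reward, baseline=0.0):
    """log_probs: (T, V) token log-probabilities of the sampled translation,
    sampled_tokens: (T,) sampled token ids, reward: scalar adequacy score for the sample."""
    token_logp = log_probs.gather(1, sampled_tokens.unsqueeze(1)).squeeze(1)
    return -(reward - baseline) * token_logp.sum()
```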
Contextualized Non-local Neural Networks for Sequence Learning | Recently, a large number of neural mechanisms and models have been proposed
for sequence learning, of which self-attention, as exemplified by the
Transformer model, and graph neural networks (GNNs) have attracted much
attention. In this paper, we propose an approach that combines and draws on the
complementary strengths of these two methods. Specifically, we propose
contextualized non-local neural networks (CN$^{\textbf{3}}$), which can both
dynamically construct a task-specific structure of a sentence and leverage rich
local dependencies within a particular neighborhood.
Experimental results on ten NLP tasks in text classification, semantic
matching, and sequence labeling show that our proposed model outperforms
competitive baselines and discovers task-specific dependency structures, thus
providing better interpretability to users.
| 2018 | Computation and Language |
Neural Collective Entity Linking | Entity Linking aims to link entity mentions in texts to knowledge bases, and
neural models have achieved recent success in this task. However, most existing
methods rely on local contexts to resolve entities independently, which may
usually fail due to the data sparsity of local information. To address this
issue, we propose a novel neural model for collective entity linking, named as
NCEL. NCEL applies Graph Convolutional Network to integrate both local
contextual features and global coherence information for entity linking. To
improve the computation efficiency, we approximately perform graph convolution
on a subgraph of adjacent entity mentions instead of those in the entire text.
We further introduce an attention scheme to improve the robustness of NCEL to
data noise and train the model on Wikipedia hyperlinks to avoid overfitting and
domain bias. In experiments, we evaluate NCEL on five publicly available
datasets to verify the linking performance as well as generalization ability.
We also conduct an extensive analysis of time complexity, the impact of key
modules, and qualitative results, which demonstrate the effectiveness and
efficiency of our proposed method.
| 2018 | Computation and Language |
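For reference, a standard symmetric-normalized graph-convolution step of the kind the entry above builds on, sketched in numpy over a subgraph of adjacent mentions; NCEL's full model adds attention and other components not shown here.

```python
# Minimal GCN propagation step; placeholders only, not the NCEL implementation.
import numpy as np

def gcn_layer(H, A, W):
    """H: (n, d) candidate features, A: (n, n) adjacency over adjacent mentions, W: (d, d') weights."""
    A_hat = A + np.eye(A.shape[0])                              # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)   # ReLU activation
```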
Convolutional Spatial Attention Model for Reading Comprehension with
Multiple-Choice Questions | Machine Reading Comprehension (MRC) with multiple-choice questions requires
the machine to read a given passage and select the correct answer among several
candidates. In this paper, we propose a novel approach called Convolutional
Spatial Attention (CSA) model which can better handle the MRC with
multiple-choice questions. The proposed model could fully extract the mutual
information among the passage, question, and the candidates, to form the
enriched representations. Furthermore, to merge various attention results, we
propose to use convolutional operation to dynamically summarize the attention
values within the different size of regions. Experimental results show that the
proposed model could give substantial improvements over various
state-of-the-art systems on both RACE and SemEval-2018 Task11 datasets.
| 2019 | Computation and Language |
Multi Task Deep Morphological Analyzer: Context Aware Joint
Morphological Tagging and Lemma Prediction | The ambiguities introduced by the recombination of morphemes constructing
several possible inflections for a word make the prediction of syntactic
traits in Morphologically Rich Languages (MRLs) a notoriously complicated task.
We propose the Multi Task Deep Morphological analyzer (MT-DMA), a
character-level neural morphological analyzer based on multitask learning of
word-level tag markers for Hindi and Urdu. MT-DMA predicts a set of six
morphological tags for words of Indo-Aryan languages: Parts-of-speech (POS),
Gender (G), Number (N), Person (P), Case (C), Tense-Aspect-Modality (TAM)
marker as well as the Lemma (L) by jointly learning all these in one trainable
framework. We show the effectiveness of training of such deep neural networks
by the simultaneous optimization of multiple loss functions and sharing of
initial parameters for context-aware morphological analysis. Exploiting
character-level features in phonological space optimized for each tag using
a multi-objective genetic algorithm, our model establishes a new state-of-the-art
accuracy score upon all seven of the tasks for both the languages. MT-DMA is
publicly accessible: code, models and data are available at
https://github.com/Saurav0074/morph_analyzer.
| 2019 | Computation and Language |
The Best of Both Worlds: Lexical Resources To Improve Low-Resource
Part-of-Speech Tagging | In natural language processing, the deep learning revolution has shifted the
focus from conventional hand-crafted symbolic representations to dense inputs,
which are adequate representations learned automatically from corpora. However,
particularly when working with low-resource languages, small amounts of
symbolic lexical resources such as user-generated lexicons are often available
even when gold-standard corpora are not. Such additional linguistic information
is, however, often neglected, and recent neural approaches to cross-lingual
tagging typically rely only on word and subword embeddings. While these
representations are effective, our recent work has shown clear benefits of
combining the best of both worlds: integrating conventional lexical information
improves neural cross-lingual part-of-speech (PoS) tagging. However, little is
known on how complementary such additional information is, and to what extent
improvements depend on the coverage and quality of these external resources.
This paper seeks to fill this gap by providing the first thorough analysis on
the contributions of lexical resources for cross-lingual PoS tagging in neural
times.
| 2018 | Computation and Language |
Learning cross-lingual phonological and orthographic adaptations: a case
study in improving neural machine translation between low-resource languages | Out-of-vocabulary (OOV) words can pose serious challenges for machine
translation (MT) tasks, and in particular, for low-resource language (LRL)
pairs, i.e., language pairs for which few or no parallel corpora exist. Our
work adapts variants of seq2seq models to perform transduction of such words
from Hindi to Bhojpuri (an LRL instance), learning from a set of cognate pairs
built from a bilingual dictionary of Hindi--Bhojpuri words. We demonstrate that
our models can be effectively used for language pairs that have limited
parallel corpora; our models work at the character level to grasp phonetic and
orthographic similarities across multiple types of word adaptations, whether
synchronic or diachronic, loan words or cognates. We describe the training
aspects of several character level NMT systems that we adapted to this task and
characterize their typical errors. Our method improves BLEU score by 6.3 on the
Hindi-to-Bhojpuri translation task. Further, we show that such transductions
can generalize well to other languages by applying it successfully to Hindi --
Bangla cognate pairs. Our work can be seen as an important step in the process
of: (i) resolving the OOV words problem arising in MT tasks, (ii) creating
effective parallel corpora for resource-constrained languages, and (iii)
leveraging the enhanced semantic knowledge captured by word-level embeddings to
perform character-level tasks.
| 2021 | Computation and Language |
Resource Mention Extraction for MOOC Discussion Forums | In discussions hosted on discussion forums for MOOCs, references to online
learning resources are often of central importance. They contextualize the
discussion, anchoring the discussion participants' presentation of the issues
and their understanding. However, they are usually mentioned in free text,
without appropriate hyperlinking to their associated resource. Automated
learning resource mention hyperlinking and categorization will facilitate
discussion and searching within MOOC forums, and also benefit the
contextualization of such resources across disparate views. We propose the
novel problem of learning resource mention identification in MOOC forums. As
this is a novel task with no publicly available data, we first contribute a
large-scale labeled dataset, dubbed the Forum Resource Mention (FoRM) dataset,
to facilitate our current research and future research on this task. We then
formulate this task as a sequence tagging problem and investigate solution
architectures to address the problem. Importantly, we identify two major
challenges that hinder the application of sequence tagging models to the task:
(1) the diversity of resource mention expression, and (2) long-range contextual
dependencies. We address these challenges by incorporating character-level and
thread context information into an LSTM-CRF model. First, we incorporate a
character encoder to address the out-of-vocabulary problem caused by the
diversity of mention expressions. Second, to address the context dependency
challenge, we encode thread contexts using an RNN-based context encoder, and
apply the attention mechanism to selectively leverage useful context
information during sequence tagging. Experiments on FoRM show that the proposed
method improves the baseline deep sequence tagging models notably,
significantly bettering performance on instances that exemplify the two
challenges.
| 2018 | Computation and Language |
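A hedged sketch of the context-attention component described in the entry above: token states attend over encoded thread-context vectors, and the attended summary is concatenated with each token representation before the CRF layer. The real model's encoders and parameterization differ.

```python
# Illustrative dot-product attention over thread context; stand-in for the paper's context encoder.
import torch
import torch.nn.functional as F

def attend_thread_context(token_states, context_states):
    """token_states: (T, d), context_states: (C, d). Returns (T, 2d) augmented features."""
    scores = token_states @ context_states.t()               # (T, C) attention logits
    weights = F.softmax(scores, dim=-1)
    summary = weights @ context_states                        # (T, d) attended context summary
    return torch.cat([token_states, summary], dim=-1)
```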
AutoSense Model for Word Sense Induction | Word sense induction (WSI), or the task of automatically discovering multiple
senses or meanings of a word, has three main challenges: domain adaptability,
novel sense detection, and sense granularity flexibility. While current latent
variable models are known to solve the first two challenges, they are not
flexible to different word sense granularities, which differ very much among
words, from aardvark with one sense, to play with over 50 senses. Current
models either require hyperparameter tuning or nonparametric induction of the
number of senses, which we find both to be ineffective. Thus, we aim to
eliminate these requirements and solve the sense granularity problem by
proposing AutoSense, a latent variable model based on two observations: (1)
senses are represented as a distribution over topics, and (2) senses generate
pairings between the target word and its neighboring word. These observations
alleviate the problem by (a) discarding garbage senses and (b) additionally
inducing fine-grained word senses. Results show great improvements over the
state-of-the-art models on popular WSI datasets. We also show that AutoSense is
able to learn the appropriate sense granularity of a word. Finally, we apply
AutoSense to the unsupervised author name disambiguation task where the sense
granularity problem is more evident and show that AutoSense is clearly better
than competing models. We share our data and code here:
https://github.com/rktamplayo/AutoSense.
| 2018 | Computation and Language |
Learning to Discover, Ground and Use Words with Segmental Neural
Language Models | We propose a segmental neural language model that combines the generalization
power of neural networks with the ability to discover word-like units that are
latent in unsegmented character sequences. In contrast to previous segmentation
models that treat word segmentation as an isolated task, our model unifies word
discovery, learning how words fit together to form sentences, and, by
conditioning the model on visual context, how words' meanings ground in
representations of non-linguistic modalities. Experiments show that the
unconditional model learns predictive distributions better than character LSTM
models, discovers words competitively with nonparametric Bayesian word
segmentation models, and that modeling language conditional on visual context
improves performance on both.
| 2019 | Computation and Language |
Words Can Shift: Dynamically Adjusting Word Representations Using
Nonverbal Behaviors | Humans convey their intentions through the usage of both verbal and nonverbal
behaviors during face-to-face communication. Speaker intentions often vary
dynamically depending on different nonverbal contexts, such as vocal patterns
and facial expressions. As a result, when modeling human language, it is
essential to not only consider the literal meaning of the words but also the
nonverbal contexts in which these words appear. To better model human language,
we first model expressive nonverbal representations by analyzing the
fine-grained visual and acoustic patterns that occur during word segments. In
addition, we seek to capture the dynamic nature of nonverbal intents by
shifting word representations based on the accompanying nonverbal behaviors. To
this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN)
that models the fine-grained structure of nonverbal subword sequences and
dynamically shifts word representations based on nonverbal cues. Our proposed
model achieves competitive performance on two publicly available datasets for
multimodal sentiment analysis and emotion recognition. We also visualize the
shifted word representations in different nonverbal contexts and summarize
common patterns regarding multimodal variations of word representations.
| 2018 | Computation and Language |
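A minimal PyTorch sketch of the representation-shifting idea in the entry above: a gate computed from the word and nonverbal features scales a nonverbal shift vector that is added to the word embedding. RAVEN's actual subword-level modeling is richer than this stand-in.

```python
# Illustrative nonverbal shift of a word representation; layer sizes are placeholders.
import torch
import torch.nn as nn

class NonverbalShift(nn.Module):
    def __init__(self, word_dim, nonverbal_dim):
        super().__init__()
        self.gate = nn.Linear(word_dim + nonverbal_dim, word_dim)
        self.shift = nn.Linear(nonverbal_dim, word_dim)

    def forward(self, word_emb, nonverbal):            # word_emb: (word_dim,), nonverbal: (nonverbal_dim,)
        g = torch.sigmoid(self.gate(torch.cat([word_emb, nonverbal])))
        return word_emb + g * self.shift(nonverbal)     # dynamically shifted representation
```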
Learning pronunciation from a foreign language in speech synthesis
networks | Although there are more than 6,500 languages in the world, the pronunciations
of many phonemes sound similar across the languages. When people learn a
foreign language, their pronunciation often reflects their native language's
characteristics. This motivates us to investigate how the speech synthesis
network learns the pronunciation from datasets from different languages. In
this study, we are interested in analyzing and taking advantage of a multilingual
speech synthesis network. First, we train the speech synthesis network
bilingually in English and Korean and analyze how the network learns the
relations of phoneme pronunciation between the languages. Our experimental
result shows that the learned phoneme embedding vectors are located closer if
their pronunciations are similar across the languages. Consequently, the
trained networks can synthesize the English speakers' Korean speech and vice
versa. Using this result, we propose a training framework to utilize
information from a different language. To be specific, we pre-train a speech
synthesis network using datasets from both a high-resource language and a
low-resource language, and then fine-tune the network using the low-resource
language dataset. Finally, we conduct further simulations on 10 different
languages to show that the approach extends to other languages.
| 2,020 | Computation and Language |
Fine Grained Classification of Personal Data Entities | Entity Type Classification can be defined as the task of assigning category
labels to entity mentions in documents. While neural networks have recently
improved the classification of general entity mentions, pattern matching and
other systems continue to be used for classifying personal data entities (e.g.
classifying an organization as a media company or a government institution for
GDPR, and HIPAA compliance). We propose a neural model to expand the class of
personal data entities that can be classified at a fine grained level, using
the output of existing pattern matching systems as additional contextual
features. We introduce new resources, a personal data entities hierarchy with
134 types, and two datasets from the Wikipedia pages of elected representatives
and Enron emails. We hope these resources will aid research in the area of
personal data discovery, and to that effect, we provide baseline results on
these datasets, and compare our method with state-of-the-art models on the
OntoNotes dataset.
| 2,018 | Computation and Language |
Explicit Interaction Model towards Text Classification | Text classification is one of the fundamental tasks in natural language
processing. Recently, deep neural networks have achieved promising performance
in the text classification task compared to shallow models. Despite their
significance, deep models ignore fine-grained classification clues (matching
signals between words and classes), since their classifications mainly rely on
text-level representations. To address this problem, we
introduce the interaction mechanism to incorporate word-level matching signals
into the text classification task. In particular, we design a novel framework,
EXplicit interAction Model (dubbed as EXAM), equipped with the interaction
mechanism. We justify the proposed approach on several benchmark datasets
including both multi-label and multi-class text classification tasks. Extensive
experimental results demonstrate the superiority of the proposed method. As a
byproduct, we have released the code and parameter settings to facilitate
further research.
| 2,019 | Computation and Language |
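Note: the word-level interaction idea above can be illustrated with a minimal sketch (not the authors' EXAM implementation; the names, dimensions, and the mean aggregation are assumptions):

```python
import numpy as np

def exam_style_scores(word_reprs, class_embs):
    """Illustrative word-class interaction scoring (hypothetical, simplified).

    word_reprs: (seq_len, dim) contextual word representations for one document.
    class_embs: (num_classes, dim) class embeddings.
    Returns one score per class by aggregating word-level matching signals.
    """
    # Word-level matching signals: one score per (word, class) pair.
    interaction = word_reprs @ class_embs.T          # (seq_len, num_classes)
    # Simple aggregation of the fine-grained signals; the paper uses a learned aggregator.
    return interaction.mean(axis=0)                  # (num_classes,)

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
scores = exam_style_scores(rng.normal(size=(12, 64)), rng.normal(size=(5, 64)))
print(scores.argmax())  # index of the highest-scoring class
```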
Natural language understanding for task oriented dialog in the
biomedical domain in a low resources context | In the biomedical domain, the lack of sharable datasets often limits the
possibility of developing natural language processing systems, especially
dialogue applications and natural language understanding models. To overcome
this issue, we explore data generation using templates and terminologies and
data augmentation approaches. Namely, we report our experiments using
paraphrasing and word representations learned on a large EHR corpus with
Fasttext and ELMo, to learn an NLU model without any available dataset. We
evaluate on an NLU task of natural language queries in EHRs, divided into
slot-filling and intent classification sub-tasks. On the slot-filling task, we
obtain an F-score of 0.76 with the ELMo representation; and on the
classification task, a mean F-score of 0.71. Our results show that this method
could be used to develop a baseline system.
| 2,018 | Computation and Language |
A Hierarchical Neural Network for Sequence-to-Sequences Learning | In recent years, the sequence-to-sequence learning neural networks with
attention mechanism have achieved great progress. However, there are still
challenges, especially for Neural Machine Translation (NMT), such as lower
translation quality on long sentences. In this paper, we present a hierarchical
deep neural network architecture to improve the quality of long sentences
translation. The proposed network embeds sequence-to-sequence neural networks
into a two-level category hierarchy by following the coarse-to-fine paradigm.
Long sentences are split into shorter sequences, which the coarse category
network can process well because the long-distance dependencies of short
sentences can be handled by a sequence-to-sequence network. The outputs are then
concatenated and corrected by the fine category network. The experiments show
that our method achieves superior results, with higher BLEU (Bilingual
Evaluation Understudy) scores, lower perplexity, and better performance in
imitating expression style and word usage than traditional networks.
| 2,018 | Computation and Language |
Estimation of Inter-Sentiment Correlations Employing Deep Neural Network
Models | This paper focuses on sentiment mining and sentiment correlation analysis of
web events. Although neural network models have contributed a lot to mining
text information, little attention is paid to analysis of the inter-sentiment
correlations. This paper fills the gap between sentiment calculation and
inter-sentiment correlations. In this paper, the social emotion is divided into
six categories: love, joy, anger, sadness, fear, and surprise. Two deep neural
network models are presented for sentiment calculation. Three datasets - the
titles, the bodies, the comments of news articles - are collected, covering
both objective and subjective texts in varying lengths (long and short). From
each dataset, three kinds of features are extracted: explicit expression,
implicit expression, and alphabet characters. The performance of the two models
is analyzed with respect to each of the three kinds of features. There is a
controversial phenomenon in the interpretation of anger (fn) and love (gd). In
subjective text, other emotions are easily mistaken for anger. By contrast, in
objective news bodies and titles, text is easily regarded as expressing love
(gd). This suggests that journalists may intend to arouse the emotion of love
when writing news, but instead cause anger once the news is published. This
result reflects the complexity and unpredictability of sentiment.
| 2,018 | Computation and Language |
Strategy of the Negative Sampling for Training Retrieval-Based Dialogue
Systems | The article describes a new approach for improving the quality of automated
dialogue systems for customer support. The analysis in the paper demonstrates
the dependency of retrieval-based dialogue system quality on the choice of
negative responses. The proposed approach
implies choosing the negative samples according to the distribution of
responses in the train set. In this implementation the negative samples are
randomly chosen from the original response distribution and from the
"artificial" distribution of negative responses, such as uniform distribution
or the distribution obtained by transformation of the original one. The results
obtained for the implemented systems and reported in this paper confirm a
significant improvement in automated dialogue system quality when the negative
responses are drawn from the transformed distribution.
| 2,018 | Computation and Language |
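Note: a minimal sketch of the described negative-sampling strategy, assuming a uniform distribution stands in for the transformed one and all names and data are invented for illustration:

```python
import random
from collections import Counter

def sample_negatives(responses, k, mix=0.5, rng=random):
    """Illustrative negative sampling for retrieval-based dialogue training.

    With probability `mix` a negative is drawn from the empirical response
    distribution of the training set; otherwise it is drawn uniformly, standing
    in for the "transformed" distribution discussed in the abstract.
    """
    counts = Counter(responses)
    population, weights = zip(*counts.items())
    negatives = []
    for _ in range(k):
        if rng.random() < mix:
            negatives.append(rng.choices(population, weights=weights, k=1)[0])
        else:
            negatives.append(rng.choice(population))
    return negatives

train_responses = ["thanks", "thanks", "thanks", "please hold", "goodbye"]
print(sample_negatives(train_responses, k=4))
```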
Recurrently Controlled Recurrent Networks | Recurrent neural networks (RNNs) such as long short-term memory and gated
recurrent units are pivotal building blocks across a broad spectrum of sequence
modeling problems. This paper proposes a recurrently controlled recurrent
network (RCRN) for expressive and powerful sequence encoding. More concretely,
the key idea behind our approach is to learn the recurrent gating functions
using recurrent networks. Our architecture is split into two components - a
controller cell and a listener cell whereby the recurrent controller actively
influences the compositionality of the listener cell. We conduct extensive
experiments on a myriad of tasks in the NLP domain such as sentiment analysis
(SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment
classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading
comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate
that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs,
suggesting that our controller architecture might be a suitable replacement for
the widely adopted stacked architecture.
| 2,018 | Computation and Language |
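Note: a simplified PyTorch-style sketch of the controller/listener idea (an illustrative approximation, not the paper's exact cell equations):

```python
import torch
import torch.nn as nn

class SimpleRCRN(nn.Module):
    """Minimal sketch of a recurrently controlled recurrent network.

    A controller RNN reads the sequence and emits gate vectors; the listener's
    states are then composed under those gates. Illustrative simplification only.
    """
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.controller = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.listener = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.gate_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):                       # x: (batch, seq_len, input_dim)
        ctrl_states, _ = self.controller(x)     # recurrent gating signals
        listen_states, _ = self.listener(x)     # candidate compositions
        gates = torch.sigmoid(self.gate_proj(ctrl_states))
        return gates * listen_states            # controller modulates the listener

encoder = SimpleRCRN(input_dim=32, hidden_dim=64)
print(encoder(torch.randn(2, 10, 32)).shape)    # torch.Size([2, 10, 64])
```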
LSICC: A Large Scale Informal Chinese Corpus | Deep learning based natural language processing model is proven powerful, but
need large-scale dataset. Due to the significant gap between the real-world
tasks and existing Chinese corpus, in this paper, we introduce a large-scale
corpus of informal Chinese. This corpus contains around 37 million book reviews
and 50 thousand netizen's comments to the news. We explore the informal words
frequencies of the corpus and show the difference between our corpus and the
existing ones. The corpus can be further used to train deep learning based
natural language processing tasks such as Chinese word segmentation, sentiment
analysis.
| 2,018 | Computation and Language |
Improving Gated Recurrent Unit Based Acoustic Modeling with Batch
Normalization and Enlarged Context | The use of future contextual information is typically shown to be helpful for
acoustic modeling. Recently, we proposed an RNN model called minimal gated
recurrent unit with input projection (mGRUIP), in which a context module,
namely temporal convolution, is specifically designed to model the future
context. This model, mGRUIP with context module (mGRUIP-Ctx), has been shown to
be able to utilize the future context effectively while keeping model latency
and computation cost quite low. In this paper, we continue to improve mGRUIP-Ctx
with two revisions: applying BN methods and enlarging model context.
Experimental results on two Mandarin ASR tasks (8400 hours and 60K hours) show
that the revised mGRUIP-Ctx outperforms LSTM by a large margin (11% to 38%).
It even performs slightly better than a superior BLSTM on the 8400h task, with
33M fewer parameters and just 290ms model latency.
| 2,018 | Computation and Language |
Implanting Rational Knowledge into Distributed Representation at
Morpheme Level | Previously, researchers paid no attention to the creation of unambiguous
morpheme embeddings independent of the corpus, even though such information plays
an important role in expressing the exact meanings of words for parataxis
languages like Chinese. In this paper, after constructing the Chinese lexical
and semantic ontology based on word-formation, we propose a novel approach to
implanting the structured rational knowledge into distributed representation at
morpheme level, naturally avoiding heavy disambiguation in the corpus. We
design a template to create the instances as pseudo-sentences merely from the
pieces of knowledge of morphemes built in the lexicon. To exploit hierarchical
information and tackle the data sparseness problem, the instance proliferation
technique is applied based on similarity to expand the collection of
pseudo-sentences. The distributed representation for morphemes can then be
trained on these pseudo-sentences using word2vec. For evaluation, we validate
the paradigmatic and syntagmatic relations of morpheme embeddings, and apply
the obtained embeddings to word similarity measurement, achieving significant
improvements over the classical models by more than 5 Spearman scores or 8
percentage points, which shows very promising prospects for adoption of the new
source of knowledge.
| 2,018 | Computation and Language |
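Note: a minimal sketch of training morpheme embeddings on template-generated pseudo-sentences with word2vec (gensim); the lexicon contents are invented placeholders, not the paper's ontology:

```python
from gensim.models import Word2Vec

# Hypothetical morpheme-knowledge entries: each morpheme paired with related
# concepts from a hand-built lexicon (contents invented for illustration).
lexicon = {
    "氵": ["water", "liquid", "river"],
    "木": ["tree", "wood", "plant"],
}

# Template-style pseudo-sentences built purely from lexicon knowledge, so no
# corpus-level disambiguation is needed.
pseudo_sentences = [[morpheme] + related for morpheme, related in lexicon.items()]

# Train distributed morpheme representations on the pseudo-sentences (gensim >= 4).
model = Word2Vec(sentences=pseudo_sentences, vector_size=50, window=5,
                 min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("氵", topn=2))
```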
Multi-task Learning over Graph Structures | We present two architectures for multi-task learning with neural sequence
models. Our approach allows the relationships between different tasks to be
learned dynamically, rather than using an ad-hoc pre-defined structure as in
previous work. We adopt the idea from message-passing graph neural networks and
propose a general \textbf{graph multi-task learning} framework in which
different tasks can communicate with each other in an effective and
interpretable way. We conduct extensive experiments in text classification and
sequence labeling to evaluate our approach on multi-task learning and transfer
learning. The empirical results show that our models not only outperform
competitive baselines but also learn interpretable and transferable patterns
across tasks.
| 2,018 | Computation and Language |
A Rule-based Kurdish Text Transliteration System | In this article, we present a rule-based approach for transliterating the two
most widely used orthographies of Sorani Kurdish. Our work consists of detecting a
character in a word by removing the possible ambiguities and mapping it into
the target orthography. We describe different challenges in Kurdish text mining
and propose novel ideas concerning the transliteration task for Sorani Kurdish.
Our transliteration system, named Wergor, achieves 82.79% overall precision and
more than 99% in detecting the double-usage characters. We also present a
manually transliterated corpus for Kurdish.
| 2,018 | Computation and Language |
Combining neural and knowledge-based approaches to Named Entity
Recognition in Polish | Named entity recognition (NER) is one of the tasks in natural language
processing that can greatly benefit from the use of external knowledge sources.
We propose a named entity recognition framework composed of knowledge-based
feature extractors and a deep learning model including contextual word
embeddings, long short-term memory (LSTM) layers and conditional random fields
(CRF) inference layer. We use an entity linking module to integrate our system
with Wikipedia. The combination of effective neural architecture and external
resources allows us to obtain state-of-the-art results on recognition of Polish
proper names. We evaluate our model on data from PolEval 2018 NER challenge on
which it outperforms other methods, reducing the error rate by 22.4% compared
to the winning solution. Our work shows that combining a neural NER model and an
entity linking model with a knowledge base is more effective in recognizing
named entities than using the NER model alone.
| 2,019 | Computation and Language |
Creating a contemporary corpus of similes in Serbian by using natural
language processing | A simile is a figure of speech that compares two things through the use of
connection words, where the comparison is not intended to be taken literally.
They are often used in everyday communication, but they are also a part of
linguistic cultural heritage. In this paper we present a methodology for
semi-automated collection of similes from the World Wide Web using text mining
and machine learning techniques. We expanded an existing corpus by collecting
442 similes from the internet and adding them to the existing corpus collected
by Vuk Stefanovic Karadzic that contained 333 similes. We also introduce
crowdsourcing to the collection of figures of speech, which helped us build a
corpus containing 787 unique similes.
| 2,018 | Computation and Language |
Sentence Encoding with Tree-constrained Relation Networks | The meaning of a sentence is a function of the relations that hold between
its words. We instantiate this relational view of semantics in a series of
neural models based on variants of relation networks (RNs) which represent a
set of objects (for us, words forming a sentence) in terms of representations
of pairs of objects. We propose two extensions to the basic RN model for
natural language. First, building on the intuition that not all word pairs are
equally informative about the meaning of a sentence, we use constraints based
on both supervised and unsupervised dependency syntax to control which
relations influence the representation. Second, since higher-order relations
are poorly captured by a sum of pairwise relations, we use a recurrent
extension of RNs to propagate information so as to form representations of
higher order relations. Experiments on sentence classification, sentence pair
classification, and machine translation reveal that, while basic RNs are only
modestly effective for sentence representation, recurrent RNs with latent
syntax are a reliably powerful representational device.
| 2,018 | Computation and Language |
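Note: a toy sketch of the pairwise relation-network encoding with an optional syntax-based mask; the learned relation function is replaced by simple concatenation for illustration:

```python
import numpy as np

def rn_sentence_encoding(word_vecs, pair_mask=None):
    """Illustrative relation-network encoding: a sentence is represented by
    aggregating representations of word pairs, optionally restricted to pairs
    licensed by a (e.g. dependency-based) mask. Simplified sketch only.
    """
    n, d = word_vecs.shape
    if pair_mask is None:
        pair_mask = np.ones((n, n), dtype=bool)
    pair_reprs = []
    for i in range(n):
        for j in range(n):
            if i != j and pair_mask[i, j]:
                # g(.) is a learned MLP in the paper; concatenation stands in here.
                pair_reprs.append(np.concatenate([word_vecs[i], word_vecs[j]]))
    return np.sum(pair_reprs, axis=0)            # (2 * d,)

vecs = np.random.default_rng(1).normal(size=(6, 16))
print(rn_sentence_encoding(vecs).shape)          # (32,)
```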
CLEAR: A Dataset for Compositional Language and Elementary Acoustic
Reasoning | We introduce the task of acoustic question answering (AQA) in the area of
acoustic reasoning. In this task an agent learns to answer questions on the
basis of acoustic context. In order to promote research in this area, we
propose a data generation paradigm adapted from CLEVR (Johnson et al. 2017). We
generate acoustic scenes by leveraging a bank of elementary sounds. We also
provide a number of functional programs that can be used to compose questions
and answers that exploit the relationships between the attributes of the
elementary sounds in each scene. We provide AQA datasets of various sizes as
well as the data generation code. As a preliminary experiment to validate our
data, we report the accuracy of current state of the art visual question
answering models when they are applied to the AQA task without modifications.
Although there is a plethora of question answering tasks based on text, image
or video data, to our knowledge, we are the first to propose answering
questions directly on audio streams. We hope this contribution will facilitate
the development of research in the area.
| 2,018 | Computation and Language |
Speaker Diarization With Lexical Information | This work presents a novel approach to leverage lexical information for
speaker diarization. We introduce a speaker diarization system that can
directly integrate lexical as well as acoustic information into a speaker
clustering process. Thus, we propose an adjacency matrix integration technique
to integrate word level speaker turn probabilities with speaker embeddings in a
comprehensive way. Our proposed method works without any reference transcript.
Words and word boundary information are provided by an ASR system. We show
that our proposed method improves a baseline speaker diarization system solely
based on speaker embeddings, achieving a meaningful improvement on the CALLHOME
American English Speech dataset.
| 2,019 | Computation and Language |
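Note: a minimal sketch of fusing an acoustic similarity matrix with lexical speaker-turn information into one adjacency matrix, assuming a simple linear interpolation (the weight is a placeholder, not the paper's exact integration):

```python
import numpy as np

def integrate_adjacency(embed_sim, lexical_turn_prob, weight=0.5):
    """Illustrative fusion of acoustic and lexical information for diarization.

    embed_sim: (n, n) similarity matrix between segment-level speaker embeddings.
    lexical_turn_prob: (n, n) matrix derived from word-level speaker-turn
        probabilities. The fused matrix could then feed a clustering step.
    """
    return (1.0 - weight) * embed_sim + weight * lexical_turn_prob

n = 4
rng = np.random.default_rng(2)
acoustic = rng.uniform(size=(n, n)); acoustic = (acoustic + acoustic.T) / 2
lexical = rng.uniform(size=(n, n)); lexical = (lexical + lexical.T) / 2
print(integrate_adjacency(acoustic, lexical).shape)   # (4, 4)
```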
Verb Argument Structure Alternations in Word and Sentence Embeddings | Verbs occur in different syntactic environments, or frames. We investigate
whether artificial neural networks encode grammatical distinctions necessary
for inferring the idiosyncratic frame-selectional properties of verbs. We
introduce five datasets, collectively called FAVA, containing in aggregate
nearly 10k sentences labeled for grammatical acceptability, illustrating
different verbal argument structure alternations. We then test whether models
can distinguish acceptable English verb-frame combinations from unacceptable
ones using a sentence embedding alone. For converging evidence, we further
construct LaVA, a corresponding word-level dataset, and investigate whether the
same syntactic features can be extracted from word embeddings. Our models
perform reliable classifications for some verbal alternations but not others,
suggesting that while these representations do encode fine-grained lexical
information, it is incomplete or can be hard to extract. Further, differences
between the word- and sentence-level models show that some information present
in word embeddings is not passed on to the down-stream sentence embeddings.
| 2,018 | Computation and Language |
Joint Representation Learning of Cross-lingual Words and Entities via
Attentive Distant Supervision | Joint representation learning of words and entities benefits many NLP tasks,
but has not been well explored in cross-lingual settings. In this paper, we
propose a novel method for joint representation learning of cross-lingual words
and entities. It captures mutually complementary knowledge, and enables
cross-lingual inferences among knowledge bases and texts. Our method does not
require parallel corpora, and automatically generates comparable data via
distant supervision using multi-lingual knowledge bases. We utilize two types
of regularizers to align cross-lingual words and entities, and design knowledge
attention and cross-lingual attention to further reduce noises. We conducted a
series of experiments on three tasks: word translation, entity relatedness, and
cross-lingual entity linking. The results, both qualitatively and
quantitatively, demonstrate the significance of our method.
| 2,018 | Computation and Language |
The Fact Extraction and VERification (FEVER) Shared Task | We present the results of the first Fact Extraction and VERification (FEVER)
Shared Task. The task challenged participants to classify whether human-written
factoid claims could be Supported or Refuted using evidence retrieved from
Wikipedia. We received entries from 23 competing teams, 19 of which scored
higher than the previously published baseline. The best performing system
achieved a FEVER score of 64.21%. In this paper, we present the results of the
shared task and a summary of the systems, highlighting commonalities and
innovations among participating systems.
| 2,018 | Computation and Language |
HCqa: Hybrid and Complex Question Answering on Textual Corpus and
Knowledge Graph | Question Answering (QA) systems provide easy access to the vast amount of
knowledge without having to know the underlying complex structure of the
knowledge. The research community has provided ad hoc solutions to the key QA
tasks, including named entity recognition and disambiguation, relation
extraction and query building. Furthermore, some have integrated and composed
these components to implement many tasks automatically and efficiently.
However, in general, the existing solutions are limited to simple and short
questions and still do not address complex questions composed of several
sub-questions. Answering complex questions is even more challenging
if it requires integrating knowledge from unstructured data sources, i.e.,
textual corpus, as well as structured data sources, i.e., knowledge graphs. In
this paper, an approach (HCqa) is introduced for dealing with complex questions
requiring federating knowledge from a hybrid of heterogeneous data sources
(structured and unstructured). We contribute in developing (i) a decomposition
mechanism which extracts sub-questions from potentially long and complex input
questions, (ii) a novel comprehensive schema, first of its kind, for extracting
and annotating relations, and (iii) an approach for executing and aggregating
the answers of sub-questions. The evaluation of HCqa showed superior accuracy
in the fundamental tasks, such as relation extraction, as well as the
federation task.
| 2,019 | Computation and Language |
Generating Responses Expressing Emotion in an Open-domain Dialogue
System | Neural network-based open-ended conversational agents automatically generate
responses based on predictive models learned from a large number of pairs of
utterances. The generated responses are typically acceptable as a sentence but
are often dull, generic, and certainly devoid of any emotion. In this paper, we
present neural models that learn to express a given emotion in the generated
response. We propose four models and evaluate them against 3 baselines. An
encoder-decoder framework-based model with multiple attention layers provides
the best overall performance in terms of expressing the required emotion. While
it does not outperform other models on all emotions, it presents promising
results in most cases.
| 2,019 | Computation and Language |
CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling | In real-world applications of natural language generation, there are often
constraints on the target sentences in addition to fluency and naturalness
requirements. Existing language generation techniques are usually based on
recurrent neural networks (RNNs). However, it is non-trivial to impose
constraints on RNNs while maintaining generation quality, since RNNs generate
sentences sequentially (or with beam search) from the first word to the last.
In this paper, we propose CGMH, a novel approach using Metropolis-Hastings
sampling for constrained sentence generation. CGMH allows complicated
constraints such as the occurrence of multiple keywords in the target
sentences, which cannot be handled in traditional RNN-based approaches.
Moreover, CGMH works in the inference stage, and does not require parallel
corpora for training. We evaluate our method on a variety of tasks, including
keywords-to-sentence generation, unsupervised sentence paraphrasing, and
unsupervised sentence error correction. CGMH achieves high performance compared
with previous supervised methods for sentence generation. Our code is released
at https://github.com/NingMiao/CGMH
| 2,019 | Computation and Language |
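Note: a heavily simplified Metropolis-Hastings sampler in the spirit of the approach above; the scoring function, proposal set, and keyword handling are illustrative assumptions, not the released implementation:

```python
import math
import random

def mh_constrained_sampling(sentence, vocab, score_fn, keywords, steps=200, rng=random):
    """Very simplified Metropolis-Hastings sampler for constrained generation.

    score_fn is a hypothetical callable returning a (pseudo) language-model
    score for a list of tokens; keyword tokens are never removed, which is how
    hard lexical constraints are respected in this sketch.
    """
    current = list(sentence)
    for _ in range(steps):
        proposal = list(current)
        pos = rng.randrange(len(proposal))
        op = rng.choice(["replace", "insert", "delete"])
        if op == "replace" and proposal[pos] not in keywords:
            proposal[pos] = rng.choice(vocab)
        elif op == "insert":
            proposal.insert(pos, rng.choice(vocab))
        elif op == "delete" and len(proposal) > 1 and proposal[pos] not in keywords:
            del proposal[pos]
        # Metropolis acceptance ratio (proposal distribution assumed symmetric here).
        accept = min(1.0, score_fn(proposal) / max(score_fn(current), 1e-12))
        if rng.random() < accept:
            current = proposal
    return current

toy_score = lambda toks: math.exp(-abs(len(toks) - 6))   # placeholder scorer
print(mh_constrained_sampling(["machine", "translation"], ["neural", "is", "fun"],
                              toy_score, keywords={"machine", "translation"}))
```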
Exploiting Coarse-to-Fine Task Transfer for Aspect-level Sentiment
Classification | Aspect-level sentiment classification (ASC) aims at identifying sentiment
polarities towards aspects in a sentence, where the aspect can behave as a
general Aspect Category (AC) or a specific Aspect Term (AT). However, due to
the especially expensive and labor-intensive labeling, existing public corpora
at the AT level are all relatively small. Meanwhile, most of the previous methods
rely on complicated structures with given scarce data, which largely limits the
efficacy of the neural models. In this paper, we exploit a new direction named
coarse-to-fine task transfer, which aims to leverage knowledge learned from a
rich-resource source domain of the coarse-grained AC task, which is more easily
accessible, to improve the learning in a low-resource target domain of the
fine-grained AT task. To resolve both the aspect granularity inconsistency and
feature mismatch between domains, we propose a Multi-Granularity Alignment
Network (MGAN). In MGAN, a novel Coarse2Fine attention guided by an auxiliary
task can help the AC task modeling at the same fine-grained level with the AT
task. To alleviate the feature false alignment, a contrastive feature alignment
method is adopted to align aspect-specific feature representations
semantically. In addition, a large-scale multi-domain dataset for the AC task
is provided. Empirically, extensive experiments demonstrate the effectiveness
of the MGAN.
| 2,018 | Computation and Language |
Unsupervised Post-processing of Word Vectors via Conceptor Negation | Word vectors are at the core of many natural language processing tasks.
Recently, there has been interest in post-processing word vectors to enrich
their semantic information. In this paper, we introduce a novel word vector
post-processing technique based on matrix conceptors (Jaeger2014), a family of
regularized identity maps. More concretely, we propose to use conceptors to
suppress those latent features of word vectors having high variances. The
proposed method is purely unsupervised: it does not rely on any corpus or
external linguistic database. We evaluate the post-processed word vectors on a
battery of intrinsic lexical evaluation tasks, showing that the proposed method
consistently outperforms existing state-of-the-art alternatives. We also show
that post-processed word vectors can be used for the downstream natural
language processing task of dialogue state tracking, yielding improved results
in different dialogue domains.
| 2,018 | Computation and Language |
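Note: a minimal numpy sketch of conceptor-based negation applied to word vectors, following the general recipe described above (alpha is a placeholder aperture value):

```python
import numpy as np

def conceptor_negation(word_vectors, alpha=2.0):
    """Sketch of conceptor-based post-processing of word vectors.

    Computes a conceptor C = R (R + alpha^-2 I)^-1 from the correlation matrix
    R of the vectors and projects each vector with (I - C), softly suppressing
    high-variance latent directions. Hyperparameters are placeholders.
    """
    X = np.asarray(word_vectors, dtype=float)      # (n_words, dim)
    n, d = X.shape
    R = X.T @ X / n                                # correlation matrix
    C = R @ np.linalg.inv(R + (alpha ** -2) * np.eye(d))
    return X @ (np.eye(d) - C).T                   # apply the negated conceptor

vecs = np.random.default_rng(3).normal(size=(1000, 50))
print(conceptor_negation(vecs).shape)              # (1000, 50)
```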
Correcting the Common Discourse Bias in Linear Representation of
Sentences using Conceptors | Distributed representations of words, better known as word embeddings, have
become important building blocks for natural language processing tasks.
Numerous studies are devoted to transferring the success of unsupervised word
embeddings to sentence embeddings. In this paper, we introduce a simple
representation of sentences in which a sentence embedding is represented as a
weighted average of word vectors followed by a soft projection. We demonstrate
the effectiveness of this proposed method on the clinical semantic textual
similarity task of the BioCreative/OHNLP Challenge 2018.
| 2,018 | Computation and Language |
Application of Clinical Concept Embeddings for Heart Failure Prediction
in UK EHR data | Electronic health records (EHR) are increasingly being used for constructing
disease risk prediction models. Feature engineering in EHR data however is
challenging due to their highly dimensional and heterogeneous nature.
Low-dimensional representations of EHR data can potentially mitigate these
challenges. In this paper, we use global vectors (GloVe) to learn word
embeddings for diagnoses and procedures recorded using 13 million ontology
terms across 2.7 million hospitalisations in a national UK EHR. We demonstrate
the utility of these embeddings by evaluating their performance in identifying
patients who are at higher risk of being hospitalised for congestive heart
failure. Our findings indicate that embeddings can enable the creation of
robust EHR-derived disease risk prediction models and address some of the
limitations associated with manual clinical feature engineering.
| 2,018 | Computation and Language |
Latent Dirichlet Allocation with Residual Convolutional Neural Network
Applied in Evaluating Credibility of Chinese Listed Companies | This project demonstrates a methodology for estimating corporate credibility
with a Natural Language Processing approach. As corporate transparency affects
both the credibility and the possible future earnings of a firm, it is an
important factor for banks and investors to consider in risk assessments of
listed firms. This approach to estimating corporate credibility can bypass
human bias and inconsistency in risk assessment; the use of large quantitative
data and neural network models provides more accurate estimation in a more
efficient manner compared to manual assessment. First, the model employs Latent
Dirichlet Allocation and the THU Open Chinese Lexicon from Tsinghua University
to classify topics in articles that are potentially related to corporate
credibility. Then, with the keywords related to each topic, we train a residual
convolutional neural network with data labeled according to surveys of fund
managers' and accountants' opinions on corporate credibility. After training,
we run the model on preprocessed news reports covering all 3065 listed
companies, and the model returns a ranking of companies based on their level of
transparency.
| 2,018 | Computation and Language |
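Note: a minimal gensim sketch of the topic-modeling stage described above; the documents and topic count are invented placeholders, and the residual CNN stage is omitted:

```python
from gensim import corpora
from gensim.models import LdaModel

# Tiny illustrative corpus of tokenized (already segmented) news snippets;
# in the project these would be Chinese articles about listed companies.
docs = [
    ["earnings", "report", "audit", "disclosure"],
    ["merger", "acquisition", "board", "vote"],
    ["audit", "fraud", "investigation", "disclosure"],
]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

# Fit an LDA topic model; topics related to credibility/transparency would then
# be selected by their keywords before any downstream classification stage.
lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```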
Translating and Evolving: Towards a Model of Language Change in DisCoCat | The categorical compositional distributional (DisCoCat) model of meaning
developed by Coecke et al. (2010) has been successful in modeling various
aspects of meaning. However, it fails to model the fact that language can
change. We give an approach to DisCoCat that allows us to represent language
models and translations between them, enabling us to describe translations from
one language to another, or changes within the same language. We unify the
product space representation given in (Coecke et al., 2010) and the functorial
description in (Kartsaklis et al., 2013), in a way that allows us to view a
language as a catalogue of meanings. We formalize the notion of a lexicon in
DisCoCat, and define a dictionary of meanings between two lexicons. All this is
done within the framework of monoidal categories. We give examples of how to
apply our methods, and give a concrete suggestion for compositional translation
in corpora.
| 2,018 | Computation and Language |
Learning to detect dysarthria from raw speech | Speech classifiers of paralinguistic traits traditionally learn from diverse
hand-crafted low-level features, by selecting the relevant information for the
task at hand. We explore an alternative to this selection, by learning jointly
the classifier, and the feature extraction. Recent work on speech recognition
has shown improved performance over speech features by learning from the
waveform. We extend this approach to paralinguistic classification and propose
a neural network that can learn a filterbank, a normalization factor and a
compression power from the raw speech, jointly with the rest of the
architecture. We apply this model to dysarthria detection from sentence-level
audio recordings. Starting from a strong attention-based baseline on which
mel-filterbanks outperform standard low-level descriptors, we show that
learning the filters or the normalization and compression improves over fixed
features by 10% absolute accuracy. We also observe a gain over OpenSmile
features by learning jointly the feature extraction, the normalization, and the
compression factor with the architecture. This constitutes a first attempt at
learning jointly all these operations from raw audio for a speech
classification task.
| 2,019 | Computation and Language |
SOC: hunting the underground inside story of the ethereum Social-network
Opinion and Comment | Cryptocurrency is attracting more and more attention because of blockchain
technology. Ethereum is gaining significant popularity in the blockchain
community, mainly because it is designed in a way that
enables developers to write smart contracts and decentralized applications
(Dapps). There are many kinds of cryptocurrency information on the social
network. The risks and fraud problems behind it have pushed many countries
including the United States, South Korea, and China to make warnings and set up
corresponding regulations. However, the security of Ethereum smart contracts
has not gained much attention. Through a deep learning approach, we propose a
method of sentiment analysis for Ethereum's community comments. In this
research, we first collected users' cryptocurrency comments from the social
network and then fed them to our LSTM + CNN model for training. We then made
predictions through sentiment analysis. Our results demonstrate that both the
precision and the recall of the sentiment analysis can reach 0.80+. More
importantly, we deploy our sentiment analysis on RatingToken and Coin Master
(mobile applications of the Cheetah Mobile Blockchain Security Center), where it
can effectively provide detailed information to mitigate the risks of fakes and
fraud.
| 2,018 | Computation and Language |
Cross-Lingual Approaches to Reference Resolution in Dialogue Systems | In the slot-filling paradigm, where a user can refer back to slots in the
context during the conversation, the goal of the contextual understanding
system is to resolve the referring expressions to the appropriate slots in the
context. In this paper, we build on the context carryover
system~\citep{Naik2018ContextualSC}, which provides a scalable multi-domain
framework for resolving references. However, scaling this approach across
languages is not a trivial task, due to the large demand on acquisition of
annotated data in the target language. Our main focus is on cross-lingual
methods for reference resolution as a way to alleviate the need for annotated
data in the target language. In the cross-lingual setup, we assume there is
access to annotated resources as well as a well trained model in the source
language and little to no annotated data in the target language. In this paper,
we explore three different approaches for cross-lingual transfer:
delexicalization as data augmentation, multilingual embeddings, and machine
translation. We compare these approaches both on a low resource setting as well
as a large resource setting. Our experiments show that multilingual embeddings
and delexicalization via data augmentation have a significant impact in the low
resource setting, but the gains diminish as the amount of available data in the
target language increases. Furthermore, when combined with machine translation
we can get performance very close to actual live data in the target language,
with only 25\% of the data projected into the target language.
| 2,018 | Computation and Language |
A Deep Cascade Model for Multi-Document Reading Comprehension | A fundamental trade-off between effectiveness and efficiency needs to be
balanced when designing an online question answering system. Effectiveness
comes from sophisticated functions such as extractive machine reading
comprehension (MRC), while efficiency is obtained from improvements in
preliminary retrieval components such as candidate document selection and
paragraph ranking. Given the complexity of the real-world multi-document MRC
scenario, it is difficult to jointly optimize both in an end-to-end system. To
address this problem, we develop a novel deep cascade learning model, which
progressively evolves from the document-level and paragraph-level ranking of
candidate texts to more precise answer extraction with machine reading
comprehension. Specifically, irrelevant documents and paragraphs are first
filtered out with simple functions for efficiency consideration. Then we
jointly train three modules on the remaining texts for better tracking the
answer: the document extraction, the paragraph extraction and the answer
extraction. Experiment results show that the proposed method outperforms the
previous state-of-the-art methods on two large-scale multi-document benchmark
datasets, i.e., TriviaQA and DuReader. In addition, our online system can
stably serve typical scenarios with millions of daily requests in less than
50ms.
| 2,019 | Computation and Language |
Context-Aware Dialog Re-Ranking for Task-Oriented Dialog Systems | Dialog response ranking is used to rank response candidates by considering
their relation to the dialog history. Although researchers have addressed this
concept for open-domain dialogs, little attention has been focused on
task-oriented dialogs. Furthermore, no previous studies have analyzed whether
response ranking can improve the performance of existing dialog systems in real
human-computer dialogs with speech recognition errors. In this paper, we
propose a context-aware dialog response re-ranking system. Our system reranks
responses in two steps: (1) it calculates matching scores for each candidate
response and the current dialog context; (2) it combines the matching scores
and a probability distribution of the candidates from an existing dialog system
for response re-ranking. By using neural word embedding-based models and
handcrafted or logistic regression-based ensemble models, we have improved the
performance of a recently proposed end-to-end task-oriented dialog system on
real dialogs with speech recognition errors.
| 2,018 | Computation and Language |
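Note: a toy sketch of the two-step re-ranking, assuming the matching scores and base-system probabilities are combined by simple linear interpolation (the paper's ensemble models are more elaborate):

```python
def rerank(candidates, matching_scores, system_probs, alpha=0.5):
    """Illustrative re-ranking: combine context-matching scores with the base
    dialogue system's candidate probabilities and sort. `alpha` (assumed) is the
    interpolation weight between the two signals.
    """
    combined = [alpha * m + (1 - alpha) * p
                for m, p in zip(matching_scores, system_probs)]
    return [c for _, c in sorted(zip(combined, candidates), reverse=True)]

candidates = ["Which date works for you?", "Booking confirmed.", "Hello!"]
print(rerank(candidates,
             matching_scores=[0.7, 0.2, 0.1],
             system_probs=[0.3, 0.6, 0.1]))
```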
Sequence Learning with RNNs for Medical Concept Normalization in
User-Generated Texts | In this work, we consider the medical concept normalization problem, i.e.,
the problem of mapping a disease mention in free-form text to a concept in a
controlled vocabulary, usually to the standard thesaurus in the Unified Medical
Language System (UMLS). This task is challenging since medical terminology is
very different when coming from health care professionals or from the general
public in the form of social media texts. We approach it as a sequence learning
problem, with recurrent neural networks trained to obtain semantic
representations of one- and multi-word expressions. We develop end-to-end
neural architectures tailored specifically to medical concept normalization,
including bidirectional LSTM and GRU with an attention mechanism and additional
semantic similarity features based on UMLS. Our evaluation over a standard
benchmark shows that our model improves over a state of the art baseline for
classification based on CNNs.
| 2,018 | Computation and Language |
Few-Shot Generalization Across Dialogue Tasks | Machine-learning based dialogue managers are able to learn complex behaviors
in order to complete a task, but it is not straightforward to extend their
capabilities to new domains. We investigate different policies' ability to
handle uncooperative user behavior, and how well expertise in completing one
task (such as restaurant reservations) can be reapplied when learning a new one
(e.g. booking a hotel). We introduce the Recurrent Embedding Dialogue Policy
(REDP), which embeds system actions and dialogue states in the same vector
space. REDP contains a memory component and attention mechanism based on a
modified Neural Turing Machine, and significantly outperforms a baseline LSTM
classifier on this task. We also show that both our architecture and baseline
solve the bAbI dialogue task, achieving 100% test accuracy.
| 2,018 | Computation and Language |
Multi-granularity hierarchical attention fusion networks for reading
comprehension and question answering | This paper describes a novel hierarchical attention network for reading
comprehension style question answering, which aims to answer questions for a
given narrative paragraph. In the proposed method, attention and fusion are
conducted horizontally and vertically across layers at different levels of
granularity between question and paragraph. Specifically, it first encodes the
question and paragraph with fine-grained language embeddings, to better capture
the respective representations at semantic level. Then it proposes a
multi-granularity fusion approach to fully fuse information from both global
and attended representations. Finally, it introduces a hierarchical attention
network to focus on the answer span progressively with multi-level soft
alignment. Extensive experiments on the large-scale SQuAD and TriviaQA
datasets validate the effectiveness of the proposed method. At the time of
writing the paper (Jan. 12th 2018), our model achieves the first position on
the SQuAD leaderboard for both single and ensemble models. We also achieve
state-of-the-art results on the TriviaQA, AddSent and AddOne-Sent datasets.
| 2,019 | Computation and Language |
HYPE: A High Performing NLP System for Automatically Detecting
Hypoglycemia Events from Electronic Health Record Notes | Hypoglycemia is common and potentially dangerous among those treated for
diabetes. Electronic health records (EHRs) are important resources for
hypoglycemia surveillance. In this study, we report the development and
evaluation of deep learning-based natural language processing systems to
automatically detect hypoglycemia events from the EHR narratives. Experts in
Public Health annotated 500 EHR notes from patients with diabetes. We used this
annotated dataset to train and evaluate HYPE, supervised NLP systems for
hypoglycemia detection. In our experiment, the convolutional neural network
model yielded promising performance $Precision=0.96 \pm 0.03, Recall=0.86 \pm
0.03, F1=0.91 \pm 0.03$ in a 10-fold cross-validation setting. Although the
annotated data is highly imbalanced, our CNN-based HYPE system still achieved
high performance for hypoglycemia detection. HYPE could be used for EHR-based
hypoglycemia surveillance and to help clinicians provide timely treatment to
high-risk patients.
| 2,018 | Computation and Language |
Large-scale Generative Modeling to Improve Automated Veterinary Disease
Coding | Supervised learning is limited both by the quantity and quality of the
labeled data. In the field of medical record tagging, writing styles between
hospitals vary drastically. The knowledge learned from one hospital might not
transfer well to another. This problem is amplified in veterinary medicine
domain because veterinary clinics rarely apply medical codes to their records.
We proposed and trained the first large-scale generative modeling algorithm in
automated disease coding. We demonstrate that generative modeling can learn
discriminative features when additionally trained with supervised fine-tuning.
We systematically ablate and evaluate the effect of generative modeling on the
final system's performance. We compare the performance of our model with
several baselines in a challenging cross-hospital setting with substantial
domain shift. We outperform competitive baselines by a large margin. In
addition, we provide interpretation for what is learned by our model.
| 2,018 | Computation and Language |
Non-entailed subsequences as a challenge for natural language inference | Neural network models have shown great success at natural language inference
(NLI), the task of determining whether a premise entails a hypothesis. However,
recent studies suggest that these models may rely on fallible heuristics rather
than deep language understanding. We introduce a challenge set to test whether
NLI systems adopt one such heuristic: assuming that a sentence entails all of
its subsequences, such as assuming that "Alice believes Mary is lying" entails
"Alice believes Mary." We evaluate several competitive NLI models on this
challenge set and find strong evidence that they do rely on the subsequence
heuristic.
| 2,018 | Computation and Language |
Improving Robustness of Neural Dialog Systems in a Data-Efficient Way
with Turn Dropout | Neural network-based dialog models often lack robustness to anomalous,
out-of-domain (OOD) user input which leads to unexpected dialog behavior and
thus considerably limits such models' usage in mission-critical production
environments. The problem is especially relevant in the setting of dialog
system bootstrapping with limited training data and no access to OOD examples.
In this paper, we explore the problem of robustness of such systems to
anomalous input and the associated trade-off in accuracies on seen and
unseen data. We present a new dataset for studying the robustness of dialog
systems to OOD input, which is bAbI Dialog Task 6 augmented with OOD content in
a controlled way. We then present turn dropout, a simple yet efficient negative
sampling-based technique for improving robustness of neural dialog models. We
demonstrate its effectiveness applied to Hybrid Code Network-family models
(HCNs) which reach state-of-the-art results on our OOD-augmented dataset as
well as the original one. Specifically, an HCN trained with turn dropout
achieves state-of-the-art performance of more than 75% per-utterance accuracy
on the augmented dataset's OOD turns and 74% F1-score as an OOD detector.
Furthermore, we introduce a Variational HCN enhanced with turn dropout which
achieves more than 56.5% accuracy on the original bAbI Task 6 dataset, thus
outperforming the initially reported HCN's result.
| 2,018 | Computation and Language |
Counterfactual Learning from Human Proofreading Feedback for Semantic
Parsing | In semantic parsing for question-answering, it is often too expensive to
collect gold parses or even gold answers as supervision signals. We propose to
convert model outputs into a set of human-understandable statements which allow
non-expert users to act as proofreaders, providing error markings as learning
signals to the parser. Because model outputs were suggested by a historic
system, we operate in a counterfactual, or off-policy, learning setup. We
introduce new estimators which can effectively leverage the given feedback and
which avoid known degeneracies in counterfactual learning, while still being
applicable to stochastic gradient optimization for neural semantic parsing.
Furthermore, we discuss how our feedback collection method can be seamlessly
integrated into deployed virtual personal assistants that embed a semantic
parser. Our work is the first to show that semantic parsers can be improved
significantly by counterfactual learning from logged human feedback data.
| 2,018 | Computation and Language |
Improving Hospital Mortality Prediction with Medical Named Entities and
Multimodal Learning | Clinical text provides essential information to estimate the acuity of a
patient during hospital stays in addition to structured clinical data. In this
study, we explore how clinical text can complement a clinical predictive
learning task. We leverage an internal medical natural language processing
service to perform named entity extraction and negation detection on clinical
notes and compose selected entities into a new text corpus to train document
representations. We then propose a multimodal neural network to jointly train
time series signals and unstructured clinical text representations to predict
the in-hospital mortality risk for ICU patients. Our model outperforms the
benchmark by 2% AUC.
| 2,018 | Computation and Language |
Inferring Concept Prerequisite Relations from Online Educational
Resources | The Internet has rich and rapidly increasing sources of high quality
educational content. Inferring prerequisite relations between educational
concepts is required for modern large-scale online educational technology
applications such as personalized recommendations and automatic curriculum
creation. We present PREREQ, a new supervised learning method for inferring
concept prerequisite relations. PREREQ is designed using latent representations
of concepts obtained from the Pairwise Latent Dirichlet Allocation model, and a
neural network based on the Siamese network architecture. PREREQ can learn
unknown concept prerequisites from course prerequisites and labeled concept
prerequisite data. It outperforms state-of-the-art approaches on benchmark
datasets and can effectively learn from very little training data. PREREQ can
also use unlabeled video playlists, a steadily growing source of training data,
to learn concept prerequisites, thus obviating the need for manual annotation
of course prerequisites.
| 2,019 | Computation and Language |
Document Structure Measure for Hypernym discovery | Hypernym discovery is the problem of finding terms that have is-a
relationship with a given term. We introduce a new context type, and a
relatedness measure to differentiate hypernyms from other types of semantic
relationships. Our Document Structure measure is based on hierarchical position
of terms in a document, and their presence or otherwise in definition text.
This measure quantifies the document structure using multiple attributes, and
classes of weighted distance functions.
| 2,018 | Computation and Language |
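Note: a toy illustration of a document-structure-style relatedness score built from hierarchical position and definition-text attributes; the attributes and weights are invented for illustration and are not the paper's exact measure:

```python
def document_structure_score(depth_a, depth_b, in_definition_b, weights=(1.0, 0.5)):
    """Toy relatedness score in the spirit of the Document Structure measure.

    depth_a / depth_b: hierarchical positions (e.g. heading levels) of the two
    terms in a document; in_definition_b: whether the candidate hypernym occurs
    in the definition text of the given term. Attribute weights are invented.
    """
    w_depth, w_def = weights
    # Hypernyms tend to sit higher (smaller depth) in the document hierarchy.
    depth_signal = max(0, depth_a - depth_b)
    definition_signal = 1 if in_definition_b else 0
    return w_depth * depth_signal + w_def * definition_signal

print(document_structure_score(depth_a=3, depth_b=1, in_definition_b=True))  # 2.5
```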
TIFTI: A Framework for Extracting Drug Intervals from Longitudinal
Clinic Notes | Oral drugs are becoming increasingly common in oncology care. In contrast to
intravenous chemotherapy, which is administered in the clinic and carefully
tracked via structured electronic health records (EHRs), oral drug treatment is
self-administered and therefore not tracked as well. Often, the details of oral
cancer treatment occur only in unstructured clinic notes. Extracting this
information is critical to understanding a patient's treatment history. Yet
this is a challenging task because treatment intervals must be inferred
longitudinally from both explicit mentions in the text as well as from document
timestamps. In this work, we present TIFTI (Temporally Integrated Framework for
Treatment Intervals), a robust framework for extracting oral drug treatment
intervals from a patient's unstructured notes. TIFTI leverages distinct sources
of temporal information by breaking the problem down into two separate
subtasks: document-level sequence labeling and date extraction. On a labeled
dataset of metastatic renal-cell carcinoma (RCC) patients, it exactly matched
the labeled start date in 46% of the examples (86% of the examples within 30
days), and it exactly matched the labeled end date in 52% of the examples (78%
of the examples within 30 days). Without retraining, the model achieved a
similar level of performance on a labeled dataset of advanced non-small-cell
lung cancer (NSCLC) patients.
| 2,018 | Computation and Language |
Systematic Generalization: What Is Required and Can It Be Learned? | Numerous models for grounded language understanding have been recently
proposed, including (i) generic models that can be easily adapted to any given
task and (ii) intuitively appealing modular models that require background
knowledge to be instantiated. We compare both types of models in how much they
lend themselves to a particular form of systematic generalization. Using a
synthetic VQA test, we evaluate which models are capable of reasoning about all
possible object pairs after training on only a small subset of them. Our
findings show that the generalization of modular models is much more systematic
and that it is highly sensitive to the module layout, i.e. to how exactly the
modules are connected. We furthermore investigate if modular models that
generalize well could be made more end-to-end by learning their layout and
parametrization. We find that end-to-end methods from prior work often learn
inappropriate layouts or parametrizations that do not facilitate systematic
generalization. Our results suggest that, in addition to modularity, systematic
generalization in language understanding may require explicit regularizers or
priors.
| 2,019 | Computation and Language |
Detecting Offensive Content in Open-domain Conversations using Two Stage
Semi-supervision | As open-ended human-chatbot interaction becomes commonplace, sensitive
content detection gains importance. In this work, we propose a two stage
semi-supervised approach to bootstrap large-scale data for automatic sensitive
language detection from publicly available web resources. We explore various
data selection methods including 1) using a blacklist to rank online discussion
forums by the level of their sensitiveness followed by randomly sampling
utterances and 2) training a weakly supervised model in conjunction with the
blacklist for scoring sentences from online discussion forums to curate a
dataset. Our data collection strategy is flexible and allows the models to
detect implicit sensitive content for which manual annotations may be
difficult. We train models using publicly available annotated datasets as well
as using the proposed large-scale semi-supervised datasets. We evaluate the
performance of all the models on Twitter and Toxic Wikipedia comments testsets
as well as on a manually annotated spoken language dataset collected during a
large scale chatbot competition. Results show that a model trained on this
collected data outperforms the baseline models by a large margin on both
in-domain and out-of-domain testsets, achieving an F1 score of 95.5% on an
out-of-domain testset compared to a score of 75% for models trained on public
datasets. We also showcase that large scale two stage semi-supervision
generalizes well across multiple classes of sensitivities such as hate speech,
racism, sexual and pornographic content, etc. without even providing explicit
labels for these classes, leading to an average recall of 95.5% versus the
models trained using annotated public datasets which achieve an average recall
of 73.2% across seven sensitive classes on out-of-domain testsets.
| 2,018 | Computation and Language |
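Note: a minimal sketch of the first data-selection stage, ranking forums by blacklist hits before sampling utterances; the forum data and term list are invented placeholders:

```python
def rank_forums_by_sensitivity(forum_posts, blacklist):
    """Illustrative first stage: rank discussion forums by the fraction of posts
    containing blacklisted terms, then sample utterances from the top forums as
    weak positives. Term list and data are assumptions for illustration.
    """
    scores = {}
    for forum, posts in forum_posts.items():
        flagged = sum(any(term in post.lower() for term in blacklist) for post in posts)
        scores[forum] = flagged / max(len(posts), 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

forums = {
    "cooking": ["great recipe", "try more salt"],
    "rants":   ["this is offensive_term_a", "offensive_term_b everywhere", "fine post"],
}
print(rank_forums_by_sensitivity(forums, blacklist={"offensive_term_a", "offensive_term_b"}))
```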
The Indus Script and Economics. A Role for Indus Seals and Tablets in
Rationing and Administration of Labor | The Indus script remains one of the last major undeciphered scripts of the
ancient world. We focus here on Indus inscriptions on a group of miniature
tablets discovered by Meadow and Kenoyer in Harappa in 1997. By drawing
parallels with proto-Elamite and proto-Cuneiform inscriptions, we explore how
these miniature tablets may have been used to record rations allocated to
porters or laborers. We then show that similar inscriptions are found on stamp
seals, leading to the potentially provocative conclusion that rather than
simply indicating ownership of property, Indus seals may have been used for
generating tokens, tablets and sealings for repetitive economic transactions
such as rations and exchange of canonical amounts of goods, grains, animals,
and labor in a barter-based economy.
| 2,018 | Computation and Language |
QADiver: Interactive Framework for Diagnosing QA Models | Question answering (QA), which extracts answers from text for a given question
in natural language, has been actively studied, and existing models have shown
promise of outperforming human performance when trained and evaluated on the
SQuAD dataset. However, such performance may not be replicated in real-world
settings, where we need to diagnose the cause of failures, which is non-trivial
due to the complexity of the model. We thus propose a web-based UI that shows
how each model contributes to QA performance by integrating visualization and analysis
tools for model explanation. We expect this framework can help QA model
researchers to refine and improve their models.
| 2,018 | Computation and Language |
A Deep Sequential Model for Discourse Parsing on Multi-Party Dialogues | Discourse structures are beneficial for various NLP tasks such as dialogue
understanding, question answering, sentiment analysis, and so on. This paper
presents a deep sequential model for parsing discourse dependency structures of
multi-party dialogues. The proposed model aims to construct a discourse
dependency tree by predicting dependency relations and constructing the
discourse structure jointly and alternately. It makes a sequential scan of the
Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model
decides to which previous EDU the current one should link and what the
corresponding relation type is. The predicted link and relation type are then
used to build the discourse structure incrementally with a structured encoder.
During link prediction and relation classification, the model utilizes not only
local information that represents the concerned EDUs, but also global
information that encodes the EDU sequence and the discourse structure that is
already built at the current step. Experiments show that the proposed model
outperforms all the state-of-the-art baselines.
| 2,018 | Computation and Language |