Titles | Abstracts | Years | Categories |
---|---|---|---|
Finding the way from ä to a: Sub-character morphological inflection
for the SIGMORPHON 2018 Shared Task | In this paper we describe the system submitted by UHH to the
CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. We
propose a neural architecture based on the concepts of UZH (Makarov et al.,
2017), adding new ideas and techniques to their key concept and evaluating
different combinations of parameters. The resulting system is a
language-agnostic network model that aims to reduce the number of learned edit
operations by introducing equivalence classes over graphical features of
individual characters. We try to pinpoint advantages and drawbacks of this
approach by comparing different network configurations and evaluating our
results over a wide range of languages.
| 2,018 | Computation and Language |
Analysis of Risk Factor Domains in Psychosis Patient Health Records | Readmission after discharge from a hospital is disruptive and costly,
regardless of the reason. However, it can be particularly problematic for
psychiatric patients, so predicting which patients may be readmitted is
critically important but also very difficult. Clinical narratives in
psychiatric electronic health records (EHRs) span a wide range of topics and
vocabulary; therefore, a psychiatric readmission prediction model must begin
with a robust and interpretable topic extraction component. We created a data
pipeline for using document vector similarity metrics to perform topic
extraction on psychiatric EHR data in service of our long-term goal of creating
a readmission risk classifier. We show initial results for our topic extraction
model and identify additional features we will be incorporating in the future.
| 2,018 | Computation and Language |
Dual Memory Network Model for Biased Product Review Classification | In sentiment analysis (SA) of product reviews, both user and product
information have proven to be useful. Current approaches handle user profile and
product information in a unified model which may not be able to learn salient
features of users and products effectively. In this work, we propose a dual
user and product memory network (DUPMN) model to learn user profiles and
product reviews using separate memory networks. Then, the two representations
are used jointly for sentiment prediction. The use of separate models aims to
capture user profiles and product information more effectively. Compared to
state-of-the-art unified prediction models, the evaluations on three benchmark
datasets, IMDB, Yelp13, and Yelp14, show that our dual learning model gives
performance gains of 0.6%, 1.2%, and 0.9%, respectively. The improvements are
also shown to be highly significant as measured by p-values.
| 2,018 | Computation and Language |
Development of deep learning algorithms to categorize free-text notes
pertaining to diabetes: convolution neural networks achieve higher accuracy
than support vector machines | Health professionals can use natural language processing (NLP) technologies
when reviewing electronic health records (EHR). Machine learning free-text
classifiers can help them identify problems and make critical decisions. We aim
to develop deep learning neural network algorithms that identify EHR progress
notes pertaining to diabetes and validate the algorithms at two institutions.
The data used are 2,000 EHR progress notes retrieved from patients with
diabetes and all notes were annotated manually as diabetic or non-diabetic.
Several deep learning classifiers were developed, and their performances were
evaluated with the area under the ROC curve (AUC). The convolutional neural
network (CNN) model with a separable convolution layer accurately identified
diabetes-related notes in the Brigham and Women's Hospital testing set with the
highest AUC of 0.975. Deep learning classifiers can be used to identify EHR
progress notes pertaining to diabetes. In particular, the CNN-based classifier
can achieve a higher AUC than an SVM-based classifier.
| 2,018 | Computation and Language |
Cross-Domain Labeled LDA for Cross-Domain Text Classification | Cross-domain text classification aims at building a classifier for a target
domain that leverages data from both the source and target domains. One promising
idea is to minimize the feature distribution differences of the two domains.
Most existing studies explicitly minimize such differences by an exact
alignment mechanism (aligning features by one-to-one feature alignment,
projection matrix etc.). Such exact alignment, however, will restrict models'
learning ability and will further impair models' performance on classification
tasks when the semantic distributions of different domains are very different.
To address this problem, we propose a novel group alignment which aligns the
semantics at group level. In addition, to help the model learn better semantic
groups and semantics within these groups, we also propose a partial supervision
for the model's learning in the source domain. To this end, we embed the group
alignment and a partial supervision into a cross-domain topic model, and
propose a Cross-Domain Labeled LDA (CDL-LDA). On the standard 20Newsgroups and
Reuters datasets, extensive quantitative (classification, perplexity etc.) and
qualitative (topic detection) experiments are conducted to show the
effectiveness of the proposed group alignment and partial supervision.
| 2,019 | Computation and Language |
Meta-Embedding as Auxiliary Task Regularization | Word embeddings have been shown to benefit from ensembling several word
embedding sources, often carried out using straightforward mathematical
operations over the set of word vectors. More recently, self-supervised
learning has been used to find a lower-dimensional representation, similar in
size to the individual word embeddings within the ensemble. However, these
methods do not use the available manually labeled datasets that are often used
solely for the purpose of evaluation. We propose to reconstruct an ensemble of
word embeddings as an auxiliary task that regularises a main task while both
tasks share the learned meta-embedding layer. We carry out intrinsic evaluation
(6 word similarity datasets and 3 analogy datasets) and extrinsic evaluation (4
downstream tasks). For intrinsic task evaluation, supervision comes from
various labeled word similarity datasets. Our experimental results show that
the performance is improved for all word similarity datasets when compared to
self-supervised learning methods with a mean increase of $11.33$ in Spearman
correlation. Specifically, the proposed method shows the best performance in 4
out of 6 word similarity datasets when using a cosine reconstruction loss
and Brier's word similarity loss. Moreover, improvements are also made when
performing word meta-embedding reconstruction in sequence tagging and sentence
meta-embedding for sentence classification.
| 2,020 | Computation and Language |
Generating Informative and Diverse Conversational Responses via
Adversarial Information Maximization | Responses generated by neural conversational models tend to lack
informativeness and diversity. We present Adversarial Information Maximization
(AIM), an adversarial learning strategy that addresses these two related but
distinct problems. To foster response diversity, we leverage adversarial
training that allows distributional matching of synthetic and real responses.
To improve informativeness, our framework explicitly optimizes a variational
lower bound on pairwise mutual information between query and response.
Empirical results from automatic and human evaluations demonstrate that our
methods significantly boost informativeness and diversity.
| 2,018 | Computation and Language |
Open-world Learning and Application to Product Classification | Classic supervised learning makes the closed-world assumption, meaning that
classes seen in testing must have been seen in training. However, in the
dynamic world, new or unseen class examples may appear constantly. A model
working in such an environment must be able to reject unseen classes (not seen
or used in training). If enough data is collected for the unseen classes, the
system should incrementally learn to accept/classify them. This learning
paradigm is called open-world learning (OWL). Existing OWL methods all need
some form of re-training to accept or include the new classes in the overall
model. In this paper, we propose a meta-learning approach to the problem. Its
key novelty is that it only needs to train a meta-classifier, which can then
continually accept new classes when they have enough labeled data for the
meta-classifier to use, and also detect/reject future unseen classes. No
re-training of the meta-classifier or a new overall classifier covering all old
and new classes is needed. In testing, the method only uses the examples of the
seen classes (including the newly added classes) on-the-fly for classification
and rejection. Experimental results demonstrate the effectiveness of the new
approach.
| 2,019 | Computation and Language |
Similarity measure for Public Persons | For the web portal "Who is in the News!", which provides statistics about the appearance
of persons in written news, we developed an extension that measures the
relationship between public persons depending on a time parameter, as the
relationship may vary over time. On a training corpus of English and German
news articles, we built a measure by extracting each person's occurrences in the
text via pretrained named entity extraction and then constructing time series of
counts for each person. Pearson correlation over a sliding window is then used
to measure the relation of two persons.
| 2,018 | Computation and Language |
Open Subtitles Paraphrase Corpus for Six Languages | This paper accompanies the release of Opusparcus, a new paraphrase corpus for
six European languages: German, English, Finnish, French, Russian, and Swedish.
The corpus consists of paraphrases, that is, pairs of sentences in the same
language that mean approximately the same thing. The paraphrases are extracted
from the OpenSubtitles2016 corpus, which contains subtitles from movies and TV
shows. The informal and colloquial genre that occurs in subtitles makes such
data a very interesting language resource, for instance, from the perspective
of computer assisted language learning. For each target language, the
Opusparcus data have been partitioned into three types of data sets: training,
development and test sets. The training sets are large, consisting of millions
of sentence pairs, and have been compiled automatically, with the help of
probabilistic ranking functions. The development and test sets consist of
sentence pairs that have been checked manually; each set contains approximately
1000 sentence pairs that have been verified to be acceptable paraphrases by two
annotators.
| 2,018 | Computation and Language |
Categorizing Comparative Sentences | We tackle the tasks of automatically identifying comparative sentences and
categorizing the intended preference (e.g., "Python has better NLP libraries
than MATLAB" => (Python, better, MATLAB). To this end, we manually annotate
7,199 sentences for 217 distinct target item pairs from several domains (27% of
the sentences contain an oriented comparison in the sense of "better" or
"worse"). A gradient boosting model based on pre-trained sentence embeddings
reaches an F1 score of 85% in our experimental evaluation. The model can be
used to extract comparative sentences for pro/con argumentation in comparative
/ argument search engines or debating technologies.
| 2,019 | Computation and Language |
The Fast and the Flexible: training neural networks to learn to follow
instructions from small data | Learning to follow human instructions is a long-pursued goal in artificial
intelligence. The task becomes particularly challenging if no prior knowledge
of the employed language is assumed while relying only on a handful of examples
to learn from. Work in the past has relied on hand-coded components or manually
engineered features to provide strong inductive biases that make learning in
such situations possible. In contrast, here we seek to establish whether this
knowledge can be acquired automatically by a neural network system through a
two-phase training procedure: a (slow) offline learning stage where the network
learns about the general structure of the task and a (fast) online adaptation
phase where the network learns the language of a new given speaker. Controlled
experiments show that when the network is exposed to familiar instructions that
contain novel words, the model adapts very efficiently to the new
vocabulary. Moreover, even for human speakers whose language usage can depart
significantly from our artificial training language, our network can still make
use of its automatically acquired inductive bias to learn to follow
instructions more effectively.
| 2,019 | Computation and Language |
Unsupervised Sense-Aware Hypernymy Extraction | In this paper, we show how unsupervised sense representations can be used to
improve hypernymy extraction. We present a method for extracting disambiguated
hypernymy relationships that propagates hypernyms to sets of synonyms
(synsets), constructs embeddings for these sets, and establishes sense-aware
relationships between matching synsets. Evaluation on two gold standard
datasets for English and Russian shows that the method successfully recognizes
hypernymy relationships that cannot be found with standard Hearst patterns and
Wiktionary datasets for the respective languages.
| 2,023 | Computation and Language |
Style Transfer Through Multilingual and Feedback-Based Back-Translation | Style transfer is the task of transferring an attribute of a sentence (e.g.,
formality) while maintaining its semantic content. The key challenge in style
transfer is to strike a balance between the competing goals, one to preserve
meaning and the other to improve the style transfer accuracy. Prior research
has identified that the task of meaning preservation is generally harder to
attain and evaluate. This paper proposes two extensions of the state-of-the-art
style transfer models aiming at improving the meaning preservation in style
transfer. Our evaluation shows that these extensions help to ground meaning
better while improving the transfer accuracy.
| 2,018 | Computation and Language |
Adversarial Text Generation via Feature-Mover's Distance | Generative adversarial networks (GANs) have achieved significant success in
generating real-valued data. However, the discrete nature of text hinders the
application of GAN to text-generation tasks. Instead of using the standard GAN
objective, we propose to improve text-generation GAN via a novel approach
inspired by optimal transport. Specifically, we consider matching the latent
feature distributions of real and synthetic sentences using a novel metric,
termed the feature-mover's distance (FMD). This formulation leads to a highly
discriminative critic and easy-to-optimize objective, overcoming the
mode-collapsing and brittle-training problems in existing methods. Extensive
experiments are conducted on a variety of tasks to evaluate the proposed model
empirically, including unconditional text generation, style transfer from
non-parallel text, and unsupervised cipher cracking. The proposed model yields
superior performance, demonstrating wide applicability and effectiveness.
| 2,020 | Computation and Language |
Commonsense for Generative Multi-Hop Question Answering Tasks | Reading comprehension QA tasks have seen a recent surge in popularity, yet
most works have focused on fact-finding extractive QA. We instead focus on a
more challenging multi-hop generative task (NarrativeQA), which requires the
model to reason, gather, and synthesize disjoint pieces of information within
the context to generate an answer. This type of multi-step reasoning also often
requires understanding implicit relations, which humans resolve via external,
background commonsense knowledge. We first present a strong generative baseline
that uses a multi-attention mechanism to perform multiple hops of reasoning and
a pointer-generator decoder to synthesize the answer. This model performs
substantially better than previous generative models, and is competitive with
current state-of-the-art span prediction models. We next introduce a novel
system for selecting grounded multi-hop relational commonsense information from
ConceptNet via a pointwise mutual information and term-frequency based scoring
function. Finally, we effectively use this extracted commonsense information to
fill in gaps of reasoning between context hops, using a selectively-gated
attention mechanism. This boosts the model's performance significantly (also
verified via human evaluation), establishing a new state-of-the-art for the
task. We also show promising initial results of the generalizability of our
background knowledge enhancements by demonstrating some improvement on
QAngaroo-WikiHop, another multi-hop reasoning dataset.
| 2,019 | Computation and Language |
DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep
Learning | Misinformation such as fake news is one of the big challenges of our society.
Research on automated fact-checking has proposed methods based on supervised
learning, but these approaches do not consider external evidence apart from
labeled training instances. Recent approaches counter this deficit by
considering external sources related to a claim. However, these methods require
substantial feature modeling and rich lexicons. This paper overcomes these
limitations of prior work with an end-to-end model for evidence-aware
credibility assessment of arbitrary textual claims, without any human
intervention. It presents a neural network model that judiciously aggregates
signals from external evidence articles, the language of these articles and the
trustworthiness of their sources. It also derives informative features for
generating user-comprehensible explanations that make the neural network
predictions transparent to the end-user. Experiments with four datasets and
ablation studies show the strength of our method.
| 2,018 | Computation and Language |
Robust Spoken Language Understanding via Paraphrasing | Learning intents and slot labels from user utterances is a fundamental step
in all spoken language understanding (SLU) and dialog systems. State-of-the-art
neural network based methods, after deployment, often suffer from performance
degradation on encountering paraphrased utterances and out-of-vocabulary
words rarely observed in their training set. We address this challenging
problem by introducing a novel paraphrasing-based SLU model which can be
integrated with any existing SLU model in order to improve its overall
performance. We propose two new paraphrase generators using RNN and
sequence-to-sequence based neural networks, which are suitable for our
application. Our experiments on existing benchmark and in house datasets
demonstrate the robustness of our models to rare and complex paraphrased
utterances, even under adversarial test distributions.
| 2,018 | Computation and Language |
Analysis of Bag-of-n-grams Representation's Properties Based on Textual
Reconstruction | Despite its simplicity, bag-of-n-grams sentence representation has been
found to excel in some NLP tasks. However, it has not received much attention
in recent years and further analysis of its properties is necessary. We
propose a framework to investigate the amount and type of information captured
in a general-purpose bag-of-n-grams sentence representation. We first use
sentence reconstruction as a tool to obtain a bag-of-n-grams representation
that contains general information about the sentence. We then run prediction tasks
(sentence length, word content, phrase content and word order) using the
obtained representation to look into the specific type of information captured
in the representation. Our analysis demonstrates that bag-of-n-grams
representation does contain sentence-structure-level information. However,
incorporating n-grams with higher order n empirically helps little with
encoding more information in general, except for phrase content information.
| 2,018 | Computation and Language |
User Information Augmented Semantic Frame Parsing using Coarse-to-Fine
Neural Networks | Semantic frame parsing is a crucial component in spoken language
understanding (SLU) to build spoken dialog systems. It has two main tasks:
intent detection and slot filling. Although state-of-the-art approaches showed
good results, they require large annotated training data and long training
time. In this paper, we aim to alleviate these drawbacks for semantic frame
parsing by utilizing the ubiquitous user information. We design a novel
coarse-to-fine deep neural network model to incorporate prior knowledge of user
information intermediately in order to train a semantic frame parser better and faster.
Due to the lack of a benchmark dataset with real user information, we synthesize
the simplest type of user information (location and time) on ATIS benchmark
data. The results show that our approach leverages such simple user information
to outperform state-of-the-art approaches by 0.25% for intent detection and
0.31% for slot filling using standard training data. When using smaller
training data, the performance improvement on intent detection and slot filling
reaches up to 1.35% and 1.20% respectively. We also show that our approach can
achieve similar performance to state-of-the-art approaches by using less than
80% of the annotated training data. Moreover, the training time to achieve similar
performance is also reduced by over 60%.
| 2,018 | Computation and Language |
Learning Universal Sentence Representations with Mean-Max Attention
Autoencoder | In order to learn universal sentence representations, previous methods focus
on complex recurrent neural networks or supervised learning. In this paper, we
propose a mean-max attention autoencoder (mean-max AAE) within the
encoder-decoder framework. Our autoencoder relies entirely on the MultiHead
self-attention mechanism to reconstruct the input sequence. In the encoding we
propose a mean-max strategy that applies both mean and max pooling operations
over the hidden vectors to capture diverse information of the input. To enable
the information to steer the reconstruction process dynamically, the decoder
performs attention over the mean-max representation. By training our model on a
large collection of unlabelled data, we obtain high-quality representations of
sentences. Experimental results on a broad range of 10 transfer tasks
demonstrate that our model outperforms the state-of-the-art unsupervised single
methods, including the classical skip-thoughts and the advanced
skip-thoughts+LN model. Furthermore, compared with the traditional recurrent
neural network, our mean-max AAE greatly reduces the training time.
| 2,018 | Computation and Language |
Talking to myself: self-dialogues as data for conversational agents | Conversational agents are gaining popularity with the increasing ubiquity of
smart devices. However, training agents in a data-driven manner is challenging
due to a lack of suitable corpora. This paper presents a novel method for
gathering topical, unstructured conversational data in an efficient way:
self-dialogues through crowd-sourcing. Alongside this paper, we include a
corpus of 3.6 million words across 23 topics. We argue the utility of the
corpus by comparing self-dialogues with standard two-party conversations as
well as data from other corpora.
| 2,018 | Computation and Language |
Bidirectional Attentional Encoder-Decoder Model and Bidirectional Beam
Search for Abstractive Summarization | Sequence generative models with RNN variants, such as LSTM, GRU, show
promising performance on abstractive document summarization. However, they
still have some issues that limit their performance, especially while dealing
with long sequences. One of the issues is that, to the best of our knowledge,
all current models employ a unidirectional decoder, which reasons only about
the past and is still limited in retaining future context when making a prediction.
This causes these models to generate unbalanced outputs.
Moreover, unidirectional attention-based document summarization can only
capture partial aspects of attentional regularities due to the inherent
challenges in document summarization. To this end, we propose an end-to-end
trainable bidirectional RNN model to tackle the aforementioned issues. The
model has a bidirectional encoder-decoder architecture; in which the encoder
and the decoder are bidirectional LSTMs. The forward decoder is initialized
with the last hidden state of the backward encoder while the backward decoder
is initialized with the last hidden state of the forward encoder. In addition,
a bidirectional beam search mechanism is proposed as an approximate inference
algorithm for generating the output summaries from the bidirectional model.
This enables the model to reason about the past and future and to generate
balanced outputs as a result. Experimental results on CNN / Daily Mail dataset
show that the proposed model outperforms the current abstractive
state-of-the-art models by a considerable margin.
| 2,018 | Computation and Language |
RumourEval 2019: Determining Rumour Veracity and Support for Rumours | This is the proposal for RumourEval-2019, which will run in early 2019 as
part of that year's SemEval event. Since the first RumourEval shared task in
2017, interest in automated claim validation has greatly increased, as the
dangers of "fake news" have become a mainstream concern. Yet automated support
for rumour checking remains in its infancy. For this reason, it is important
that a shared task in this area continues to provide a focus for effort, which
is likely to increase. We therefore propose a continuation in which the
veracity of further rumours is determined and, as previously, in support of
this goal, tweets discussing them are classified according to the stance they
take regarding the rumour. Scope is extended compared with the first
RumourEval, in that the dataset is substantially expanded to include Reddit as
well as Twitter data, and additional languages are also included.
| 2,018 | Computation and Language |
Document Informed Neural Autoregressive Topic Models with Distributional
Prior | We address two challenges in topic models: (1) Context information around
words helps in determining their actual meaning, e.g., "networks" used in the
contexts "artificial neural networks" vs. "biological neuron networks".
Generative topic models infer topic-word distributions, taking no or only
little context into account. Here, we extend a neural autoregressive topic
model to exploit the full context information around words in a document in a
language modeling fashion. The proposed model is named iDocNADE. (2) Due to
the small number of word occurrences (i.e., lack of context) in short text and
data sparsity in a corpus of few documents, the application of topic models is
challenging on such texts. Therefore, we propose a simple and efficient way of
incorporating external knowledge into neural autoregressive topic models: we
use embeddings as a distributional prior. The proposed variants are named
DocNADEe and iDocNADEe.
We present novel neural autoregressive topic model variants that consistently
outperform state-of-the-art generative topic models in terms of generalization,
interpretability (topic coherence) and applicability (retrieval and
classification) over 7 long-text and 8 short-text datasets from diverse
domains.
| 2,019 | Computation and Language |
Transfer and Multi-Task Learning for Noun-Noun Compound Interpretation | In this paper, we empirically evaluate the utility of transfer and multi-task
learning on a challenging semantic classification task: semantic interpretation
of noun--noun compounds. Through a comprehensive series of experiments and
in-depth error analysis, we show that transfer learning via parameter
initialization and multi-task learning via parameter sharing can help a neural
classification model generalize over a highly skewed distribution of relations.
Further, we demonstrate how dual annotation with two distinct sets of relations
over the same set of compounds can be exploited to improve the overall accuracy
of a neural classifier and its F1 scores on the less frequent, but more
difficult relations.
| 2,018 | Computation and Language |
FRAGE: Frequency-Agnostic Word Representation | Continuous word representation (aka word embedding) is a basic building block
in many neural network-based models used in natural language processing tasks.
Although it is widely accepted that words with similar semantics should be
close to each other in the embedding space, we find that word embeddings
learned in several tasks are biased towards word frequency: the embeddings of
high-frequency and low-frequency words lie in different subregions of the
embedding space, and the embedding of a rare word and a popular word can be far
from each other even if they are semantically similar. This makes learned word
embeddings ineffective, especially for rare words, and consequently limits the
performance of these neural network models. In this paper, we develop a neat,
simple yet effective way to learn \emph{FRequency-AGnostic word Embedding}
(FRAGE) using adversarial training. We conducted comprehensive studies on ten
datasets across four natural language processing tasks, including word
similarity, language modeling, machine translation and text classification.
Results show that with FRAGE, we achieve higher performance than the baselines
in all tasks.
| 2,020 | Computation and Language |
Better Conversations by Modeling, Filtering, and Optimizing for Coherence
and Diversity | We present three enhancements to existing encoder-decoder models for
open-domain conversational agents, aimed at effectively modeling coherence and
promoting output diversity: (1) We introduce a measure of coherence as the
GloVe embedding similarity between the dialogue context and the generated
response, (2) we filter our training corpora based on the measure of coherence
to obtain topically coherent and lexically diverse context-response pairs, (3)
we then train a response generator using a conditional variational autoencoder
model that incorporates the measure of coherence as a latent variable and uses
a context gate to guarantee topical consistency with the context and promote
lexical diversity. Experiments on the OpenSubtitles corpus show a substantial
improvement over competitive neural models in terms of BLEU score as well as
metrics of coherence and diversity.
| 2,018 | Computation and Language |
Improving Moderation of Online Discussions via Interpretable Neural
Models | The growing number of comments makes online discussions difficult to moderate by
human moderators alone. Antisocial behavior is a common occurrence that often
discourages other users from participating in the discussion. We propose a neural
network based method that partially automates the moderation process. It
consists of two steps. First, we detect inappropriate comments for moderators
to see. Second, we highlight inappropriate parts within these comments to make
the moderation faster. We evaluated our method on data from a major Slovak news
discussion platform.
| 2,018 | Computation and Language |
Mind Your POV: Convergence of Articles and Editors Towards Wikipedia's
Neutrality Norm | Wikipedia has a strong norm of writing in a 'neutral point of view' (NPOV).
Articles that violate this norm are tagged, and editors are encouraged to make
corrections. But the impact of this tagging system has not been quantitatively
measured. Does NPOV tagging help articles to converge to the desired style? Do
NPOV corrections encourage editors to adopt this style? We study these
questions using a corpus of NPOV-tagged articles and a set of lexicons
associated with biased language. An interrupted time series analysis shows that
after an article is tagged for NPOV, there is a significant decrease in biased
language in the article, as measured by several lexicons. However, for
individual editors, NPOV corrections and talk page discussions yield no
significant change in the usage of words in most of these lexicons, including
Wikipedia's own list of 'words to watch.' This suggests that NPOV tagging and
discussion does improve content, but has less success enculturating editors to
the site's linguistic norms.
| 2,018 | Computation and Language |
Multi-task Learning with Sample Re-weighting for Machine Reading
Comprehension | We propose a multi-task learning framework to learn a joint Machine Reading
Comprehension (MRC) model that can be applied to a wide range of MRC tasks in
different domains. Inspired by recent ideas of data selection in machine
translation, we develop a novel sample re-weighting scheme to assign
sample-specific weights to the loss. Empirical study shows that our approach
can be applied to many existing MRC models. Combined with contextual
representations from pre-trained language models (such as ELMo), we achieve new
state-of-the-art results on a set of MRC benchmark datasets. We release our
code at https://github.com/xycforgithub/MultiTask-MRC.
| 2,019 | Computation and Language |
NICT's Neural and Statistical Machine Translation Systems for the WMT18
News Translation Task | This paper presents NICT's participation in the WMT18 shared news
translation task. We participated in the eight translation directions of four
language pairs: Estonian-English, Finnish-English, Turkish-English and
Chinese-English. For each translation direction, we prepared state-of-the-art
statistical (SMT) and neural (NMT) machine translation systems. Our NMT systems
were trained with the transformer architecture using the provided parallel data
enlarged with a large quantity of back-translated monolingual data that we
generated with a new incremental training framework. Our primary submissions to
the task are the result of a simple combination of our SMT and NMT systems. Our
systems are ranked first for the Estonian-English and Finnish-English language
pairs (constraint) according to BLEU-cased.
| 2,018 | Computation and Language |
NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering
Task | This paper presents NICT's participation in the WMT18 shared parallel
corpus filtering task. The organizers provided a 1-billion-word German-English
corpus crawled from the web as part of the Paracrawl project. This corpus is
too noisy to build an acceptable neural machine translation (NMT) system. Using
the clean data of the WMT18 shared news translation task, we designed several
features and trained a classifier to score each sentence pair in the noisy
data. Finally, we sampled 100 million and 10 million words and built
corresponding NMT systems. Empirical results show that our NMT systems trained
on sampled data achieve promising performance.
| 2,018 | Computation and Language |
Latent Topic Conversational Models | Latent variable models have been a preferred choice in conversational
modeling compared to sequence-to-sequence (seq2seq) models, which tend to
generate generic and repetitive responses. Even so, training latent variable
models remains difficult. In this paper, we propose Latent Topic
Conversational Model (LTCM) which augments seq2seq with a neural latent topic
component to better guide response generation and make training easier. The
neural topic component encodes information from the source sentence to build a
global "topic" distribution over words, which is then consulted by the seq2seq
model at each generation step. We study in detail how the latent
representation is learnt in both the vanilla model and LTCM. Our extensive
experiments contribute to better understanding and training of conditional
latent models for languages. Our results show that by sampling from the learnt
latent representations, LTCM can generate diverse and interesting responses. In
a subjective human evaluation, the judges also confirm that LTCM is the overall
preferred option.
| 2,018 | Computation and Language |
String Transduction with Target Language Models and Insertion Handling | Many character-level tasks can be framed as sequence-to-sequence
transduction, where the target is a word from a natural language. We show that
leveraging target language models derived from unannotated target corpora,
combined with a precise alignment of the training data, yields state-of-the-art
results on cognate projection, inflection generation, and phoneme-to-grapheme
conversion.
| 2,018 | Computation and Language |
Unsupervised cross-lingual matching of product classifications | Unsupervised cross-lingual embeddings mapping has provided a unique tool for
completely unsupervised translation even for languages with different scripts.
In this work we use this method for the task of unsupervised cross-lingual
matching of product classifications. Our work also investigates limitations of
unsupervised vector alignment, and we suggest two other techniques for
aligning product classifications based on their descriptions: using
hierarchical information and translations.
| 2,018 | Computation and Language |
Interpretable Textual Neuron Representations for NLP | Input optimization methods, such as Google Deep Dream, create interpretable
representations of neurons for computer vision DNNs. We propose and evaluate
ways of transferring this technology to NLP. Our results suggest that gradient
ascent with a Gumbel softmax layer produces n-gram representations that
outperform naive corpus search in terms of target neuron activation. The
representations highlight differences in syntax awareness between the language
and visual models of the Imaginet architecture.
| 2,018 | Computation and Language |
A Dataset for Document Grounded Conversations | This paper introduces a document grounded dataset for text conversations. We
define "Document Grounded Conversations" as conversations that are about the
contents of a specified document. In this dataset the specified documents were
Wikipedia articles about popular movies. The dataset contains 4112
conversations with an average of 21.43 turns per conversation. This positions
this dataset to not only provide a relevant chat history while generating
responses but also provide a source of information that the models could use.
We describe two neural architectures that provide benchmark performance on the
task of generating the next response. We also evaluate our models for
engagement and fluency, and find that the information from the document helps
in generating more engaging and fluent responses.
| 2,018 | Computation and Language |
Building Context-aware Clause Representations for Situation Entity Type
Classification | Capabilities to categorize a clause based on the type of situation entity
(e.g., events, states and generic statements) the clause introduces to the
discourse can benefit many NLP applications. Observing that the situation
entity type of a clause depends on discourse functions the clause plays in a
paragraph and the interpretation of discourse functions depends heavily on
paragraph-wide contexts, we propose to build context-aware clause
representations for predicting situation entity types of clauses. Specifically,
we propose a hierarchical recurrent neural network model to read a whole
paragraph at a time and jointly learn representations for all the clauses in
the paragraph by extensively modeling context influences and inter-dependencies
of clauses. Experimental results show that our model achieves the
state-of-the-art performance for clause-level situation entity classification
on the genre-rich MASC+Wiki corpus, which approaches human-level performance.
| 2,018 | Computation and Language |
A Quantitative Evaluation of Natural Language Question Interpretation
for Question Answering Systems | Systematic benchmark evaluation plays an important role in the process of
improving technologies for Question Answering (QA) systems. While currently
there are a number of existing evaluation methods for natural language (NL) QA
systems, most of them consider only the final answers, limiting their utility
within a black box style evaluation. Herein, we propose a subdivided evaluation
approach to enable finer-grained evaluation of QA systems, and present an
evaluation tool which targets the NL question (NLQ) interpretation step, an
initial step of a QA pipeline. The results of experiments using two public
benchmark datasets suggest that we can get deeper insight into the
performance of a QA system using the proposed approach, which should provide
better guidance for improving the systems, than using black-box style
approaches.
| 2,018 | Computation and Language |
Challenges for Toxic Comment Classification: An In-Depth Error Analysis | Toxic comment classification has become an active research field with many
recently proposed approaches. However, while these approaches address some of
the task's challenges, others still remain unsolved, and directions for further
research are needed. To this end, we compare different deep learning and
shallow approaches on a new, large comment dataset and propose an ensemble that
outperforms all individual models. Further, we validate our findings on a
second dataset. The results of the ensemble enable us to perform an extensive
error analysis, which reveals open challenges for state-of-the-art methods and
directions for future research. These challenges include missing
paradigmatic context and inconsistent dataset labels.
| 2,018 | Computation and Language |
Lessons learned in multilingual grounded language learning | Recent work has shown how to learn better visual-semantic embeddings by
leveraging image descriptions in more than one language. Here, we investigate
in detail which conditions affect the performance of this type of grounded
language learning model. We show that multilingual training improves over
bilingual training, and that low-resource languages benefit from training with
higher-resource languages. We demonstrate that a multilingual model can be
trained equally well on either translations or comparable sentence pairs, and
that annotating the same set of images in multiple languages enables further
improvements via an additional caption-caption ranking objective.
| 2,018 | Computation and Language |
Investigating Linguistic Pattern Ordering in Hierarchical Natural
Language Generation | Natural language generation (NLG) is a critical component of spoken dialogue
systems and can be divided into two phases: (1) sentence planning: deciding
the overall sentence structure, (2) surface realization: determining specific
word forms and flattening the sentence structure into a string. With the rise
of deep learning, most modern NLG models are based on a sequence-to-sequence
(seq2seq) model, which basically contains an encoder-decoder structure; these
NLG models generate sentences from scratch by jointly optimizing sentence
planning and surface realization. However, such a simple encoder-decoder
architecture usually fails to generate complex and long sentences, because the
decoder has difficulty learning all grammar and diction knowledge well. This
paper introduces an NLG model with a hierarchical attentional decoder, where
the hierarchy focuses on leveraging linguistic knowledge in a specific order.
The experiments show that the proposed method significantly outperforms the
traditional seq2seq model with a smaller model size, and the design of the
hierarchical attentional decoder can be applied to various NLG systems.
Furthermore, different generation strategies based on linguistic patterns are
investigated and analyzed in order to guide future NLG research work.
| 2,018 | Computation and Language |
Joint Multilingual Supervision for Cross-lingual Entity Linking | Cross-lingual Entity Linking (XEL) aims to ground entity mentions written in
any language to an English Knowledge Base (KB), such as Wikipedia. XEL for most
languages is challenging, owing to limited availability of resources as
supervision. We address this challenge by developing the first XEL approach
that combines supervision from multiple languages jointly. This enables our
approach to: (a) augment the limited supervision in the target language with
additional supervision from a high-resource language (like English), and (b)
train a single entity linking model for multiple languages, improving upon
individually trained models for each language. Extensive evaluation on three
benchmark datasets across 8 languages shows that our approach significantly
improves over the current state-of-the-art. We also provide analyses in two
limited resource settings: (a) zero-shot setting, when no supervision in the
target language is available, and in (b) low-resource setting, when some
supervision in the target language is available. Our analysis provides insights
into the limitations of zero-shot XEL approaches in realistic scenarios, and
shows the value of joint supervision in low-resource settings.
| 2,018 | Computation and Language |
Symbolic Priors for RNN-based Semantic Parsing | Seq2seq models based on Recurrent Neural Networks (RNNs) have recently
received a lot of attention in the domain of Semantic Parsing for Question
Answering. While in principle they can be trained directly on pairs (natural
language utterances, logical forms), their performance is limited by the amount
of available data. To alleviate this problem, we propose to exploit various
sources of prior knowledge: the well-formedness of the logical forms is modeled
by a weighted context-free grammar; the likelihood that certain entities
present in the input utterance are also present in the logical form is modeled
by weighted finite-state automata. The grammar and automata are combined
together through an efficient intersection algorithm to form a soft guide
("background") to the RNN. We test our method on an extension of the Overnight
dataset and show that it not only strongly improves over an RNN baseline, but
also outperforms non-RNN models based on rich sets of hand-crafted features.
| 2,018 | Computation and Language |
Rapid Customization for Event Extraction | We present a system for rapidly customizing event extraction capability to
find new event types and their arguments. The system allows a user to find,
expand and filter event triggers for a new event type by exploring an
unannotated corpus. The system will then automatically generate mention-level
event annotations and train a neural network model for finding
the corresponding event. Additionally, the system uses the ACE corpus to train
an argument model for extracting Actor, Place, and Time arguments for any event
types, including ones not seen in its training data. Experiments show that with
less than 10 minutes of human effort per event type, the system achieves good
performance for 67 novel event types. The code, documentation, and a
demonstration video will be released as open source on github.com.
| 2,018 | Computation and Language |
Bootstrapping Transliteration with Constrained Discovery for
Low-Resource Languages | Generating the English transliteration of a name written in a foreign script
is an important and challenging step in multilingual knowledge acquisition and
information extraction. Existing approaches to transliteration generation
require a large (>5000) number of training examples. This difficulty contrasts
with transliteration discovery, a somewhat easier task that involves picking a
plausible transliteration from a given list. In this work, we present a
bootstrapping algorithm that uses constrained discovery to improve generation,
and can be used with as few as 500 training examples, which we show can be
sourced from annotators in a matter of hours. This opens the task to languages
for which large numbers of training examples are unavailable. We evaluate
transliteration generation performance itself, as well as the improvement it
brings to cross-lingual candidate generation for entity linking, a typical
downstream task. We present a comprehensive evaluation of our approach on nine
languages, each written in a unique script.
| 2,018 | Computation and Language |
LSTM-based Whisper Detection | This article presents a whisper speech detector in the far-field domain. The
proposed system consists of a long-short term memory (LSTM) neural network
trained on log-filterbank energy (LFBE) acoustic features. This model is
trained and evaluated on recordings of human interactions with
voice-controlled, far-field devices in whisper and normal phonation modes. We
compare multiple inference approaches for utterance-level classification by
examining trajectories of the LSTM posteriors. In addition, we engineer a set
of features based on the signal characteristics inherent to whisper speech, and
evaluate their effectiveness in further separating whisper from normal speech.
A benchmarking of these features using multilayer perceptrons (MLP) and LSTMs
suggests that the proposed features, in combination with LFBE features, can
help us further improve our classifiers. We prove that, with enough data, the
LSTM model is indeed as capable of learning whisper characteristics from LFBE
features alone as a simpler MLP model that uses both LFBE and features
engineered for separating whisper and normal speech. In addition, we prove that
the LSTM classifier's accuracy can be further improved with the incorporation of
the proposed engineered features.
| 2,020 | Computation and Language |
On Folding and Twisting (and whatknot): towards a characterization of
workspaces in syntax | Syntactic theory has traditionally adopted a constructivist approach, in
which a set of atomic elements are manipulated by combinatory operations to
yield derived, complex elements. Syntactic structure is thus seen as the result
or discrete recursive combinatorics over lexical items which get assembled into
phrases, which are themselves combined to form sentences. This view is common
to European and American structuralism (e.g., Benveniste, 1971; Hockett, 1958)
and different incarnations of generative grammar, transformational and
non-transformational (Chomsky, 1956, 1995; and Kaplan & Bresnan, 1982; Gazdar,
1982). Since at least Uriagereka (2002), there has been some attention paid to
the fact that syntactic operations must apply somewhere, particularly when
copying and movement operations are considered. Contemporary syntactic theory
has thus somewhat acknowledged the importance of formalizing aspects of the
spaces in which elements are manipulated, but it is still a vastly
underexplored area. In this paper we explore the consequences of
conceptualizing syntax as a set of topological operations applying over spaces
rather than over discrete elements. We argue that there are empirical
advantages in such a view for the treatment of long-distance dependencies and
cross-derivational dependencies: constraints on possible configurations emerge
from the dynamics of the system.
| 2,019 | Computation and Language |
Predicting the Argumenthood of English Prepositional Phrases | Distinguishing between arguments and adjuncts of a verb is a longstanding,
nontrivial problem. In natural language processing, argumenthood information is
important in tasks such as semantic role labeling (SRL) and prepositional
phrase (PP) attachment disambiguation. In theoretical linguistics, many
diagnostic tests for argumenthood exist but they often yield conflicting and
potentially gradient results. This is especially the case for syntactically
oblique items such as PPs. We propose two PP argumenthood prediction tasks
branching from these two motivations: (1) binary argument-adjunct
classification of PPs in VerbNet, and (2) gradient argumenthood prediction
using human judgments as gold standard, and report results from prediction
models that use pretrained word embeddings and other linguistically informed
features. Our best results on each task are (1) $acc.=0.955$, $F_1=0.954$
(ELMo+BiLSTM) and (2) Pearson's $r=0.624$ (word2vec+MLP). Furthermore, we
demonstrate the utility of argumenthood prediction in improving sentence
representations via performance gains on SRL when a sentence encoder is
pretrained with our tasks.
| 2,019 | Computation and Language |
CollaboNet: collaboration of deep neural networks for biomedical named
entity recognition | Background: Finding biomedical named entities is one of the most essential
tasks in biomedical text mining. Recently, deep learning-based approaches have
been applied to biomedical named entity recognition (BioNER) and showed
promising results. However, as deep learning approaches need an abundant amount
of training data, a lack of data can hinder performance. BioNER datasets are
scarce resources and each dataset covers only a small subset of entity types.
Furthermore, many bio entities are polysemous, which is one of the major
obstacles in named entity recognition. Results: To address the lack of data and
the entity type misclassification problem, we propose CollaboNet which utilizes
a combination of multiple NER models. In CollaboNet, models trained on
different datasets are connected to each other so that a target model obtains
information from other collaborator models to reduce false positives. Every
model is an expert on its target entity type and takes turns serving as a
target and a collaborator model during training time. The experimental results
show that CollaboNet can be used to greatly reduce the number of false
positives and misclassified entities including polysemous words. CollaboNet
achieved state-of-the-art performance in terms of precision, recall and F1
score. Conclusions: We demonstrated the benefits of combining multiple models
for BioNER. Our model has successfully reduced the number of misclassified
entities and improved the performance by leveraging multiple datasets annotated
for different entity types. Given the state-of-the-art performance of our
model, we believe that CollaboNet can improve the accuracy of downstream
biomedical text mining applications such as bio-entity relation extraction.
| 2,019 | Computation and Language |
Paraphrase Detection on Noisy Subtitles in Six Languages | We perform automatic paraphrase detection on subtitle data from the
Opusparcus corpus comprising six European languages: German, English, Finnish,
French, Russian, and Swedish. We train two types of supervised sentence
embedding models: a word-averaging (WA) model and a gated recurrent averaging
network (GRAN) model. We find that GRAN outperforms WA and is more robust
to noisy training data. Better results are obtained with more and noisier data
than less and cleaner data. Additionally, we experiment on other datasets,
without reaching the same level of performance, because of domain mismatch
between training and test data.
| 2,018 | Computation and Language |
Understanding Convolutional Neural Networks for Text Classification | We present an analysis into the inner workings of Convolutional Neural
Networks (CNNs) for processing text. CNNs used for computer vision can be
interpreted by projecting filters into image space, but for discrete sequence
inputs CNNs remain a mystery. We aim to understand the method by which the
networks process and classify text. We examine common hypotheses about this
problem: that filters, accompanied by global max-pooling, serve as ngram
detectors. We show that filters may capture several different semantic classes
of ngrams by using different activation patterns, and that global max-pooling
induces behavior which separates important ngrams from the rest. Finally, we
show practical use cases derived from our findings in the form of model
interpretability (explaining a trained model by deriving a concrete identity
for each filter, bridging the gap between visualization tools in vision tasks
and NLP) and prediction interpretability (explaining predictions). Code
implementation is available online at
github.com/sayaendo/interpreting-cnn-for-text.
| 2,020 | Computation and Language |
Predicting the Usefulness of Amazon Reviews Using Off-The-Shelf
Argumentation Mining | Internet users generate content at unprecedented rates. Building intelligent
systems capable of discriminating useful content within this ocean of
information is thus becoming an urgent need. In this paper, we aim to predict
the usefulness of Amazon reviews, and to do this we exploit features coming
from an off-the-shelf argumentation mining system. We argue that the usefulness
of a review, in fact, is strictly related to its argumentative content, whereas
the use of an already trained system avoids the costly need of relabeling a
novel dataset. Results obtained on a large publicly available corpus support
this hypothesis.
| 2,018 | Computation and Language |
Towards Automated Factchecking: Developing an Annotation Schema and
Benchmark for Consistent Automated Claim Detection | In an effort to assist factcheckers in the process of factchecking, we tackle
the claim detection task, one of the necessary stages prior to determining the
veracity of a claim. It consists of identifying the set of sentences, out of a
long text, deemed capable of being factchecked. This paper is a collaborative
work between Full Fact, an independent factchecking charity, and academic
partners. Leveraging the expertise of professional factcheckers, we develop an
annotation schema and a benchmark for automated claim detection that is more
consistent across time, topics and annotators than previous approaches. Our
annotation schema has been used to crowdsource the annotation of a dataset with
sentences from UK political TV shows. We introduce an approach based on
universal sentence representations to perform the classification, achieving an
F1 score of 0.83, with over 5% relative improvement over the state-of-the-art
methods ClaimBuster and ClaimRank. The system was deployed in production and
received positive user feedback.
| 2,020 | Computation and Language |
Towards Exploiting Background Knowledge for Building Conversation
Systems | Existing dialog datasets contain a sequence of utterances and responses
without any explicit background knowledge associated with them. This has
resulted in the development of models which treat conversation as a
sequence-to-sequence generation task (i.e., given a sequence of utterances,
generate the response sequence). This is not only an overly simplistic view of
conversation but it is also emphatically different from the way humans converse
by heavily relying on their background knowledge about the topic (as opposed to
simply relying on the previous sequence of utterances). For example, it is
common for humans to (involuntarily) produce utterances which are copied or
suitably modified from background articles they have read about the topic. To
facilitate the development of such natural conversation models which mimic the
human process of conversing, we create a new dataset containing movie chats
wherein each response is explicitly generated by copying and/or modifying
sentences from unstructured background knowledge such as plots, comments and
reviews about the movie. We establish baseline results on this dataset (90K
utterances from 9K conversations) using three different models: (i) pure
generation based models which ignore the background knowledge, (ii) generation
based models which learn to copy information from the background knowledge when
required, and (iii) span prediction based models which predict the appropriate
response span in the background knowledge.
| 2,018 | Computation and Language |
Neural Approaches to Conversational AI | The present paper surveys neural approaches to conversational AI that have
been developed in the last few years. We group conversational systems into
three categories: (1) question answering agents, (2) task-oriented dialogue
agents, and (3) chatbots. For each category, we present a review of
state-of-the-art neural approaches, draw the connection between them and
traditional approaches, and discuss the progress that has been made and
challenges still being faced, using specific systems and models as case
studies.
| 2,019 | Computation and Language |
Opacity, Obscurity, and the Geometry of Question-Asking | Asking questions is a pervasive human activity, but little is understood
about what makes them difficult to answer. An analysis of a pair of large
databases, of New York Times crosswords and questions from the quiz-show
Jeopardy, establishes two orthogonal dimensions of question difficulty:
obscurity (the rarity of the answer) and opacity (the indirectness of question
cues, operationalized with word2vec). The importance of opacity, and the role
of synergistic information in resolving it, suggests that accounts of
difficulty in terms of prior expectations capture only a part of the
question-asking process. A further regression analysis shows the presence of
additional dimensions to question-asking: question complexity, the answer's
local network density, cue intersection, and the presence of signal words. Our
work shows how question-askers can help their interlocutors by using contextual
cues, or, conversely, how a particular kind of unfamiliarity with the domain in
question can make it harder for individuals to learn from others. Taken
together, these results suggest how Bayesian models of question difficulty can
be supplemented by process models and accounts of the heuristics individuals
use to navigate conceptual spaces.
| 2,018 | Computation and Language |
How do you correct run-on sentences it's not as easy as it seems | Run-on sentences are common grammatical mistakes but little research has
tackled this problem to date. This work introduces two machine learning models
to correct run-on sentences that outperform leading methods for related tasks,
punctuation restoration and whole-sentence grammatical error correction. Due to
the limited annotated data for this error, we experiment with artificially
generating training data from clean newswire text. Our findings suggest
artificial training data is viable for this task. We discuss implications for
correcting run-ons and other types of mistakes that have low coverage in
error-annotated corpora.
| 2,018 | Computation and Language |
Semi-Supervised Sequence Modeling with Cross-View Training | Unsupervised representation learning algorithms such as word2vec and ELMo
improve the accuracy of many supervised NLP models, mainly because they can
take advantage of large amounts of unlabeled text. However, the supervised
models only learn from task-specific labeled data during the main training
phase. We therefore propose Cross-View Training (CVT), a semi-supervised
learning algorithm that improves the representations of a Bi-LSTM sentence
encoder using a mix of labeled and unlabeled data. On labeled examples,
standard supervised learning is used. On unlabeled examples, CVT teaches
auxiliary prediction modules that see restricted views of the input (e.g., only
part of a sentence) to match the predictions of the full model seeing the whole
input. Since the auxiliary modules and the full model share intermediate
representations, this in turn improves the full model. Moreover, we show that
CVT is particularly effective when combined with multi-task learning. We
evaluate CVT on five sequence tagging tasks, machine translation, and
dependency parsing, achieving state-of-the-art results.
| 2,018 | Computation and Language |
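The core CVT training signal on unlabeled data described above is a consistency loss: auxiliary prediction modules that only see part of the input are trained to match the full model's soft predictions. A minimal PyTorch-style sketch of that loss, with made-up tensor names standing in for the paper's Bi-LSTM encoder and view-restricted heads, is:

```python
import torch
import torch.nn.functional as F

def cvt_unlabeled_loss(full_logits, auxiliary_logits_list):
    """Cross-view consistency loss on an unlabeled batch.

    full_logits:           [batch, num_labels] from the full model (all views).
    auxiliary_logits_list: list of [batch, num_labels] tensors, one per
                           restricted-view auxiliary module.
    """
    # The full model's predictions act as soft targets and are not updated here.
    target = F.softmax(full_logits, dim=-1).detach()
    loss = 0.0
    for aux_logits in auxiliary_logits_list:
        # KL(target || aux): each restricted view is pushed toward the full view.
        loss = loss + F.kl_div(F.log_softmax(aux_logits, dim=-1),
                               target, reduction="batchmean")
    return loss / len(auxiliary_logits_list)

# Toy usage with random logits standing in for real model outputs.
full = torch.randn(4, 5)
aux = [torch.randn(4, 5), torch.randn(4, 5)]
print(cvt_unlabeled_loss(full, aux))
```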
A Byte-sized Approach to Named Entity Recognition | In biomedical literature, it is common for entity boundaries to not align
with word boundaries. Therefore, effective identification of entity spans
requires approaches capable of considering tokens that are smaller than words.
We introduce a novel, subword approach for named entity recognition (NER) that
uses byte-pair encodings (BPE) in combination with convolutional and recurrent
neural networks to produce byte-level tags of entities. We present experimental
results on several standard biomedical datasets, namely the BioCreative VI
Bio-ID, JNLPBA, and GENETAG datasets. We demonstrate competitive performance
while bypassing the specialized domain expertise needed to create biomedical
text tokenization rules.
| 2,018 | Computation and Language |
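The byte-pair encodings mentioned above are learned by repeatedly merging the most frequent adjacent symbol pair. A compact version of the classic merge-learning loop, operating on a toy word-frequency table rather than the biomedical corpora from the paper, is:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merges from a {word: count} table; words start as character sequences."""
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)          # most frequent adjacent pair
        merges.append(best)
        merged = {}
        for symbols, freq in vocab.items():       # apply the merge everywhere
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges

print(learn_bpe({"interleukin": 4, "interferon": 3, "insulin": 2}, num_merges=5))
```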
Relating Zipf's law to textual information | Zipf's law is the main regularity of quantitative linguistics. Despite the
many works devoted to the foundations of this law, it is still unclear whether it
is only a statistical regularity or whether it has deeper relations with
information-carrying structures of the text. This question relates to that of
distinguishing a meaningful text (written in an unknown system) from a
meaningless set of symbols that mimics statistical features of a text. Here we
contribute to resolving these questions by comparing features of the first half
of a text (from the beginning to the middle) to its second half. This
comparison can uncover hidden effects, because the halves have the same values
of many parameters (style, genre, author's vocabulary, etc.). In all
studied texts we saw that for the first half Zipf's law applies from smaller
ranks than in the second half, i.e. the law applies better to the first half.
Also, words that hold Zipf's law in the first half are distributed more
homogeneously over the text. These features make it possible to distinguish a
meaningful text from a random sequence of words. Our findings correlate with a
number of textual characteristics that hold in most cases we studied: the first
half is lexically richer, has longer and less repetitive words, more and
shorter sentences, more punctuation signs and more paragraphs. These
differences between the halves point to a higher hierarchical level of text
organization that has so far gone unnoticed in text linguistics. They relate the
validity of Zipf's law to textual information. A complete description of this
effect requires new models, though one existing model can account for some of
its aspects.
| 2,018 | Computation and Language |
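The half-vs-half comparison described above amounts to fitting Zipf's rank-frequency law, f(r) ∝ r^(−α), separately to each half of a text and checking how well and from which rank the fit holds. A rough, self-contained sketch of that fit, run on synthetic text rather than the real literary texts studied in the paper, is:

```python
import numpy as np
from collections import Counter

def zipf_exponent(words):
    """Fit log f = -alpha * log r + c to the rank-frequency curve and return alpha."""
    freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    alpha, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -alpha

# Synthetic "text": word i is drawn with probability proportional to 1/i.
rng = np.random.default_rng(0)
p = 1.0 / np.arange(1, 2001)
text = rng.choice(2000, size=20000, p=p / p.sum())

first, second = text[:10000], text[10000:]
print("alpha, first half :", round(zipf_exponent(first), 3))
print("alpha, second half:", round(zipf_exponent(second), 3))
```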
Towards Language Agnostic Universal Representations | When a bilingual student learns to solve word problems in math, we expect the
student to be able to solve these problems in both languages the student is
fluent in, even if the math lessons were only taught in one language. However,
current representations in machine learning are language dependent. In this
work, we present a method to decouple the language from the problem by learning
language agnostic representations, therefore allowing us to train a model in
one language and apply it to a different one in a zero-shot fashion. We learn
these representations by taking inspiration from linguistics and formalizing
Universal Grammar as an optimization process (Chomsky, 2014; Montague, 1970).
We demonstrate the capabilities of these representations by showing that the
models trained on a single language using language agnostic representations
achieve very similar accuracies in other languages.
| 2,018 | Computation and Language |
Learning and Evaluating Sparse Interpretable Sentence Embeddings | Previous research on word embeddings has shown that sparse representations,
which can be either learned on top of existing dense embeddings or obtained
through model constraints during training time, have the benefit of increased
interpretability properties: to some degree, each dimension can be understood
by a human and associated with a recognizable feature in the data. In this
paper, we transfer this idea to sentence embeddings and explore several
approaches to obtain a sparse representation. We further introduce a novel,
quantitative and automated evaluation metric for sentence embedding
interpretability, based on topic coherence methods. We observe an increase in
interpretability compared to dense models, on a dataset of movie dialogs and on
the scene descriptions from the MS COCO dataset.
| 2,018 | Computation and Language |
Detecting Hate Speech and Offensive Language on Twitter using Machine
Learning: An N-gram and TFIDF based Approach | Toxic online content has become a major issue in today's world due to an
exponential increase in the use of the internet by people of different cultures
and educational backgrounds. Differentiating hate speech and offensive language is a
key challenge in automatic detection of toxic text content. In this paper, we
propose an approach to automatically classify tweets on Twitter into three
classes: hateful, offensive and clean. Using a Twitter dataset, we perform
experiments considering n-grams as features and passing their term
frequency-inverse document frequency (TFIDF) values to multiple machine
learning models. We perform comparative analysis of the models considering
several values of n in n-grams and TFIDF normalization methods. After tuning
the model giving the best results, we achieve 95.6% accuracy upon evaluating it
on test data. We also create a module which serves as an intermediary between
the user and Twitter.
| 2,018 | Computation and Language |
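The n-gram + TFIDF pipeline described above maps directly onto standard scikit-learn components. A minimal sketch, with placeholder tweets instead of the actual Twitter dataset and logistic regression standing in for whichever tuned model performed best, is:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; the paper uses a labeled Twitter corpus with the
# classes hateful / offensive / clean.
tweets = ["you are wonderful", "I hate you so much", "what a lovely day",
          "get lost you idiot", "great game last night", "nobody likes your kind"]
labels = ["clean", "offensive", "clean", "offensive", "clean", "hateful"]

# Word n-grams (1-3) weighted by TFIDF, fed into a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)
print(model.predict(["have a lovely day", "I hate your kind"]))
```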
Mind Your Language: Abuse and Offense Detection for Code-Switched
Languages | In multilingual societies like the Indian subcontinent, the use of code-switched
languages is very popular and convenient for users. In this paper, we study
offense and abuse detection in the code-switched pair of Hindi and English
(i.e., Hinglish), the most widely spoken such pair. The task is made difficult
by the non-fixed grammar, vocabulary, semantics and spelling of the Hinglish
language. We apply transfer learning and build an LSTM-based model for hate
speech classification. This model surpasses the performance shown by the
current best models to establish itself as the state-of-the-art in the
unexplored domain of Hinglish offensive text classification. We also release our
model and the trained embeddings for research purposes.
| 2,018 | Computation and Language |
Textually Enriched Neural Module Networks for Visual Question Answering | Problems at the intersection of language and vision, like visual question
answering, have recently been gaining a lot of attention in the field of
multi-modal machine learning as computer vision research moves beyond
traditional recognition tasks. There has been recent success in visual question
answering using deep neural network models which use the linguistic structure
of the questions to dynamically instantiate network layouts. In the process of
converting the question to a network layout, the question is simplified, which
results in loss of information in the model. In this paper, we enrich the image
information with textual data using image captions and external knowledge bases
to generate more coherent answers. We achieve 57.1% overall accuracy on the
test-dev open-ended questions from the visual question answering (VQA 1.0) real
image dataset.
| 2,018 | Computation and Language |
Monolingual sentence matching for text simplification | This work improves monolingual sentence alignment for text simplification,
specifically for text in standard and simple Wikipedia. We introduce a
convolutional neural network structure to model similarity between two
sentences. Due to the limitation of available parallel corpora, the model is
trained in a semi-supervised way, by using the output of a knowledge-based high
performance aligning system. We apply the resulting similarity score to rescore
the knowledge-based output, and adapt the model with a small hand-aligned
dataset. Experiments show that both rescoring and adaptation improve the
performance of the knowledge-based method.
| 2,018 | Computation and Language |
Context-Aware Attention for Understanding Twitter Abuse | The original goal of any social media platform is to enable users to engage
in healthy and meaningful conversations. More often than not, however, such
platforms become an avenue for wanton attacks. To help alleviate this issue,
we provide a detailed analysis of how abusive behavior can be monitored on
Twitter. The complexity of natural language constructs makes this task
challenging. We show how applying contextual attention to Long Short-Term
Memory networks helps us achieve near state-of-the-art results on multiple
benchmark abuse detection datasets from Twitter.
| 2,019 | Computation and Language |
Deformable Stacked Structure for Named Entity Recognition | Neural architecture for named entity recognition has achieved great success
in the field of natural language processing. Currently, the dominating
architecture consists of a bi-directional recurrent neural network (RNN) as the
encoder and a conditional random field (CRF) as the decoder. In this paper, we
propose a deformable stacked structure for named entity recognition, in which
the connections between two adjacent layers are dynamically established. We
evaluate the deformable stacked structure by adapting it to different layers.
Our model achieves state-of-the-art performance on the OntoNotes dataset.
| 2,018 | Computation and Language |
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! | Motivated by recent findings on the probabilistic modeling of acceptability
judgments, we propose syntactic log-odds ratio (SLOR), a normalized language
model score, as a metric for referenceless fluency evaluation of natural
language generation output at the sentence level. We further introduce WPSLOR,
a novel WordPiece-based version, which harnesses a more compact language model.
Even though word-overlap metrics like ROUGE are computed with the help of
hand-written references, our referenceless methods obtain a significantly
higher correlation with human fluency scores on a benchmark dataset of
compressed sentences. Finally, we present ROUGE-LM, a reference-based metric
which is a natural extension of WPSLOR to the case of available references. We
show that ROUGE-LM yields a significantly higher correlation with human
judgments than all baseline metrics, including WPSLOR on its own.
| 2,018 | Computation and Language |
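SLOR as used above is a length-normalized log-odds ratio between the sentence probability under the language model and under a unigram model. Assuming the log probabilities are already available, the metric itself is only a few lines; the numbers below are illustrative, not from the paper:

```python
import math

def slor(lm_logprob, unigram_logprobs):
    """Syntactic log-odds ratio for one sentence.

    lm_logprob:       log p_LM(sentence) under the (word or WordPiece) language model.
    unigram_logprobs: list of log p_unigram(token), one entry per token.
    """
    n = len(unigram_logprobs)
    return (lm_logprob - sum(unigram_logprobs)) / n

# Toy numbers: a 5-token sentence the LM likes more than its unigrams alone would suggest.
print(slor(lm_logprob=-18.0, unigram_logprobs=[math.log(0.01)] * 5))
```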
Neural Transductive Learning and Beyond: Morphological Generation in the
Minimal-Resource Setting | Neural state-of-the-art sequence-to-sequence (seq2seq) models often do not
perform well for small training sets. We address paradigm completion, the
morphological task of, given a partial paradigm, generating all missing forms.
We propose two new methods for the minimal-resource setting: (i) Paradigm
transduction: Since we assume only few paradigms available for training, neural
seq2seq models are able to capture relationships between paradigm cells, but
are tied to the idiosyncrasies of the training set. Paradigm transduction
mitigates this problem by exploiting the input subset of inflected forms at
test time. (ii) Source selection with high precision (SHIP): Multi-source
models which learn to automatically select one or multiple sources to predict a
target inflection do not perform well in the minimal-resource setting. SHIP is
an alternative to identify a reliable source if training data is limited. On a
52-language benchmark dataset, we outperform the previous state of the art by
up to 9.71% absolute accuracy.
| 2,019 | Computation and Language |
Speaker Naming in Movies | We propose a new model for speaker naming in movies that leverages visual,
textual, and acoustic modalities in a unified optimization framework. To
evaluate the performance of our model, we introduce a new dataset consisting of
six episodes of the Big Bang Theory TV show and eighteen full movies covering
different genres. Our experiments show that our multimodal model significantly
outperforms several competitive baselines on the average weighted F-score
metric. To demonstrate the effectiveness of our framework, we design an
end-to-end memory network model that leverages our speaker naming model and
achieves state-of-the-art results on the subtitles task of the MovieQA 2017
Challenge.
| 2,018 | Computation and Language |
Chargrid: Towards Understanding 2D Documents | We introduce a novel type of text representation that preserves the 2D layout
of a document. This is achieved by encoding each document page as a
two-dimensional grid of characters. Based on this representation, we present a
generic document understanding pipeline for structured documents. This pipeline
makes use of a fully convolutional encoder-decoder network that predicts a
segmentation mask and bounding boxes. We demonstrate its capabilities on an
information extraction task from invoices and show that it significantly
outperforms approaches based on sequential text or document images.
| 2,018 | Computation and Language |
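The chargrid representation itself is easy to picture: each character box (e.g., from OCR) is rasterized into a 2D grid cell holding that character's index. A toy sketch of the encoding step, with made-up character boxes rather than real OCR output, is:

```python
import numpy as np

def build_chargrid(char_boxes, height, width, cell=10):
    """Encode (char, x, y) boxes as a 2D grid of character indices (0 = background)."""
    grid = np.zeros((height, width), dtype=np.int32)
    for ch, x, y in char_boxes:
        row, col = y // cell, x // cell
        if 0 <= row < height and 0 <= col < width:
            grid[row, col] = ord(ch)   # a real system would map chars to a small vocabulary
    return grid

# Made-up boxes for the word "INV" near the top-left of a page.
boxes = [("I", 12, 8), ("N", 22, 8), ("V", 32, 8)]
grid = build_chargrid(boxes, height=6, width=8)
print(grid)
```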
Information-Weighted Neural Cache Language Models for ASR | Neural cache language models (LMs) extend the idea of regular cache language
models by making the cache probability dependent on the similarity between the
current context and the context of the words in the cache. We make an extensive
comparison of 'regular' cache models with neural cache models, both in terms of
perplexity and WER after rescoring first-pass ASR results. Furthermore, we
propose two extensions to this neural cache model that make use of the content
value/information weight of the word: firstly, combining the cache probability
and LM probability with an information-weighted interpolation and secondly,
selectively adding only content words to the cache. We obtain a 29.9%/32.1%
(validation/test set) relative improvement in perplexity with respect to a
baseline LSTM LM on the WikiText-2 dataset, outperforming previous work on
neural cache LMs. Additionally, we observe significant WER reductions with
respect to the baseline model on the WSJ ASR task.
| 2,018 | Computation and Language |
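The information-weighted interpolation mentioned above combines the cache and LM probabilities with a per-word weight. One plausible reading, sketched below with toy numbers (the exact weighting scheme in the paper may differ), is a convex combination whose mixing weight grows with the word's information content:

```python
import math

def interpolate(p_lm, p_cache, unigram_prob, max_weight=0.5):
    """Information-weighted interpolation of LM and cache probabilities.

    Words with low unigram probability (high information content) lean more on
    the cache; this weighting is an illustrative assumption, not the paper's formula.
    """
    info = -math.log(unigram_prob)               # self-information of the word
    lam = max_weight * min(info / 10.0, 1.0)     # squash into [0, max_weight]
    return (1.0 - lam) * p_lm + lam * p_cache

# A rare content word trusts the cache more than a frequent function word does.
print(interpolate(p_lm=0.001, p_cache=0.05, unigram_prob=1e-5))   # content word
print(interpolate(p_lm=0.100, p_cache=0.01, unigram_prob=5e-2))   # function word
```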
Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain
Semantic Parsing and Text-to-SQL Task | We present Spider, a large-scale, complex and cross-domain semantic parsing
and text-to-SQL dataset annotated by 11 college students. It consists of 10,181
questions and 5,693 unique complex SQL queries on 200 databases with multiple
tables, covering 138 different domains. We define a new complex and
cross-domain semantic parsing and text-to-SQL task where different complex SQL
queries and databases appear in train and test sets. In this way, the task
requires the model to generalize well to both new SQL queries and new database
schemas. Spider is distinct from most of the previous semantic parsing tasks
because they all use a single database and the exact same programs in the train
set and the test set. We experiment with various state-of-the-art models and
the best model achieves only 12.4% exact matching accuracy on a database split
setting. This shows that Spider presents a strong challenge for future
research. Our dataset and task are publicly available at
https://yale-lily.github.io/spider
| 2,019 | Computation and Language |
Neural Speech Synthesis with Transformer Network | Although end-to-end neural text-to-speech (TTS) methods (such as Tacotron2)
have been proposed and achieve state-of-the-art performance, they still suffer from
two problems: 1) low efficiency during training and inference; 2) difficulty
modeling long-range dependencies with current recurrent neural networks (RNNs). Inspired by the
success of Transformer network in neural machine translation (NMT), in this
paper, we introduce and adapt the multi-head attention mechanism to replace the
RNN structures and also the original attention mechanism in Tacotron2. With the
help of multi-head self-attention, the hidden states in the encoder and decoder
are constructed in parallel, which improves the training efficiency. Meanwhile,
any two inputs at different times are connected directly by self-attention
mechanism, which solves the long range dependency problem effectively. Using
phoneme sequences as input, our Transformer TTS network generates mel
spectrograms, followed by a WaveNet vocoder to output the final audio results.
Experiments are conducted to test the efficiency and performance of our new
network. For the efficiency, our Transformer TTS network can speed up the
training about 4.25 times faster compared with Tacotron2. For the performance,
rigorous human tests show that our proposed model achieves state-of-the-art
performance (outperforms Tacotron2 with a gap of 0.048) and is very close to
human quality (4.39 vs 4.44 in MOS).
| 2,019 | Computation and Language |
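The multi-head self-attention that replaces the RNNs above is scaled dot-product attention computed in parallel over several projected subspaces. A bare numpy sketch of a single forward pass, with random weights and without masking or positional encodings, is:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    """x: [seq_len, d_model] -> [seq_len, d_model]; one attention layer with random weights."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    w_q, w_k, w_v, w_o = (rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(4))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)   # scaled dot-product
        heads.append(softmax(scores) @ v[:, s])          # every position attends to all others
    return np.concatenate(heads, axis=-1) @ w_o

rng = np.random.default_rng(0)
phoneme_states = rng.normal(size=(12, 64))               # toy encoder states
print(multi_head_self_attention(phoneme_states, num_heads=8, rng=rng).shape)
```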
Language Identification with Deep Bottleneck Features | In this paper we propose an end-to-end short-utterance spoken language
identification (SLD) approach based on a Long Short-Term Memory (LSTM) neural
network, which is especially suitable for SLD applications in intelligent vehicles.
Features used for LSTM learning are generated by a transfer learning method:
bottleneck features of a deep neural network (DNN) trained for Mandarin
acoustic-phonetic classification are used for LSTM training. In order to
improve SLD accuracy on short utterances, a phase-vocoder-based time-scale
modification (TSM) method is used to reduce and increase the speech rate of the
test utterance. By splicing the normal, rate-reduced and rate-increased
utterances, we extend the length of test utterances and thereby improve the
performance of the SLD system. Experimental results on the AP17-OLR database
show that the proposed methods improve SLD performance, especially on short
utterances of 1s and 3s duration.
| 2,020 | Computation and Language |
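The splicing trick described above, concatenating the original utterance with rate-reduced and rate-increased copies, can be approximated with an off-the-shelf phase-vocoder time stretch. A rough sketch using librosa (assuming a local placeholder file `utterance.wav`; the paper's exact TSM implementation may differ) is:

```python
import librosa
import numpy as np

# Load a short test utterance (placeholder path).
y, sr = librosa.load("utterance.wav", sr=16000)

# Phase-vocoder time-scale modification: slower (rate < 1) and faster (rate > 1) copies.
slow = librosa.effects.time_stretch(y, rate=0.8)
fast = librosa.effects.time_stretch(y, rate=1.2)

# Splice normal, rate-reduced and rate-increased versions to extend the utterance
# before feeding it to the LSTM language-identification front end.
extended = np.concatenate([y, slow, fast])
print(len(y) / sr, "s ->", len(extended) / sr, "s")
```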
Adversarial Training in Affective Computing and Sentiment Analysis:
Recent Advances and Perspectives | Over the past few years, adversarial training has become an extremely active
research topic and has been successfully applied to various Artificial
Intelligence (AI) domains. As a potentially crucial technique for the
development of the next generation of emotional AI systems, we herein provide a
comprehensive overview of the application of adversarial training to affective
computing and sentiment analysis. Various representative adversarial training
algorithms are explained and discussed accordingly, aimed at tackling diverse
challenges associated with emotional AI systems. Further, we highlight a range
of potential future research directions. We expect that this overview will help
facilitate the development of adversarial training for affective computing and
sentiment analysis in both the academic and industrial communities.
| 2,018 | Computation and Language |
Joint Multitask Learning for Community Question Answering Using
Task-Specific Embeddings | We address jointly two important tasks for Question Answering in community
forums: given a new question, (i) find related existing questions, and (ii)
find relevant answers to this new question. We further use an auxiliary task to
complement the previous two, i.e., (iii) find good answers with respect to the
thread question in a question-comment thread. We use deep neural networks
(DNNs) to learn meaningful task-specific embeddings, which we then incorporate
into a conditional random field (CRF) model for the multitask setting,
performing joint learning over a complex graph structure. While DNNs alone
achieve competitive results when trained to produce the embeddings, the CRF,
which makes use of the embeddings and the dependencies between the tasks,
improves the results significantly and consistently across a variety of
evaluation metrics, thus showing the complementarity of DNNs and structured
learning.
| 2,018 | Computation and Language |
Lexical Bias In Essay Level Prediction | Automatically predicting the level of non-native English speakers given their
written essays is an interesting machine learning problem. In this work I
present the system "balikasg" that achieved the state-of-the-art performance in
the CAp 2018 data science challenge among 14 systems. I detail the feature
extraction, feature engineering and model selection steps and I evaluate how
these decisions impact the system's performance. The paper concludes with
remarks for future work.
| 2,018 | Computation and Language |
WiRe57 : A Fine-Grained Benchmark for Open Information Extraction | We build a reference for the task of Open Information Extraction, on five
documents. We tentatively resolve a number of issues that arise, including
inference and granularity. We seek to better pinpoint the requirements for the
task. We produce our annotation guidelines specifying what is correct to
extract and what is not. In turn, we use this reference to score existing Open
IE systems. We address the non-trivial problem of evaluating the extractions
produced by systems against the reference tuples, and share our evaluation
script. Among seven compared extractors, we find the MinIE system to perform
best.
| 2,019 | Computation and Language |
Jointly Multiple Events Extraction via Attention-based Graph Information
Aggregation | Event extraction is of practical utility in natural language processing. In
the real world, multiple events commonly exist in the same sentence, and
extracting them is more difficult than extracting a single event. Previous
works that model the associations between events with sequential methods
suffer from low efficiency in capturing very long-range dependencies. In this
paper, we propose a novel Jointly
Multiple Events Extraction (JMEE) framework to jointly extract multiple event
triggers and arguments by introducing syntactic shortcut arcs to enhance
information flow and attention-based graph convolution networks to model graph
information. The experiment results demonstrate that our proposed framework
achieves competitive results compared with state-of-the-art methods.
| 2,022 | Computation and Language |
Stochastic Answer Networks for SQuAD 2.0 | This paper presents an extension of the Stochastic Answer Network (SAN), one
of the state-of-the-art machine reading comprehension models, to be able to
judge whether a question is unanswerable or not. The extended SAN contains two
components: a span detector and a binary classifier for judging whether the
question is unanswerable, and both components are jointly optimized.
Experiments show that SAN achieves results competitive with the state of the
art on the Stanford Question Answering Dataset (SQuAD) 2.0. To
facilitate research in this field, we release our code:
https://github.com/kevinduh/san_mrc.
| 2,018 | Computation and Language |
Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for
Language Generation | Mixture of Softmaxes (MoS) has been shown to be effective at addressing the
expressiveness limitation of Softmax-based models. Despite the known advantage,
MoS is limited in practice by its large consumption of memory and computational
time due to the need to compute multiple Softmaxes. In this work, we set out
to unleash the power of MoS in practical applications by investigating improved
word coding schemes, which could effectively reduce the vocabulary size and
hence relieve the memory and computation burden. We show both BPE and our
proposed Hybrid-LightRNN lead to improved encoding mechanisms that can halve
the time and memory consumption of MoS without performance losses. With MoS, we
achieve an improvement of 1.5 BLEU scores on IWSLT 2014 German-to-English
corpus and an improvement of 0.76 CIDEr score on image captioning. Moreover, on
the larger WMT 2014 machine translation dataset, our MoS-boosted Transformer
yields 29.5 BLEU score for English-to-German and 42.1 BLEU score for
English-to-French, outperforming the single-Softmax Transformer by 0.8 and 0.4
BLEU scores respectively and achieving the state-of-the-art result on WMT 2014
English-to-German task.
| 2,019 | Computation and Language |
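The MoS output layer referred to above replaces a single softmax with a weighted mixture of K softmaxes, each fed by its own projection of the hidden state. A small numpy sketch of the forward computation, with random weights and toy sizes, is:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mos_probs(h, proj, out, prior):
    """Mixture of Softmaxes: p(w|h) = sum_k pi_k(h) * softmax(tanh(h P_k) W)_w.

    h:     [d_hidden]        hidden state.
    proj:  [K, d_hidden, d]  per-component projections P_k.
    out:   [d, vocab]        shared output embedding W.
    prior: [d_hidden, K]     mixture-weight projection.
    """
    pi = softmax(h @ prior)                                            # [K] mixture weights
    comps = softmax(np.tanh(np.einsum("i,kij->kj", h, proj)) @ out)    # [K, vocab]
    return pi @ comps                                                  # [vocab], sums to 1

rng = np.random.default_rng(0)
K, d_hidden, d, vocab = 3, 16, 16, 100
p = mos_probs(rng.normal(size=d_hidden),
              rng.normal(scale=0.1, size=(K, d_hidden, d)),
              rng.normal(scale=0.1, size=(d, vocab)),
              rng.normal(scale=0.1, size=(d_hidden, K)))
print(p.shape, round(float(p.sum()), 6))
```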
ComQA: A Community-sourced Dataset for Complex Factoid Question
Answering with Paraphrase Clusters | To bridge the gap between the capabilities of the state-of-the-art in factoid
question answering (QA) and what users ask, we need large datasets of real user
questions that capture the various question phenomena users are interested in,
and the diverse ways in which these questions are formulated. We introduce
ComQA, a large dataset of real user questions that exhibit different
challenging aspects such as compositionality, temporal reasoning, and
comparisons. ComQA questions come from the WikiAnswers community QA platform,
which typically contains questions that are not satisfactorily answerable by
existing search engine technology. Through a large crowdsourcing effort, we
clean the question dataset, group questions into paraphrase clusters, and
annotate clusters with their answers. ComQA contains 11,214 questions grouped
into 4,834 paraphrase clusters. We detail the process of constructing ComQA,
including the measures taken to ensure its high quality while making effective
use of crowdsourcing. We also present an extensive analysis of the dataset and
the results achieved by state-of-the-art systems on ComQA, demonstrating that
our dataset can be a driver of future research on QA.
| 2,019 | Computation and Language |
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question
Answering | Existing question answering (QA) datasets fail to train QA systems to perform
complex reasoning and provide explanations for answers. We introduce HotpotQA,
a new dataset with 113k Wikipedia-based question-answer pairs with four key
features: (1) the questions require finding and reasoning over multiple
supporting documents to answer; (2) the questions are diverse and not
constrained to any pre-existing knowledge bases or knowledge schemas; (3) we
provide sentence-level supporting facts required for reasoning, allowing QA
systems to reason with strong supervision and explain the predictions; (4) we
offer a new type of factoid comparison questions to test QA systems' ability to
extract relevant facts and perform necessary comparison. We show that HotpotQA
is challenging for the latest QA systems, and the supporting facts enable
models to improve performance and make explainable predictions.
| 2,018 | Computation and Language |
A Re-ranker Scheme for Integrating Large Scale NLU models | Large scale Natural Language Understanding (NLU) systems are typically
trained on large quantities of data, requiring a fast and scalable training
strategy. A typical design for NLU systems consists of domain-level NLU modules
(domain classifier, intent classifier and named entity recognizer). Hypotheses
(NLU interpretations consisting of various intent+slot combinations) from these
domain specific modules are typically aggregated with another downstream
component. The re-ranker integrates outputs from domain-level recognizers,
returning a scored list of cross domain hypotheses. An ideal re-ranker will
exhibit the following two properties: (a) it should prefer the most relevant
hypothesis for the given input as the top hypothesis and, (b) the
interpretation scores corresponding to each hypothesis produced by the
re-ranker should be calibrated. Calibration allows the final NLU interpretation
score to be comparable across domains. We propose a novel re-ranker strategy
that addresses these aspects, while also maintaining domain specific
modularity. We design optimization loss functions for such a modularized
re-ranker and present results on decreasing the top hypothesis error rate as
well as maintaining the model calibration. We also experiment with an extension
involving training the domain specific re-rankers on datasets curated
independently by each domain to allow further asynchronization. The proposed
re-ranker design showcases the following: (i) improved NLU performance over an
unweighted aggregation strategy, (ii) cross-domain calibrated performance, and
(iii) support for use cases involving training each re-ranker on datasets
curated by each domain independently.
| 2,018 | Computation and Language |
Non-native children speech recognition through transfer learning | This work deals with non-native children's speech and investigates both
multi-task and transfer learning approaches to adapt a multi-language Deep
Neural Network (DNN) to speakers, specifically children, learning a foreign
language. The application scenario is characterized by young students learning
English and German and reading sentences in these second-languages, as well as
in their mother language. The paper analyzes and discusses techniques for
training effective DNN-based acoustic models starting from children's native
speech and performing adaptation with limited non-native audio material. A
multi-lingual model is adopted as baseline, where a common phonetic lexicon,
defined in terms of the units of the International Phonetic Alphabet (IPA), is
shared across the three languages at hand (Italian, German and English); DNN
adaptation methods based on transfer learning are evaluated on significant
non-native evaluation sets. Results show that the resulting non-native models
allow a significant improvement with respect to a mono-lingual system adapted
to speakers of the target language.
| 2,018 | Computation and Language |
BanditSum: Extractive Summarization as a Contextual Bandit | In this work, we propose a novel method for training neural networks to
perform single-document extractive summarization without
heuristically-generated extractive labels. We call our approach BanditSum as it
treats extractive summarization as a contextual bandit (CB) problem, where the
model receives a document to summarize (the context), and chooses a sequence of
sentences to include in the summary (the action). A policy gradient
reinforcement learning algorithm is used to train the model to select sequences
of sentences that maximize ROUGE score. We perform a series of experiments
demonstrating that BanditSum is able to achieve ROUGE scores that are better
than or comparable to the state-of-the-art for extractive summarization, and
converges using significantly fewer update steps than competing approaches. In
addition, we show empirically that BanditSum performs significantly better than
competing approaches when good summary sentences appear late in the source
document.
| 2,019 | Computation and Language |
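The contextual-bandit view above leads to a simple policy-gradient update: sample a set of sentence indices from the model's affinity scores, score the resulting summary, and reinforce the sampled choices. A stripped-down PyTorch sketch, with a stand-in reward function instead of a real ROUGE implementation, is:

```python
import torch

def banditsum_loss(sentence_scores, reward_fn, summary_size=3):
    """One REINFORCE-style update for extractive summarization as a bandit.

    sentence_scores: [num_sentences] unnormalized affinities from the model.
    reward_fn:       maps a list of selected indices to a scalar reward
                     (ROUGE against the reference in the actual system).
    """
    probs = torch.softmax(sentence_scores, dim=0)
    idx = torch.multinomial(probs, summary_size, replacement=False)   # sample an action
    reward = reward_fn(idx.tolist())                                  # e.g. mean ROUGE
    log_prob = torch.log(probs[idx]).sum()
    return -reward * log_prob                                         # policy-gradient loss

# Toy usage: pretend sentences 0-2 form the best summary.
scores = torch.randn(10, requires_grad=True)
loss = banditsum_loss(scores, reward_fn=lambda sel: sum(1.0 for i in sel if i < 3) / 3)
loss.backward()
print(float(loss), scores.grad.shape)
```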
Deep contextualized word representations for detecting sarcasm and irony | Predicting context-dependent and non-literal utterances like sarcastic and
ironic expressions still remains a challenging task in NLP, as it goes beyond
linguistic patterns, encompassing common sense and shared knowledge as crucial
components. To capture complex morpho-syntactic features that can usually serve
as indicators for irony or sarcasm across dynamic contexts, we propose a model
that uses character-level vector representations of words, based on ELMo. We
test our model on 7 different datasets derived from 3 different data sources,
providing state-of-the-art performance in 6 of them, and otherwise offering
competitive results.
| 2,018 | Computation and Language |
Language Modeling Teaches You More Syntax than Translation Does: Lessons
Learned Through Auxiliary Task Analysis | Recent work using auxiliary prediction task classifiers to investigate the
properties of LSTM representations has begun to shed light on why pretrained
representations, like ELMo (Peters et al., 2018) and CoVe (McCann et al.,
2017), are so beneficial for neural language understanding models. We still,
though, do not yet have a clear understanding of how the choice of pretraining
objective affects the type of linguistic information that models learn. With
this in mind, we compare four objectives---language modeling, translation,
skip-thought, and autoencoding---on their ability to induce syntactic and
part-of-speech information. We make a fair comparison between the tasks by
holding constant the quantity and genre of the training data, as well as the
LSTM architecture. We find that representations from language models
consistently perform best on our syntactic auxiliary prediction tasks, even
when trained on relatively small amounts of data. These results suggest that
language modeling may be the best data-rich pretraining task for transfer
learning applications requiring syntactic information. We also find that the
representations from randomly-initialized, frozen LSTMs perform strikingly well
on our syntactic auxiliary tasks, but this effect disappears when the amount of
training data for the auxiliary tasks is reduced.
| 2,018 | Computation and Language |
Graph Convolution over Pruned Dependency Trees Improves Relation
Extraction | Dependency trees help relation extraction models capture long-range relations
between words. However, existing dependency-based models either neglect crucial
information (e.g., negation) by pruning the dependency trees too aggressively,
or are computationally inefficient because it is difficult to parallelize over
different tree structures. We propose an extension of graph convolutional
networks that is tailored for relation extraction, which pools information over
arbitrary dependency structures efficiently in parallel. To incorporate
relevant information while maximally removing irrelevant content, we further
apply a novel pruning strategy to the input trees by keeping words immediately
around the shortest path between the two entities among which a relation might
hold. The resulting model achieves state-of-the-art performance on the
large-scale TACRED dataset, outperforming existing sequence and
dependency-based neural models. We also show through detailed analysis that
this model has complementary strengths to sequence models, and combining them
further improves the state of the art.
| 2,018 | Computation and Language |
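A single graph convolution layer over a dependency tree, as used above, is a normalized adjacency multiplication followed by a linear map and nonlinearity. A minimal numpy sketch, with random features and weights and a hand-specified toy tree, is:

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One graph convolution: H' = ReLU(D^-1 (A + I) H W), rows normalized by degree."""
    a = adj + np.eye(adj.shape[0])            # add self-loops
    a = a / a.sum(axis=1, keepdims=True)      # degree normalization
    return np.maximum(a @ h @ w, 0.0)

# Toy dependency tree over 5 tokens: edges (head, dependent), made symmetric.
edges = [(1, 0), (1, 2), (1, 4), (4, 3)]
adj = np.zeros((5, 5))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))                   # token representations
w = rng.normal(scale=0.5, size=(8, 8))
print(gcn_layer(adj, h, w).shape)
```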
Semantic Sentence Embeddings for Paraphrasing and Text Summarization | This paper introduces a sentence to vector encoding framework suitable for
advanced natural language processing. Our latent representation is shown to
encode sentences with common semantic information with similar vector
representations. The vector representation is extracted from an encoder-decoder
model which is trained on sentence paraphrase pairs. We demonstrate the
application of the sentence representations for two different tasks -- sentence
paraphrasing and paragraph summarization, making it attractive for commonly
used recurrent frameworks that process text. Experimental results provide
insight into how vector representations are suitable for advanced language
embedding.
| 2,018 | Computation and Language |
Adaptive Pruning of Neural Language Models for Mobile Devices | Neural language models (NLMs) exist in an accuracy-efficiency tradeoff space
where better perplexity typically comes at the cost of greater computation
complexity. In a software keyboard application on mobile devices, this
translates into higher power consumption and shorter battery life. This paper
represents the first attempt, to our knowledge, in exploring
accuracy-efficiency tradeoffs for NLMs. Building on quasi-recurrent neural
networks (QRNNs), we apply pruning techniques to provide a "knob" to select
different operating points. In addition, we propose a simple technique to
recover some perplexity using a negligible amount of memory. Our empirical
evaluations consider both perplexity as well as energy consumption on a
Raspberry Pi, where we demonstrate which methods provide the best
perplexity-power consumption operating point. At one operating point, one of
the techniques is able to provide energy savings of 40% over the state of the
art with only a 17% relative increase in perplexity.
| 2,018 | Computation and Language |
Iterative Document Representation Learning Towards Summarization with
Polishing | In this paper, we introduce Iterative Text Summarization (ITS), an
iteration-based model for supervised extractive text summarization, inspired by
the observation that it is often necessary for a human to read an article
multiple times in order to fully understand and summarize its contents. Current
summarization approaches read through a document only once to generate a
document representation, resulting in a sub-optimal representation. To address
this issue we introduce a model which iteratively polishes the document
representation on many passes through the document. As part of our model, we
also introduce a selective reading mechanism that decides more accurately the
extent to which each sentence in the model should be updated. Experimental
results on the CNN/DailyMail and DUC2002 datasets demonstrate that our model
significantly outperforms state-of-the-art extractive systems when evaluated by
machines and by humans.
| 2,019 | Computation and Language |
Enabling FAIR Research in Earth Science through Research Objects | Data-intensive science communities are progressively adopting FAIR practices
that enhance the visibility of scientific breakthroughs and enable reuse. At
the core of this movement, research objects contain and describe scientific
information and resources in a way compliant with the FAIR principles and
sustain the development of key infrastructure and tools. This paper provides an
account of the challenges, experiences and solutions involved in the adoption
of FAIR around research objects over several Earth Science disciplines. During
this journey, our work has been comprehensive, with outcomes including: an
extended research object model adapted to the needs of earth scientists; the
provisioning of digital object identifiers (DOI) to enable persistent
identification and to give due credit to authors; the generation of
content-based, semantically rich, research object metadata through natural
language processing, enhancing visibility and reuse through recommendation
systems and third-party search engines; and various types of checklists that
provide a compact representation of research object quality as a key enabler of
scientific reuse. All these results have been integrated in ROHub, a platform
that provides research object management functionality to a wealth of
applications and interfaces across different scientific communities. To monitor
and quantify the community uptake of research objects, we have defined
indicators and obtained measures via ROHub that are also discussed herein.
| 2,018 | Computation and Language |
Predictive Embeddings for Hate Speech Detection on Twitter | We present a neural-network based approach to classifying online hate speech
in general, as well as racist and sexist speech in particular. Using
pre-trained word embeddings and max/mean pooling from simple, fully-connected
transformations of these embeddings, we are able to predict the occurrence of
hate speech on three commonly used publicly available datasets. Our models
match or outperform state of the art F1 performance on all three datasets using
significantly fewer parameters and minimal feature preprocessing compared to
previous methods.
| 2,018 | Computation and Language |
A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC | We compare three new datasets for question answering: SQuAD 2.0, QuAC, and
CoQA, along several of their new features: (1) unanswerable questions, (2)
multi-turn interactions, and (3) abstractive answers. We show that the datasets
provide complementary coverage of the first two aspects, but weak coverage of
the third. Because of the datasets' structural similarity, a single extractive
model can be easily adapted to any of the datasets and we show improved
baseline results on both SQuAD 2.0 and CoQA. Despite the similarity, models
trained on one dataset are ineffective on another dataset, but we find moderate
performance improvement through pretraining. To encourage cross-evaluation, we
release code for conversion between datasets at
https://github.com/my89/co-squac .
| 2,019 | Computation and Language |
Controllable Neural Story Plot Generation via Reward Shaping | Language-modeling-based approaches to story plot generation attempt to
construct a plot by sampling from a language model (LM) to predict the next
character, word, or sentence to add to the story. LM techniques lack the
ability to receive guidance from the user to achieve a specific goal, resulting
in stories that don't have a clear sense of progression and lack coherence. We
present a reward-shaping technique that analyzes a story corpus and produces
intermediate rewards that are backpropagated into a pre-trained LM in order to
guide the model towards a given goal. Automated evaluations show our technique
can create a model that generates story plots which consistently achieve a
specified goal. Human-subject studies show that the generated stories have more
plausible event ordering than baseline plot generation techniques.
| 2,019 | Computation and Language |