Titles | Abstracts | Years | Categories
---|---|---|---|
Towards a Robust Deep Neural Network in Texts: A Survey | Deep neural networks (DNNs) have achieved remarkable success in various tasks
(e.g., image classification, speech recognition, and natural language
processing (NLP)). However, researchers have demonstrated that DNN-based models
are vulnerable to adversarial examples, which cause erroneous predictions by
adding imperceptible perturbations into legitimate inputs. Recently, studies
have revealed adversarial examples in the text domain, which could effectively
evade various DNN-based text analyzers and further pose the threat of
proliferating disinformation. In this paper, we give a comprehensive survey
of the existing studies of adversarial techniques for generating adversarial
texts in both English and Chinese, together with the corresponding
defense methods. More importantly, we hope that our work could inspire future
studies to develop more robust DNN-based text analyzers against known and
unknown adversarial techniques.
We classify the existing adversarial techniques for crafting adversarial
texts based on the perturbation units, helping to better understand the
generation of adversarial texts and build robust models for defense. In
presenting the taxonomy of adversarial attacks and defenses in the text domain,
we introduce the adversarial techniques from the perspective of different NLP
tasks. Finally, we discuss the existing challenges of adversarial attacks and
defenses in texts and present the future research directions in this emerging
and challenging field.
| 2021 | Computation and Language |
Phoneme Level Language Models for Sequence Based Low Resource ASR | Building multilingual and crosslingual models helps bring different languages
together in a language-universal space, allowing models to share parameters
and transfer knowledge across languages, enabling faster and better adaptation
to a new language. These approaches are particularly useful for low resource
languages. In this paper, we propose a phoneme-level language model that can be
used multilingually and for crosslingual adaptation to a target language. We
show that our model performs almost as well as the monolingual models while
using six times fewer parameters, and is capable of better adaptation to languages
not seen during training in a low resource scenario. We show that these
phoneme-level language models can be used to decode sequence based
Connectionist Temporal Classification (CTC) acoustic model outputs to obtain
comparable word error rates with Weighted Finite State Transducer (WFST) based
decoding in Babel languages. We also show that these phoneme-level language
models outperform WFST decoding in various low-resource conditions like
adapting to a new language and domain mismatch between training and testing
data.
| 2019 | Computation and Language |
ScispaCy: Fast and Robust Models for Biomedical Natural Language
Processing | Despite recent advances in natural language processing, many statistical
models for processing text perform extremely poorly under domain shift.
Processing biomedical and clinical text is a critically important application
area of natural language processing, for which there are few robust, practical,
publicly available models. This paper describes scispaCy, a new tool for
practical biomedical/scientific text processing, which heavily leverages the
spaCy library. We detail the performance of two packages of models released in
scispaCy and demonstrate their robustness on several tasks and datasets. Models
and code are available at https://allenai.github.io/scispacy/
| 2019 | Computation and Language |
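The scispaCy entry above describes a drop-in spaCy pipeline for biomedical text. A minimal usage sketch in Python, assuming `scispacy` and the `en_core_sci_sm` model package from the project page have been installed:

```python
import spacy

# assumes: pip install scispacy, plus the en_core_sci_sm model released at
# https://allenai.github.io/scispacy/
nlp = spacy.load("en_core_sci_sm")

doc = nlp("Myeloid derived suppressor cells (MDSC) are immature myeloid cells "
          "with immunosuppressive activity.")

print([(ent.text, ent.label_) for ent in doc.ents])   # biomedical entity mentions
print([(tok.text, tok.tag_) for tok in doc[:6]])      # tokenization and POS tags
```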
Learning Dual Retrieval Module for Semi-supervised Relation Extraction | Relation extraction is an important task in structuring content of text data,
and becomes especially challenging when learning with weak supervision---where
only a limited number of labeled sentences are given and a large number of
unlabeled sentences are available. Most existing work exploits unlabeled data
based on the ideas of self-training (i.e., bootstrapping a model) and
multi-view learning (e.g., ensembling multiple model variants). However, these
methods either suffer from the issue of semantic drift, or do not fully capture
the problem characteristics of relation extraction. In this paper, we leverage
a key insight that retrieving sentences expressing a relation is a dual task of
predicting the relation label of a given sentence---the two tasks are complementary to
each other and can be optimized jointly for mutual enhancement. To model this
intuition, we propose DualRE, a principled framework that introduces a
retrieval module which is jointly trained with the original relation prediction
module. In this way, high-quality samples selected by the retrieval module from
unlabeled data can be used to improve the prediction module, and vice versa.
Experimental results\footnote{\small Code and data can be found at
\url{https://github.com/INK-USC/DualRE}.} on two public datasets as well as
case studies demonstrate the effectiveness of the DualRE approach.
| 2019 | Computation and Language |
Mixture Models for Diverse Machine Translation: Tricks of the Trade | Mixture models trained via EM are among the simplest, most widely used and
well understood latent variable models in the machine learning literature.
Surprisingly, these models have hardly been explored in text generation
applications such as machine translation. In principle, they provide a latent
variable to control generation and produce a diverse set of hypotheses. In
practice, however, mixture models are prone to degeneracies---often only one
component gets trained or the latent variable is simply ignored. We find that
disabling dropout noise in responsibility computation is critical to successful
training. In addition, the design choices of parameterization, prior
distribution, hard versus soft EM and online versus offline assignment can
dramatically affect model performance. We develop an evaluation protocol to
assess both quality and diversity of generations against multiple references,
and provide an extensive empirical study of several mixture model variants. Our
analysis shows that certain types of mixture models are more robust and offer
the best trade-off between translation quality and diversity compared to
variational models and diverse decoding approaches.\footnote{Code to reproduce
the results in this paper is available at
\url{https://github.com/pytorch/fairseq}}
| 2019 | Computation and Language |
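The mixture-model entry above stresses that responsibilities must be computed without dropout noise. Below is a minimal hard-EM training step illustrating that trick; it assumes a hypothetical interface `model(src, tgt, z)` returning per-sentence negative log-likelihoods for mixture component `z` (this is not the fairseq API):

```python
import torch

def hard_em_step(model, src, tgt, num_components, optimizer):
    """One hard-EM update for a mixture-of-experts seq2seq model (sketch)."""
    # E-step: assign each sentence to its best component with dropout OFF,
    # since dropout noise in responsibility computation causes collapse.
    model.eval()
    with torch.no_grad():
        nll = torch.stack([model(src, tgt, z) for z in range(num_components)])  # [K, B]
        best_z = nll.argmin(dim=0)                                              # [B]

    # M-step: optimize only the selected component per sentence (dropout back on).
    model.train()
    optimizer.zero_grad()
    nll = torch.stack([model(src, tgt, z) for z in range(num_components)])      # [K, B]
    loss = nll.gather(0, best_z.unsqueeze(0)).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```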
Deep Speaker Embedding Learning with Multi-Level Pooling for
Text-Independent Speaker Verification | This paper aims to improve the widely used deep speaker embedding x-vector
model. We propose the following improvements: (1) a hybrid neural network
structure using both time delay neural network (TDNN) and long short-term
memory neural networks (LSTM) to generate complementary speaker information at
different levels; (2) a multi-level pooling strategy to collect speaker
information from both TDNN and LSTM layers; (3) a regularization scheme on the
speaker embedding extraction layer to make the extracted embeddings suitable
for the following fusion step. The synergy of these improvements is shown on
the NIST SRE 2016 eval test (with a 19% EER reduction) and the SRE 2018 dev test
(with a 9% EER reduction), as well as a reduction of more than 10% in DCF scores on
these two test sets, relative to the x-vector baseline.
| 2019 | Computation and Language |
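A minimal sketch of the multi-level pooling idea from the x-vector entry above: mean and standard-deviation statistics are collected from both a TDNN-style branch and an LSTM branch and concatenated before the embedding layer. The layer sizes below are illustrative placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

class MultiLevelStatsPooling(nn.Module):
    """Illustrative multi-level pooling over a TDNN branch and an LSTM branch."""

    def __init__(self, feat_dim=40, tdnn_dim=512, lstm_dim=256, emb_dim=512):
        super().__init__()
        self.tdnn = nn.Sequential(
            nn.Conv1d(feat_dim, tdnn_dim, kernel_size=5, dilation=1, padding=2),
            nn.ReLU(),
            nn.Conv1d(tdnn_dim, tdnn_dim, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, lstm_dim, batch_first=True, bidirectional=True)
        pooled = 2 * tdnn_dim + 2 * 2 * lstm_dim  # mean+std for each branch
        self.embedding = nn.Linear(pooled, emb_dim)

    @staticmethod
    def stats(x):  # x: [batch, time, dim] -> [batch, 2*dim]
        return torch.cat([x.mean(dim=1), x.std(dim=1)], dim=-1)

    def forward(self, feats):  # feats: [batch, time, feat_dim]
        t = self.tdnn(feats.transpose(1, 2)).transpose(1, 2)   # [B, T, tdnn_dim]
        l, _ = self.lstm(feats)                                 # [B, T, 2*lstm_dim]
        pooled = torch.cat([self.stats(t), self.stats(l)], dim=-1)
        return self.embedding(pooled)
```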
Predicting ConceptNet Path Quality Using Crowdsourced Assessments of
Naturalness | In many applications, it is important to characterize the way in which two
concepts are semantically related. Knowledge graphs such as ConceptNet provide
a rich source of information for such characterizations by encoding relations
between concepts as edges in a graph. When two concepts are not directly
connected by an edge, their relationship can still be described in terms of the
paths that connect them. Unfortunately, many of these paths are uninformative
and noisy, which means that the success of applications that use such path
features crucially relies on their ability to select high-quality paths. In
existing applications, this path selection process is based on relatively
simple heuristics. In this paper we instead propose to learn to predict path
quality from crowdsourced human assessments. Since we are interested in a
generic task-independent notion of quality, we simply ask human participants to
rank paths according to their subjective assessment of the paths' naturalness,
without attempting to define naturalness or steering the participants towards
particular indicators of quality. We show that a neural network model trained
on these assessments is able to predict human judgments on unseen paths with
near optimal performance. Most notably, we find that the resulting path
selection method is substantially better than the current heuristic approaches
at identifying meaningful paths.
| 2019 | Computation and Language |
ntuer at SemEval-2019 Task 3: Emotion Classification with Word and
Sentence Representations in RCNN | In this paper we present our model for the task of emotion detection in
textual conversations at SemEval-2019. Our model extends the Recurrent
Convolutional Neural Network (RCNN) by using external fine-tuned word
representations and DeepMoji sentence representations. We also explored several
other competitive pre-trained word and sentence representations including ELMo,
BERT and InferSent but found inferior performance. In addition, we conducted
extensive sensitivity analysis, which empirically shows that our model is
relatively robust to hyper-parameters. Our model requires no handcrafted
features or emotion lexicons, yet achieves good performance with a micro-F1
score of 0.7463.
| 2019 | Computation and Language |
Pretrained language model transfer on neural named entity recognition in
Indonesian conversational texts | Named entity recognition (NER) is an important task in NLP, and it is all the
more challenging in the conversational domain with its noisy facets. Moreover,
conversational texts are often available in limited amount, making supervised
tasks infeasible. To learn from small data, strong inductive biases are
required. Previous work relied on hand-crafted features to encode these biases
until transfer learning emerged. Here, we explore a transfer learning method,
namely language model pretraining, on the NER task in Indonesian conversational
texts. We utilize large unlabeled data (generic domain) to be transferred to
conversational texts, enabling supervised training on limited in-domain data.
We report two transfer learning variants, namely supervised model fine-tuning
and unsupervised pretrained LM fine-tuning. Our experiments show that both
variants outperform baseline neural models when trained on small data (100
sentences), yielding an absolute improvement of 32 points of test F1 score.
Furthermore, we find that the pretrained LM encodes part-of-speech information
which is a strong predictor for NER.
| 2019 | Computation and Language |
Deep Short Text Classification with Knowledge Powered Attention | Short text classification is one of the important tasks in Natural Language
Processing (NLP). Unlike paragraphs or documents, short texts are more
ambiguous since they lack sufficient contextual information, which poses a
great challenge for classification. In this paper, we retrieve knowledge from
external knowledge source to enhance the semantic representation of short
texts. We take conceptual information as a kind of knowledge and incorporate it
into deep neural networks. For the purpose of measuring the importance of
knowledge, we introduce attention mechanisms and propose deep Short Text
Classification with Knowledge powered Attention (STCKA). We utilize Concept
towards Short Text (C-ST) attention and Concept towards Concept Set (C-CS)
attention to acquire the weight of concepts from two aspects, and we classify a
short text with the help of conceptual information. Unlike traditional
approaches, our model acts like a human being who has the intrinsic ability to make
decisions based on observation (i.e., training data for machines) and pays more
attention to important knowledge. We also conduct extensive experiments on four
public datasets for different tasks. The experimental results and case studies
show that our model outperforms the state-of-the-art methods, justifying the
effectiveness of knowledge powered attention.
| 2019 | Computation and Language |
Development of a classifiers/quantifiers dictionary towards
French-Japanese MT | Although classifier/quantifier (CQ) expressions appear frequently in
everyday communication and written documents, they are described neither in
classical bilingual paper dictionaries nor in machine-readable dictionaries.
This paper describes a CQ dictionary, edited from the corpus we have annotated,
and its usage in the framework of French-Japanese machine translation (MT). CQ
treatment in MT often causes problems of lexical ambiguity, difficulties in
recognizing polylexical phrases during analysis, and doubtful output in
transfer-generation, in particular for distant language pairs like French and
Japanese. Our basic treatment of CQs is to annotate the corpus with UNL-UWs
(Universal Networking Language - Universal words), and then to produce a
bilingual or multilingual dictionary of CQs, based on synonymy through identity
of UWs.
| 2017 | Computation and Language |
Towards Visually Grounded Sub-Word Speech Unit Discovery | In this paper, we investigate the manner in which interpretable sub-word
speech units emerge within a convolutional neural network model trained to
associate raw speech waveforms with semantically related natural image scenes.
We show how diphone boundaries can be superficially extracted from the
activation patterns of intermediate layers of the model, suggesting that the
model may be leveraging these events for the purpose of word recognition. We
present a series of experiments investigating the information encoded by these
events.
| 2019 | Computation and Language |
Aspect-Sentiment Embeddings for Company Profiling and Employee Opinion
Mining | With the multitude of companies and organizations that abound today, ranking them
and choosing one out of the many is a difficult and cumbersome task. Although
there are many available metrics that rank companies, there is an inherent need
for a generalized metric that takes into account the different aspects that
constitute employee opinions of the companies. In this work, we aim to overcome
the aforementioned problem by generating aspect-sentiment based embedding for
the companies by looking into reliable employee reviews of them. We created a
comprehensive dataset of company reviews from the famous website Glassdoor.com
and employed a novel ensemble approach to perform aspect-level sentiment
analysis. Although a considerable amount of work has been done on reviews centered
on subjects like movies, music, etc., this work is the first of its kind. We
also provide several insights from the collated embeddings, thus helping users
gain a better understanding of their options as well as select companies using
customized preferences.
| 2019 | Computation and Language |
Large-Scale Answerer in Questioner's Mind for Visual Dialog Question
Generation | Answerer in Questioner's Mind (AQM) is an information-theoretic framework
that has been recently proposed for task-oriented dialog systems. AQM benefits
from asking a question that would maximize the information gain when it is
asked. However, due to its intrinsic nature of explicitly calculating the
information gain, AQM has a limitation when the solution space is very large.
To address this, we propose AQM+ that can deal with a large-scale problem and
ask a question that is more coherent with the current context of the dialog. We
evaluate our method on GuessWhich, a challenging task-oriented visual dialog
problem, where the number of candidate classes is near 10K. Our experimental
results and ablation studies show that AQM+ outperforms the state-of-the-art
models by a remarkable margin with a reasonable approximation. In particular,
the proposed AQM+ reduces the error by more than 60% as the dialog proceeds, while
the compared algorithms diminish the error by less than 6%. Based on our
results, we argue that AQM+ is a general task-oriented dialog algorithm that
can be applied for non-yes-or-no responses.
| 2019 | Computation and Language |
Learning to Learn Semantic Parsers from Natural Language Supervision | As humans, we often rely on language to learn language. For example, when
corrected in a conversation, we may learn from that correction, over time
improving our language fluency. Inspired by this observation, we propose a
learning algorithm for training semantic parsers from supervision (feedback)
expressed in natural language. Our algorithm learns a semantic parser from
users' corrections such as "no, what I really meant was before his job, not
after", by also simultaneously learning to parse this natural language feedback
in order to leverage it as a form of supervision. Unlike supervision with
gold-standard logical forms, our method does not require the user to be
familiar with the underlying logical formalism, and unlike supervision from
denotation, it does not require the user to know the correct answer to their
query. This makes our learning algorithm naturally scalable in settings where
existing conversational logs are available and can be leveraged as training
data. We construct a novel dataset of natural language feedback in a
conversational setting, and show that our method is effective at learning a
semantic parser from such natural language supervision.
| 2019 | Computation and Language |
Improving Multilingual Sentence Embedding using Bi-directional Dual
Encoder with Additive Margin Softmax | In this paper, we present an approach to learn multilingual sentence
embeddings using a bi-directional dual-encoder with additive margin softmax.
The embeddings are able to achieve state-of-the-art results on the United
Nations (UN) parallel corpus retrieval task. In all the languages tested, the
system achieves P@1 of 86% or higher. We use pairs retrieved by our approach to
train NMT models that achieve similar performance to models trained on gold
pairs. We explore simple document-level embeddings constructed by averaging our
sentence embeddings. On the UN document-level retrieval task, document
embeddings achieve around 97% on P@1 for all experimented language pairs.
Lastly, we evaluate the proposed model on the BUCC mining task. The learned
embeddings with raw cosine similarity scores achieve competitive results
compared to current state-of-the-art models, and with a second-stage scorer we
achieve a new state-of-the-art level on this task.
| 2019 | Computation and Language |
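The bidirectional dual-encoder objective above is easy to sketch: cosine similarities over in-batch negatives with an additive margin subtracted from the true-pair scores, applied in both translation directions. The margin and scale values below are illustrative, not the paper's tuned settings:

```python
import torch
import torch.nn.functional as F

def bidirectional_ams_loss(src_emb, tgt_emb, margin=0.3, scale=10.0):
    """Bidirectional additive-margin softmax over in-batch negatives.

    src_emb, tgt_emb: [batch, dim] embeddings of aligned sentence pairs.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = src @ tgt.t()                                   # cosine similarities [B, B]
    # subtract the margin from the true-pair (diagonal) scores only
    sim = sim - margin * torch.eye(sim.size(0), device=sim.device)
    labels = torch.arange(sim.size(0), device=sim.device)
    # source->target and target->source softmax losses, averaged
    return 0.5 * (F.cross_entropy(scale * sim, labels) +
                  F.cross_entropy(scale * sim.t(), labels))
```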
OpenKiwi: An Open Source Framework for Quality Estimation | We introduce OpenKiwi, a PyTorch-based open source framework for translation
quality estimation. OpenKiwi supports training and testing of word-level and
sentence-level quality estimation systems, implementing the winning systems of
the WMT 2015-18 quality estimation campaigns. We benchmark OpenKiwi on two
datasets from WMT 2018 (English-German SMT and NMT), yielding state-of-the-art
performance on the word-level tasks and near state-of-the-art in the
sentence-level tasks.
| 2019 | Computation and Language |
Saliency Learning: Teaching the Model Where to Pay Attention | Deep learning has emerged as a compelling solution to many NLP tasks with
remarkable performance. However, due to their opacity, such models are hard to
interpret and trust. Recent work on explaining deep models has introduced
approaches to provide insights toward the model's behaviour and predictions,
which are helpful for assessing the reliability of the model's predictions.
However, such methods do not improve the model's reliability. In this paper, we
aim to teach the model to make the right prediction for the right reason by
providing explanation training and ensuring the alignment of the model's
explanation with the ground truth explanation. Our experimental results on
multiple tasks and datasets demonstrate the effectiveness of the proposed
method, which produces more reliable predictions while delivering better
results compared to traditionally trained models.
| 2019 | Computation and Language |
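One way to make the "right prediction for the right reason" training above concrete is to penalize gradient-based saliency that argues against annotated evidence tokens. The sketch below is an illustrative formulation, not the authors' exact objective; `model(embeddings)` is a hypothetical classifier over pre-computed input embeddings with `requires_grad=True`:

```python
import torch
import torch.nn.functional as F

def saliency_guided_loss(model, embeddings, labels, rationale_mask, lam=0.1):
    """Task loss plus a hinge penalty pushing saliency on annotated (rationale)
    tokens to be non-negative, i.e. those tokens should support the prediction.

    embeddings:     [batch, seq, dim] input embeddings with requires_grad=True
    rationale_mask: [batch, seq] 1.0 where a token is marked as evidence
    """
    logits = model(embeddings)                              # hypothetical interface
    task_loss = F.cross_entropy(logits, labels)
    gold_score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(gold_score, embeddings, create_graph=True)[0]
    saliency = grads.sum(dim=-1)                            # [batch, seq]
    penalty = F.relu(-saliency * rationale_mask).mean()
    return task_loss + lam * penalty
```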
What makes a good conversation? How controllable attributes affect human
judgments | A good conversation requires balance -- between simplicity and detail;
staying on topic and changing it; asking questions and answering them. Although
dialogue agents are commonly evaluated via human judgments of overall quality,
the relationship between quality and these individual factors is less
well-studied. In this work, we examine two controllable neural text generation
methods, conditional training and weighted decoding, in order to control four
important attributes for chitchat dialogue: repetition, specificity,
response-relatedness and question-asking. We conduct a large-scale human
evaluation to measure the effect of these control parameters on multi-turn
interactive conversations on the PersonaChat task. We provide a detailed
analysis of their relationship to high-level aspects of conversation, and show
that by controlling combinations of these variables our models obtain clear
improvements in human quality judgments.
| 2019 | Computation and Language |
Enhancing Clinical Concept Extraction with Contextual Embeddings | Neural network-based representations ("embeddings") have dramatically
advanced natural language processing (NLP) tasks, including clinical NLP tasks
such as concept extraction. Recently, however, more advanced embedding methods
and representations (e.g., ELMo, BERT) have further pushed the state-of-the-art
in NLP, yet there are no common best practices for how to integrate these
representations into clinical tasks. The purpose of this study, then, is to
explore the space of possible options in utilizing these new models for
clinical concept extraction, including comparing these to traditional word
embedding methods (word2vec, GloVe, fastText). Both off-the-shelf open-domain
embeddings and pre-trained clinical embeddings from MIMIC-III are evaluated. We
explore a battery of embedding methods consisting of traditional word
embeddings and contextual embeddings, and compare these on four concept
extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. We
also analyze the impact of the pre-training time of a large language model like
ELMo or BERT on the extraction performance. Last, we present an intuitive way
to understand the semantic information encoded by contextual embeddings.
Contextual embeddings pre-trained on a large clinical corpus achieve new
state-of-the-art performance across all concept extraction tasks. The
best-performing model outperforms all state-of-the-art methods with respective
F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65. We demonstrate the
potential of contextual embeddings through the state-of-the-art performance
these methods achieve on clinical concept extraction. Additionally, we
demonstrate that contextual embeddings encode valuable semantic information not
accounted for in traditional word representations.
| 2019 | Computation and Language |
VCWE: Visual Character-Enhanced Word Embeddings | Chinese is a logographic writing system, and the shapes of Chinese characters
contain rich syntactic and semantic information. In this paper, we propose a
model to learn Chinese word embeddings via three-level composition: (1) a
convolutional neural network to extract the intra-character compositionality
from the visual shape of a character; (2) a recurrent neural network with
self-attention to compose character representation into word embeddings; (3)
the Skip-Gram framework to capture non-compositionality directly from the
contextual information. Evaluations demonstrate the superior performance of our
model on four tasks: word similarity, sentiment analysis, named entity
recognition and part-of-speech tagging.
| 2019 | Computation and Language |
Augmenting Neural Machine Translation with Knowledge Graphs | While neural networks have been used extensively to make substantial progress
in the machine translation task, they are known for being heavily dependent on
the availability of large amounts of training data. Recent efforts have tried
to alleviate the data sparsity problem by augmenting the training data using
different strategies, such as back-translation. Along with the data scarcity,
the out-of-vocabulary words, mostly entities and terminological expressions,
pose a difficult challenge to Neural Machine Translation systems. In this
paper, we hypothesize that knowledge graphs enhance the semantic feature
extraction of neural models, thus optimizing the translation of entities and
terminological expressions in texts and consequently leading to a better
translation quality. We hence investigate two different strategies for
incorporating knowledge graphs into neural models without modifying the neural
network architectures. We also examine the effectiveness of our augmentation
method on recurrent and non-recurrent (self-attentional) neural architectures.
Our knowledge graph augmented neural translation model, dubbed KG-NMT, achieves
significant and consistent improvements of +3 BLEU, METEOR and chrF3 on average
on the newstest datasets between 2014 and 2018 for the WMT English-German
translation task.
| 2019 | Computation and Language |
Categorization in the Wild: Generalizing Cognitive Models to
Naturalistic Data across Languages | Categories such as animal or furniture are acquired at an early age and play
an important role in processing, organizing, and communicating world knowledge.
Categories exist across cultures: they make it possible to efficiently represent the
complexity of the world, and members of a community strongly agree on their
nature, revealing a shared mental representation. Models of category learning
and representation, however, are typically tested on data from small-scale
experiments involving small sets of concepts with artificially restricted
features; and experiments predominantly involve participants of selected
cultural and socio-economic groups (very often involving western native
speakers of English such as U.S. college students). This work investigates
whether models of categorization generalize (a) to rich and noisy data
approximating the environment humans live in; and (b) across languages and
cultures. We present a Bayesian cognitive model designed to jointly learn
categories and their structured representation from natural language text which
allows us to (a) evaluate performance on a large scale, and (b) apply our model
to a diverse set of languages. We show that meaningful categories comprising
hundreds of concepts and richly structured featural representations emerge
across languages. Our work illustrates the potential of recent advances in
computational modeling and large scale naturalistic datasets for cognitive
science research.
| 2019 | Computation and Language |
Re-evaluating ADEM: A Deeper Look at Scoring Dialogue Responses | Automatically evaluating the quality of dialogue responses for unstructured
domains is a challenging problem. ADEM (Lowe et al. 2017) formulated the
automatic evaluation of dialogue systems as a learning problem and showed that
such a model was able to predict responses which correlate significantly with
human judgements, both at utterance and system level. Their system was shown to
have beaten word-overlap metrics such as BLEU by large margins. We start with
the question of whether an adversary can game the ADEM model. We design a
battery of targeted attacks at the neural network based ADEM evaluation system
and show that automatic evaluation of dialogue systems still has a long way to
go. ADEM can get confused with a variation as simple as reversing the word
order in the text! We report experiments on several such adversarial scenarios
that draw out counterintuitive scores on the dialogue responses. We take a
systematic look at the scoring function proposed by ADEM and connect it to
linear system theory to predict the shortcomings evident in the system. We also
devise an attack that can fool such a system to rate a response generation
system as favorable. Finally, we allude to future research directions of using
the adversarial attacks to design a truly automated dialogue evaluation system.
| 2019 | Computation and Language |
Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel
Document-level Representation | In this paper, we propose a novel representation for text documents based on
aggregating word embedding vectors into document embeddings. Our approach is
inspired by the Vector of Locally-Aggregated Descriptors used for image
representation, and it works as follows. First, the word embeddings gathered
from a collection of documents are clustered by k-means in order to learn a
codebook of semantically related word embeddings. Each word embedding is then
associated with its nearest cluster centroid (codeword). The Vector of
Locally-Aggregated Word Embeddings (VLAWE) representation of a document is then
computed by accumulating the differences between each codeword vector and each
word vector (from the document) associated with the respective codeword. We plug
the VLAWE representation, which is learned in an unsupervised manner, into a
classifier and show that it is useful for a diverse set of text classification
tasks. We compare our approach with a broad range of recent state-of-the-art
methods, demonstrating the effectiveness of our approach. Furthermore, we
obtain a considerable improvement on the Movie Review data set, reporting an
accuracy of 93.3%, which represents an absolute gain of 10% over the
state-of-the-art approach. Our code is available at
https://github.com/raduionescu/vlawe-boswe/.
| 2019 | Computation and Language |
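The VLAWE construction above (k-means codebook plus per-cluster residual accumulation) is simple to reproduce. A minimal sketch with scikit-learn and numpy, where `embed` is assumed to be any pre-trained word-embedding lookup:

```python
import numpy as np
from sklearn.cluster import KMeans

def vlawe_representations(docs, embed, k=10, dim=300, seed=0):
    """Minimal VLAWE sketch following the description above.

    docs:  list of documents, each a list of tokens
    embed: dict mapping a token to a `dim`-dimensional numpy vector
    Returns a [num_docs, k * dim] matrix of document representations.
    """
    # 1) learn a codebook from all word embeddings in the collection
    all_vecs = np.array([embed[w] for doc in docs for w in doc if w in embed])
    kmeans = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_vecs)
    codebook = kmeans.cluster_centers_                       # [k, dim]

    # 2) per document, accumulate residuals (word - codeword) for each cluster
    reps = np.zeros((len(docs), k, dim))
    for i, doc in enumerate(docs):
        vecs = np.array([embed[w] for w in doc if w in embed])
        if len(vecs) == 0:
            continue
        assign = kmeans.predict(vecs)                        # nearest codeword per word
        for c in range(k):
            reps[i, c] = (vecs[assign == c] - codebook[c]).sum(axis=0)
    return reps.reshape(len(docs), k * dim)
```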
Evidence Sentence Extraction for Machine Reading Comprehension | Remarkable success has been achieved in the last few years on some limited
machine reading comprehension (MRC) tasks. However, it is still difficult to
interpret the predictions of existing MRC models. In this paper, we focus on
extracting evidence sentences that can explain or support the answers of
multiple-choice MRC tasks, where the majority of answer options cannot be
directly extracted from reference documents.
Due to the lack of ground truth evidence sentence labels in most cases, we
apply distant supervision to generate imperfect labels and then use them to
train an evidence sentence extractor. To denoise the noisy labels, we apply a
recently proposed deep probabilistic logic learning framework to incorporate
both sentence-level and cross-sentence linguistic indicators for indirect
supervision. We feed the extracted evidence sentences into existing MRC models
and evaluate the end-to-end performance on three challenging multiple-choice
MRC datasets: MultiRC, RACE, and DREAM, achieving performance comparable to or
better than that of the same models taking the full reference document as
input. To the best of our knowledge, this is the first work extracting
evidence sentences for multiple-choice MRC.
| 2019 | Computation and Language |
ABI Neural Ensemble Model for Gender Prediction Adapt Bar-Ilan
Submission for the CLIN29 Shared Task on Gender Prediction | We present our system for the CLIN29 shared task on cross-genre gender
detection for Dutch. We experimented with a multitude of neural models (CNN,
RNN, LSTM, etc.), more "traditional" models (SVM, RF, LogReg, etc.), different
feature sets as well as data pre-processing. The final results suggested that
using tokenized, non-lowercased data works best for most of the neural models,
while a combination of word clusters, character trigrams and word lists proved
to be most beneficial for the majority of the more "traditional" (that is,
non-neural) models, beating features used in previous tasks such as n-grams,
character n-grams, part-of-speech tags and combinations thereof. In
contrast to the results described in previous comparable shared tasks,
our neural models performed better than our best traditional approaches with
our best feature set-up. Our final model consisted of a weighted ensemble model
combining the top 25 models. Our final model won both the in-domain gender
prediction task and the cross-genre challenge, achieving an average accuracy of
64.93% on the in-domain gender prediction task, and 56.26% on cross-genre
gender prediction.
| 2019 | Computation and Language |
Rethinking Action Spaces for Reinforcement Learning in End-to-end Dialog
Agents with Latent Variable Models | Defining action spaces for conversational agents and optimizing their
decision-making process with reinforcement learning is an enduring challenge.
Common practice has been to use handcrafted dialog acts, or the output
vocabulary, e.g. in neural encoder-decoders, as the action spaces. Both have
their own limitations. This paper proposes a novel latent action framework that
treats the action spaces of an end-to-end dialog agent as latent variables and
develops unsupervised methods in order to induce its own action space from the
data. Comprehensive experiments are conducted examining both continuous and
discrete action types and two different optimization methods based on
stochastic variational inference. Results show that the proposed latent actions
achieve superior empirical performance over previous word-level
policy gradient methods on both DealOrNoDeal and MultiWoz dialogs. Our detailed
analysis also provides insights about various latent variable approaches for
policy learning and can serve as a foundation for developing better latent
actions in future research.
| 2019 | Computation and Language |
The ARIEL-CMU Systems for LoReHLT18 | This paper describes the ARIEL-CMU submissions to the Low Resource Human
Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine
Translation (MT), Entity Discovery and Linking (EDL), and detection of
Situation Frames in Text and Speech (SF Text and Speech).
| 2019 | Computation and Language |
On the Use of Emojis to Train Emotion Classifiers | Nowadays, the automatic detection of emotions is employed by many
applications in different fields like security informatics, e-learning, humor
detection, targeted advertising, etc. Many of these applications focus on
social media and treat this problem as a classification problem, which requires
preparing training data. The typical method of annotating the training data by
human experts is time-consuming, labor-intensive and sometimes prone
to error. Moreover, such an approach is not easily extensible to new
domains/languages since such extensions require annotating new training data.
In this study, we propose a distant supervised learning approach where the
training sentences are automatically annotated based on the emojis they contain.
Such training data are very cheap to produce compared with manually
created training data; thus, much larger training sets can easily be obtained.
On the other hand, this training data would naturally have lower quality as it
may contain some errors in the annotation. Nonetheless, we experimentally show
that training classifiers on cheap, large and possibly erroneous data annotated
using this approach leads to more accurate results compared with training the
same classifiers on the more expensive, much smaller and error-free manually
annotated training data. Our experiments are conducted on an in-house dataset
of emotional Arabic tweets and the classifiers we consider are: Support Vector
Machine (SVM), Multinomial Naive Bayes (MNB) and Random Forest (RF). In
addition to experimenting with single classifiers, we also consider using an
ensemble of classifiers. The results show that using an automatically annotated
training set (only one order of magnitude larger than the manually
annotated one) gives better results in almost all settings considered.
| 2019 | Computation and Language |
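A minimal sketch of the distant-supervision recipe in the entry above: tweets are labeled from the emojis they contain and a standard classifier is trained on the result. The emoji-to-emotion mapping and example texts below are hypothetical placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical emoji-to-emotion mapping; the real mapping and emotion
# inventory would be chosen for the target dataset.
EMOJI_TO_EMOTION = {"😊": "joy", "😢": "sadness", "😡": "anger", "😱": "fear"}

def distant_label(tweet):
    """Return an emotion label if the tweet contains a known emoji, else None."""
    for emoji, emotion in EMOJI_TO_EMOTION.items():
        if emoji in tweet:
            return emotion
    return None

def build_training_data(tweets):
    texts, labels = [], []
    for t in tweets:
        label = distant_label(t)
        if label is not None:
            # strip the emoji so the classifier cannot trivially memorize it
            clean = "".join(ch for ch in t if ch not in EMOJI_TO_EMOTION)
            texts.append(clean)
            labels.append(label)
    return texts, labels

# Example usage with a linear SVM, one of the classifier families mentioned above.
tweets = ["I passed the exam 😊", "my cat is lost 😢", "stuck in traffic again 😡"]
X, y = build_training_data(tweets)
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(X, y)
print(clf.predict(["what a wonderful day"]))
```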
Unlexicalized Transition-based Discontinuous Constituency Parsing | Lexicalized parsing models are based on the assumptions that (i) constituents
are organized around a lexical head and (ii) bilexical statistics are crucial to
resolving ambiguities. In this paper, we introduce an unlexicalized
transition-based parser for discontinuous constituency structures, based on a
structure-label transition system and a bi-LSTM scoring system. We compare it
to lexicalized parsing models in order to address the question of
lexicalization in the context of discontinuous constituency parsing. Our
experiments show that unlexicalized models systematically achieve higher
results than lexicalized models, and provide additional empirical evidence that
lexicalization is not necessary to achieve strong parsing results. Our best
unlexicalized model sets a new state of the art on English and German
discontinuous constituency treebanks. We further provide a per-phenomenon
analysis of its errors on discontinuous constituents.
| 2019 | Computation and Language |
Text Analysis in Adversarial Settings: Does Deception Leave a Stylistic
Trace? | Textual deception constitutes a major problem for online security. Many
studies have argued that deceptiveness leaves traces in writing style, which
could be detected using text classification techniques. By conducting an
extensive literature review of existing empirical work, we demonstrate that
while certain linguistic features have been indicative of deception in certain
corpora, they fail to generalize across divergent semantic domains. We suggest
that deceptiveness as such leaves no content-invariant stylistic trace, and
textual similarity measures provide superior means of classifying texts as
potentially deceptive. Additionally, we discuss forms of deception beyond
semantic content, focusing on hiding author identity by writing style
obfuscation. Surveying the literature on both author identification and
obfuscation techniques, we conclude that current style transformation methods
fail to achieve reliable obfuscation while simultaneously ensuring semantic
faithfulness to the original text. We propose that future work in style
transformation should pay particular attention to disallowing semantically
drastic changes.
| 2019 | Computation and Language |
Synchronous Bidirectional Inference for Neural Sequence Generation | In sequence to sequence generation tasks (e.g. machine translation and
abstractive summarization), inference is generally performed in a left-to-right
manner to produce the result token by token. The neural approaches, such as
LSTM and self-attention networks, are now able to make full use of all the
predicted history hypotheses from the left side during inference, but cannot
simultaneously access any future (right-side) information, and they usually generate
unbalanced outputs in which the left parts are much more accurate than the right ones.
In this work, we propose a synchronous bidirectional inference model to
generate outputs using both left-to-right and right-to-left decoding
simultaneously and interactively. First, we introduce a novel beam search
algorithm that facilitates synchronous bidirectional decoding. Then, we present
the core approach which enables left-to-right and right-to-left decoding to
interact with each other, so as to utilize both the history and future
predictions simultaneously during inference. We apply the proposed model to
both LSTM and self-attention networks. In addition, we propose two strategies
for parameter optimization. The extensive experiments on machine translation
and abstractive summarization demonstrate that our synchronous bidirectional
inference model can achieve remarkable improvements over the strong baselines.
| 2019 | Computation and Language |
Lattice CNNs for Matching Based Chinese Question Answering | Short text matching often faces the challenge of great word
mismatch and expression diversity between the two texts, which is further
aggravated in languages like Chinese where there is no natural space to segment
words explicitly. In this paper, we propose a novel lattice-based CNN model
(LCNs) to utilize multi-granularity information inherent in the word lattice
while maintaining strong ability to deal with the introduced noisy information
for matching based question answering in Chinese. We conduct extensive
experiments on both document based question answering and knowledge based
question answering tasks, and experimental results show that the LCNs models
can significantly outperform the state-of-the-art matching models and strong
baselines by taking advantage of their better ability to distill rich but
discriminative information from the word lattice input.
| 2019 | Computation and Language |
Leveraging Knowledge Bases in LSTMs for Improving Machine Reading | This paper focuses on how to take advantage of external knowledge bases (KBs)
to improve recurrent neural networks for machine reading. Traditional methods
that exploit knowledge from KBs encode knowledge as discrete indicator
features. Not only do these features generalize poorly, but they require
task-specific feature engineering to achieve good performance. We propose
KBLSTM, a novel neural model that leverages continuous representations of KBs
to enhance the learning of recurrent neural networks for machine reading. To
effectively integrate background knowledge with information from the currently
processed text, our model employs an attention mechanism with a sentinel to
adaptively decide whether to attend to background knowledge and which
information from KBs is useful. Experimental results show that our model
achieves accuracies that surpass the previous state-of-the-art results for both
entity extraction and event extraction on the widely used ACE2005 dataset.
| 2017 | Computation and Language |
Transfer Learning for Sequences via Learning to Collocate | Transfer learning aims to solve data sparsity in a target domain by
applying information from the source domain. For a sequence (e.g. a natural
language sentence), transfer learning is usually enabled by a recurrent neural
network (RNN), which carries the sequential information to be transferred. An RNN uses a chain
of repeating cells to model the sequence data. However, previous studies of
neural network based transfer learning simply represent the whole sentence by
a single vector, which is infeasible for seq2seq and sequence labeling.
Meanwhile, such layer-wise transfer learning mechanisms lose the fine-grained
cell-level information from the source domain.
In this paper, we propose the aligned recurrent transfer, ART, to achieve
cell-level information transfer. ART operates under the pre-training framework. Each
cell attentively accepts transferred information from a set of positions in the
source domain. Therefore, ART learns the cross-domain word collocations in a
more flexible way. We conducted extensive experiments on both sequence labeling
tasks (POS tagging, NER) and sentence classification (sentiment analysis). ART
outperforms the state of the art in all experiments.
| 2019 | Computation and Language |
Multi-Relational Question Answering from Narratives: Machine Reading and
Reasoning in Simulated Worlds | Question Answering (QA), as a research field, has primarily focused on either
knowledge bases (KBs) or free text as a source of knowledge. These two sources
have historically shaped the kinds of questions that are asked over these
sources, and the methods developed to answer them. In this work, we look
towards a practical use-case of QA over user-instructed knowledge that uniquely
combines elements of both structured QA over knowledge bases, and unstructured
QA over narrative, introducing the task of multi-relational QA over personal
narrative. As a first step towards this goal, we make three key contributions:
(i) we generate and release TextWorldsQA, a set of five diverse datasets, where
each dataset contains dynamic narrative that describes entities and relations
in a simulated world, paired with variably compositional questions over that
knowledge, (ii) we perform a thorough evaluation and analysis of several
state-of-the-art QA models and their variants at this task, and (iii) we
release a lightweight Python-based framework we call TextWorlds for easily
generating arbitrary additional worlds and narrative, with the goal of allowing
the community to create and share a growing collection of diverse worlds as a
test-bed for this task.
| 2018 | Computation and Language |
Star-Transformer | Although the Transformer has achieved great success on many NLP tasks, its
heavy structure with fully-connected attention connections leads to
a dependence on large training data. In this paper, we present
Star-Transformer, a lightweight alternative by careful sparsification. To
reduce model complexity, we replace the fully-connected structure with a
star-shaped topology, in which every two non-adjacent nodes are connected
through a shared relay node. Thus, complexity is reduced from quadratic to
linear, while preserving the capacity to capture both local composition and
long-range dependency. The experiments on four tasks (22 datasets) show that
Star-Transformer achieves significant improvements over the standard
Transformer on modestly sized datasets.
| 2022 | Computation and Language |
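A simplified sketch of one Star-Transformer update round as described above (an illustration of the topology, not the authors' implementation): each satellite token attends only over its ring neighbours, itself, and a shared relay node, and the relay node then attends over all satellites, which keeps the cost linear in sequence length:

```python
import torch
import torch.nn as nn

class StarTransformerLayer(nn.Module):
    """One simplified Star-Transformer update: ring + relay attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.sat_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.relay_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, h, relay):
        # h: [batch, seq, dim] satellite (token) states; relay: [batch, 1, dim]
        B, T, D = h.shape
        left = torch.roll(h, shifts=1, dims=1)
        right = torch.roll(h, shifts=-1, dims=1)
        # local context per satellite: [left neighbour, self, right neighbour, relay]
        ctx = torch.stack([left, h, right, relay.expand(B, T, D)], dim=2)  # [B, T, 4, D]
        q = h.reshape(B * T, 1, D)
        kv = ctx.reshape(B * T, 4, D)
        new_h, _ = self.sat_attn(q, kv, kv)
        new_h = new_h.reshape(B, T, D)
        # the relay node attends over all updated satellites
        new_relay, _ = self.relay_attn(relay, new_h, new_h)
        return new_h, new_relay

layer = StarTransformerLayer(dim=64, heads=4)
tokens = torch.randn(2, 10, 64)
relay = tokens.mean(dim=1, keepdim=True)   # relay initialized as the token mean
tokens, relay = layer(tokens, relay)
```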
Joint Multi-Domain Learning for Automatic Short Answer Grading | One of the fundamental challenges towards building any intelligent tutoring
system is its ability to automatically grade short student answers. A typical
automatic short answer grading system (ASAG) grades student answers across
multiple domains (or subjects). Grading student answers requires building a
supervised machine learning model that evaluates the similarity of the student
answer with the reference answer(s). We observe that unlike typical textual
similarity or entailment tasks, the notion of similarity is not universal here.
On the one hand, paraphrasal constructs of the language can indicate similarity
independent of the domain. On the other hand, two words, or phrases, that are
not strict synonyms of each other, might mean the same in certain domains.
Building on this observation, we propose JMD-ASAG, the first joint multidomain
deep learning architecture for automatic short answer grading that performs
domain adaptation by learning generic and domain-specific aspects from the
limited domain-wise training data. JMD-ASAG not only learns the domain-specific
characteristics but also overcomes the dependence on a large corpus by learning
the generic characteristics from the task-specific data itself. On a
large-scale industry dataset and a benchmarking dataset, we show that our model
performs significantly better than existing techniques which either learn
domain-specific models or adapt a generic similarity scoring model from a large
corpus. Further, on the benchmarking dataset, we report state-of-the-art
results against all existing non-neural and neural models.
| 2019 | Computation and Language |
Pretraining-Based Natural Language Generation for Text Summarization | In this paper, we propose a novel pretraining-based encoder-decoder
framework, which can generate the output sequence based on the input sequence
in a two-stage manner. For the encoder of our model, we encode the input
sequence into context representations using BERT. For the decoder, there are
two stages in our model. In the first stage, we use a Transformer-based decoder
to generate a draft output sequence. In the second stage, we mask each word of
the draft sequence and feed it to BERT, then by combining the input sequence
and the draft representation generated by BERT, we use a Transformer-based
decoder to predict the refined word for each masked position. To the best of
our knowledge, our approach is the first method that applies BERT to
text generation tasks. As a first step in this direction, we evaluate our
proposed method on the text summarization task. Experimental results show that
our model achieves new state-of-the-art on both CNN/Daily Mail and New York
Times datasets.
| 2019 | Computation and Language |
Relation Extraction using Explicit Context Conditioning | Relation Extraction (RE) aims to label relations between groups of marked
entities in raw text. Most current RE models learn context-aware
representations of the target entities that are then used to establish the relation
between them. This works well for intra-sentence RE, and we call these
first-order relations. However, this methodology can sometimes fail to capture
complex and long dependencies. To address this, we hypothesize that at times
two target entities can be explicitly connected via a context token. We refer
to such indirect relations as second-order relations and describe an efficient
implementation for computing them. These second-order relation scores are then
combined with first-order relation scores. Our empirical results show that the
proposed method leads to state-of-the-art performance over two biomedical
datasets.
| 2019 | Computation and Language |
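To make the second-order idea above concrete, the sketch below scores an entity pair through its best two-hop path via a context token and mixes that with the first-order score. The weighted-sum combination and the toy matrix are illustrative assumptions, not the paper's exact scoring function:

```python
import numpy as np

def second_order_scores(first_order, alpha=0.5):
    """Combine first- and second-order relation scores (illustrative sketch).

    first_order: [n_tokens, n_tokens] matrix, where first_order[i, j] is the
    score that tokens i and j stand in the target relation. The second-order
    score for (i, j) is the best score of a two-hop path i -> c -> j through
    any context token c.
    """
    n = first_order.shape[0]
    second_order = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            # best intermediate context token (excluding the endpoints)
            paths = [first_order[i, c] + first_order[c, j]
                     for c in range(n) if c not in (i, j)]
            if paths:
                second_order[i, j] = max(paths)
    return alpha * first_order + (1 - alpha) * second_order

scores = np.random.rand(5, 5)
combined = second_order_scores(scores)
print(combined.shape)  # (5, 5)
```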
Attentional Encoder Network for Targeted Sentiment Classification | Targeted sentiment classification aims at determining the sentimental
tendency towards specific targets. Most of the previous approaches model
context and target words with RNN and attention. However, RNNs are difficult to
parallelize and truncated backpropagation through time brings difficulty in
remembering long-term patterns. To address this issue, this paper proposes an
Attentional Encoder Network (AEN) which eschews recurrence and employs
attention based encoders for the modeling between context and target. We raise
the label unreliability issue and introduce label smoothing regularization. We
also apply pre-trained BERT to this task and obtain new state-of-the-art
results. Experiments and analysis demonstrate the effectiveness and lightweight nature
of our model.
| 2019 | Computation and Language |
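Label smoothing regularization, mentioned in the AEN entry above, is straightforward to write down. The sketch below is the common formulation (the paper's exact setup may differ): each gold label keeps probability 1 - epsilon and the remaining mass is spread uniformly over the other classes:

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, epsilon=0.1):
    """Cross-entropy against a smoothed target distribution.

    logits: [batch, num_classes], target: [batch] integer class ids
    """
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, epsilon / (num_classes - 1))
    smooth.scatter_(1, target.unsqueeze(1), 1.0 - epsilon)
    return -(smooth * log_probs).sum(dim=-1).mean()

# Example: three sentiment classes (negative, neutral, positive)
logits = torch.randn(4, 3)
target = torch.tensor([0, 2, 1, 2])
print(label_smoothing_loss(logits, target))
```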
EAT: a simple and versatile semantic representation format for
multi-purpose NLP | Semantic representations are central in many NLP tasks that require
human-interpretable data. The conjunctivist framework - primarily developed by
Pietroski (2005, 2018) - obtains expressive representations with only a few
basic semantic types and relations systematically linked to syntactic
positions. While representational simplicity is crucial for computational
applications, such findings have not yet had a major influence on NLP. We present
the first generic semantic representation format for NLP directly based on
these insights. We name the format EAT due to its basis in the Event-, Agent-,
and Theme arguments in Neo-Davidsonian logical forms. It builds on the idea
that similar tripartite argument relations are ubiquitous across categories,
and can be constructed from grammatical structure without additional lexical
information. We present a detailed exposition of EAT and how it relates to
other prevalent formats used in prior work, such as Abstract Meaning
Representation (AMR) and Minimal Recursion Semantics (MRS). EAT stands out in
two respects: simplicity and versatility. Uniquely, EAT discards semantic
metapredicates, and instead represents semantic roles entirely via positional
encoding. This is made possible by limiting the number of roles to only three;
a major decrease from the many dozens recognized in e.g. AMR and MRS. EAT's
simplicity makes it exceptionally versatile in application. First, we show that
drastically reducing semantic roles based on EAT benefits text generation from
MRS in the test settings of Hajdik et al. (2019). Second, we implement the
derivation of EAT from a syntactic parse, and apply this for parallel corpus
generation between grammatical classes. Third, we train an encoder-decoder LSTM
network to map EAT to English. Finally, we use both the encoder-decoder network
and a rule-based alternative to conduct grammatical transformation from
EAT-input.
| 2021 | Computation and Language |
Cooperative Learning of Disjoint Syntax and Semantics | There has been considerable attention devoted to models that learn to jointly
infer an expression's syntactic structure and its semantics. Yet,
\citet{NangiaB18} has recently shown that the current best systems fail to
learn the correct parsing strategy on mathematical expressions generated from a
simple context-free grammar. In this work, we present a recursive model
inspired by \newcite{ChoiYL18} that reaches near perfect accuracy on this task.
Our model is composed of two separated modules for syntax and semantics. They
are cooperatively trained with standard continuous and discrete optimization
schemes. Our model does not require any linguistic structure for supervision
and its recursive nature allows for out-of-domain generalization with little
loss in performance. Additionally, our approach performs competitively on
several natural language tasks, such as Natural Language Inference or Sentiment
Analysis.
| 2019 | Computation and Language |
MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts | This paper presents the formal release of MedMentions, a new manually
annotated resource for the recognition of biomedical concepts. What
distinguishes MedMentions from other annotated biomedical corpora is its size
(over 4,000 abstracts and over 350,000 linked mentions), as well as the size of
the concept ontology (over 3 million concepts from UMLS 2017) and its broad
coverage of biomedical disciplines. In addition to the full corpus, a
sub-corpus of MedMentions is also presented, comprising annotations for a
subset of UMLS 2017 targeted towards document retrieval. To encourage research
in Biomedical Named Entity Recognition and Linking, data splits for training
and testing are included in the release, and a baseline model and its metrics
for entity linking are also described.
| 2019 | Computation and Language |
Cross-Lingual Alignment of Contextual Word Embeddings, with Applications
to Zero-shot Dependency Parsing | We introduce a novel method for multilingual transfer that utilizes deep
contextual embeddings, pretrained in an unsupervised fashion. While contextual
embeddings have been shown to yield richer representations of meaning compared
to their static counterparts, aligning them poses a challenge due to their
dynamic nature. To this end, we construct context-independent variants of the
original monolingual spaces and utilize their mapping to derive an alignment
for the context-dependent spaces. This mapping readily supports processing of a
target language, improving transfer by context-aware embeddings. Our
experimental results demonstrate the effectiveness of this approach for
zero-shot and few-shot learning of dependency parsing. Specifically, our method
consistently outperforms the previous state-of-the-art on 6 tested languages,
yielding an improvement of 6.8 LAS points on average.
| 2019 | Computation and Language |
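The alignment method in the entry above builds context-independent anchors (e.g. each word type's average contextual embedding) and maps them with an orthogonal transform that is then reused for the contextual vectors. A minimal numpy sketch of that Procrustes step, assuming the anchor matrices are already row-aligned through a bilingual dictionary:

```python
import numpy as np

def procrustes_alignment(src_anchors, tgt_anchors):
    """Orthogonal map W (Procrustes solution) such that W @ src_anchor ≈ tgt_anchor.

    src_anchors, tgt_anchors: [n_pairs, dim] row-aligned anchor matrices.
    """
    u, _, vt = np.linalg.svd(tgt_anchors.T @ src_anchors)
    return u @ vt

def align_contextual(contextual_src, w):
    """Apply the learned orthogonal map to contextual embeddings [n, dim]."""
    return contextual_src @ w.T

# toy usage with random anchors standing in for averaged contextual vectors
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 32))
tgt = rng.normal(size=(100, 32))
W = procrustes_alignment(src, tgt)
aligned = align_contextual(src, W)
```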
GQA: A New Dataset for Real-World Visual Reasoning and Compositional
Question Answering | We introduce GQA, a new dataset for real-world visual reasoning and
compositional question answering, seeking to address key shortcomings of
previous VQA datasets. We have developed a strong and robust question engine
that leverages scene graph structures to create 22M diverse reasoning
questions, all of which come with functional programs that represent their semantics. We
use the programs to gain tight control over the answer distribution and present
a new tunable smoothing technique to mitigate question biases. Accompanying the
dataset is a suite of new metrics that evaluate essential qualities such as
consistency, grounding and plausibility. An extensive analysis is performed for
baselines as well as state-of-the-art models, providing fine-grained results
for different question types and topologies. Whereas a blind LSTM obtains a mere
42.1% and strong VQA models achieve 54.1%, human performance tops out at 89.3%,
offering ample opportunity for new research to explore. We strongly hope GQA
will provide an enabling resource for the next generation of models with
enhanced robustness, improved consistency, and deeper semantic understanding
for images and language.
| 2019 | Computation and Language |
Improving Robustness of Machine Translation with Synthetic Noise | Modern Machine Translation (MT) systems perform consistently well on clean,
in-domain text. However, most human-generated text, particularly in the realm of
social media, is full of typos, slang, dialect, idiolect and other noise which
can have a disastrous impact on the accuracy of output translation. In this
paper we leverage the Machine Translation of Noisy Text (MTNT) dataset to
enhance the robustness of MT systems by emulating naturally occurring noise in
otherwise clean data. Synthesizing noise in this manner we are ultimately able
to make a vanilla MT system resilient to naturally occurring noise and
partially mitigate loss in accuracy resulting therefrom.
| 2,019 | Computation and Language |
Lost in Machine Translation: A Method to Reduce Meaning Loss | A desideratum of high-quality translation systems is that they preserve
meaning, in the sense that two sentences with different meanings should not
translate to one and the same sentence in another language. However,
state-of-the-art systems often fail in this regard, particularly in cases where
the source and target languages partition the "meaning space" in different
ways. For instance, "I cut my finger." and "I cut my finger off." describe
different states of the world but are translated to French (by both Fairseq and
Google Translate) as "Je me suis coupe le doigt.", which is ambiguous as to
whether the finger is detached. More generally, translation systems are
typically many-to-one (non-injective) functions from source to target language,
which in many cases results in important distinctions in meaning being lost in
translation. Building on Bayesian models of informative utterance production,
we present a method to define a less ambiguous translation system in terms of
an underlying pre-trained neural sequence-to-sequence model. This method
increases injectivity, resulting in greater preservation of meaning as measured
by improvement in cycle-consistency, without impeding translation quality
(measured by BLEU score).
| 2,019 | Computation and Language |
Predicting the Type and Target of Offensive Posts in Social Media | As offensive content has become pervasive in social media, there has been
much research in identifying potentially offensive messages. However, previous
work on this topic did not consider the problem as a whole, but rather focused
on detecting very specific types of offensive content, e.g., hate speech,
cyberbullying, or cyber-aggression. In contrast, here we target several
different kinds of offensive content. In particular, we model the task
hierarchically, identifying the type and the target of offensive messages in
social media. For this purpose, we compiled the Offensive Language
Identification Dataset (OLID), a new dataset with tweets annotated for
offensive content using a fine-grained three-layer annotation scheme, which we
make publicly available. We discuss the main similarities and differences
between OLID and pre-existing datasets for hate speech identification,
aggression detection, and similar tasks. We further experiment with and
compare the performance of different machine learning models on OLID.
| 2,019 | Computation and Language |
Developing and Using Special-Purpose Lexicons for Cohort Selection from
Clinical Notes | Background and Significance: Selecting cohorts for a clinical trial typically
requires costly and time-consuming manual chart reviews resulting in poor
participation. To help automate the process, National NLP Clinical Challenges
(N2C2) conducted a shared challenge by defining 13 criteria for clinical trial
cohort selection and by providing training and test datasets. This research was
motivated by the N2C2 challenge.
Methods: We broke down the task into 13 independent subtasks corresponding to
each criterion and implemented subtasks using rules or a supervised machine
learning model. Each task critically depended on knowledge resources in the
form of task-specific lexicons, for which we developed a novel model-driven
approach. The approach allowed us to first expand the lexicon from a seed set
and then remove noise from the list, thus improving the accuracy.
Results: Our system achieved an overall F measure of 0.9003 at the challenge,
and was statistically tied for the first place out of 45 participants. The
model-driven lexicon development and further debugging the rules/code on the
training set improved overall F measure to 0.9140, overtaking the best
numerical result at the challenge.
Discussion: Cohort selection, like phenotype extraction and classification,
is amenable to rule-based or simple machine learning methods; however, the
lexicons involved, such as medication names or medical terms referring to a
medical problem, critically determine the overall accuracy. Automated lexicon
development has the potential for scalability and accuracy.
| 2,019 | Computation and Language |
Polyglot Contextual Representations Improve Crosslingual Transfer | We introduce Rosita, a method to produce multilingual contextual word
representations by training a single language model on text from multiple
languages. Our method combines the advantages of contextual word
representations with those of multilingual representation learning. We produce
language models from dissimilar language pairs (English/Arabic and
English/Chinese) and use them in dependency parsing, semantic role labeling,
and named entity recognition, with comparisons to monolingual and
non-contextual variants. Our results provide further evidence for the benefits
of polyglot learning, in which representations are shared across multiple
languages.
| 2,019 | Computation and Language |
Interpretable Structure-aware Document Encoders with Hierarchical
Attention | We propose a method to create document representations that reflect their
internal structure. We modify Tree-LSTMs to hierarchically merge basic elements
such as words and sentences into blocks of increasing complexity. Our Structure
Tree-LSTM implements a hierarchical attention mechanism over individual
components and combinations thereof. We thus emphasize the usefulness of
Tree-LSTMs for texts larger than a sentence. We show that structure-aware
encoders can be used to improve the performance of document classification. We
demonstrate that our method is resilient to changes to the basic building
blocks, as it performs well with both sentence and word embeddings. The
Structure Tree-LSTM outperforms all the baselines on two datasets by leveraging
structural clues. We show our model's interpretability by visualizing how our
model distributes attention inside a document. On a third dataset from the
medical domain, our model achieves competitive performance with the state of
the art. This result shows the Structure Tree-LSTM can leverage dependency
relations other than text structure, such as a set of reports on the same
patient.
| 2,019 | Computation and Language |
Syntactic Recurrent Neural Network for Authorship Attribution | Writing style is a combination of consistent decisions at different levels of
language production, including lexical, syntactic, and structural choices
associated with a specific author (or author groups). While lexical-based models have been
widely explored in style-based text classification, relying on content makes
the model less scalable when dealing with heterogeneous data comprised of
various topics. On the other hand, syntactic models which are
content-independent, are more robust against topic variance. In this paper, we
introduce a syntactic recurrent neural network to encode the syntactic patterns
of a document in a hierarchical structure. The model first learns the syntactic
representation of sentences from the sequence of part-of-speech tags. For this
purpose, we exploit both convolutional filters and long short-term memories to
investigate the short-term and long-term dependencies of part-of-speech tags in
the sentences. Subsequently, the syntactic representations of sentences are
aggregated into a document representation using recurrent neural networks. Our
experimental results on the PAN 2012 dataset for the authorship attribution task
show that the syntactic recurrent neural network outperforms the lexical model
with an identical architecture by approximately 14% in terms of accuracy.
| 2,019 | Computation and Language |
Image-Question-Answer Synergistic Network for Visual Dialog | The image, question (combined with the history for de-referencing), and the
corresponding answer are three vital components of visual dialog. Classical
visual dialog systems integrate the image, question, and history to search for
or generate the best-matched answer; this approach therefore largely
ignores the role of the answer. In this paper, we devise a novel
image-question-answer synergistic network to value the role of the answer for
precise visual dialog. We extend the traditional one-stage solution to a
two-stage solution. In the first stage, candidate answers are coarsely scored
according to their relevance to the image and question pair. Afterward, in the
second stage, answers with high probability of being correct are re-ranked by
synergizing with image and question. On the Visual Dialog v1.0 dataset, the
proposed synergistic network boosts the discriminative visual dialog model to
achieve a new state-of-the-art of 57.88\% normalized discounted cumulative
gain. A generative visual dialog model equipped with the proposed technique
also shows promising improvements.
| 2,019 | Computation and Language |
Recursive Subtree Composition in LSTM-Based Dependency Parsing | The need for tree structure modelling on top of sequence modelling is an open
issue in neural dependency parsing. We investigate the impact of adding a tree
layer on top of a sequential model by recursively composing subtree
representations (composition) in a transition-based parser that uses features
extracted by a BiLSTM. Composition seems superfluous with such a model,
suggesting that BiLSTMs capture information about subtrees. We perform model
ablations to tease out the conditions under which composition helps. When
ablating the backward LSTM, performance drops and composition does not recover
much of the gap. When ablating the forward LSTM, performance drops less
dramatically and composition recovers a substantial part of the gap, indicating
that a forward LSTM and composition capture similar information. We take the
backward LSTM to be related to lookahead features and the forward LSTM to the
rich history-based features both crucial for transition-based parsers. To
capture history-based information, composition is better than a forward LSTM on
its own, but it is even better to have a forward LSTM as part of a BiLSTM. We
correlate results with language properties, showing that the improved lookahead
of a backward LSTM is especially important for head-final languages.
| 2,019 | Computation and Language |
Semantic Hilbert Space for Text Representation Learning | Capturing the meaning of sentences has long been a challenging task. Current
models tend to apply linear combinations of word features to conduct semantic
composition for larger-granularity units, e.g., phrases, sentences, and
documents. However, the semantic linearity does not always hold in human
language. For instance, the meaning of the phrase `ivory tower' cannot be
deduced by linearly combining the meanings of `ivory' and `tower'. To address
this issue, we propose a new framework that models different levels of semantic
units (e.g. sememe, word, sentence, and semantic abstraction) on a single
\textit{Semantic Hilbert Space}, which naturally admits a non-linear semantic
composition by means of a complex-valued vector word representation. An
end-to-end neural network~\footnote{https://github.com/wabyking/qnn} is
proposed to implement the framework in the text classification task, and
evaluation results on six benchmarking text classification datasets demonstrate
the effectiveness, robustness and self-explanation power of the proposed model.
Furthermore, intuitive case studies are conducted to help end users to
understand how the framework works.
| 2,019 | Computation and Language |
Improving a tf-idf weighted document vector embedding | We examine a number of methods to compute a dense vector embedding for a
document in a corpus, given a set of word vectors such as those from word2vec
or GloVe. We describe two methods that can improve upon a simple weighted sum
and that are optimal in the sense that they maximize a particular weighted cosine
similarity measure.
We consider several weighting functions, including inverse document frequency
(idf), smooth inverse frequency (SIF), and the sub-sampling function used in
word2vec. We find that idf works best for our applications. We also use common
component removal proposed by Arora et al. as a post-process and find it is
helpful in most cases.
We compare these embedding variations to the doc2vec embedding on a new
evaluation task using TripAdvisor reviews, and also on the CQADupStack
benchmark from the literature.
| 2,019 | Computation and Language |
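A minimal numpy sketch of the kind of pipeline the preceding abstract describes: an idf-weighted average of word vectors per document, followed by removal of the common component along the top singular direction (the post-process attributed to Arora et al.). The function name, the toy interface, and the exact weighting are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def idf_weighted_doc_embeddings(docs, word_vecs, dim):
    """docs: list of token lists; word_vecs: dict token -> np.ndarray of shape (dim,)."""
    n_docs = len(docs)
    # inverse document frequency computed from the corpus itself
    df = {}
    for doc in docs:
        for tok in set(doc):
            df[tok] = df.get(tok, 0) + 1
    idf = {tok: np.log(n_docs / cnt) for tok, cnt in df.items()}

    # idf-weighted average of the word vectors of each document
    emb = np.zeros((n_docs, dim))
    for i, doc in enumerate(docs):
        vecs = [idf[tok] * word_vecs[tok] for tok in doc if tok in word_vecs]
        if vecs:
            emb[i] = np.mean(vecs, axis=0)

    # common component removal (Arora et al.): project out the top singular direction
    u = np.linalg.svd(emb, full_matrices=False)[2][0]
    return emb - np.outer(emb @ u, u)
```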
A framework for information extraction from tables in biomedical
literature | The scientific literature is growing exponentially, and professionals are no
longer able to cope with the current volume of publications. In the past, text
mining provided methods to retrieve and extract information from text; however,
most of these approaches ignored tables and figures. Research on mining table
data still lacks an integrated approach that considers all the complexities and
challenges of a table. Our research is
examining the methods for extracting numerical (number of patients, age, gender
distribution) and textual (adverse reactions) information from tables in the
clinical literature. We present a requirement analysis template and an integral
methodology for information extraction from tables in clinical domain that
contains 7 steps: (1) table detection, (2) functional processing, (3)
structural processing, (4) semantic tagging, (5) pragmatic processing, (6) cell
selection and (7) syntactic processing and extraction. Our approach achieved
F-measures ranging between 82% and 92%, depending on the variable, task,
and its complexity.
| 2,019 | Computation and Language |
Entity Recognition at First Sight: Improving NER with Eye Movement
Information | Previous research shows that eye-tracking data contains information about the
lexical and syntactic properties of text, which can be used to improve natural
language processing models. In this work, we leverage eye movement features
from three corpora with recorded gaze information to augment a state-of-the-art
neural model for named entity recognition (NER) with gaze embeddings. These
corpora were manually annotated with named entity labels. Moreover, we show how
gaze features, generalized on word type level, eliminate the need for recorded
eye-tracking data at test time. The gaze-augmented models for NER using
token-level and type-level features outperform the baselines. We present the
benefits of eye-tracking features by evaluating the NER models on both
individual datasets as well as in cross-domain settings.
| 2,019 | Computation and Language |
Multi-Task Learning with Contextualized Word Representations for
Extended Named Entity Recognition | Fine-Grained Named Entity Recognition (FG-NER) is critical for many NLP
applications. While classical named entity recognition (NER) has attracted a
substantial amount of research, FG-NER is still an open research domain. The
current state-of-the-art (SOTA) model for FG-NER relies heavily on manual
efforts for building a dictionary and designing hand-crafted features. The
end-to-end framework that achieved the SOTA result for NER did not achieve
competitive results compared to the SOTA model for FG-NER. In this paper, we
investigate how effective multi-task learning approaches are in an end-to-end
framework for FG-NER in different aspects. Our experiments show that using
multi-task learning approaches with contextualized word representation can help
an end-to-end neural network model achieve SOTA results without using any
additional manual effort for creating data and designing features.
| 2,019 | Computation and Language |
BUT-FIT at SemEval-2019 Task 7: Determining the Rumour Stance with
Pre-Trained Deep Bidirectional Transformers | This paper describes our system submitted to SemEval 2019 Task 7: RumourEval
2019: Determining Rumour Veracity and Support for Rumours, Subtask A (Gorrell
et al., 2019). The challenge focused on classifying whether posts from Twitter
and Reddit support, deny, query, or comment on a hidden rumour, the truthfulness
of which is the topic of an underlying discussion thread. We formulate the problem
as a stance classification, determining the rumour stance of a post with
respect to the previous thread post and the source thread post. The recent BERT
architecture was employed to build an end-to-end system which reached an
F1 score of 61.67% on the provided test data. It finished in 2nd place in
the competition, without any hand-crafted features, only 0.2% behind the
winner.
| 2,019 | Computation and Language |
Attention is not Explanation | Attention mechanisms have seen wide adoption in neural NLP models. In
addition to improving predictive performance, these are often touted as
affording transparency: models equipped with attention provide a distribution
over attended-to input units, and this is often presented (at least implicitly)
as communicating the relative importance of inputs. However, it is unclear what
relationship exists between attention weights and model outputs. In this work,
we perform extensive experiments across a variety of NLP tasks that aim to
assess the degree to which attention weights provide meaningful `explanations'
for predictions. We find that they largely do not. For example, learned
attention weights are frequently uncorrelated with gradient-based measures of
feature importance, and one can identify very different attention distributions
that nonetheless yield equivalent predictions. Our findings show that standard
attention modules do not provide meaningful explanations and should not be
treated as though they do. Code for all experiments is available at
https://github.com/successar/AttentionExplanation.
| 2,019 | Computation and Language |
On the Idiosyncrasies of the Mandarin Chinese Classifier System | While idiosyncrasies of the Chinese classifier system have been a richly
studied topic among linguists (Adams and Conklin, 1973; Erbaugh, 1986; Lakoff,
1986), not much work has been done to quantify them with statistical methods.
In this paper, we introduce an information-theoretic approach to measuring
idiosyncrasy; we examine how much the uncertainty in Mandarin Chinese
classifiers can be reduced by knowing semantic information about the nouns that
the classifiers modify. Using the empirical distribution of classifiers from
the parsed Chinese Gigaword corpus (Graff et al., 2005), we compute the mutual
information (in bits) between the distribution over classifiers and
distributions over other linguistic quantities. We investigate whether semantic
classes of nouns and adjectives differ in how much they reduce uncertainty in
classifier choice, and find that it is not fully idiosyncratic; while there are
no obvious trends for the majority of semantic classes, shape nouns reduce
uncertainty in classifier choice the most.
| 2,020 | Computation and Language |
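A small numpy sketch of the measurement described in the preceding abstract: estimating the mutual information, in bits, between a conditioning variable (here a toy noun-class variable) and the classifier choice from a table of co-occurrence counts. The counts below are invented purely for illustration; they are not from the Chinese Gigaword corpus.

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """joint_counts[i, j]: co-occurrence count of noun class i with classifier j."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal over noun classes
    py = p.sum(axis=0, keepdims=True)      # marginal over classifiers
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# toy table: rows = semantic classes of nouns, columns = classifiers
counts = np.array([[30.0, 5.0, 1.0],      # e.g. shape nouns
                   [2.0, 25.0, 10.0]])    # e.g. animal nouns
print(mutual_information_bits(counts))    # bits of uncertainty in classifier choice removed
```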
Learning When Not to Answer: A Ternary Reward Structure for
Reinforcement Learning based Question Answering | In this paper, we investigate the challenges of using reinforcement learning
agents for question-answering over knowledge graphs for real-world
applications. We examine the performance metrics used by state-of-the-art
systems and determine that they are inadequate for such settings. More
specifically, they do not evaluate the systems correctly for situations when
there is no answer available and thus agents optimized for these metrics are
poor at modeling confidence. We introduce a simple new performance metric for
evaluating question-answering agents that is more representative of practical
usage conditions, and optimize for this metric by extending the binary reward
structure used in prior work to a ternary reward structure which also rewards
an agent for not answering a question rather than giving an incorrect answer.
We show that this can drastically improve the precision of answered questions
while declining to answer only a limited number of previously correctly answered
questions. Employing a supervised learning strategy using depth-first-search
paths to bootstrap the reinforcement learning algorithm further improves
performance.
| 2,019 | Computation and Language |
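The reward structure described in the preceding abstract is easy to state. A minimal sketch with illustrative reward constants (the paper's exact values are not reproduced here):

```python
def ternary_reward(answered, correct, r_correct=1.0, r_wrong=-1.0, r_abstain=0.0):
    """Positive reward for a correct answer, negative reward for an incorrect
    answer, and a neutral reward for declining to answer; the binary scheme used
    in prior work is recovered when r_abstain == r_wrong."""
    if not answered:
        return r_abstain
    return r_correct if correct else r_wrong
```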
Non-Autoregressive Machine Translation with Auxiliary Regularization | As a new neural machine translation approach, Non-Autoregressive machine
Translation (NAT) has attracted attention recently due to its high efficiency
in inference. However, the high efficiency has come at the cost of not
capturing the sequential dependency on the target side of translation, which
causes NAT to suffer from two kinds of translation errors: 1) repeated
translations (due to indistinguishable adjacent decoder hidden states), and 2)
incomplete translations (due to incomplete transfer of source side information
via the decoder hidden states).
In this paper, we propose to address these two problems by improving the
quality of decoder hidden representations via two auxiliary regularization
terms in the training process of an NAT model. First, to make the hidden states
more distinguishable, we regularize the similarity between consecutive hidden
states based on the corresponding target tokens. Second, to force the hidden
states to contain all the information in the source sentence, we leverage the
dual nature of translation tasks (e.g., English to German and German to
English) and minimize a backward reconstruction error to ensure that the hidden
states of the NAT decoder are able to recover the source side sentence.
Extensive experiments conducted on several benchmark datasets show that both
regularization strategies are effective and can alleviate the issues of
repeated translations and incomplete translations in NAT models. The accuracy
of NAT models is therefore improved significantly over the state-of-the-art NAT
models with even better efficiency for inference.
| 2,019 | Computation and Language |
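A hedged PyTorch sketch of the first auxiliary term described in the preceding abstract: regularizing the similarity of consecutive decoder hidden states depending on whether the corresponding target tokens repeat. The exact functional form used in the paper may differ; this only illustrates the idea.

```python
import torch
import torch.nn.functional as F

def similarity_regularization(hidden, target_ids):
    """hidden: (batch, seq_len, dim) decoder states; target_ids: (batch, seq_len).
    Pull consecutive states together when the target tokens repeat and push
    them apart when the target tokens differ."""
    cos = F.cosine_similarity(hidden[:, :-1], hidden[:, 1:], dim=-1)
    same = (target_ids[:, :-1] == target_ids[:, 1:]).float()
    loss = same * (1.0 - cos) + (1.0 - same) * (1.0 + cos)
    return loss.mean()
```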
Fixed-Size Ordinally Forgetting Encoding Based Word Sense Disambiguation | In this paper, we present our method of using fixed-size ordinally forgetting
encoding (FOFE) to solve the word sense disambiguation (WSD) problem. FOFE
enables us to encode a variable-length sequence of words into a theoretically
unique fixed-size representation that can be fed into a feed forward neural
network (FFNN), while keeping the positional information between words. In our
method, a FOFE-based FFNN is used to train a pseudo language model over an
unlabelled corpus; the pre-trained language model is then capable of
abstracting the surrounding context of polyseme instances in a labelled corpus
into context embeddings. Next, we take advantage of these context embeddings
towards WSD classification. We conducted experiments on several WSD data sets,
which demonstrate that our proposed method can achieve comparable performance
to that of the state-of-the-art approach at the expense of much lower
computational cost.
| 2,019 | Computation and Language |
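The encoding named in the preceding abstract can be written in a few lines. A toy numpy sketch of FOFE over integer token ids (illustrative only; the paper feeds such codes into an FFNN-based pseudo language model):

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.5):
    """z_t = alpha * z_{t-1} + one_hot(w_t); for 0 < alpha <= 0.5 the resulting
    fixed-size code is unique for any variable-length sequence."""
    z = np.zeros(vocab_size)
    for tok in token_ids:
        z = alpha * z
        z[tok] += 1.0
    return z

# toy vocabulary {0: "the", 1: "river", 2: "bank"}
print(fofe_encode([0, 1, 2], vocab_size=3))   # positional information is preserved
```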
Leveraging Deep Graph-Based Text Representation for Sentiment Polarity
Applications | Over the last few years, machine learning over graph structures has
yielded significant improvements in text mining applications such as event
detection, opinion mining, and news recommendation. One of the primary
challenges in this regard is structuring a graph that encodes and encompasses
the features of textual data for the effective machine learning algorithm.
Besides, exploring and exploiting semantic relations is regarded as a
principal step in text mining applications. However, most traditional
text mining methods perform somewhat poorly in terms of employing such relations.
In this paper, we propose a sentence-level graph-based text representation
which includes stop words to consider semantic and term relations. Then, we
employ a representation learning approach on the combined graphs of sentences
to extract the latent and continuous features of the documents. Finally, the
learned features of the documents are fed into a deep neural network for the
sentiment classification task. The experimental results demonstrate that the
proposed method substantially outperforms the related sentiment analysis
approaches based on several benchmark datasets. Furthermore, our method can be
generalized on different datasets without any dependency on pre-trained word
embeddings.
| 2,020 | Computation and Language |
A Framework for Decoding Event-Related Potentials from Text | We propose a novel framework for modeling event-related potentials (ERPs)
collected during reading that couples pre-trained convolutional decoders with a
language model. Using this framework, we compare the abilities of a variety of
existing and novel sentence processing models to reconstruct ERPs. We find that
modern contextual word embeddings underperform surprisal-based models but that,
combined, the two outperform either on its own.
| 2,019 | Computation and Language |
CN-Probase: A Data-driven Approach for Large-scale Chinese Taxonomy
Construction | Taxonomies play an important role in machine intelligence. However, most
well-known taxonomies are in English, and non-English taxonomies, especially
Chinese ones, are still very rare. In this paper, we focus on automatic Chinese
taxonomy construction and propose an effective generation and verification
framework to build a large-scale and high-quality Chinese taxonomy. In the
generation module, we extract isA relations from multiple Chinese encyclopedia
sources, which ensures coverage. To further improve the precision of the
taxonomy, we apply three heuristic approaches in the verification module. As a
result, we construct CN-Probase, the largest Chinese taxonomy, with a high
precision of about 95%. Our taxonomy has been deployed on Aliyun, with over 82
million API calls in six months.
| 2,019 | Computation and Language |
How Large a Vocabulary Does Text Classification Need? A Variational
Approach to Vocabulary Selection | With the rapid development in deep learning, deep neural networks have been
widely adopted in many real-life natural language applications. Under deep
neural networks, a pre-defined vocabulary is required to vectorize text inputs.
The canonical approach to selecting a pre-defined vocabulary is based on the word
frequency, where a threshold is selected to cut off the long tail distribution.
However, we observed that such a simple approach could easily lead to under-sized
vocabulary or over-sized vocabulary issues. Therefore, we are interested in
understanding how the end-task classification accuracy is related to the
vocabulary size and what is the minimum required vocabulary size to achieve a
specific performance. In this paper, we provide a more sophisticated
variational vocabulary dropout (VVD) based on variational dropout to perform
vocabulary selection, which can intelligently select the subset of the
vocabulary to achieve the required performance. To evaluate different
algorithms on the newly proposed vocabulary selection problem, we propose two
new metrics: Area Under Accuracy-Vocab Curve and Vocab Size under X\% Accuracy
Drop. Through extensive experiments on various NLP classification tasks, our
variational framework is shown to significantly outperform the frequency-based
and other selection baselines on these metrics.
| 2,019 | Computation and Language |
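A sketch of the first of the two metrics proposed in the preceding abstract, the Area Under the Accuracy-Vocab Curve, computed here with the trapezoid rule over (vocabulary size, accuracy) pairs. The normalization and the data points are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

def area_under_accuracy_vocab_curve(vocab_sizes, accuracies):
    """Integrate accuracy over the (normalized) vocabulary-size axis."""
    order = np.argsort(vocab_sizes)
    x = np.asarray(vocab_sizes, dtype=float)[order]
    y = np.asarray(accuracies, dtype=float)[order]
    return float(np.trapz(y, x / x.max()))

# toy accuracy-vs-vocabulary curve for a frequency-based cutoff
print(area_under_accuracy_vocab_curve([100, 1000, 10000, 50000],
                                       [0.71, 0.83, 0.88, 0.885]))
```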
An Editorial Network for Enhanced Document Summarization | We suggest a new idea of Editorial Network - a mixed extractive-abstractive
summarization approach, which is applied as a post-processing step over a given
sequence of extracted sentences. Our network tries to imitate the decision
process of a human editor during summarization. Within such a process, each
extracted sentence may be either kept untouched, rephrased or completely
rejected. We further suggest an effective way for training the "editor" based
on a novel soft-labeling approach. Using the CNN/DailyMail dataset we
demonstrate the effectiveness of our approach compared to state-of-the-art
extractive-only or abstractive-only baseline methods.
| 2,019 | Computation and Language |
Domain-Constrained Advertising Keyword Generation | Advertising (ad for short) keyword suggestion is important for sponsored
search to improve online advertising and increase search revenue. There are two
common challenges in this task. First, the keyword bidding problem: hot ad
keywords are very expensive for most of the advertisers because more
advertisers are bidding on more popular keywords, while unpopular keywords are
difficult to discover. As a result, most ads have few chances to be presented
to the users. Second, the inefficient ad impression issue: a large proportion
of search queries, which are unpopular yet relevant to many ad keywords, have
no ads presented on their search result pages. Existing retrieval-based or
matching-based methods either deteriorate the bidding competition or are unable
to suggest novel keywords to cover more queries, which leads to inefficient ad
impressions. To address the above issues, this work investigates to use
generative neural networks for keyword generation in sponsored search. Given a
purchased keyword (a word sequence) as input, our model can generate a set of
keywords that are not only relevant to the input but also satisfy the domain
constraint which enforces that the domain category of a generated keyword is as
expected. Furthermore, a reinforcement learning algorithm is proposed to
adaptively utilize domain-specific information in keyword generation. Offline
evaluation shows that the proposed model can generate keywords that are
diverse, novel, relevant to the source keyword, and accordant with the domain
constraint. Online evaluation shows that generative models can improve coverage
(COV), click-through rate (CTR), and revenue per mille (RPM) substantially in
sponsored search.
| 2,019 | Computation and Language |
Learning to Generate Questions by Learning What not to Generate | Automatic question generation is an important technique that can improve the
training of question answering, help chatbots to start or continue a
conversation with humans, and provide assessment materials for educational
purposes. Existing neural question generation models are not sufficient mainly
due to their inability to properly model the process of how each word in the
question is selected, i.e., whether it is copied from the given passage or
generated from a vocabulary. In this paper, we propose our Clue Guided Copy
Network for Question Generation (CGC-QG), which is a sequence-to-sequence
generative model with copying mechanism, yet employing a variety of novel
components and techniques to boost the performance of question generation. In
CGC-QG, we design a multi-task labeling strategy to identify whether a question
word should be copied from the input passage or be generated instead, guiding
the model to learn the accurate boundaries between copying and generation.
Furthermore, our input passage encoder takes as input, among a diverse range of
other features, the prediction made by a clue word predictor, which helps
identify whether each word in the input passage is a potential clue to be
copied into the target question. The clue word predictor is designed based on a
novel application of Graph Convolutional Networks onto a syntactic dependency
tree representation of each passage, thus being able to predict clue words only
based on their context in the passage and their relative positions to the
answer in the tree. We jointly train the clue prediction as well as question
generation with multi-task learning and a number of practical strategies to
reduce the complexity. Extensive evaluations show that our model significantly
improves the performance of question generation and out-performs all previous
state-of-the-art neural question generation models by a substantial margin.
| 2,019 | Computation and Language |
Multilingual Neural Machine Translation with Knowledge Distillation | Multilingual machine translation, which translates multiple languages with a
single model, has attracted much attention due to its efficiency of offline
training and online serving. However, traditional multilingual translation
usually yields inferior accuracy compared with the counterpart using individual
models for each language pair, due to language diversity and model capacity
limitations. In this paper, we propose a distillation-based approach to boost
the accuracy of multilingual machine translation. Specifically, individual
models are first trained and regarded as teachers, and then the multilingual
model is trained to fit the training data and match the outputs of individual
models simultaneously through knowledge distillation. Experiments on IWSLT, WMT
and Ted talk translation datasets demonstrate the effectiveness of our method.
Particularly, we show that one model is enough to handle multiple languages (up
to 44 languages in our experiment), with comparable or even better accuracy
than individual models.
| 2,019 | Computation and Language |
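A PyTorch sketch of the per-batch training objective described in the preceding abstract: the multilingual student fits the reference translations with cross-entropy while also matching the corresponding individual (teacher) model's output distribution. The mixing weight and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multilingual_distillation_loss(student_logits, teacher_logits, target_ids,
                                   alpha=0.5, pad_id=0):
    """student_logits, teacher_logits: (batch, seq_len, vocab); target_ids: (batch, seq_len)."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab), target_ids.view(-1),
                         ignore_index=pad_id)
    kd = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    return (1.0 - alpha) * ce + alpha * kd
```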
Induction Networks for Few-Shot Text Classification | Text classification tends to struggle when data is deficient or when it needs
to adapt to unseen classes. In such challenging scenarios, recent studies have
used meta-learning to simulate the few-shot task, in which new queries are
compared to a small support set at the sample-wise level. However, this
sample-wise comparison may be severely disturbed by the various expressions in
the same class. Therefore, we should be able to learn a general representation
of each class in the support set and then compare it to new queries. In this
paper, we propose a novel Induction Network to learn such a generalized
class-wise representation, by innovatively leveraging the dynamic routing
algorithm in meta-learning. In this way, we find the model is able to induce
and generalize better. We evaluate the proposed model on a well-studied
sentiment classification dataset (English) and a real-world dialogue intent
classification dataset (Chinese). Experiment results show that on both
datasets, the proposed model significantly outperforms the existing
state-of-the-art approaches, proving the effectiveness of class-wise
generalization in few-shot text classification.
| 2,019 | Computation and Language |
Viable Dependency Parsing as Sequence Labeling | We recast dependency parsing as a sequence labeling problem, exploring
several encodings of dependency trees as labels. While dependency parsing by
means of sequence labeling had been attempted in existing work, results
suggested that the technique was impractical. We show instead that with a
conventional BiLSTM-based model it is possible to obtain fast and accurate
parsers. These parsers are conceptually simple, not needing traditional parsing
algorithms or auxiliary structures. However, experiments on the PTB and a
sample of UD treebanks show that they provide a good speed-accuracy tradeoff,
with results competitive with more complex approaches.
| 2,019 | Computation and Language |
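A toy sketch of the simplest kind of tree-to-label encoding the preceding abstract alludes to: each word receives a single tag made of the signed offset to its head plus its dependency relation, so a standard sequence labeling model can predict the tree. The paper explores several richer encodings; this naive one is only for illustration.

```python
def encode_tree(heads, rels):
    """heads[i]: 1-indexed head of word i+1 (0 means the artificial root);
    rels[i]: its dependency relation. Returns one label string per word."""
    return [f"{h - i}@{r}" for i, (h, r) in enumerate(zip(heads, rels), start=1)]

def decode_tree(labels):
    return [i + int(lab.split("@")[0]) for i, lab in enumerate(labels, start=1)]

# "The dog barks": "The" -> "dog", "dog" -> "barks", "barks" -> root
labels = encode_tree([2, 3, 0], ["det", "nsubj", "root"])
print(labels)                 # ['1@det', '1@nsubj', '-3@root']
print(decode_tree(labels))    # [2, 3, 0]
```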
Fast Multi-language LSTM-based Online Handwriting Recognition | We describe an online handwriting system that is able to support 102
languages using a deep neural network architecture. This new system has
completely replaced our previous Segment-and-Decode-based system and reduced
the error rate by 20%-40% relative for most languages. Further, we report new
state-of-the-art results on IAM-OnDB for both the open and closed dataset
setting. The system combines methods from sequence recognition with a new input
encoding using Bézier curves. This leads to up to 10x faster recognition
times compared to our previous system. Through a series of experiments we
determine the optimal configuration of our models and report the results of our
setup on a number of additional public datasets.
| 2,020 | Computation and Language |
DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion | Sentence fusion is the task of joining several independent sentences into a
single coherent text. Current datasets for sentence fusion are small and
insufficient for training modern neural models. In this paper, we propose a
method for automatically-generating fusion examples from raw text and present
DiscoFuse, a large scale dataset for discourse-based sentence fusion. We author
a set of rules for identifying a diverse set of discourse phenomena in raw
text, and decomposing the text into two independent sentences. We apply our
approach on two document collections: Wikipedia and Sports articles, yielding
60 million fusion examples annotated with discourse information required to
reconstruct the fused text. We develop a sequence-to-sequence model on
DiscoFuse and thoroughly analyze its strengths and weaknesses with respect to
the various discourse phenomena, using both automatic as well as human
evaluation. Finally, we conduct transfer learning experiments with WebSplit, a
recent dataset for text simplification. We show that pretraining on DiscoFuse
substantially improves performance on WebSplit when viewed as a sentence fusion
task.
| 2,019 | Computation and Language |
An Embarrassingly Simple Approach for Transfer Learning from Pretrained
Language Models | A growing number of state-of-the-art transfer learning methods employ
language models pretrained on large generic corpora. In this paper we present a
conceptually simple and effective transfer learning approach that addresses the
problem of catastrophic forgetting. Specifically, we combine the task-specific
optimization function with an auxiliary language model objective, which is
adjusted during the training process. This preserves language regularities
captured by language models, while enabling sufficient adaptation for solving
the target task. Our method does not require pretraining or finetuning separate
components of the network and we train our models end-to-end in a single step.
We present results on a variety of challenging affective and text
classification tasks, surpassing well established transfer learning methods
with a greater level of complexity.
| 2,019 | Computation and Language |
Multiresolution Graph Attention Networks for Relevance Matching | A large number of deep learning models have been proposed for the text
matching problem, which is at the core of various typical natural language
processing (NLP) tasks. However, existing deep models are mainly designed for
the semantic matching between a pair of short texts, such as paraphrase
identification and question answering, and do not perform well on the task of
relevance matching between short-long text pairs. This is partially due to the
fact that the essential characteristics of short-long text matching have not
been well considered in these deep models. More specifically, these methods
fail to handle extreme length discrepancy between text pieces and neither can
they fully characterize the underlying structural information in long text
documents. In this paper, we are especially interested in relevance matching
between a piece of short text and a long document, which is critical to
problems like query-document matching in information retrieval and web
searching. To extract the structural information of documents, an undirected
graph is constructed, with each vertex representing a keyword and the weight of
an edge indicating the degree of interaction between keywords. Based on the
keyword graph, we further propose a Multiresolution Graph Attention Network to
learn multi-layered representations of vertices through a Graph Convolutional
Network (GCN), and then match the short text snippet with the graphical
representation of the document with the attention mechanisms applied over each
layer of the GCN. Experimental results on two datasets demonstrate that our
graph approach outperforms other state-of-the-art deep matching models.
| 2,019 | Computation and Language |
When a Tweet is Actually Sexist. A more Comprehensive Classification of
Different Online Harassment Categories and The Challenges in NLP | Sexism is very common in social media and makes the boundaries of freedom
tighter for feminist and female users. There is still no comprehensive
classification of sexism that attracts natural language processing techniques.
Categorizing sexism in social media into the categories of hostile or benevolent
sexism is so general that it simply ignores the other types of sexism occurring
in these media. This paper proposes more comprehensive and in-depth
categories of online harassment in social media, e.g. Twitter, namely
"Indirect harassment", "Information threat", "Sexual harassment",
"Physical harassment" and "Not sexist"; it addresses the challenge of labeling
them and presents the classification results for these categories. This is
preliminary work applying machine learning to learn the concept of sexism, and
it distinguishes itself by looking at more precise categories of sexism in
social media.
| 2,019 | Computation and Language |
Still a Pain in the Neck: Evaluating Text Representations on Lexical
Composition | Building meaningful phrase representations is challenging because phrase
meanings are not simply the sum of their constituent meanings. Lexical
composition can shift the meanings of the constituent words and introduce
implicit information. We tested a broad range of textual representations for
their capacity to address these issues. We found that as expected,
contextualized word representations perform better than static word embeddings,
more so on detecting meaning shift than in recovering implicit information, in
which their performance is still far from that of humans. Our evaluation suite,
including 5 tasks related to lexical composition effects, can serve future
research aiming to improve such representations.
| 2,019 | Computation and Language |
Zoho at SemEval-2019 Task 9: Semi-supervised Domain Adaptation using
Tri-training for Suggestion Mining | This paper describes our submission for the SemEval-2019 Suggestion Mining
task. A simple Convolutional Neural Network (CNN) classifier with contextual
word representations from a pre-trained language model was used for sentence
classification. The model is trained using tri-training, a semi-supervised
bootstrapping mechanism for labelling unseen data. Tri-training proved to be an
effective technique to accommodate domain shift for cross-domain suggestion
mining (Subtask B) where there is no hand labelled training data. For in-domain
evaluation (Subtask A), we use the same technique to augment the training set.
Our system ranks thirteenth in Subtask A with an $F_1$-score of 68.07 and third
in Subtask B with an $F_1$-score of 81.94.
| 2,019 | Computation and Language |
F10-SGD: Fast Training of Elastic-net Linear Models for Text
Classification and Named-entity Recognition | Voice-assistants text classification and named-entity recognition (NER)
models are trained on millions of example utterances. Because of the large
datasets, long training time is one of the bottlenecks for releasing improved
models. In this work, we develop F10-SGD, a fast optimizer for text
classification and NER elastic-net linear models. On internal datasets, F10-SGD
provides 4x reduction in training time compared to the OWL-QN optimizer without
loss of accuracy or increase in model size. Furthermore, we incorporate biased
sampling that prioritizes harder examples towards the end of the training. As a
result, in addition to faster training, we were able to obtain statistically
significant accuracy improvements for NER.
On public datasets, F10-SGD obtains 22% faster training time compared to
FastText for text classification, and a 4x reduction in training time compared
to CRFSuite OWL-QN for NER.
| 2,019 | Computation and Language |
Bridging the Gap: Attending to Discontinuity in Identification of
Multiword Expressions | We introduce a new method to tag Multiword Expressions (MWEs) using a
linguistically interpretable language-independent deep learning architecture.
We specifically target discontinuity, an under-explored aspect that poses a
significant challenge to computational treatment of MWEs. Two neural
architectures are explored: Graph Convolutional Network (GCN) and multi-head
self-attention. GCN leverages dependency parse information, and self-attention
attends to long-range relations. We finally propose a combined model that
integrates complementary information from both through a gating mechanism. The
experiments on a standard multilingual dataset for verbal MWEs show that our
model outperforms the baselines not only in the case of discontinuous MWEs but
also in overall F-score.
| 2,019 | Computation and Language |
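A PyTorch sketch of the kind of gating mechanism the preceding abstract mentions for fusing the GCN and self-attention token representations; the parameterization and dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedCombination(nn.Module):
    """Compute a per-dimension gate g and return g * h_gcn + (1 - g) * h_attn."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h_gcn, h_attn):
        g = torch.sigmoid(self.gate(torch.cat([h_gcn, h_attn], dim=-1)))
        return g * h_gcn + (1.0 - g) * h_attn

# usage on (batch, seq_len, dim) outputs of the two encoders
fuse = GatedCombination(hidden_dim=128)
out = fuse(torch.randn(2, 12, 128), torch.randn(2, 12, 128))
```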
Analyzing the Perceived Severity of Cybersecurity Threats Reported on
Social Media | Breaking cybersecurity events are shared across a range of websites,
including security blogs (FireEye, Kaspersky, etc.), in addition to social
media platforms such as Facebook and Twitter. In this paper, we investigate
methods to analyze the severity of cybersecurity threats based on the language
that is used to describe them online. A corpus of 6,000 tweets describing
software vulnerabilities is annotated with authors' opinions toward their
severity. We show that our corpus supports the development of automatic
classifiers with high precision for this task. Furthermore, we demonstrate the
value of analyzing users' opinions about the severity of threats reported
online as an early indicator of important software vulnerabilities. We present
a simple, yet effective method for linking software vulnerabilities reported in
tweets to Common Vulnerabilities and Exposures (CVEs) in the National
Vulnerability Database (NVD). Using our predicted severity scores, we show that
it is possible to achieve a Precision@50 of 0.86 when forecasting high severity
vulnerabilities, significantly outperforming a baseline that is based on tweet
volume. Finally, we show how reports of severe vulnerabilities online are
predictive of real-world exploits.
| 2,019 | Computation and Language |
BERT for Joint Intent Classification and Slot Filling | Intent classification and slot filling are two essential tasks for natural
language understanding. They often suffer from small-scale human-labeled
training data, resulting in poor generalization capability, especially for rare
words. Recently a new language representation model, BERT (Bidirectional
Encoder Representations from Transformers), facilitates pre-training deep
bidirectional representations on large-scale unlabeled corpora, and has created
state-of-the-art models for a wide variety of natural language processing tasks
after simple fine-tuning. However, there has not been much effort on exploring
BERT for natural language understanding. In this work, we propose a joint
intent classification and slot filling model based on BERT. Experimental
results demonstrate that our proposed model achieves significant improvement on
intent classification accuracy, slot filling F1, and sentence-level semantic
frame accuracy on several public benchmark datasets, compared to the
attention-based recurrent neural network models and slot-gated models.
| 2,019 | Computation and Language |
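A PyTorch sketch of the joint setup described in the preceding abstract: on top of a BERT-style encoder, the pooled [CLS] state feeds an intent classifier, the token states feed a slot tagger, and the two cross-entropy losses are summed. This is a generic illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class JointIntentSlotHead(nn.Module):
    def __init__(self, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.intent_clf = nn.Linear(hidden_dim, num_intents)
        self.slot_clf = nn.Linear(hidden_dim, num_slots)
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)   # -100 masks padding / sub-tokens

    def forward(self, hidden_states, intent_labels=None, slot_labels=None):
        # hidden_states: (batch, seq_len, hidden_dim); position 0 is [CLS]
        intent_logits = self.intent_clf(hidden_states[:, 0])
        slot_logits = self.slot_clf(hidden_states)
        loss = None
        if intent_labels is not None and slot_labels is not None:
            loss = self.ce(intent_logits, intent_labels) + \
                   self.ce(slot_logits.view(-1, slot_logits.size(-1)), slot_labels.view(-1))
        return intent_logits, slot_logits, loss
```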
Better, Faster, Stronger Sequence Tagging Constituent Parsers | Sequence tagging models for constituent parsing are faster, but less accurate
than other types of parsers. In this work, we address the following weaknesses
of such constituent parsers: (a) high error rates around closing brackets of
long constituents, (b) large label sets, leading to sparsity, and (c) error
propagation arising from greedy decoding. To effectively close brackets, we
train a model that learns to switch between tagging schemes. To reduce
sparsity, we decompose the label set and use multi-task learning to jointly
learn to predict sublabels. Finally, we mitigate issues from greedy decoding
through auxiliary losses and sentence-level fine-tuning with policy gradient.
Combining these techniques, we clearly surpass the performance of sequence
tagging constituent parsers on the English and Chinese Penn Treebanks, and
reduce their parsing time even further. On the SPMRL datasets, we observe even
greater improvements across the board, including a new state of the art on
Basque, Hebrew, Polish and Swedish.
| 2,019 | Computation and Language |
Global Vectors for Node Representations | Most network embedding algorithms consist of measuring co-occurrences of
nodes via random walks and then learning the embeddings using Skip-Gram with
Negative Sampling. While it has proven to be a relevant choice, there are
alternatives, such as GloVe, which has not been investigated yet for network
embedding. Even though SGNS better handles non co-occurrence than GloVe, it has
a worse time-complexity. In this paper, we propose a matrix factorization
approach for network embedding, inspired by GloVe, that better handles non
co-occurrence with a competitive time-complexity. We also show how to extend
this model to deal with networks where nodes are documents, by simultaneously
learning word, node and document representations. Quantitative evaluations show
that our model achieves state-of-the-art performance, while not being so
sensitive to the choice of hyper-parameters. Qualitatively speaking, we show
how our model helps explore a network of documents by generating
complementary network-oriented and content-oriented keywords.
| 2,019 | Computation and Language |
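A numpy sketch of the two ingredients the preceding abstract combines: counting node co-occurrences along truncated random walks and fitting them with a GloVe-style weighted least-squares factorization. Walk length, window, weighting constants, and the SGD loop are illustrative assumptions; the paper's actual model differs, notably in how it handles non-co-occurrence.

```python
import numpy as np

def walk_cooccurrences(adj, n_nodes, n_walks=10, walk_len=20, window=5, seed=0):
    """adj: dict node -> non-empty list of neighbours. Count co-occurrences along walks."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_nodes, n_nodes))
    for start in range(n_nodes):
        for _ in range(n_walks):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                node = int(rng.choice(adj[node]))
                walk.append(node)
            for i, u in enumerate(walk):
                for v in walk[max(0, i - window):i]:
                    X[u, v] += 1.0
                    X[v, u] += 1.0
    return X

def glove_node_embeddings(X, dim=16, epochs=30, lr=0.05, x_max=10.0, power=0.75, seed=0):
    """Fit sum_ij f(X_ij) * (w_i . c_j + b_i + b_j - log X_ij)^2 by SGD."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = rng.normal(scale=0.1, size=(n, dim))
    C = rng.normal(scale=0.1, size=(n, dim))
    b, c = np.zeros(n), np.zeros(n)
    pairs = np.argwhere(X > 0)
    for _ in range(epochs):
        for i, j in pairs:
            f = min(1.0, (X[i, j] / x_max) ** power)
            err = W[i] @ C[j] + b[i] + c[j] - np.log(X[i, j])
            gw, gc = f * err * C[j], f * err * W[i]
            W[i] -= lr * gw
            C[j] -= lr * gc
            b[i] -= lr * f * err
            c[j] -= lr * f * err
    return W + C   # sum the two embedding tables, as is common for GloVe
```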
Evaluating Rewards for Question Generation Models | Recent approaches to question generation have used modifications to a Seq2Seq
architecture inspired by advances in machine translation. Models are trained
using teacher forcing to optimise only the one-step-ahead prediction. However,
at test time, the model is asked to generate a whole sequence, causing errors
to propagate through the generation process (exposure bias). A number of
authors have proposed countering this bias by optimising for a reward that is
less tightly coupled to the training data, using reinforcement learning. We
optimise directly for quality metrics, including a novel approach using a
discriminator learned directly from the training data. We confirm that policy
gradient methods can be used to decouple training from the ground truth,
leading to increases in the metrics used as rewards. We perform a human
evaluation, and show that although these metrics have previously been assumed
to be good proxies for question quality, they are poorly aligned with human
judgement and the model simply learns to exploit the weaknesses of the reward
source.
| 2,019 | Computation and Language |
Link Prediction with Mutual Attention for Text-Attributed Networks | In this extended abstract, we present an algorithm that learns a similarity
measure between documents from the network topology of a structured corpus. We
leverage the Scaled Dot-Product Attention, a recently proposed attention
mechanism, to design a mutual attention mechanism between pairs of documents.
To train its parameters, we use the network links as supervision. We provide
preliminary experiment results with a citation dataset on two prediction tasks,
demonstrating the capacity of our model to learn a meaningful textual
similarity.
| 2,019 | Computation and Language |
Representation Learning for Recommender Systems with Application to the
Scientific Literature | The scientific literature is a large information network linking various
actors (laboratories, companies, institutions, etc.). The vast amount of data
generated by this network constitutes a dynamic heterogeneous attributed
network (HAN), in which new information is constantly produced and from which
it is increasingly difficult to extract content of interest. In this article, I
present my initial thesis work, carried out in partnership with an industrial
company, Digital Scientific Research Technology. The latter offers a scientific
watch tool, Peerus, addressing various issues such as the real-time recommendation
of newly published papers or the search for active experts to start new
collaborations. To tackle this diversity of applications, a common approach
consists in learning representations of the nodes and attributes of this HAN
and use them as features for a variety of recommendation tasks. However, most
works on attributed network embedding pay too little attention to textual
attributes and do not fully take advantage of recent natural language
processing techniques. Moreover, proposed methods that jointly learn node and
document representations do not provide a way to effectively infer
representations for new documents for which network information is missing,
which happens to be crucial in real time recommender systems. Finally, the
interplay between textual and graph data in text-attributed heterogeneous
networks remains an open research direction.
| 2,019 | Computation and Language |
Context-aware Neural-based Dialog Act Classification on Automatically
Generated Transcriptions | This paper presents our latest investigations on dialog act (DA)
classification on automatically generated transcriptions. We propose a novel
approach that combines convolutional neural networks (CNNs) and conditional
random fields (CRFs) for context modeling in DA classification. We explore the
impact of transcriptions generated from different automatic speech recognition
systems such as hybrid TDNN/HMM and End-to-End systems on the final
performance. Experimental results on two benchmark datasets (MRDA and SwDA)
show that combining CNNs and CRFs consistently improves the accuracy.
Furthermore, they show that although the word error rates are comparable,
the End-to-End ASR system seems to be more suitable for DA classification.
| 2,019 | Computation and Language |
Adversarial Training for Satire Detection: Controlling for Confounding
Variables | The automatic detection of satire vs. regular news is relevant for downstream
applications (for instance, knowledge base population) and to improve the
understanding of linguistic characteristics of satire. Recent approaches build
upon corpora which have been labeled automatically based on article sources. We
hypothesize that this encourages the models to learn characteristics for
different publication sources (e.g., "The Onion" vs. "The Guardian") rather
than characteristics of satire, leading to poor generalization performance to
unseen publication sources. We therefore propose a novel model for satire
detection with an adversarial component to control for the confounding variable
of publication source. On a large novel data set collected from German news
(which we make available to the research community), we observe comparable
satire classification performance and, as desired, a considerable drop in
publication classification performance with adversarial training. Our analysis
shows that the adversarial component is crucial for the model to learn to pay
attention to linguistic properties of satire.
| 2,019 | Computation and Language |
Jointly Optimizing Diversity and Relevance in Neural Response Generation | Although recent neural conversation models have shown great potential, they
often generate bland and generic responses. While various approaches have been
explored to diversify the output of the conversation model, the improvement
often comes at the cost of decreased relevance. In this paper, we propose a
SpaceFusion model to jointly optimize diversity and relevance that essentially
fuses the latent space of a sequence-to-sequence model and that of an
autoencoder model by leveraging novel regularization terms. As a result, our
approach induces a latent space in which the distance and direction from the
predicted response vector roughly match the relevance and diversity,
respectively. This property also lends itself well to an intuitive
visualization of the latent space. Both automatic and human evaluation results
demonstrate that the proposed approach brings significant improvement compared
to strong baselines in both diversity and relevance.
| 2,019 | Computation and Language |
Incorporating End-to-End Speech Recognition Models for Sentiment
Analysis | Previous work on emotion recognition demonstrated a synergistic effect of
combining several modalities such as auditory, visual, and transcribed text to
estimate the affective state of a speaker. Among these, the linguistic modality
is crucial for the evaluation of an expressed emotion. However, manually
transcribed spoken text cannot practically be given as input to a system. We
argue that using ground-truth transcriptions during training and evaluation
phases leads to a significant discrepancy in performance compared to real-world
conditions, as the spoken text has to be recognized on the fly and can contain
speech recognition mistakes. In this paper, we propose a method of integrating
an automatic speech recognition (ASR) output with a character-level recurrent
neural network for sentiment recognition. In addition, we conduct several
experiments investigating sentiment recognition for human-robot interaction in
a noise-realistic scenario which is challenging for the ASR systems. We
quantify the improvement compared to using only the acoustic modality in
sentiment recognition. We demonstrate the effectiveness of this approach on the
Multimodal Corpus of Sentiment Intensity (MOSI) by achieving 73.6% accuracy in
a binary sentiment classification task, exceeding previously reported results
that use only acoustic input. In addition, we set a new state-of-the-art
performance on the MOSI dataset (80.4% accuracy, 2% absolute improvement).
| 2,019 | Computation and Language |
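The pipeline described above feeds the (possibly error-containing) ASR hypothesis into a character-level recurrent classifier. Below is a minimal Python/PyTorch sketch of such a classifier; the class name, pooling choice, and hyperparameters are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class CharSentimentRNN(nn.Module):
    def __init__(self, n_chars=128, emb_dim=32, hidden=256, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.cls = nn.Linear(2 * hidden, n_classes)

    def forward(self, char_ids):               # (batch, max_len) of character ids
        h, _ = self.rnn(self.emb(char_ids))    # (batch, max_len, 2*hidden)
        return self.cls(h.mean(dim=1))         # average-pool over time, then classify

# Usage: encode the raw ASR hypothesis character by character.
model = CharSentimentRNN()
hyp = "i really liked it"                      # an ASR output string
logits = model(torch.tensor([[ord(c) % 128 for c in hyp]]))
```

Working at the character level keeps the model robust to recognition errors (substituted or dropped words still share most characters with the intended ones), which is the motivation given in the abstract.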
Efficient Contextual Representation Learning Without Softmax Layer | Contextual representation models have achieved great success in improving
various downstream tasks. However, these language-model-based encoders are
difficult to train due to the large parameter sizes and high computational
complexity. By carefully examining the training procedure, we find that the
softmax layer (the output layer) causes significant inefficiency due to the
large vocabulary size. Therefore, we redesign the learning objective and
propose an efficient framework for training contextual representation models.
Specifically, the proposed approach bypasses the softmax layer by performing
language modeling with dimension reduction, and allows the models to leverage
pre-trained word embeddings. Our framework reduces the time spent on the output
layer to a negligible level, eliminates almost all the trainable parameters of
the softmax layer and performs language modeling without truncating the
vocabulary. When applied to ELMo, our method achieves a 4x speedup and
eliminates 80% of the trainable parameters while achieving competitive performance on
downstream tasks.
| 2,019 | Computation and Language |
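The key idea above is to replace the vocabulary-sized softmax with a regression into a low-dimensional, pre-trained embedding space. The Python/PyTorch sketch below illustrates that objective; the cosine loss and the class name `EmbeddingOutputLM` are assumptions chosen for illustration, not necessarily the loss used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingOutputLM(nn.Module):
    def __init__(self, hidden_dim, pretrained_emb):           # pretrained_emb: (V, d), kept frozen
        super().__init__()
        self.target_emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)
        self.proj = nn.Linear(hidden_dim, pretrained_emb.size(1))

    def loss(self, hidden_states, target_ids):
        pred = self.proj(hidden_states)                        # (batch, T, d)
        gold = self.target_emb(target_ids)                     # (batch, T, d)
        # Regress onto the target word's embedding. No normalization over the
        # vocabulary is computed, so the output layer's cost no longer grows
        # with vocabulary size and it adds almost no trainable parameters.
        return (1.0 - F.cosine_similarity(pred, gold, dim=-1)).mean()
```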
FastFusionNet: New State-of-the-Art for DAWNBench SQuAD | In this technical report, we introduce FastFusionNet, an efficient variant of
FusionNet [12]. FusionNet is a high-performing reading comprehension
architecture, designed primarily for maximum retrieval accuracy with
less regard for computational requirements. For FastFusionNet, we remove
the expensive CoVe layers [21] and substitute the BiLSTMs with far more
efficient SRU layers [19]. The resulting architecture obtains state-of-the-art
results on DAWNBench [5] while achieving the lowest training and inference time
on SQuAD [25] to date. The code is available at
https://github.com/felixgwu/FastFusionNet.
| 2,019 | Computation and Language |
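The main architectural change described above is swapping BiLSTM layers for SRU layers, which parallelize much better over the time dimension. The snippet below sketches that swap, assuming the open-source `sru` package (`pip install sru`); the layer sizes are placeholders, not FastFusionNet's actual configuration.

```python
import torch
import torch.nn as nn
from sru import SRU  # assumed: the open-source SRU implementation

hidden = 128
bilstm = nn.LSTM(input_size=300, hidden_size=hidden, num_layers=2,
                 bidirectional=True)               # the slower recurrent baseline
fast = SRU(input_size=300, hidden_size=hidden, num_layers=2,
           bidirectional=True)                     # recurrence that parallelizes over time

x = torch.randn(50, 8, 300)                        # (seq_len, batch, input_size)
out_lstm, _ = bilstm(x)
out_sru, _ = fast(x)                               # both outputs: (seq_len, batch, 2*hidden)
```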
Reinforcement Learning based Curriculum Optimization for Neural Machine
Translation | We consider the problem of making efficient use of heterogeneous training
data in neural machine translation (NMT). Specifically, given a training
dataset with a sentence-level feature such as noise, we seek an optimal
curriculum, or order for presenting examples to the system during training. Our
curriculum framework allows examples to appear an arbitrary number of times,
and thus generalizes data weighting, filtering, and fine-tuning schemes. Rather
than relying on prior knowledge to design a curriculum, we use reinforcement
learning to learn one automatically, jointly with the NMT system, in the course
of a single training run. We show that this approach can beat uniform and
filtering baselines on Paracrawl and WMT English-to-French datasets by up to
+3.4 BLEU, and match the performance of a hand-designed, state-of-the-art
curriculum.
| 2,019 | Computation and Language |
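As a concrete illustration of learning a curriculum jointly with training, the Python sketch below uses a simple epsilon-greedy bandit: the arms are data bins (e.g., noise-level buckets of Paracrawl), and the reward is the improvement in dev loss after training on a batch from the chosen bin. This is a toy stand-in for the idea, not the paper's exact RL formulation; the class and helper names are hypothetical.

```python
import random

class EpsGreedyCurriculum:
    """Epsilon-greedy bandit over data bins; values track average reward per bin."""
    def __init__(self, n_bins, eps=0.1):
        self.q = [0.0] * n_bins        # running value estimate per bin
        self.n = [0] * n_bins          # number of times each bin was chosen
        self.eps = eps

    def choose_bin(self):
        if random.random() < self.eps:
            return random.randrange(len(self.q))       # explore
        return max(range(len(self.q)), key=lambda i: self.q[i])  # exploit

    def update(self, bin_id, reward):  # reward = prev_dev_loss - new_dev_loss
        self.n[bin_id] += 1
        self.q[bin_id] += (reward - self.q[bin_id]) / self.n[bin_id]

# Schematic training loop (sample_from, train_step, evaluate_dev_loss are hypothetical):
# bin_id = curriculum.choose_bin()
# train_step(nmt_model, sample_from(bins[bin_id]))
# curriculum.update(bin_id, prev_dev_loss - evaluate_dev_loss(nmt_model))
```

Because any bin can be chosen repeatedly or ignored, this setup subsumes data weighting, filtering, and fine-tuning as special cases, matching the framing in the abstract.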