Titles | Abstracts | Years | Categories
---|---|---|---|
Cyberbullying Detection in Social Networks Using Deep Learning Based
Models; A Reproducibility Study | Cyberbullying is a disturbing online misbehaviour with troubling
consequences. It appears in different forms, and in most of the social
networks, it is in textual format. Automatic detection of such incidents
requires intelligent systems. Most of the existing studies have approached this
problem with conventional machine learning models and the majority of the
developed models in these studies are adaptable to a single social network at a
time. In recent studies, deep learning based models have found their way into
the detection of cyberbullying incidents, with the claim that they can overcome
the limitations of the conventional models and improve detection performance.
In this paper, we investigate the findings of a recent study in this regard. We
successfully reproduced its findings and validated them using the same datasets
used by the authors, namely Wikipedia, Twitter, and Formspring. We then extended
this work by applying the developed methods to a new YouTube dataset (~54k posts
by ~4k users) and investigated how the models perform on a new social media
platform. We also evaluated how models trained on one platform transfer to
another. Our findings show that the deep learning based models outperform the
machine learning models previously applied to the same YouTube dataset. We
believe that the deep learning based models could further benefit from
integrating other sources of information, such as the profile information of
users in social networks.
| 2018 | Computation and Language |
A standardized Project Gutenberg corpus for statistical analysis of
natural language and quantitative linguistics | The use of Project Gutenberg (PG) as a text corpus has been extremely popular
in statistical analysis of language for more than 25 years. However, in
contrast to other major linguistic datasets of similar importance, no
consensual full version of PG exists to date. In fact, most PG studies so far
either consider only a small number of manually selected books, leading to
potentially biased subsets, or employ vastly different pre-processing strategies
(often specified in insufficient detail), raising concerns regarding the
reproducibility of published results. In order to address these shortcomings,
here we present the Standardized Project Gutenberg Corpus (SPGC), an open
science approach to a curated version of the complete PG data containing more
than 50,000 books and more than $3 \times 10^9$ word-tokens. Using different
sources of annotated metadata, we not only provide a broad characterization of
the content of PG, but also show different examples highlighting the potential
of SPGC for investigating language variability across time, subjects, and
authors. We publish our methodology in detail, the code to download and process
the data, and the obtained corpus itself at three different levels of
granularity (raw text, time series of word tokens, and counts of words). In this
way, we provide a reproducible, pre-processed, full-size version of Project
Gutenberg as a new scientific resource for corpus linguistics, natural language
processing, and information retrieval.
| 2018 | Computation and Language |
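The three granularity levels mentioned in the abstract above can be illustrated with a minimal sketch. This is not the SPGC pipeline itself (whose filtering and tokenization are more careful); the sample text and regular expression are placeholders.

```python
from collections import Counter
import re

def process_book(raw_text):
    """Derive three SPGC-style granularity levels from one raw text string."""
    # Level 1: raw text (kept as-is)
    # Level 2: time series of word tokens (here: lowercased alphabetic tokens)
    tokens = re.findall(r"[a-z]+", raw_text.lower())
    # Level 3: counts of words
    counts = Counter(tokens)
    return raw_text, tokens, counts

raw, tokens, counts = process_book("The Time Machine, by H. G. Wells. The time traveller ...")
print(tokens[:5])             # ['the', 'time', 'machine', 'by', 'h']
print(counts.most_common(2))  # [('the', 2), ('time', 2)]
```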
Generating lyrics with variational autoencoder and multi-modal artist
embeddings | We present a system for generating song lyrics lines conditioned on the style
of a specified artist. The system uses a variational autoencoder with artist
embeddings. We propose the pre-training of artist embeddings with the
representations learned by a CNN classifier, which is trained to predict
artists based on MEL spectrograms of their song clips. This work is the first
step towards combining audio and text modalities of songs for generating lyrics
conditioned on the artist's style. Our preliminary results suggest that there
is a benefit in initializing artists' embeddings with the representations
learned by a spectrogram classifier.
| 2018 | Computation and Language |
Context, Attention and Audio Feature Explorations for Audio Visual
Scene-Aware Dialog | With the recent advancements in AI, Intelligent Virtual Assistants (IVA) have
become a ubiquitous part of every home. Going forward, we are witnessing a
confluence of vision, speech and dialog system technologies that are enabling
the IVAs to learn audio-visual groundings of utterances and have conversations
with users about the objects, activities and events surrounding them. As a part
of the 7th Dialog System Technology Challenges (DSTC7), for the Audio Visual
Scene-Aware Dialog (AVSD) track, we explore `topics' of the dialog as an
important contextual feature in the architecture, along with explorations
of multimodal attention. We also incorporate an end-to-end audio
classification ConvNet, AclNet, into our models. We present detailed analysis
of the experiments and show that some of our model variations outperform the
baseline system presented for this task.
| 2018 | Computation and Language |
A Survey of Hierarchy Identification in Social Networks | Humans are social by nature. Throughout history, people have formed
communities and built relationships. Most relationships with coworkers,
friends, and family are developed during face-to-face interactions. These
relationships are established through explicit means of communication, such as
words, and implicit ones, such as intonation and body language. By analyzing human
interactions we can derive information about the relationships and influence
among conversation participants. However, with the development of the Internet,
people started to communicate through text in online social networks.
Interestingly, they brought their communication habits to the Internet. Many
social network users form relationships with each other and establish
communities with leaders and followers. Recognizing these hierarchical
relationships is an important task because it will help to understand social
networks and predict future trends, improve recommendations, better target
advertisement, and improve national security by identifying leaders of
anonymous terror groups. In this work, I provide an overview of current
research in this area and present the state-of-the-art approaches to deal with
the problem of identifying hierarchical relationships in social networks.
| 2018 | Computation and Language |
How Much Does Tokenization Affect Neural Machine Translation? | Tokenization or segmentation is a wide concept that covers simple processes
such as separating punctuation from words, or more sophisticated processes such
as applying morphological knowledge. Neural Machine Translation (NMT) requires
a limited-size vocabulary to keep computational costs manageable and enough
examples to estimate word embeddings. Separating punctuation and splitting
tokens into words or subwords has proven helpful for reducing the vocabulary
and increasing the number of examples of each word, improving translation
quality. Tokenization is more challenging when dealing with languages with no
separator between words. In order to assess the impact of tokenization on the
quality of the final translation in NMT, we experimented with five tokenizers
over ten language pairs. We conclude that tokenization significantly
affects the final translation quality and that the best tokenizer differs for
different language pairs.
| 2019 | Computation and Language |
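As a rough illustration of the trade-off discussed in the abstract above (vocabulary size vs. number of examples per token), here is a toy comparison of two naive tokenizers; the tokenizers evaluated in the paper, such as morphological and subword segmenters, are considerably more sophisticated.

```python
import re
from collections import Counter

def tokenize_whitespace(text):
    return text.split()

def tokenize_split_punct(text):
    # separate punctuation from words, a small step towards real tokenization
    return re.findall(r"\w+|[^\w\s]", text)

corpus = ["Hello, world!", "Don't stop, world."]
for tokenizer in (tokenize_whitespace, tokenize_split_punct):
    tokens = [t for sentence in corpus for t in tokenizer(sentence)]
    print(tokenizer.__name__, "types:", len(Counter(tokens)), "tokens:", len(tokens))
```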
RNNs Implicitly Implement Tensor Product Representations | Recurrent neural networks (RNNs) can learn continuous vector representations
of symbolic structures such as sequences and sentences; these representations
often exhibit linear regularities (analogies). Such regularities motivate our
hypothesis that RNNs that show such regularities implicitly compile symbolic
structures into tensor product representations (TPRs; Smolensky, 1990), which
additively combine tensor products of vectors representing roles (e.g.,
sequence positions) and vectors representing fillers (e.g., particular words).
To test this hypothesis, we introduce Tensor Product Decomposition Networks
(TPDNs), which use TPRs to approximate existing vector representations. We
demonstrate using synthetic data that TPDNs can successfully approximate linear
and tree-based RNN autoencoder representations, suggesting that these
representations exhibit interpretable compositional structure; we explore the
settings that lead RNNs to induce such structure-sensitive representations. By
contrast, further TPDN experiments show that the representations of four models
trained to encode naturally-occurring sentences can be largely approximated
with a bag of words, with only marginal improvements from more sophisticated
structures. We conclude that TPDNs provide a powerful method for interpreting
vector representations, and that standard RNNs can induce compositional
sequence representations that are remarkably well approximated by TPRs; at the
same time, existing training tasks for sentence representation learning may not
be sufficient for inducing robust structural representations.
| 2019 | Computation and Language |
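The tensor product representation hypothesized in the abstract above combines role and filler vectors additively: TPR = sum_i f_i (x) r_i. A small numpy sketch with one-hot role vectors follows (illustrative only; a TPDN learns the role and filler embeddings rather than fixing them).

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_filler = 5, 4

fillers = rng.normal(size=(seq_len, d_filler))  # filler vectors (e.g. word embeddings)
roles = np.eye(seq_len)                         # one-hot role vectors for positions

# TPR: additive combination of filler/role outer products
tpr = sum(np.outer(fillers[i], roles[i]) for i in range(seq_len))
print(tpr.shape)  # (4, 5): d_filler x d_role

# With orthonormal roles, a filler can be read back by "unbinding" with its role
recovered = tpr @ roles[2]
print(np.allclose(recovered, fillers[2]))  # True
```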
PyText: A Seamless Path from NLP research to production | We introduce PyText - a deep learning based NLP modeling framework built on
PyTorch. PyText addresses the often-conflicting requirements of enabling rapid
experimentation and of serving models at scale. It achieves this by providing
simple and extensible interfaces for model components, and by using PyTorch's
capabilities of exporting models for inference via the optimized Caffe2
execution engine. We report our own experience of migrating experimentation and
production workflows to PyText, which enabled us to iterate faster on novel
modeling ideas and then seamlessly ship them at industrial scale.
| 2018 | Computation and Language |
What are the biases in my word embedding? | This paper presents an algorithm for enumerating biases in word embeddings.
The algorithm exposes a large number of offensive associations related to
sensitive features such as race and gender on publicly available embeddings,
including a supposedly "debiased" embedding. These biases are concerning in
light of the widespread use of word embeddings. The associations are identified
by geometric patterns in word embeddings that run parallel between people's
names and common lower-case tokens. The algorithm is highly unsupervised: it
does not even require the sensitive features to be pre-specified. This is
desirable because: (a) many forms of discrimination--such as racial
discrimination--are linked to social constructs that may vary depending on the
context, rather than to categories with fixed definitions; and (b) it makes it
easier to identify biases against intersectional groups, which depend on
combinations of sensitive features. The inputs to our algorithm are a list of
target tokens, e.g. names, and a word embedding. It outputs a number of Word
Embedding Association Tests (WEATs) that capture various biases present in the
data. We illustrate the utility of our approach on publicly available word
embeddings and lists of names, and evaluate its output using crowdsourcing. We
also show how removing names may not remove potential proxy bias.
| 2019 | Computation and Language |
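The Word Embedding Association Tests produced by the algorithm in the abstract above score the differential association of two target sets with two attribute sets. A minimal sketch of such a score follows; the toy embedding and word lists are placeholders, and the unsupervised discovery of the sets themselves is not shown.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B, emb):
    """Mean association of word w with attribute set A minus attribute set B."""
    return np.mean([cos(emb[w], emb[a]) for a in A]) - np.mean([cos(emb[w], emb[b]) for b in B])

def weat_effect(X, Y, A, B, emb):
    """Differential association of target sets X and Y with attribute sets A and B."""
    return np.mean([assoc(x, A, B, emb) for x in X]) - np.mean([assoc(y, A, B, emb) for y in Y])

# toy embedding; in practice these would be loaded GloVe/word2vec vectors
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=8) for w in ["emily", "jamal", "pleasant", "unpleasant"]}
print(weat_effect(["emily"], ["jamal"], ["pleasant"], ["unpleasant"], emb))
```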
Variational Cross-domain Natural Language Generation for Spoken Dialogue
Systems | Cross-domain natural language generation (NLG) is still a difficult task
within spoken dialogue modelling. Given a semantic representation provided by
the dialogue manager, the language generator should generate sentences that
convey desired information. Traditional template-based generators can produce
sentences with all necessary information, but these sentences are not
sufficiently diverse. With RNN-based models, the diversity of the generated
sentences can be high; however, some information is lost in the process. In
this work, we improve an RNN-based generator by considering latent information
at the sentence level during generation using the conditional variational
autoencoder architecture. We demonstrate that our model outperforms the
original RNN-based generator, while yielding highly diverse sentences. In
addition, our model performs better when the training data is limited.
| 2018 | Computation and Language |
Analysis Methods in Neural Language Processing: A Survey | The field of natural language processing has seen impressive progress in
recent years, with neural network models replacing many of the traditional
systems. A plethora of new models have been proposed, many of which are thought
to be opaque compared to their feature-rich counterparts. This has led
researchers to analyze, interpret, and evaluate neural networks in novel and
more fine-grained ways. In this survey paper, we review analysis methods in
neural language processing, categorize them according to prominent research
trends, highlight existing limitations, and point to potential directions for
future work.
| 2019 | Computation and Language |
Sources of Complexity in Semantic Frame Parsing for Information
Extraction | This paper describes a semantic frame parsing system based on sequence
labeling methods, specifically BiLSTM models with highway connections, for
performing information extraction on a corpus of French encyclopedic history
texts annotated according to the Berkeley FrameNet formalism. The approach
proposed in this study relies on an integrated sequence labeling model which
jointly optimizes frame identification and semantic role segmentation and
identification. The purpose of this study is to analyze the task complexity, to
highlight the factors that make Semantic Frame parsing a difficult task and to
provide detailed evaluations of the performance on different types of frames
and sentences.
| 2018 | Computation and Language |
Symbolic inductive bias for visually grounded learning of spoken
language | A widespread approach to processing spoken language is to first automatically
transcribe it into text. An alternative is to use an end-to-end approach:
recent works have proposed to learn semantic embeddings of spoken language from
images with spoken captions, without an intermediate transcription step. We
propose to use multitask learning to exploit existing transcribed speech within
the end-to-end setting. We describe a three-task architecture which combines
the objectives of matching spoken captions with corresponding images, speech
with text, and text with images. We show that the addition of the speech/text
task leads to substantial performance improvements on image retrieval when
compared to training the speech/image task in isolation. We conjecture that
this is due to a strong inductive bias transcribed speech provides to the
model, and offer supporting evidence for this.
| 2023 | Computation and Language |
Multiple topic identification in telephone conversations | This paper deals with the automatic analysis of conversations between a
customer and an agent in a call centre of a customer care service. The purpose
of the analysis is to hypothesize themes about problems and complaints
discussed in the conversation. Themes are defined by the application
documentation topics. A conversation may contain mentions that are irrelevant
for the application purpose and multiple themes whose mentions may be
interleaved portions of a conversation that cannot be well defined. Two methods
are proposed for multiple theme hypothesization. One of them is based on a
cosine similarity measure using a bag of features extracted from the entire
conversation. The other method introduces the concept of thematic density
distributed around specific word positions in a conversation. In addition to
automatically selected words, word bi-grams with possible gaps between
successive words are also considered and selected. Experimental results show
that the proposed methods outperform support vector machines on the same data.
Furthermore, using the
theme skeleton of a conversation from which thematic densities are derived, it
will be possible to extract components of an automatic conversation report to
be used for improving the service performance. Index Terms: multi-topic audio
document classification, human/human conversation analysis, speech analytics,
distance bigrams
| 2013 | Computation and Language |
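The first of the two methods in the abstract above, scoring a conversation's bag-of-features vector against per-theme vectors with cosine similarity, can be sketched as follows; the vocabulary, theme prototypes and feature extraction are placeholders.

```python
import numpy as np
from collections import Counter

def bow_vector(tokens, vocab):
    counts = Counter(tokens)
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

vocab = ["card", "lost", "schedule", "bus", "refund"]
themes = {
    "lost_card": bow_vector(["card", "lost", "lost", "refund"], vocab),
    "timetable": bow_vector(["schedule", "bus", "bus"], vocab),
}
conversation = ["my", "card", "is", "lost", "and", "i", "want", "a", "refund"]
conv_vec = bow_vector(conversation, vocab)
scores = {theme: cosine(conv_vec, proto) for theme, proto in themes.items()}
print(max(scores, key=scores.get))  # lost_card
```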
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in
Deep NLP Models | Despite the remarkable evolution of deep neural networks in natural language
processing (NLP), their interpretability remains a challenge. Previous work
largely focused on what these models learn at the representation level. We
break this analysis down further and study individual dimensions (neurons) in
the vector representation learned by end-to-end neural models in NLP tasks. We
propose two methods: Linguistic Correlation Analysis, based on a supervised
method to extract the most relevant neurons with respect to an extrinsic task,
and Cross-model Correlation Analysis, an unsupervised method to extract salient
neurons w.r.t. the model itself. We evaluate the effectiveness of our
techniques by ablating the identified neurons and reevaluating the network's
performance for two tasks: neural machine translation (NMT) and neural language
modeling (NLM). We further present a comprehensive analysis of neurons with the
aim to address the following questions: i) how localized or distributed are
different linguistic properties in the models? ii) are certain neurons
exclusive to some properties and not others? iii) is the information more or
less distributed in NMT vs. NLM? and iv) how important are the neurons
identified through the linguistic correlation method to the overall task? Our
code is publicly available as part of the NeuroX toolkit (Dalvi et al. 2019).
| 2018 | Computation and Language |
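The ablation step used in the abstract above to validate the identified neurons can be sketched generically: zero out selected dimensions of the learned representations and re-measure accuracy. The probe and data below are toys, not the paper's Linguistic or Cross-model Correlation Analysis.

```python
import numpy as np

def ablate(representations, neuron_ids):
    """Zero out the given neuron (dimension) indices in an (n_samples, n_neurons) matrix."""
    ablated = representations.copy()
    ablated[:, neuron_ids] = 0.0
    return ablated

def accuracy(reps, labels, classifier):
    return float(np.mean(classifier(reps) == labels))

rng = np.random.default_rng(0)
reps = rng.normal(size=(100, 512))                # stand-in for NMT/NLM activations
labels = (reps[:, 7] > 0).astype(int)             # toy property encoded by neuron 7
classifier = lambda r: (r[:, 7] > 0).astype(int)  # toy probe relying on that neuron

print(accuracy(reps, labels, classifier))               # 1.0
print(accuracy(ablate(reps, [7]), labels, classifier))  # drops, since neuron 7 mattered
```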
NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks | We present a toolkit to facilitate the interpretation and understanding of
neural network models. The toolkit provides several methods to identify salient
neurons with respect to the model itself or an external task. A user can
visualize selected neurons, ablate them to measure their effect on the model
accuracy, and manipulate them to control the behavior of the model at test
time. Such an analysis has the potential to serve as a springboard in various
research directions, such as understanding the model, better architectural
choices, model distillation and controlling data biases.
| 2018 | Computation and Language |
A Survey on Deep Learning for Named Entity Recognition | Named entity recognition (NER) is the task of identifying mentions of rigid
designators in text belonging to predefined semantic types such as person,
location, and organization. NER often serves as the foundation for many
natural language applications such as question answering, text summarization,
and machine translation. Early NER systems achieved great success in delivering
good performance, but at the cost of human engineering effort in designing
domain-specific features and rules. In recent years, deep learning, empowered by
continuous real-valued vector representations and semantic composition through
nonlinear processing, has been employed in NER systems, yielding state-of-the-art
performance. In this paper, we provide a comprehensive review of existing deep
learning techniques for NER. We first introduce NER resources, including tagged
NER corpora and off-the-shelf NER tools. Then, we systematically categorize
existing works based on a taxonomy along three axes: distributed
representations for input, context encoder, and tag decoder. Next, we survey
the most representative methods for recent applied techniques of deep learning
in new NER problem settings and applications. Finally, we present readers with
the challenges faced by NER systems and outline future directions in this area.
| 2023 | Computation and Language |
Joint Slot Filling and Intent Detection via Capsule Neural Networks | Being able to recognize words as slots and detect the intent of an utterance
has been a key issue in natural language understanding. The existing works
either treat slot filling and intent detection separately in a pipeline manner,
or adopt joint models which sequentially label slots while summarizing the
utterance-level intent without explicitly preserving the hierarchical
relationship among words, slots, and intents. To exploit the semantic hierarchy
for effective modeling, we propose a capsule-based neural network model which
accomplishes slot filling and intent detection via a dynamic
routing-by-agreement schema. A re-routing schema is proposed to further
synergize the slot filling performance using the inferred intent
representation. Experiments on two real-world datasets show the effectiveness
of our model when compared with other alternative model architectures, as well
as existing natural language understanding services.
| 2019 | Computation and Language |
Distant Supervision for Relation Extraction with Linear Attenuation
Simulation and Non-IID Relevance Embedding | Distant supervision for relation extraction is an efficient method to reduce
labor costs and has been widely used to seek novel relational facts in large
corpora, which can be identified as a multi-instance multi-label problem.
However, existing distant supervision methods struggle to select important
words in the sentence and to extract valid sentences in the bag. To this
end, we propose a novel approach that addresses these problems.
Firstly, we propose a linear attenuation simulation to reflect the importance
of words in the sentence with respect to the distances between entities and
words. Secondly, we propose a non-independent and identically distributed
(non-IID) relevance embedding to capture the relevance of sentences in the bag.
Our method can not only capture complex information of words about hidden
relations, but also express the mutual information of instances in the bag.
Extensive experiments on a benchmark dataset have well-validated the
effectiveness of the proposed method.
| 2018 | Computation and Language |
Exploiting Cross-Lingual Subword Similarities in Low-Resource Document
Classification | Text classification must sometimes be applied in a low-resource language with
no labeled training data. However, training data may be available in a related
language. We investigate whether character-level knowledge transfer from a
related language helps text classification. We present a cross-lingual document
classification framework (CACO) that exploits cross-lingual subword similarity
by jointly training a character-based embedder and a word-based classifier. The
embedder derives vector representations for input words from their written
forms, and the classifier makes predictions based on the word vectors. We use a
joint character representation for both the source language and the target
language, which allows the embedder to generalize knowledge about source
language words to target language words with similar forms. We propose a
multi-task objective that can further improve the model if additional
cross-lingual or monolingual resources are available. Experiments confirm that
character-level knowledge transfer is more data-efficient than word-level
transfer between related languages.
| 2020 | Computation and Language |
Improving Context-Aware Semantic Relationships in Sparse Mobile Datasets | Traditional semantic similarity models often fail to encapsulate the external
context in which texts are situated. However, textual datasets generated on
mobile platforms can help us build a truer representation of semantic
similarity by introducing multimodal data. This is especially important for
sparse datasets, where solely text-driven interpretation of context is more
difficult. In this paper, we develop new algorithms for building external
features into sentence embeddings and semantic similarity scores. Then, we test
them on embedding spaces on data from Twitter, using each tweet's time and
geolocation to better understand its context. Ultimately, we show that applying
PCA with eight components to the embedding space and appending multimodal
features yields the best outcomes. This yields a considerable improvement over
pure text-based approaches for discovering similar tweets. Our results suggest
that our new algorithm can help improve semantic understanding in various
settings.
| 2018 | Computation and Language |
Supervised Sentiment Classification with CNNs for Diverse SE Datasets | Sentiment analysis, a popular technique for opinion mining, has been used by
the software engineering research community for tasks such as assessing app
reviews, developer emotions in issue trackers and developer opinions on APIs.
Past research indicates that state-of-the-art sentiment analysis techniques
have poor performance on SE data. This is because sentiment analysis tools are
often designed to work on non-technical documents such as movie reviews. In
this study, we attempt to solve the issues with existing sentiment analysis
techniques for SE texts by proposing a hierarchical model based on
convolutional neural networks (CNN) and long short-term memory (LSTM) trained
on top of pre-trained word vectors. We assessed our model's performance and
reliability by comparing it with a number of frequently used sentiment analysis
tools on five gold standard datasets. Our results show that our model pushes
the state of the art further on all datasets in terms of accuracy. We also show
that it is possible to get better accuracy after labelling a small sample of
the dataset and re-training our model rather than using an unsupervised
classifier.
| 2018 | Computation and Language |
Non-Autoregressive Neural Machine Translation with Enhanced Decoder
Input | Non-autoregressive translation (NAT) models, which remove the dependence on
previous target tokens from the inputs of the decoder, achieve significant
inference speedup, but at the cost of inferior accuracy compared to
autoregressive translation (AT) models. Previous work shows that the quality of
the inputs of the decoder is important and largely impacts the model accuracy.
In this paper, we propose two methods to enhance the decoder inputs so as to
improve NAT models. The first one directly leverages a phrase table generated
by conventional SMT approaches to translate source tokens to target tokens,
which are then fed into the decoder as inputs. The second one transforms
source-side word embeddings to target-side word embeddings through
sentence-level alignment and word-level adversary learning, and then feeds the
transformed word embeddings into the decoder as inputs. Experimental results
show our method largely outperforms the NAT baseline (Gu et al., 2017) by
$5.11$ BLEU points on the WMT14 English-German task and $4.72$ BLEU points on
the WMT16 English-Romanian task.
| 2018 | Computation and Language |
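The first of the two methods in the abstract above, seeding the non-autoregressive decoder with phrase-table translations of the source tokens, reduces in its simplest form to a lookup. The table below is a made-up toy; the real one is induced by an SMT system and covers multi-word phrases.

```python
# hypothetical tiny phrase table; a real one would be learned by an SMT toolkit
phrase_table = {"the": "das", "house": "Haus", "is": "ist", "small": "klein"}

def build_decoder_inputs(source_tokens, table, unk="<unk>"):
    """Map each source token to a likely target token to seed the NAT decoder."""
    return [table.get(token, unk) for token in source_tokens]

print(build_decoder_inputs(["the", "house", "is", "small"], phrase_table))
# ['das', 'Haus', 'ist', 'klein'] -- these tokens are then embedded and fed to the
# decoder in place of the copied source embeddings used by the NAT baseline.
```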
Moment Matching Training for Neural Machine Translation: A Preliminary
Study | In previous works, neural sequence models have been shown to improve
significantly if external prior knowledge can be provided, for instance by
allowing the model to access the embeddings of explicit features during both
training and inference. In this work, we propose a different point of view on
how to incorporate prior knowledge in a principled way, using a moment matching
framework. In this approach, the standard local cross-entropy training of the
sequential model is combined with a moment matching training mode that
encourages the equality of the expectations of certain predefined features
between the model distribution and the empirical distribution. In particular,
we show how to derive unbiased estimates of some stochastic gradients that are
central to the training, and compare our framework with a formally related one:
policy gradient training in reinforcement learning, pointing out some important
differences in terms of the kinds of prior assumptions in both approaches. Our
initial results are promising, showing the effectiveness of our proposed
framework.
| 2018 | Computation and Language |
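Schematically, the combined objective described in the abstract above adds a moment matching penalty on predefined features $\phi$ to the usual cross-entropy loss; the form below (squared norm, weight $\lambda$) is an illustrative rendering, and the paper's actual contribution lies in estimating the model expectation without bias.

```latex
\mathcal{L}(\theta) =
  \underbrace{-\textstyle\sum_{(x,y)} \log p_\theta(y \mid x)}_{\text{local cross-entropy}}
  + \lambda \,\Big\| \mathbb{E}_{y \sim p_\theta(\cdot \mid x)}[\phi(x, y)]
  - \mathbb{E}_{(x,y) \sim \hat{p}_{\mathrm{data}}}[\phi(x, y)] \Big\|^2
```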
Building a Neural Semantic Parser from a Domain Ontology | Semantic parsing is the task of converting natural language utterances into
machine interpretable meaning representations which can be executed against a
real-world environment such as a database. Scaling semantic parsing to
arbitrary domains faces two interrelated challenges: obtaining broad coverage
training data effectively and cheaply; and developing a model that generalizes
to compositional utterances and complex intentions. We address these challenges
with a framework which allows us to elicit training data from a domain ontology
and bootstrap a neural parser which recursively builds derivations of logical
forms. In our framework meaning representations are described by sequences of
natural language templates, where each template corresponds to a decomposed
fragment of the underlying meaning representation. Although artificial,
templates can be understood and paraphrased by humans to create natural
utterances, resulting in parallel triples of utterances, meaning
representations, and their decompositions. These allow us to train a neural
semantic parser which learns to compose rules in deriving meaning
representations. We crowdsource training data on six domains, covering both
single-turn utterances which exhibit rich compositionality, and sequential
utterances where a complex task is procedurally performed in steps. We then
develop neural semantic parsers which perform such compositional tasks. In
general, our approach allows us to deploy neural semantic parsers quickly and
cheaply from a given domain ontology.
| 2018 | Computation and Language |
Learning to Refine Source Representations for Neural Machine Translation | Neural machine translation (NMT) models generally adopt an encoder-decoder
architecture for modeling the entire translation process. The encoder
summarizes the representation of the input sentence from scratch, which is
potentially a problem if the sentence is ambiguous. When translating a text,
humans often create an initial understanding of the source sentence and then
incrementally refine it along the translation on the target side. Starting from
this intuition, we propose a novel encoder-refiner-decoder framework, which
dynamically refines the source representations based on the generated
target-side information at each decoding step. Since the refining operations
are time-consuming, we propose a strategy, leveraging the power of
reinforcement learning models, to decide when to refine at specific decoding
steps. Experimental results on both Chinese-English and English-German
translation tasks show that the proposed approach significantly and
consistently improves translation performance over the standard encoder-decoder
framework. Furthermore, when the refining strategy is applied, results still show
reasonable improvement over the baseline without much decrease in decoding
speed.
| 2018 | Computation and Language |
An Investigation of Few-Shot Learning in Spoken Term Classification | In this paper, we investigate the feasibility of applying few-shot learning
algorithms to a speech task. We formulate a user-defined scenario of spoken
term classification as a few-shot learning problem. In most few-shot learning
studies, it is assumed that all N classes are new in an N-way problem. We
suggest that this assumption can be relaxed and define an N+M-way problem where
N and M are the numbers of new classes and fixed classes, respectively. We
propose a modification to the Model-Agnostic Meta-Learning (MAML) algorithm to
solve the problem. Experiments on the Google Speech Commands dataset show that
our approach outperforms the conventional supervised learning approach and the
original MAML.
| 2020 | Computation and Language |
A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection
and Slot Filling | Intent detection and slot filling are two main tasks for building a spoken
language understanding (SLU) system. Multiple deep learning based models have
demonstrated good results on these tasks. The most effective algorithms are
based on the structures of sequence to sequence models (or "encoder-decoder"
models), and generate the intents and semantic tags either using separate
models or a joint model. Most of the previous studies, however, either treat
the intent detection and slot filling as two separate parallel tasks, or use a
sequence to sequence model to generate both semantic tags and intent. Most of
these approaches use one (joint) NN based model (including encoder-decoder
structure) to model the two tasks, and hence may not fully take advantage of the
cross-impact between them. In this paper, new Bi-model based RNN semantic frame
parsing network structures are designed to perform the intent detection and
slot filling tasks jointly, by considering their cross-impact on each other
using two correlated bidirectional LSTMs (BLSTMs). Our Bi-model structure with a
decoder achieves state-of-the-art results on the benchmark ATIS data, with about
0.5$\%$ intent accuracy improvement and 0.9$\%$ slot filling improvement.
| 2018 | Computation and Language |
An Investigation of Supervised Learning Methods for Authorship
Attribution in Short Hinglish Texts using Char & Word N-grams | The writing style of a person can be affirmed as a unique identity indicator;
the words used, and the structuring of the sentences are clear measures which
can identify the author of a specific work. Stylometry and its subset -
Authorship Attribution, have a long history beginning from the 19th century,
and we can still find their use in modern times. The emergence of the Internet
has shifted the application of attribution studies towards non-standard texts
that are comparatively shorter than, and different from, the long texts on
which most research has been done. This paper focuses on the study of short
online texts, retrieved from the messaging application WhatsApp, and on the
distinctive features of a macaronic language (Hinglish), using supervised
learning methods and then comparing the models. Various features such as word
n-grams and character n-grams are compared via methods such as the Naive Bayes
classifier, Support Vector Machine, Conditional Tree, and Random Forest,
to find the best discriminator for such corpora. Our results showed that SVM
attained a test accuracy of up to 95.079% while similarly, Naive Bayes attained
an accuracy of up to 94.455% for the dataset. Conditional Tree & Random Forest
failed to perform as well as expected. We also found that word unigram and
character 3-grams features were more likely to distinguish authors accurately
than other features.
| 2018 | Computation and Language |
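A minimal version of the character n-gram plus SVM setup compared in the abstract above, using scikit-learn; the messages, labels and feature settings are toy placeholders rather than the WhatsApp corpus used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# toy Hinglish-style messages with author labels
texts = ["kya haal hai bro", "meeting at 5 pakka", "kya scene hai aaj", "pakka aa raha hoon"]
authors = ["A", "B", "A", "B"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 3)),  # character 3-gram features
    LinearSVC(),
)
model.fit(texts, authors)
print(model.predict(["kya chal raha hai"]))  # likely 'A', given the shared character 3-grams
```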
DBpedia NIF: Open, Large-Scale and Multilingual Knowledge Extraction
Corpus | In the past decade, the DBpedia community has put a significant amount of
effort into developing technical infrastructure and methods for the efficient
extraction of structured information from Wikipedia. These efforts have been
primarily focused on harvesting, refinement and publishing semi-structured
information found in Wikipedia articles, such as information from infoboxes,
categorization information, images, wikilinks and citations. Nevertheless, a
vast amount of valuable information is still contained in the unstructured
Wikipedia article texts. In this paper, we present DBpedia NIF - a large-scale
and multilingual knowledge extraction corpus. The aim of the dataset is
two-fold: to dramatically broaden and deepen the amount of structured
information in DBpedia, and to provide a large-scale and multilingual language
resource for the development of various NLP and IR tasks. The dataset provides the
content of all articles for 128 Wikipedia languages. We describe the dataset
creation process and the NLP Interchange Format (NIF) used to model the
content, links, and structure of the information in the Wikipedia articles. The
dataset has been further enriched with about 25% more links and selected
partitions published as Linked Data. Finally, we describe the maintenance and
sustainability plans, and selected use cases of the dataset from the TextExt
knowledge extraction challenge.
| 2018 | Computation and Language |
Quantized-Dialog Language Model for Goal-Oriented Conversational Systems | We propose a novel methodology to address dialog learning in the context of
goal-oriented conversational systems. The key idea is to quantize the dialog
space into clusters and create a language model across the clusters, thus
allowing for an accurate choice of the next utterance in the conversation. The
language model relies on n-grams associated with clusters of utterances. This
quantized-dialog language model methodology has been applied to the end-to-end
goal-oriented track of the latest Dialog System Technology Challenges (DSTC6).
The objective is to find the correct system utterance from a pool of candidates
in order to complete a dialog between a user and an automated
restaurant-reservation system. Our results show that the technique proposed in
this paper achieves high accuracy regarding selection of the correct candidate
utterance, and outperforms other state-of-the-art approaches based on neural
networks.
| 2018 | Computation and Language |
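The quantized-dialog idea in the abstract above (cluster utterances, then treat the dialog as an n-gram sequence over cluster IDs and score candidate responses accordingly) can be sketched as follows; the featurization, cluster count and candidate scoring are simplified placeholders.

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

dialogs = [
    ["hi", "hello how can I help", "book a table", "for how many people", "two"],
    ["hello", "hello how can I help", "reserve a table", "for how many people", "four"],
]
utterances = [u for d in dialogs for u in d]

# 1. Quantize the dialog space: cluster utterance vectors.
vectorizer = TfidfVectorizer().fit(utterances)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(vectorizer.transform(utterances))
cluster_of = lambda u: int(kmeans.predict(vectorizer.transform([u]))[0])

# 2. Bigram language model over cluster IDs.
bigrams = Counter()
for dialog in dialogs:
    ids = [cluster_of(u) for u in dialog]
    bigrams.update(zip(ids, ids[1:]))

# 3. Pick the candidate utterance whose cluster most often follows the previous one.
def best_candidate(previous_utterance, candidates):
    prev = cluster_of(previous_utterance)
    return max(candidates, key=lambda c: bigrams[(prev, cluster_of(c))])

print(best_candidate("book a table", ["for how many people", "goodbye"]))
```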
The Global Anchor Method for Quantifying Linguistic Shifts and Domain
Adaptation | Language is dynamic, constantly evolving and adapting with respect to time,
domain or topic. The adaptability of language is an active research area, where
researchers discover social, cultural and domain-specific changes in language
using distributional tools such as word embeddings. In this paper, we introduce
the global anchor method for detecting corpus-level language shifts. We show
both theoretically and empirically that the global anchor method is equivalent
to the alignment method, a widely-used method for comparing word embeddings, in
terms of detecting corpus-level language shifts. Despite their equivalence in
terms of detection abilities, we demonstrate that the global anchor method is
superior in terms of applicability as it can compare embeddings of different
dimensionalities. Furthermore, the global anchor method has implementation and
parallelization advantages. We show that the global anchor method reveals fine
structures in the evolution of language and domain adaptation. When combined
with the graph Laplacian technique, the global anchor method recovers the
evolution trajectory and domain clustering of disparate text corpora.
| 2018 | Computation and Language |
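The global anchor idea in the abstract above can be sketched by representing each word through its similarities to a fixed set of anchor words and comparing those profiles across corpora; the embeddings, anchors and discrepancy measure below are simplified placeholders.

```python
import numpy as np

def anchor_profile(word, anchors, emb):
    """Represent a word by its cosine similarities to a fixed set of anchor words."""
    v = emb[word]
    return np.array([v @ emb[a] / (np.linalg.norm(v) * np.linalg.norm(emb[a]))
                     for a in anchors])

def shift(word, anchors, emb_old, emb_new):
    """Corpus-level shift of a word: distance between its anchor profiles.
    Works even when the two embeddings have different dimensionalities."""
    return float(np.linalg.norm(anchor_profile(word, anchors, emb_old)
                                - anchor_profile(word, anchors, emb_new)))

rng = np.random.default_rng(0)
vocab = ["gay", "broadcast", "cell", "the"]
emb_1900 = {w: rng.normal(size=50) for w in vocab}   # stand-ins for trained embeddings
emb_2000 = {w: rng.normal(size=100) for w in vocab}  # note the different dimensionality
print(shift("cell", vocab, emb_1900, emb_2000))
```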
Same but Different: Distant Supervision for Predicting and Understanding
Entity Linking Difficulty | Entity Linking (EL) is the task of automatically identifying entity mentions
in a piece of text and resolving them to a corresponding entity in a reference
knowledge base like Wikipedia. There is a large number of EL tools available
for different types of documents and domains, yet EL remains a challenging task
where the lack of precision on particularly ambiguous mentions often spoils the
usefulness of automated disambiguation results in real applications. A priori
approximations of the difficulty to link a particular entity mention can
facilitate flagging of critical cases as part of semi-automated EL systems,
while detecting latent factors that affect the EL performance, like
corpus-specific features, can provide insights on how to improve a system based
on the special characteristics of the underlying corpus. In this paper, we
first introduce a consensus-based method to generate difficulty labels for
entity mentions on arbitrary corpora. The difficulty labels are then exploited
as training data for a supervised classification task able to predict the EL
difficulty of entity mentions using a variety of features. Experiments over a
corpus of news articles show that EL difficulty can be estimated with high
accuracy, revealing also latent features that affect EL performance. Finally,
evaluation results demonstrate the effectiveness of the proposed method to
inform semi-automated EL pipelines.
| 2021 | Computation and Language |
Detecting weak and strong Islamophobic hate speech on social media | Islamophobic hate speech on social media inflicts considerable harm on both
targeted individuals and wider society, and also risks reputational damage for
the host platforms. Accordingly, there is a pressing need for robust tools to
detect and classify Islamophobic hate speech at scale. Previous research has
largely approached the detection of Islamophobic hate speech on social media as
a binary task. However, the varied nature of Islamophobia means that this is
often inappropriate for both theoretically-informed social science and
effectively monitoring social media. Drawing on in-depth conceptual work we
build a multi-class classifier which distinguishes between non-Islamophobic,
weak Islamophobic and strong Islamophobic content. Accuracy is 77.6% and
balanced accuracy is 83%. We apply the classifier to a dataset of 109,488
tweets produced by far right Twitter accounts during 2017. Whilst most tweets
are not Islamophobic, weak Islamophobia is considerably more prevalent (36,963
tweets) than strong (14,895 tweets). Our main input feature is a GloVe word
embeddings model trained on a newly collected corpus of 140 million tweets. It
outperforms a generic word embeddings model by 5.9 percentage points,
demonstrating the importance of context. Unexpectedly, we also find that a
one-against-one multi-class SVM outperforms a deep learning algorithm.
| 2018 | Computation and Language |
Word Embedding based on Low-Rank Doubly Stochastic Matrix Decomposition | Word embedding, which encodes words into vectors, is an important starting
point in natural language processing and commonly used in many text-based
machine learning tasks. However, in most current word embedding approaches, the
similarity in embedding space is not optimized in the learning. In this paper
we propose a novel neighbor embedding method which directly learns an embedding
simplex where the similarities between the mapped words are optimal in terms of
minimal discrepancy to the input neighborhoods. Our method is built upon
two-step random walks between words via topics and thus able to better reveal
the topics among the words. Experiment results indicate that our method,
compared with another existing word embedding approach, is more favorable for
various queries.
| 2018 | Computation and Language |
Hyperbolic Deep Learning for Chinese Natural Language Understanding | Recently hyperbolic geometry has proven to be effective in building
embeddings that encode hierarchical and entailment information. This makes it
particularly suited to modelling the complex asymmetrical relationships between
Chinese characters and words. In this paper we first train a large scale
hyperboloid skip-gram model on a Chinese corpus, then apply the character
embeddings to a downstream hyperbolic Transformer model derived from the
principles of gyrovector spaces for the Poincaré disk model. In our experiments the
character-based Transformer outperformed its word-based Euclidean equivalent.
To the best of our knowledge, this is the first time in Chinese NLP that a
character-based model outperformed its word-based counterpart, allowing the
circumvention of the challenging and domain-dependent task of Chinese Word
Segmentation (CWS).
| 2018 | Computation and Language |
Cross Lingual Speech Emotion Recognition: Urdu vs. Western Languages | Cross-lingual speech emotion recognition is an important task for practical
applications. The performance of automatic speech emotion recognition systems
degrades in cross-corpus scenarios, particularly in scenarios involving
multiple languages or a previously unseen language such as Urdu for which
limited or no data is available. In this study, we investigate the problem of
cross-lingual emotion recognition for Urdu language and contribute URDU---the
first ever spontaneous Urdu-language speech emotion database. Evaluations are
performed using three different Western languages against Urdu, and experimental
results on different possible scenarios suggest various interesting aspects for
designing more adaptive emotion recognition systems for such resource-limited
languages. Our results show that selecting training instances from multiple
languages can deliver results comparable to the baseline, and that augmenting
the training data with a fraction of the testing-language data can help to
boost accuracy for speech emotion recognition.
URDU data is publicly available for further research.
| 2020 | Computation and Language |
Measuring Societal Biases from Text Corpora with Smoothed First-Order
Co-occurrence | Text corpora are widely used resources for measuring societal biases and
stereotypes. The common approach to measuring such biases using a corpus is by
calculating the similarities between the embedding vector of a word (like
nurse) and the vectors of the representative words of the concepts of interest
(such as genders). In this study, we show that, depending on what one aims to
quantify as bias, this commonly-used approach can introduce non-relevant
concepts into bias measurement. We propose an alternative approach to bias
measurement utilizing the smoothed first-order co-occurrence relations between
the word and the representative concept words, which we derive by
reconstructing the co-occurrence estimates inherent in word embedding models.
We compare these approaches by conducting several experiments on the scenario
of measuring gender bias of occupational words, according to an English
Wikipedia corpus. Our experiments show higher correlations of the measured
gender bias with the actual gender bias statistics of the U.S. job market - on
two collections and with a variety of word embedding models - using the
first-order approach in comparison with the vector similarity-based approaches.
The first-order approach also suggests a more severe bias towards females in a
few specific occupations than the other approaches.
| 2021 | Computation and Language |
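The contrast drawn in the abstract above is between vector-similarity bias scores and first-order co-occurrence associations. The sketch below computes an add-k smoothed first-order association directly from windowed co-occurrence counts on a toy corpus; note that the paper instead reconstructs these estimates from the word embedding model itself.

```python
from collections import Counter

def cooccurrence_counts(sentences, window=5):
    counts = Counter()
    for sentence in sentences:
        for i, w in enumerate(sentence):
            for v in sentence[i + 1:i + 1 + window]:
                counts[(w, v)] += 1
                counts[(v, w)] += 1
    return counts

def smoothed_assoc(word, concept_words, counts, k=1.0):
    """Add-k smoothed first-order association between a word and concept words."""
    return sum(counts[(word, c)] + k for c in concept_words)

sentences = [["the", "nurse", "said", "she", "was", "tired"],
             ["the", "engineer", "said", "he", "fixed", "it"]]
counts = cooccurrence_counts(sentences)
female, male = ["she", "her"], ["he", "him"]
bias = smoothed_assoc("nurse", female, counts) - smoothed_assoc("nurse", male, counts)
print(bias)  # positive: 'nurse' co-occurs more with female words in this toy corpus
```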
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual
Transfer and Beyond | We introduce an architecture to learn joint multilingual sentence
representations for 93 languages, belonging to more than 30 different families
and written in 28 different scripts. Our system uses a single BiLSTM encoder
with a shared BPE vocabulary for all languages, which is coupled with an
auxiliary decoder and trained on publicly available parallel corpora. This
enables us to learn a classifier on top of the resulting embeddings using
English annotated data only, and transfer it to any of the 93 languages without
any modification. Our experiments in cross-lingual natural language inference
(XNLI dataset), cross-lingual document classification (MLDoc dataset) and
parallel corpus mining (BUCC dataset) show the effectiveness of our approach.
We also introduce a new test set of aligned sentences in 112 languages, and
show that our sentence embeddings obtain strong results in multilingual
similarity search even for low-resource languages. Our implementation, the
pre-trained encoder and the multilingual test set are available at
https://github.com/facebookresearch/LASER
| 2021 | Computation and Language |
Automatic Summarization of Natural Language | Automatic summarization of natural language is a current topic in computer
science research and industry, studied for decades because of its usefulness
across multiple domains. For example, summarization is necessary to create
reviews such as this one. Research and applications have achieved some success
in extractive summarization (where key sentences are curated), however,
abstractive summarization (synthesis and re-stating) is a hard problem and
generally unsolved in computer science. This literature review contrasts
historical progress up through current state of the art, comparing dimensions
such as: extractive vs. abstractive, supervised vs. unsupervised, NLP (Natural
Language Processing) vs. knowledge-based, deep learning vs. algorithms,
structured vs. unstructured sources, and measurement metrics such as ROUGE and
BLEU. Multiple dimensions are contrasted since current research uses
combinations of approaches as seen in the review matrix. Throughout this
summary, synthesis and critique is provided. This review concludes with
insights for improved abstractive summarization measurement, with surprising
implications for detecting understanding and comprehension in general.
| 2018 | Computation and Language |
Cross-relation Cross-bag Attention for Distantly-supervised Relation
Extraction | Distant supervision leverages knowledge bases to automatically label
instances, thus allowing us to train relation extractor without human
annotations. However, the generated training data typically contain massive
noise, and may result in poor performances with the vanilla supervised
learning. In this paper, we propose to conduct multi-instance learning with a
novel Cross-relation Cross-bag Selective Attention (C$^2$SA), which leads to
noise-robust training for distant supervised relation extractor. Specifically,
we employ the sentence-level selective attention to reduce the effect of noisy
or mismatched sentences, while the correlation among relations were captured to
improve the quality of attention weights. Moreover, instead of treating all
entity-pairs equally, we try to pay more attention to entity-pairs with a
higher quality. Similarly, we adopt the selective attention mechanism to
achieve this goal. Experiments with two types of relation extractor demonstrate
the superiority of the proposed approach over the state-of-the-art, while
further ablation studies verify our intuitions and demonstrate the
effectiveness of our proposed two techniques.
| 2018 | Computation and Language |
Intent Detection and Slots Prompt in a Closed-Domain Chatbot | In this paper, we introduce a methodology for predicting intent and slots of
a query for a chatbot that answers career-related queries. We take a
multi-staged approach where both the processes (intent-classification and
slot-tagging) inform each other's decision-making in different stages. The
model breaks down the problem into stages, solving one problem at a time and
passing on relevant results of the current stage to the next, thereby reducing
search space for subsequent stages, and eventually making classification and
tagging more viable after each stage. We also observe that relaxing rules for a
fuzzy entity-matching in slot-tagging after each stage (by maintaining a
separate Named Entity Tagger per stage) helps us improve performance, although
at a slight cost in false positives. Our model has achieved state-of-the-art
performance, with an F1-score of 77.63% for intent-classification and 82.24% for
slot-tagging, on our dataset, which we will publicly release along with the
paper.
| 2019 | Computation and Language |
CAN: Constrained Attention Networks for Multi-Aspect Sentiment Analysis | Aspect level sentiment classification is a fine-grained sentiment analysis
task. To detect the sentiment towards a particular aspect in a sentence,
previous studies have developed various attention-based methods for generating
aspect-specific sentence representations. However, the attention may inherently
introduce noise and downgrade the performance. In this paper, we propose
constrained attention networks (CAN), a simple yet effective solution, to
regularize the attention for multi-aspect sentiment analysis, which alleviates
the drawback of the attention mechanism. Specifically, we introduce orthogonal
regularization on multiple aspects and sparse regularization on each single
aspect. Experimental results on two public datasets demonstrate the
effectiveness of our approach. We further extend our approach to multi-task
settings and outperform the state-of-the-art methods.
| 2019 | Computation and Language |
Advancing the State of the Art in Open Domain Dialog Systems through the
Alexa Prize | Building open domain conversational systems that allow users to have engaging
conversations on topics of their choice is a challenging task. Alexa Prize was
launched in 2016 to tackle the problem of achieving natural, sustained,
coherent and engaging open-domain dialogs. In the second iteration of the
competition in 2018, university teams advanced the state of the art by using
context in dialog models, leveraging knowledge graphs for language
understanding, handling complex utterances, building statistical and
hierarchical dialog managers, and leveraging model-driven signals from user
responses. The 2018 competition also included the provision of a suite of tools
and models to the competitors including the CoBot (conversational bot) toolkit,
topic and dialog act detection models, conversation evaluators, and a sensitive
content detection model so that the competing teams could focus on building
knowledge-rich, coherent and engaging multi-turn dialog systems. This paper
outlines the advances developed by the university teams as well as the Alexa
Prize team to achieve the common goal of advancing the science of
Conversational AI. We address several key open-ended problems such as
conversational speech recognition, open domain natural language understanding,
commonsense reasoning, statistical dialog management, and dialog evaluation.
These collaborative efforts have driven improved experiences for Alexa users,
reaching an average rating of 3.61, a median duration of 2 minutes 18 seconds,
and an average of 14.6 turns, increases of 14%, 92%, and 54% respectively since
the launch of the 2018 competition. For conversational speech recognition, we have
improved our relative Word Error Rate by 55% and our relative Entity Error Rate
by 34% since the launch of the Alexa Prize. Socialbots improved in quality
significantly more rapidly in 2018, in part due to the release of the CoBot
toolkit.
| 2018 | Computation and Language |
The Clickbait Challenge 2017: Towards a Regression Model for Clickbait
Strength | Clickbait has grown to become a nuisance to social media users and social
media operators alike. Malicious content publishers misuse social media to
manipulate as many users as possible to visit their websites using clickbait
messages. Machine learning technology may help to handle this problem, giving
rise to automatic clickbait detection. To accelerate progress in this
direction, we organized the Clickbait Challenge 2017, a shared task inviting
the submission of clickbait detectors for a comparative evaluation. A total of
13 detectors have been submitted, achieving significant improvements over the
previous state of the art in terms of detection performance. Also, many of the
submitted approaches have been published open source, rendering them
reproducible, and a good starting point for newcomers. While the 2017 challenge
has passed, we maintain the evaluation system and respond to new registrations
in support of the ongoing research on better clickbait detectors.
| 2018 | Computation and Language |
Can You Tell Me How to Get Past Sesame Street? Sentence-Level
Pretraining Beyond Language Modeling | Natural language understanding has recently seen a surge of progress with the
use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et
al., 2019) which are pretrained on variants of language modeling. We conduct
the first large-scale systematic study of candidate pretraining tasks,
comparing 19 different tasks both as alternatives and complements to language
modeling. Our primary results support the use of language modeling, especially
when combined with pretraining on additional labeled-data tasks. However, our
results are mixed across pretraining tasks and show some concerning trends: In
ELMo's pretrain-then-freeze paradigm, random baselines are worryingly strong
and results vary strikingly across target tasks. In addition, fine-tuning BERT
on an intermediate task often negatively impacts downstream transfer. In a more
positive trend, we see modest gains from multitask training, suggesting the
development of more sophisticated multitask and transfer learning techniques as
an avenue for further research.
| 2019 | Computation and Language |
Identifying Computer-Translated Paragraphs using Coherence Features | We have developed a method for extracting the coherence features from a
paragraph by matching similar words in its sentences. We conducted an
experiment with a parallel German corpus containing 2000 human-created and 2000
machine-translated paragraphs. The result showed that our method achieved the
best performance (accuracy = 72.3%, equal error rate = 29.8%) when compared
with previous methods on various computer-generated text, including
translation and paper generation (best accuracy = 67.9%, equal error rate =
32.0%). Experiments on Dutch, another resource-rich language, and a
low-resource one (Japanese) attained similar performances. This demonstrates
the effectiveness of coherence features in distinguishing computer-translated
from human-created paragraphs across diverse languages.
| 2,018 | Computation and Language |
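To make the idea of matching similar words across sentences concrete, the sketch below computes one plausible coherence feature: the average lexical overlap (Jaccard similarity) between adjacent sentences of a paragraph. The sentence-splitting heuristic and the overlap measure are assumptions; the paper's exact matching procedure and feature set are not reproduced here.

```python
import re

def coherence_feature(paragraph: str) -> float:
    """Average Jaccard word overlap between adjacent sentences.

    A crude stand-in for coherence features: paragraphs whose sentences
    share more vocabulary score higher.
    """
    # Naive sentence split on ., ! and ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", paragraph.strip()) if s]
    if len(sentences) < 2:
        return 0.0
    bags = [set(re.findall(r"\w+", s.lower())) for s in sentences]
    overlaps = []
    for a, b in zip(bags, bags[1:]):
        union = a | b
        overlaps.append(len(a & b) / len(union) if union else 0.0)
    return sum(overlaps) / len(overlaps)

if __name__ == "__main__":
    text = "The cat sat on the mat. The cat then slept on the same mat."
    print(round(coherence_feature(text), 3))
```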
Knowledge Representation Learning: A Quantitative Review | Knowledge representation learning (KRL) aims to represent entities and
relations in a knowledge graph in a low-dimensional semantic space, which has
been widely used in massive knowledge-driven tasks. In this article, we
introduce the reader to the motivations for KRL and overview existing
approaches for KRL. Afterwards, we conduct an extensive quantitative comparison and
analysis of several typical KRL methods on three evaluation tasks of knowledge
acquisition including knowledge graph completion, triple classification, and
relation extraction. We also review the real-world applications of KRL, such as
language modeling, question answering, information retrieval, and recommender
systems. Finally, we discuss the remaining challenges and outline future
directions for KRL. The code and datasets used in the experiments can be found
at https://github.com/thunlp/OpenKE.
| 2,018 | Computation and Language |
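For concreteness, the snippet below sketches the scoring function of TransE, one typical translation-based KRL method of the kind surveyed and implemented in OpenKE: a triple (h, r, t) is plausible when the tail embedding lies close to head plus relation. The embedding dimensionality and the toy vectors are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray, norm: int = 1) -> float:
    """TransE plausibility score: negative distance of (h + r) from t.

    Higher (less negative) means the triple is considered more plausible.
    """
    return -float(np.linalg.norm(h + r - t, ord=norm))

rng = np.random.default_rng(0)
dim = 50  # assumed embedding size
h, r = rng.normal(size=dim), rng.normal(size=dim)
t_good = h + r + 0.01 * rng.normal(size=dim)  # nearly satisfies h + r = t
t_bad = rng.normal(size=dim)                  # unrelated tail entity
print(transe_score(h, r, t_good), ">", transe_score(h, r, t_bad))
```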
The role of grammar in transition-probabilities of subsequent words in
English text | Sentence formation is a highly structured, history-dependent, and
sample-space reducing (SSR) process. While the first word in a sentence can be
chosen from the entire vocabulary, typically, the freedom of choosing
subsequent words gets more and more constrained by grammar and context, as the
sentence progresses. This sample-space reducing property offers a natural
explanation of Zipf's law in word frequencies, however, it fails to capture the
structure of the word-to-word transition probability matrices of English text.
Here we adopt the view that grammatical constraints (such as
subject--predicate--object) locally re-order words in sentences that
are sampled with a SSR word generation process. We demonstrate that
superimposing grammatical structure -- as a local word re-ordering
(permutation) process -- on a sample-space reducing process is sufficient to
explain both word frequencies and word-to-word transition probabilities. We
compare the quality of the grammatically ordered SSR model in reproducing
several test statistics of real texts with other text generation models, such
as the Bernoulli model, the Simon model, and the Monkey typewriting model.
| 2,018 | Computation and Language |
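A minimal simulation of a plain sample-space reducing (SSR) process, without the grammatical re-ordering step, is sketched below: each "sentence" starts from the full vocabulary and every subsequent word is drawn only from states below the previous one, which already yields a Zipf-like rank-frequency curve. The vocabulary size and sample count are arbitrary choices made for illustration.

```python
import random
from collections import Counter

def ssr_sentence(vocab_size: int, rng: random.Random) -> list:
    """One sample-space reducing sequence: each draw must be strictly
    smaller than the previous one, shrinking the available sample space."""
    seq = []
    current = vocab_size
    while current > 1:
        current = rng.randrange(1, current)  # uniform over the reduced space
        seq.append(current)
    return seq

rng = random.Random(42)
counts = Counter()
for _ in range(20000):
    counts.update(ssr_sentence(vocab_size=1000, rng=rng))

# Rank-frequency table: low-numbered states behave like high-frequency words.
for rank, (state, freq) in enumerate(counts.most_common(5), start=1):
    print(rank, state, freq)
```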
Weakly-Supervised Hierarchical Text Classification | Hierarchical text classification, which aims to classify text documents into
a given hierarchy, is an important task in many real-world applications.
Recently, deep neural models are gaining increasing popularity for text
classification due to their expressive power and minimum requirement for
feature engineering. However, applying deep neural networks for hierarchical
text classification remains challenging, because they heavily rely on a large
amount of training data and meanwhile cannot easily determine appropriate
levels of documents in the hierarchical setting. In this paper, we propose a
weakly-supervised neural method for hierarchical text classification. Our
method does not require a large amount of training data but requires only
easy-to-provide weak supervision signals such as a few class-related documents
or keywords. Our method effectively leverages such weak supervision signals to
generate pseudo documents for model pre-training, and then performs
self-training on real unlabeled data to iteratively refine the model. During
the training process, our model features a hierarchical neural structure, which
mimics the given hierarchy and is capable of determining the proper levels for
documents with a blocking mechanism. Experiments on three datasets from
different domains demonstrate the efficacy of our method compared with a
comprehensive set of baselines.
| 2,019 | Computation and Language |
End-to-end neural relation extraction using deep biaffine attention | We propose a neural network model for joint extraction of named entities and
relations between them, without any hand-crafted features. The key contribution
of our model is to extend a BiLSTM-CRF-based entity recognition model with a
deep biaffine attention layer to model second-order interactions between latent
features for relation classification, specifically attending to the role of an
entity in a directional relationship. On the benchmark "relation and entity
recognition" dataset CoNLL04, experimental results show that our model
outperforms previous models, producing new state-of-the-art performances.
| 2,019 | Computation and Language |
A neural joint model for Vietnamese word segmentation, POS tagging and
dependency parsing | We propose the first multi-task learning model for joint Vietnamese word
segmentation, part-of-speech (POS) tagging and dependency parsing. In
particular, our model extends the BIST graph-based dependency parser
(Kiperwasser and Goldberg, 2016) with BiLSTM-CRF-based neural layers (Huang et
al., 2015) for word segmentation and POS tagging. On Vietnamese benchmark
datasets, experimental results show that our joint model obtains
state-of-the-art or competitive performances.
| 2,019 | Computation and Language |
Variational Self-attention Model for Sentence Representation | This paper proposes a variational self-attention model (VSAM) that employs
variational inference to derive self-attention. We model the self-attention
vector as a random variable by imposing a probabilistic distribution on it. The
self-attention mechanism summarizes source information as an attention vector
by weighted sum, where the weights are a learned probabilistic distribution.
Compared with its conventional deterministic counterpart, the stochastic units
incorporated by VSAM allow multi-modal attention distributions. Furthermore, by
marginalizing over the latent variables, VSAM is more robust against
overfitting. Experiments on the stance detection task demonstrate the
superiority of our method.
| 2,020 | Computation and Language |
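The sketch below illustrates the core idea in plain numpy: instead of deterministic attention scores, per-position scores are treated as Gaussian latent variables, sampled with the reparameterization trick and normalized with a softmax to form stochastic attention weights. The Gaussian parameterization, projection vectors, and dimensions are assumptions; the paper's exact distribution and training objective (e.g., the KL term) are omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stochastic_attention(hidden, mu_w, logvar_w, rng):
    """Stochastic self-attention summary of a sequence of hidden states.

    hidden:   (T, d) source hidden states
    mu_w, logvar_w: (d,) projections giving per-position score mean / log-variance
    """
    mu = hidden @ mu_w                         # (T,) mean attention scores
    logvar = hidden @ logvar_w                 # (T,) log-variance of the scores
    eps = rng.standard_normal(mu.shape)
    scores = mu + np.exp(0.5 * logvar) * eps   # reparameterized sample
    weights = softmax(scores)                  # attention distribution
    return weights @ hidden                    # weighted-sum summary vector

rng = np.random.default_rng(0)
T, d = 6, 8
hidden = rng.normal(size=(T, d))
mu_w, logvar_w = rng.normal(size=d), rng.normal(size=d)
print(stochastic_attention(hidden, mu_w, logvar_w, rng).shape)  # (8,)
```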
Sentiment Classification of Customer Reviews about Automobiles in Roman
Urdu | Sentiment mining, an important constituent of the broad field of text
mining, tries to deduce people's attitudes towards specific items, merchandise,
politics, sports, social media comments, review sites, etc. Among the many
issues in sentiment mining, analysis, and classification, one major issue is
that reviews and comments can be written in different languages such as
English, Arabic, and Urdu, and handling each language according to its own
rules is a difficult task. A lot of research work has been done on sentiment
analysis and classification for English, but only limited work has been
carried out on other regional languages such as Arabic, Urdu, and Hindi. In
this paper, Waikato Environment for Knowledge Analysis (WEKA) is used as a
platform to execute different classification models for text classification of
Roman Urdu text. A reviews dataset was scraped from different automobile
sites. These extracted Roman Urdu reviews, containing 1000 positive and 1000
negative reviews, are saved in the WEKA attribute-relation file format (ARFF)
as labeled examples. Training is done on 80% of this data and the remaining 20%
is used for testing with the different models, and the results are analyzed in
each case. The results show that Multinomial Naive Bayes outperformed Bagging,
Deep Neural Network, Decision Tree, Random Forest, AdaBoost, k-NN, and SVM
classifiers in terms of accuracy, precision, recall, and F-measure.
| 2,018 | Computation and Language |
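The experiments above were run in WEKA; the sketch below reproduces the same protocol (bag-of-words features, an 80/20 split, and a Multinomial Naive Bayes classifier) with scikit-learn instead. The inline Roman Urdu reviews are made-up placeholders standing in for the scraped dataset of 1000 positive and 1000 negative examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Placeholder reviews; the real dataset holds 1000 positive and 1000 negative examples.
reviews = ["gari achi hai", "engine zabardast hai", "gari kharab hai", "service buri hai"] * 50
labels = ["pos", "pos", "neg", "neg"] * 50

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.2, random_state=0)

vectorizer = CountVectorizer()          # simple bag-of-words features
clf = MultinomialNB()
clf.fit(vectorizer.fit_transform(X_train), y_train)
pred = clf.predict(vectorizer.transform(X_test))
print("accuracy:", accuracy_score(y_test, pred))
```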
Multilingual Constituency Parsing with Self-Attention and Pre-Training | We show that constituency parsing benefits from unsupervised pre-training
across a variety of languages and a range of pre-training conditions. We first
compare the benefits of no pre-training, fastText, ELMo, and BERT for English
and find that BERT outperforms ELMo, in large part due to increased model
capacity, whereas ELMo in turn outperforms the non-contextual fastText
embeddings. We also find that pre-training is beneficial across all 11
languages tested; however, large model sizes (more than 100 million parameters)
make it computationally expensive to train separate models for each language.
To address this shortcoming, we show that joint multilingual pre-training and
fine-tuning allows sharing all but a small number of parameters between ten
languages in the final model. The 10x reduction in model size compared to
fine-tuning one model per language causes only a 3.2% relative error increase
in aggregate. We further explore the idea of joint fine-tuning and show that it
gives low-resource languages a way to benefit from the larger datasets of other
languages. Finally, we demonstrate new state-of-the-art results for 11
languages, including English (95.8 F1) and Chinese (91.8 F1).
| 2,019 | Computation and Language |
Advancing Acoustic-to-Word CTC Model with Attention and Mixed-Units | The acoustic-to-word model based on the Connectionist Temporal Classification
(CTC) criterion is a natural end-to-end (E2E) system directly targeting word as
output unit. Two issues exist in the system: first, the current output of the
CTC model relies on the current input and does not account for context weighted
inputs. This is the hard alignment issue. Second, the word-based CTC model
suffers from the out-of-vocabulary (OOV) issue. This means it can model only
frequently occurring words while tagging the remaining words as OOV. Hence,
such a model is limited to recognizing only a fixed set of
frequent words. In this study, we propose addressing these problems using a
combination of attention mechanism and mixed-units. In particular, we introduce
Attention CTC, Self-Attention CTC, Hybrid CTC, and Mixed-unit CTC.
First, we blend attention modeling capabilities directly into the CTC network
using Attention CTC and Self-Attention CTC. Second, to alleviate the OOV issue,
we present Hybrid CTC which uses a word and letter CTC with shared hidden
layers. The Hybrid CTC consults the letter CTC when the word CTC emits an OOV.
Then, we propose a much better solution by training a Mixed-unit CTC which
decomposes all the OOV words into sequences of frequent words and multi-letter
units. Evaluated on a 3400-hour Microsoft Cortana voice assistant task, our
final acoustic-to-word solution using attention and mixed-units achieves a
relative reduction in word error rate (WER) over the vanilla word CTC by
12.09\%. Such an E2E model without using any language model (LM) or complex
decoder also outperforms a traditional context-dependent (CD) phoneme CTC with
strong LM and decoder by 6.79% relative.
| 2,019 | Computation and Language |
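To illustrate the mixed-unit idea, the sketch below decomposes an OOV word into a sequence of frequent in-vocabulary words plus multi-letter fallback units using a simple greedy longest-match. The real system's segmentation and unit inventory are more involved; the toy vocabulary and the two-letter fallback length here are assumptions.

```python
def mixed_unit_decompose(word, frequent_words, letter_unit_len=2):
    """Greedy decomposition of an OOV word into frequent words and
    multi-letter units (a rough stand-in for Mixed-unit CTC targets)."""
    units, i = [], 0
    while i < len(word):
        # Try the longest in-vocabulary substring starting at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in frequent_words:
                units.append(word[i:j])
                i = j
                break
        else:
            # Fall back to a fixed-length letter chunk, marked with "@".
            units.append(word[i:i + letter_unit_len] + "@")
            i += letter_unit_len
    return units

frequent = {"play", "ground", "station"}  # toy frequent-word vocabulary
print(mixed_unit_decompose("playground", frequent))    # ['play', 'ground']
print(mixed_unit_decompose("playstationx", frequent))  # ['play', 'station', 'x@']
```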
Entity Synonym Discovery via Multipiece Bilateral Context Matching | Being able to automatically discover synonymous entities in an open-world
setting benefits various tasks such as entity disambiguation or knowledge graph
canonicalization. Existing works either only utilize entity features, or rely
on structured annotations from a single piece of context where the entity is
mentioned. To leverage diverse contexts where entities are mentioned, in this
paper, we generalize the distributional hypothesis to a multi-context setting
and propose a synonym discovery framework that detects entity synonyms from
free-text corpora with considerations on effectiveness and robustness. As one
of the key components in synonym discovery, we introduce a neural network model
SYNONYMNET to determine whether or not two given entities are synonyms of each
other. Instead of using entity features, SYNONYMNET makes use of multiple
pieces of contexts in which the entity is mentioned, and compares the
context-level similarity via a bilateral matching schema. Experimental results
demonstrate that the proposed model is able to detect synonym sets that are not
observed during training on both generic and domain-specific datasets:
Wiki+Freebase, PubMed+UMLS, and MedBook+MKG, with up to 4.16% improvement in
terms of Area Under the Curve and 3.19% in terms of Mean Average Precision
compared to the best baseline method.
| 2,020 | Computation and Language |
Improving Tree-LSTM with Tree Attention | In Natural Language Processing (NLP), we often need to extract information
from tree topology. Sentence structure can be represented via a dependency tree
or a constituency tree structure. For this reason, a variant of LSTMs, named
Tree-LSTM, was proposed to work on tree topology. In this paper, we design a
generalized attention framework for both dependency and constituency trees by
encoding variants of decomposable attention inside a Tree-LSTM cell. We
evaluated our models on a semantic relatedness task and achieved notable
results compared to Tree-LSTM based methods with no attention, as well as other
neural and non-neural methods, and good results compared to Tree-LSTM based
methods with attention.
| 2,019 | Computation and Language |
Text Infilling | Recent years have seen remarkable progress of text generation in different
contexts, such as the most common setting of generating text from scratch, and
the emerging paradigm of retrieval-and-rewriting. Text infilling, which fills
missing text portions of a sentence or paragraph, also has numerous uses in
real life, yet is under-explored. Previous work has focused on restricted
settings, either assuming a single word per missing portion or limiting to a
single missing portion at the end of the text. This paper studies the general
task of text infilling, where the input text can have an arbitrary number of
portions to be filled, each of which may require an arbitrary unknown number of
tokens. We study various approaches for the task, including a self-attention
model with segment-aware position encoding and bidirectional context modeling.
We create extensive supervised data by masking out text with varying
strategies. Experiments show the self-attention model greatly outperforms
others, creating a strong baseline for future research.
| 2,019 | Computation and Language |
A Deep Learning Approach for Similar Languages, Varieties and Dialects | Deep learning mechanisms are nowadays the prevailing approaches for various
tasks in natural language processing, speech recognition, image processing,
and many others. To leverage this, we use deep learning based mechanisms,
specifically Bidirectional Long Short-Term Memory (B-LSTM) for the task of
dialect identification in Arabic and German broadcast speech, and Long
Short-Term Memory (LSTM) for discriminating between similar languages. Two
distinct B-LSTM models are created using Large-Vocabulary Continuous Speech
Recognition (LVCSR) based lexical features and fixed-length (400 per
utterance) bottleneck features generated by the i-vector framework. These
models were evaluated on the VarDial 2017 datasets for Arabic and German
dialect identification, with the dialects Egyptian, Gulf, Levantine, North
African, and MSA for Arabic, and Basel, Bern, Lucerne, and Zurich for German,
as well as for the task of discriminating between similar languages such as
Bosnian, Croatian, and Serbian. The B-LSTM model showed an accuracy of 0.246
on the lexical features and an accuracy of 0.577 on the bottleneck features of
the i-vector framework.
| 2,019 | Computation and Language |
Judge the Judges: A Large-Scale Evaluation Study of Neural Language
Models for Online Review Generation | We conduct a large-scale, systematic study to evaluate the existing
evaluation methods for natural language generation in the context of generating
online product reviews. We compare human-based evaluators with a variety of
automated evaluation procedures, including discriminative evaluators that
measure how well machine-generated text can be distinguished from human-written
text, as well as word overlap metrics that assess how similar the generated
text is to human-written references. We determine to what extent these
different evaluators agree on the ranking of a dozen state-of-the-art
generators for online product reviews. We find that human evaluators do not
correlate well with discriminative evaluators, raising the bigger question of
whether adversarial accuracy is the correct objective for natural language
generation. In general, distinguishing machine-generated text is challenging
even for human evaluators, and human decisions correlate better with lexical
overlaps. We find lexical diversity an intriguing metric that is indicative of
the assessments of different evaluators. A post-experiment survey of
participants provides insights into how to evaluate and improve the quality of
natural language generation systems.
| 2,019 | Computation and Language |
Deep Representation Learning for Clustering of Health Tweets | Twitter has been a prominent social media platform for mining
population-level health data and accurate clustering of health-related tweets
into topics is important for extracting relevant health insights. In this work,
we propose deep convolutional autoencoders for learning compact representations
of health-related tweets, further to be employed in clustering. We compare our
method to several conventional tweet representation methods including
bag-of-words, term frequency-inverse document frequency, Latent Dirichlet
Allocation and Non-negative Matrix Factorization with 3 different clustering
algorithms. Our results show that the clustering performance using the
proposed representation learning scheme significantly outperforms that of
conventional methods across all experiments with different numbers of clusters. In addition, we
propose a constraint on the learned representations during the neural network
training in order to further enhance the clustering performance. All in all,
this study introduces the utilization of deep neural network-based architectures,
i.e., deep convolutional autoencoders, for learning informative representations
of health-related tweets.
| 2,019 | Computation and Language |
Pull out all the stops: Textual analysis via punctuation sequences | Whether enjoying the lucid prose of a favorite author or slogging through
some other writer's cumbersome, heavy-set prattle (full of parentheses, em
dashes, compound adjectives, and Oxford commas), readers will notice stylistic
signatures not only in word choice and grammar, but also in punctuation itself.
Indeed, visual sequences of punctuation from different authors produce
marvelously different (and visually striking) sequences. Punctuation is a
largely overlooked stylistic feature in "stylometry", the quantitative analysis
of written text. In this paper, we examine punctuation sequences in a corpus of
literary documents and ask the following questions: Are the properties of such
sequences a distinctive feature of different authors? Is it possible to
distinguish literary genres based on their punctuation sequences? Do the
punctuation styles of authors evolve over time? Are we on to something
interesting in trying to do stylometry without words, or are we full of sound
and fury (signifying nothing)?
| 2,021 | Computation and Language |
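A minimal sketch of the preprocessing step implied above: strip everything except punctuation marks and keep them in order, so that each document is reduced to its punctuation sequence. The inventory of marks to keep is an assumption made for illustration.

```python
import re
from collections import Counter

PUNCT = re.compile(r"""[.,;:!?"'()\-]""")  # assumed inventory of marks to keep

def punctuation_sequence(text: str) -> str:
    """Reduce a document to the ordered sequence of its punctuation marks."""
    return "".join(PUNCT.findall(text))

sample = 'He said, "Wait - really?!" Then (after a pause) he left; nobody followed.'
seq = punctuation_sequence(sample)
print(seq)                          # ,"-?!"();.
print(Counter(seq).most_common(3))  # frequency profile of the marks
```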
Types, Tokens, and Hapaxes: A New Heap's Law | Heap's Law states that in a large enough text corpus, the number of types as
a function of tokens grows as $N=KM^\beta$ for some free parameters $K,\beta$.
Much has been written about how this result and various generalizations can be
derived from Zipf's Law. Here we derive from first principles a completely
novel expression of the type-token curve and prove its superior accuracy on
real text. This expression naturally generalizes to equally accurate estimates
for counting hapaxes and higher $n$-legomena.
| 2,019 | Computation and Language |
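As a companion to the classical form quoted above, the snippet below fits the free parameters K and beta of N = K * M^beta by least squares in log-log space on the empirical type-token curve of a placeholder token stream; the novel expression proposed in the paper is not reproduced here, and the synthetic Zipf-distributed "corpus" is an assumption.

```python
import numpy as np

def type_token_curve(tokens):
    """Number of distinct types N after each of the first M tokens."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return np.arange(1, len(tokens) + 1), np.array(curve)

def fit_heaps(tokens):
    """Fit N = K * M**beta by linear regression in log-log space."""
    M, N = type_token_curve(tokens)
    beta, logK = np.polyfit(np.log(M), np.log(N), deg=1)
    return np.exp(logK), beta

# Placeholder corpus: a Zipf-distributed stream of integer "word ids".
rng = np.random.default_rng(0)
tokens = rng.zipf(a=1.8, size=50000)
K, beta = fit_heaps(tokens)
print(f"K = {K:.2f}, beta = {beta:.2f}")
```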
Coarse-grain Fine-grain Coattention Network for Multi-evidence Question
Answering | End-to-end neural models have made significant progress in question
answering, however recent studies show that these models implicitly assume that
the answer and evidence appear close together in a single document. In this
work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new
question answering model that combines information from evidence across
multiple documents. The CFC consists of a coarse-grain module that interprets
documents with respect to the query then finds a relevant answer, and a
fine-grain module which scores each candidate answer by comparing its
occurrences across all of the documents with the query. We design these modules
using hierarchies of coattention and self-attention, which learn to emphasize
different parts of the input. On the Qangaroo WikiHop multi-evidence question
answering task, the CFC obtains a new state-of-the-art result of 70.6% on the
blind test set, outperforming the previous best by 3% accuracy despite not
using pretrained contextual encoders.
| 2,019 | Computation and Language |
A Joint Model for Multimodal Document Quality Assessment | The quality of a document is affected by various factors, including
grammaticality, readability, stylistics, and expertise depth, making the task
of document quality assessment a complex one. In this paper, we explore this
task in the context of assessing the quality of Wikipedia articles and academic
papers. Observing that the visual rendering of a document can capture implicit
quality indicators that are not present in the document text --- such as
images, font choices, and visual layout --- we propose a joint model that
combines the text content with a visual rendering of the document for document
quality assessment. Experimental results over two datasets reveal that textual
and visual features are complementary, achieving state-of-the-art results.
| 2,019 | Computation and Language |
Machine Translation: A Literature Review | Machine translation (MT) plays an important role in benefiting linguists,
sociologists, computer scientists, etc. by processing natural language to
translate it into some other natural language. This demand has grown
exponentially over the past couple of years, considering the enormous exchange
of information between different regions with different regional languages.
Machine translation poses numerous challenges, some of which are: a) not all
words in one language have an equivalent word in another language; b) two given
languages may have completely different structures; c) words can have more than
one meaning. Owing to these challenges, along with many others, MT has been an
active area of research for more than five decades. Numerous methods have been
proposed in the past which either aim at improving the quality of the
translations generated by them, or study the robustness of these systems by
measuring their performance on many different languages. In this literature
review, we discuss statistical approaches (in particular word-based and
phrase-based) and neural approaches which have gained widespread prominence
owing to their state-of-the-art results across multiple major languages.
| 2,019 | Computation and Language |
Aspect Category Detection via Topic-Attention Network | E-commerce has started a new trend in natural language processing through
sentiment analysis of user-generated reviews. Different consumers have
different concerns about various aspects of a specific product or service.
Aspect category detection, as a subtask of aspect-based sentiment analysis,
tackles the problem of categorizing a given review sentence into a set of
pre-defined aspect categories. In recent years, deep learning approaches have
brought revolutionary advances in multiple branches of natural language
processing including sentiment analysis. In this paper, we propose a deep
neural network method based on attention mechanism to identify different aspect
categories of a given review sentence. Our model utilizes several attentions
with different topic contexts, enabling it to attend to different parts of a
review sentence based on different topics. Experimental results on two datasets
in the restaurant domain released by the SemEval workshop demonstrate that our
approach outperforms existing methods on both datasets. Visualization of the
topic attention weights shows the effectiveness of our model in identifying
words related to different topics.
| 2,019 | Computation and Language |
Transfer learning from language models to image caption generators:
Better models may not transfer better | When designing a neural caption generator, a convolutional neural network can
be used to extract image features. Is it possible to also use a neural language
model to extract sentence prefix features? We answer this question by trying
different ways to transfer the recurrent neural network and embedding layer
from a neural language model to an image caption generator. We find that image
caption generators with transferred parameters perform better than those
trained from scratch, even when simply pre-training them on the text of the
same captions dataset they will later be trained on. We also find that the best
language models (in terms of perplexity) do not result in the best caption
generators after transfer learning.
| 2,019 | Computation and Language |
Speaker Adaptation for End-to-End CTC Models | We propose two approaches for speaker adaptation in end-to-end (E2E)
automatic speech recognition systems. One is Kullback-Leibler divergence (KLD)
regularization and the other is multi-task learning (MTL). Both approaches aim
to address the data sparsity issue, especially the output target sparsity issue, of speaker
adaptation in E2E systems. The KLD regularization adapts a model by forcing the
output distribution from the adapted model to be close to the unadapted one.
The MTL utilizes a jointly trained auxiliary task to improve the performance of
the main task. We investigated our approaches on E2E connectionist temporal
classification (CTC) models with three different types of output units.
Experiments on the Microsoft short message dictation task demonstrated that MTL
outperforms KLD regularization. In particular, the MTL adaptation obtained
8.8\% and 4.0\% relative word error rate reductions (WERRs) for supervised and
unsupervised adaptations for the word CTC model, and 9.6% and 3.8% relative
WERRs for the mix-unit CTC model, respectively.
| 2,019 | Computation and Language |
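The KLD regularization above can be summarized as training the adapted model against an interpolation of the one-hot label and the unadapted (speaker-independent) model's output distribution. A minimal numpy sketch of that loss follows; the interpolation weight rho and the toy distributions are assumptions, and the full CTC training pipeline is omitted.

```python
import numpy as np

def kld_regularized_loss(adapted_probs, unadapted_probs, one_hot, rho=0.5, eps=1e-12):
    """Cross-entropy of the adapted model against an interpolated target.

    target = (1 - rho) * one_hot + rho * unadapted_probs, which is equivalent
    (up to a constant) to adding a KL(unadapted || adapted) penalty.
    """
    target = (1.0 - rho) * one_hot + rho * unadapted_probs
    return float(-np.sum(target * np.log(adapted_probs + eps)))

# Toy 4-class example: adaptation data says class 2, the unadapted model prefers class 1.
one_hot = np.array([0.0, 0.0, 1.0, 0.0])
unadapted = np.array([0.1, 0.6, 0.2, 0.1])
adapted = np.array([0.05, 0.30, 0.55, 0.10])
print(kld_regularized_loss(adapted, unadapted, one_hot, rho=0.3))
```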
Addressing Objects and Their Relations: The Conversational Entity
Dialogue Model | Statistical spoken dialogue systems usually rely on a single- or multi-domain
dialogue model that is restricted in its capabilities of modelling complex
dialogue structures, e.g., relations. In this work, we propose a novel dialogue
model that is centred around entities and is able to model relations as well as
multiple entities of the same type. We demonstrate in a prototype
implementation benefits of relation modelling on the dialogue level and show
that a trained policy using these relations outperforms the multi-domain
baseline. Furthermore, we show that by modelling the relations on the dialogue
level, the system is capable of processing relations present in the user input
and even learns to address them in the system response.
| 2,019 | Computation and Language |
A Comparative Study on Vocabulary Reduction for Phrase Table Smoothing | This work systematically analyzes the smoothing effect of vocabulary
reduction for phrase translation models. We extensively compare various
word-level vocabularies to show that the performance of smoothing is not
significantly affected by the choice of vocabulary. This result provides
empirical evidence that the standard phrase translation model is extremely
sparse. Our experiments also reveal that vocabulary reduction is more effective
for smoothing large-scale phrase tables.
| 2,019 | Computation and Language |
Unsupervised Training for Large Vocabulary Translation Using Sparse
Lexicon and Word Classes | We address for the first time unsupervised training for a translation task
with hundreds of thousands of vocabulary words. We scale up the
expectation-maximization (EM) algorithm to learn a large translation table
without any parallel text or seed lexicon. First, we solve the memory
bottleneck and enforce the sparsity with a simple thresholding scheme for the
lexicon. Second, we initialize the lexicon training with word classes, which
efficiently boosts the performance. Our methods produced promising results on
two large-scale unsupervised translation tasks.
| 2,019 | Computation and Language |
Improving Unsupervised Word-by-Word Translation with Language Model and
Denoising Autoencoder | Unsupervised learning of cross-lingual word embedding offers elegant matching
of words across languages, but has fundamental limitations in translating
sentences. In this paper, we propose simple yet effective methods to improve
word-by-word translation of cross-lingual embeddings, using only monolingual
corpora but without any back-translation. We integrate a language model for
context-aware search, and use a novel denoising autoencoder to handle
reordering. Our system surpasses state-of-the-art unsupervised neural
translation systems without costly iterative training. We also analyze the
effect of vocabulary size and denoising type on the translation performance,
which provides better understanding of learning the cross-lingual word
embedding and its usage in translation.
| 2,019 | Computation and Language |
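Conceptually, the context-aware search described above rescores each candidate translation of a source word by combining its cross-lingual embedding similarity with a target-side language-model score of the partial translation. The toy sketch below shows that combination with a hand-rolled bigram "language model"; the candidate lists, similarities, probabilities, and interpolation weight are all illustrative assumptions rather than the paper's actual configuration.

```python
import math

# Hypothetical candidate translations with cross-lingual embedding similarities.
candidates = {"haus": [("house", 0.81), ("home", 0.78), ("building", 0.55)]}

# Tiny stand-in bigram LM: log-probability of a word given the previous target word.
bigram_logprob = {("the", "house"): math.log(0.20),
                  ("the", "home"): math.log(0.05),
                  ("the", "building"): math.log(0.10)}

def translate_word(source_word, prev_target, lm_weight=0.5):
    """Pick the target word maximizing similarity + lm_weight * LM score."""
    best_word, best_score = None, float("-inf")
    for target, sim in candidates[source_word]:
        lm = bigram_logprob.get((prev_target, target), math.log(1e-4))
        score = sim + lm_weight * lm
        if score > best_score:
            best_word, best_score = target, score
    return best_word

print(translate_word("haus", prev_target="the"))  # "house" wins on both terms
```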
Named Entity Recognition in Electronic Health Records Using Transfer
Learning Bootstrapped Neural Networks | Neural networks (NNs) have become the state of the art in many machine
learning applications, especially in image and sound processing [1]. The same,
although to a lesser extent [2,3], could be said in natural language processing
(NLP) tasks, such as named entity recognition. However, the success of NNs
remains dependent on the availability of large labelled datasets, which is a
significant hurdle in many important applications. One such case is electronic
health records (EHRs), which are arguably the largest source of medical data,
most of which lies hidden in natural text [4,5]. Data access is difficult due
to data privacy concerns, and therefore annotated datasets are scarce. With
scarce data, NNs will likely not be able to extract this hidden information
with practical accuracy. In our study, we develop an approach that solves these
problems for named entity recognition, obtaining 94.6 F1 score in I2B2 2009
Medical Extraction Challenge [6], 4.3 above the architecture that won the
competition. Beyond the official I2B2 challenge, we further achieve 82.4 F1 on
extracting relationships between medical terms. To reach this state-of-the-art
accuracy, our approach applies transfer learning to leverage datasets
annotated for other I2B2 tasks, and designs and trains embeddings that
specially benefit from such transfer.
| 2,019 | Computation and Language |
Text Mining Customer Reviews For Aspect-based Restaurant Rating | This study applies text mining to analyze customer reviews and automatically
assign a collective restaurant star rating based on five predetermined aspects:
ambiance, cost, food, hygiene, and service. The application provides a web and
mobile crowdsourcing platform where users share dining experiences and get
insights about the strengths and weaknesses of a restaurant through user
contributed feedback. Text reviews are tokenized into sentences. Noun-adjective
pairs are extracted from each sentence using Stanford Core NLP library and are
associated to aspects based on the bag of associated words fed into the system.
The sentiment weight of the adjectives is determined through AFINN library. An
overall restaurant star rating is computed based on the individual aspect
rating. Further, a word cloud is generated to provide visual display of the
most frequently occurring terms in the reviews. The more feedback is added,
the more reflective the sentiment score becomes of the restaurant's performance.
| 2,018 | Computation and Language |
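The pipeline above relies on Stanford CoreNLP for noun-adjective pairs and the AFINN lexicon for sentiment weights. The sketch below imitates only the aggregation step with a hand-written list of noun-adjective pairs, a tiny made-up sentiment lexicon, and placeholder aspect keyword bags, mapping average sentiment per aspect onto a 1-5 star scale. All lexicon values and keyword lists are assumptions for illustration.

```python
# Tiny stand-ins for the AFINN lexicon and the aspect keyword bags.
SENTIMENT = {"delicious": 3, "tasty": 2, "expensive": -2, "dirty": -3, "friendly": 2}
ASPECTS = {"food": {"food", "pasta", "pizza"},
           "cost": {"price", "bill"},
           "hygiene": {"restroom", "table"},
           "service": {"waiter", "staff"}}

def aspect_ratings(noun_adj_pairs):
    """Average adjective sentiment per aspect, rescaled to a 1-5 star rating."""
    scores = {aspect: [] for aspect in ASPECTS}
    for noun, adj in noun_adj_pairs:
        for aspect, keywords in ASPECTS.items():
            if noun in keywords and adj in SENTIMENT:
                scores[aspect].append(SENTIMENT[adj])
    ratings = {}
    for aspect, vals in scores.items():
        if vals:
            mean = sum(vals) / len(vals)                     # roughly in [-5, 5]
            ratings[aspect] = round(1 + (mean + 5) * 4 / 10, 1)  # map to [1, 5]
    return ratings

pairs = [("pasta", "delicious"), ("bill", "expensive"), ("staff", "friendly")]
print(aspect_ratings(pairs))  # {'food': 4.2, 'cost': 2.2, 'service': 3.8}
```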
Vector representations of text data in deep learning | In this dissertation we report results of our research on dense distributed
representations of text data. We propose two novel neural models for learning
such representations. The first model learns representations at the document
level, while the second model learns word-level representations.
For document-level representations we propose Binary Paragraph Vector: a
neural network model for learning binary representations of text documents,
which can be used for fast document retrieval. We provide a thorough evaluation
of these models and demonstrate that they outperform the seminal method in the
field in the information retrieval task. We also report strong results in
transfer learning settings, where our models are trained on a generic text
corpus and then used to infer codes for documents from a domain-specific
dataset. In contrast to previously proposed approaches, Binary Paragraph Vector
models learn embeddings directly from raw text data.
For word-level representations we propose Disambiguated Skip-gram: a neural
network model for learning multi-sense word embeddings. Representations learned
by this model can be used in downstream tasks, like part-of-speech tagging or
identification of semantic relations. In the word sense induction task
Disambiguated Skip-gram outperforms state-of-the-art models on three out of
four benchmark datasets. Our model has an elegant probabilistic
interpretation. Furthermore, unlike previous models of this kind, it is
differentiable with respect to all its parameters and can be trained with
backpropagation. In addition to quantitative results, we present qualitative
evaluation of Disambiguated Skip-gram, including two-dimensional visualisations
of selected word-sense embeddings.
| 2,019 | Computation and Language |
Interactive Matching Network for Multi-Turn Response Selection in
Retrieval-Based Chatbots | In this paper, we propose an interactive matching network (IMN) for the
multi-turn response selection task. First, IMN constructs word representations
from three aspects to address the challenge of out-of-vocabulary (OOV) words.
Second, an attentive hierarchical recurrent encoder (AHRE), which is capable of
encoding sentences hierarchically and generating more descriptive
representations by aggregating with an attention mechanism, is designed.
Finally, the bidirectional interactions between whole multi-turn contexts and
response candidates are calculated to derive the matching information between
them. Experiments on four public datasets show that IMN outperforms the
baseline models on all metrics, achieving a new state-of-the-art performance
and demonstrating compatibility across domains for multi-turn response
selection.
| 2,019 | Computation and Language |
Stance Classification for Rumour Analysis in Twitter: Exploiting
Affective Information and Conversation Structure | Analysing how people react to rumours associated with news in social media is
an important task to prevent the spreading of misinformation, which is nowadays
widely recognized as a dangerous tendency. In social media conversations, users
show different stances and attitudes towards rumourous stories. Some users take
a definite stance, supporting or denying the rumour at issue, while others just
comment on it or ask for additional evidence related to the veracity of the
rumour. Along this line, a new shared task has been proposed at SemEval-2017 (Task
8, SubTask A), which is focused on rumour stance classification in English
tweets. The goal is to predict user stance towards emerging rumours in Twitter,
in terms of supporting, denying, querying, or commenting the original rumour,
looking at the conversation threads originated by the rumour. This paper
describes a new approach to this task, where the use of conversation-based and
affective-based features, covering different facets of affect, has been
explored. Our classification model outperforms the best-performing systems for
stance classification at SemEval-2017 Task 8, showing the effectiveness of the
feature set proposed.
| 2,019 | Computation and Language |
Team EP at TAC 2018: Automating data extraction in systematic reviews of
environmental agents | We describe our entry for the Systematic Review Information Extraction track
of the 2018 Text Analysis Conference. Our solution is an end-to-end, deep
learning, sequence tagging model based on the BI-LSTM-CRF architecture.
However, we use interleaved, alternating LSTM layers with highway connections
instead of the more traditional approach, where last hidden states of both
directions are concatenated to create an input to the next layer. We also make
extensive use of pre-trained word embeddings, namely GloVe and ELMo. Thanks to
a number of regularization techniques, we were able to achieve a relatively
large model capacity (31.3M+ trainable parameters) for the size of the training
set (100 documents, less than 200K tokens). The system's official score was
60.9% (micro-F1) and it ranked first for Task 1. Additionally, after
rectifying an obvious mistake in the submission format, the system scored
67.35%.
| 2,019 | Computation and Language |
Multi-turn Inference Matching Network for Natural Language Inference | Natural Language Inference (NLI) is a fundamental and challenging task in
Natural Language Processing (NLP). Most existing methods only apply one-pass
inference process on a mixed matching feature, which is a concatenation of
different matching features between a premise and a hypothesis. In this paper,
we propose a new model called Multi-turn Inference Matching Network (MIMN) to
perform multi-turn inference on different matching features. In each turn, the
model focuses on one particular matching feature instead of the mixed matching
feature. To enhance the interaction between different matching features, a
memory component is employed to store the history inference information. The
inference of each turn is performed on the current matching feature and the
memory. We conduct experiments on three different NLI datasets. The
experimental results show that our model outperforms or achieves the
state-of-the-art performance on all three datasets.
| 2,019 | Computation and Language |
DEMN: Distilled-Exposition Enhanced Matching Network for Story
Comprehension | This paper proposes a Distilled-Exposition Enhanced Matching Network (DEMN)
for the story-cloze test, which is still a challenging task in story comprehension.
We divide a complete story into three narrative segments: an
\textit{exposition}, a \textit{climax}, and an \textit{ending}. The model
consists of three modules: input module, matching module, and distillation
module. The input module provides semantic representations for the three
segments and then feeds them into the other two modules. The matching module
collects interaction features between the ending and the climax. The
distillation module distills the crucial semantic information in the exposition
and infuses it into the matching module in two different ways. We evaluate our
single and ensemble model on ROCStories Corpus \cite{Mostafazadeh2016ACA},
achieving an accuracy of 80.1\% and 81.2\% on the test set respectively. The
experimental results demonstrate that our DEMN model achieves a
state-of-the-art performance.
| 2,019 | Computation and Language |
Multi-Perspective Fusion Network for Commonsense Reading Comprehension | Commonsense Reading Comprehension (CRC) is a significantly challenging task,
aiming at choosing the right answer for the question referring to a narrative
passage, which may require commonsense knowledge inference. Most of the
existing approaches only fuse the interaction information of choice, passage,
and question in a simple combination manner from a \emph{union} perspective,
which lacks the comparison information on a deeper level. Instead, we propose a
Multi-Perspective Fusion Network (MPFN), extending the single fusion method
with multiple perspectives by introducing the \emph{difference} and
\emph{similarity} fusion. More
comprehensive and accurate information can be captured through the three types
of fusion. We design several groups of experiments on MCScript dataset
\cite{Ostermann:LREC18:MCScript} to evaluate the effectiveness of the three
types of fusion respectively. From the experimental results, we can conclude
that the difference fusion is comparable with union fusion, and the similarity
fusion needs to be activated by the union fusion. The experimental result also
shows that our MPFN model achieves the state-of-the-art with an accuracy of
83.52\% on the official test set.
| 2,019 | Computation and Language |
Multi-style Generative Reading Comprehension | This study tackles generative reading comprehension (RC), which consists of
answering questions based on textual evidence and natural language generation
(NLG). We propose a multi-style abstractive summarization model for question
answering, called Masque. The proposed model has two key characteristics.
First, unlike most studies on RC that have focused on extracting an answer span
from the provided passages, our model instead focuses on generating a summary
from the question and multiple passages. This serves to cover various answer
styles required for real-world applications. Second, whereas previous studies
built a specific model for each answer style because of the difficulty of
acquiring one general model, our approach learns multi-style answers within a
model to improve the NLG capability for all styles involved. This also enables
our model to give an answer in the target style. Experiments show that our
model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG
task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the
transfer of the style-independent NLG capability to the target style is the key
to its success.
| 2,019 | Computation and Language |
Choosing the Right Word: Using Bidirectional LSTM Tagger for Writing
Support Systems | Scientific writing is difficult. It is even harder for those for whom English
is a second language (ESL learners). Scholars around the world spend a
significant amount of time and resources proofreading their work before
submitting it for review or publication.
In this paper we present a novel machine learning based application for
the proper word choice task. Proper word choice is a generalization of the lexical
substitution (LS) and grammatical error correction (GEC) tasks. We demonstrate
and evaluate the usefulness of applying a bidirectional Long Short-Term Memory
(LSTM) tagger for this task. While state-of-the-art grammatical error
correction uses error-specific classifiers and machine translation methods, we
demonstrate an unsupervised method that is based solely on a high quality text
corpus and does not require manually annotated data. We use a bidirectional
Recurrent Neural Network (RNN) with LSTM for learning the proper word choice
based on a word's sentential context. We demonstrate and evaluate our
application on both a domain-specific (scientific) writing task and a
general-purpose writing task. We show that our domain-specific and
general-purpose models outperform state-of-the-art general context learning. As
an additional contribution of this research, we also share our code,
pre-trained models, and a new ESL learner test set with the research community.
| 2,019 | Computation and Language |
On the Possibilities and Limitations of Multi-hop Reasoning Under
Linguistic Imperfections | Systems for language understanding have become remarkably strong at
overcoming linguistic imperfections in tasks involving phrase matching or
simple reasoning. Yet, their accuracy drops dramatically as the number of
reasoning steps increases. We present the first formal framework to study such
empirical observations. It allows one to quantify the amount and effect of
ambiguity, redundancy, incompleteness, and inaccuracy that the use of language
introduces when representing a hidden conceptual space. The idea is to consider
two interrelated spaces: a conceptual meaning space that is unambiguous and
complete but hidden, and a linguistic space that captures a noisy grounding of
the meaning space in the words of a language---the level at which all systems,
whether neural or symbolic, operate. Applying this framework to a special class
of multi-hop reasoning, namely the connectivity problem in graphs of
relationships between concepts, we derive rigorous intuitions and impossibility
results even under this simplified setting. For instance, if a query requires a
moderately large (logarithmic) number of hops in the meaning graph, no
reasoning system operating over a noisy graph grounded in language is likely to
correctly answer it. This highlights a fundamental barrier that extends to a
broader class of reasoning problems and systems, and suggests an alternative
path forward: focusing on aligning the two spaces via richer representations,
before investing in reasoning with many hops.
| 2,020 | Computation and Language |
Team Papelo: Transformer Networks at FEVER | We develop a system for the FEVER fact extraction and verification challenge
that uses a high precision entailment classifier based on transformer networks
pretrained with language modeling, to classify a broad set of potential
evidence. The precision of the entailment classifier allows us to enhance
recall by considering every statement from several articles to decide upon each
claim. We include not only the articles best matching the claim text by TFIDF
score, but read additional articles whose titles match named entities and
capitalized expressions occurring in the claim text. The entailment module
evaluates potential evidence one statement at a time, together with the title
of the page the evidence came from (providing a hint about possible pronoun
antecedents). In preliminary evaluation, the system achieves .5736 FEVER score,
.6108 label accuracy, and .6485 evidence F1 on the FEVER shared task test set.
| 2,019 | Computation and Language |
Supervised Transfer Learning for Product Information Question Answering | Popular e-commerce websites such as Amazon offer community question answering
systems where users can pose product-related questions and experienced customers
may provide answers voluntarily. In this paper, we show that the large volume
of existing community question answering data can be beneficial when building a
system for answering questions related to product facts and specifications. Our
experimental results demonstrate that the performance of a model for answering
questions related to products listed in the Home Depot website can be improved
by a large margin via a simple transfer learning technique from an existing
large-scale Amazon community question answering dataset. Transfer learning can
result in an increase of about 10% in accuracy in the experimental setting
where we restrict the size of the data of the target task used for training. As
an application of this work, we integrate the best performing model trained in
this work into a mobile-based shopping assistant and show its usefulness.
| 2,019 | Computation and Language |
Computational Register Analysis and Synthesis | The study of register in computational language research has historically
been divided into register analysis, seeking to determine the registerial
character of a text or corpus, and register synthesis, seeking to generate a
text in a desired register. This article surveys the different approaches to
these disparate tasks. Register synthesis has tended to use more theoretically
articulated notions of register and genre than analysis work, which often seeks
to categorize on the basis of intuitive and somewhat incoherent notions of
prelabeled 'text types'. I argue that an integration of computational register
analysis and synthesis will benefit register studies as a whole, by enabling a
new large-scale research program in register studies. It will enable
comprehensive global mapping of functional language varieties in multiple
languages, including the relationships between them. Furthermore, computational
methods together with high coverage systematically collected and analyzed data
will thus enable rigorous empirical validation and refinement of different
theories of register, which will also have implications for our understanding
of linguistic variation in general.
| 2,019 | Computation and Language |
Sequential Attention-based Network for Noetic End-to-End Response
Selection | The noetic end-to-end response selection challenge as one track in Dialog
System Technology Challenges 7 (DSTC7) aims to push the state of the art of
utterance classification for real world goal-oriented dialog systems, for which
participants need to select the correct next utterances from a set of
candidates for the multi-turn context. This paper describes our systems that
are ranked the top on both datasets under this challenge, one focused and small
(Advising) and the other more diverse and large (Ubuntu). Previous
state-of-the-art models use hierarchy-based (utterance-level and token-level)
neural networks to explicitly model the interactions among different turns'
utterances for context modeling. In this paper, we investigate a sequential
matching model based only on chain sequence for multi-turn response selection.
Our results demonstrate that the potentials of sequential matching approaches
have not yet been fully exploited in the past for multi-turn response
selection. In addition to ranking the top in the challenge, the proposed model
outperforms all previous models, including state-of-the-art hierarchy-based
models, and achieves new state-of-the-art performances on two large-scale
public multi-turn response selection benchmark datasets.
| 2,019 | Computation and Language |
What do Language Representations Really Represent? | A neural language model trained on a text corpus can be used to induce
distributed representations of words, such that similar words end up with
similar representations. If the corpus is multilingual, the same model can be
used to learn distributed representations of languages, such that similar
languages end up with similar representations. We show that this holds even
when the multilingual corpus has been translated into English, by picking up
the faint signal left by the source languages. However, just like it is a
thorny problem to separate semantic from syntactic similarity in word
representations, it is not obvious what type of similarity is captured by
language representations. We investigate correlations and causal relationships
between language representations learned from translations on one hand, and
genetic, geographical, and several levels of structural similarity between
languages on the other. Of these, structural similarity is found to correlate
most strongly with language representation similarity, while genetic
relationships---a convenient benchmark used for evaluation in previous
work---appear to be a confounding factor. Apart from implications about
translation effects, we see this more generally as a case where NLP and
linguistic typology can interact and benefit one another.
| 2,019 | Computation and Language |
Is it Time to Swish? Comparing Deep Learning Activation Functions Across
NLP tasks | Activation functions play a crucial role in neural networks because they are
the nonlinearities which have been attributed to the success story of deep
learning. One of the currently most popular activation functions is ReLU, but
several competitors have recently been proposed or 'discovered', including
LReLU functions and swish. While most works compare newly proposed activation
functions on few tasks (usually from image classification) and against few
competitors (usually ReLU), we perform the first large-scale comparison of 21
activation functions across eight different NLP tasks. We find that a largely
unknown activation function performs most stably across all tasks, the
so-called penalized tanh function. We also show that it can successfully
replace the sigmoid and tanh gates in LSTM cells, leading to a 2 percentage
point (pp) improvement over the standard choices on a challenging NLP task.
| 2,019 | Computation and Language |
Sentiment Analysis of Czech Texts: An Algorithmic Survey | In the area of online communication, commerce and transactions, analyzing
sentiment polarity of texts written in various natural languages has become
crucial. While there have been a lot of contributions in resources and studies
for the English language, "smaller" languages like Czech have not received much
attention. In this survey, we explore the effectiveness of many existing
machine learning algorithms for sentiment analysis of Czech Facebook posts and
product reviews. We report the sets of optimal parameter values for each
algorithm and the scores in both datasets. We finally observe that support
vector machines are the best classifier, and that efforts to increase
performance further with bagging, boosting, or voting ensemble schemes fail to do so.
| 2,019 | Computation and Language |
Sentence Rewriting for Semantic Parsing | A major challenge of semantic parsing is the vocabulary mismatch problem
between natural language and target ontology. In this paper, we propose a
sentence rewriting based semantic parsing method, which can effectively resolve
the mismatch problem by rewriting a sentence into a new form which has the same
structure with its target logical form. Specifically, we propose two
sentence-rewriting methods for two common types of mismatch: a dictionary-based
method for 1-N mismatch and a template-based method for N-1 mismatch. We
evaluate our sentence rewriting based semantic parser on the benchmark semantic
parsing dataset -- WEBQUESTIONS. Experimental results show that our system
outperforms the base system with a 3.4% gain in F1, and generates logical forms
more accurately and parses sentences more robustly.
| 2,019 | Computation and Language |
Equalizing Gender Biases in Neural Machine Translation with Word
Embeddings Techniques | Neural machine translation has significantly pushed forward the quality of
the field. However, there are remaining big issues with the output translations
and one of them is fairness. Neural models are trained on large text corpora
which contain biases and stereotypes. As a consequence, models inherit these
social biases. Recent methods have shown results in reducing gender bias in
other natural language processing tools such as word embeddings. We take
advantage of the fact that word embeddings are used in neural machine
translation to propose a method to equalize gender biases in neural machine
translation using these representations. Specifically, we propose, experiment
and analyze the integration of two debiasing techniques over GloVe embeddings
in the Transformer translation architecture. We evaluate our proposed system on
the WMT English-Spanish benchmark task, showing gains up to one BLEU point. As
for the gender bias evaluation, we generate a test set of occupations and we
show that our proposed system learns to equalize existing biases from the
baseline system.
| 2,019 | Computation and Language |
Emotion Detection using Data Driven Models | Text is the major medium of communication nowadays, and large amounts of text
are created every day. In this paper, text data is used for the classification
of emotions. Emotions are the way people express their feelings, and they have
a strong influence on decision-making tasks. Publicly available datasets are
collected and combined based on the three emotions considered here: positive,
negative, and neutral. We use TF-IDF and Keras embeddings as text
representations and feed them to classical machine learning algorithms, of
which Logistic Regression gives the highest accuracy of about 75.6%; the data
is then passed to a deep learning algorithm, a CNN, which gives an accuracy of
about 45.25%. The collected datasets are released for research purposes.
| 2,019 | Computation and Language |
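A minimal sketch of the strongest classical pipeline mentioned above, TF-IDF features fed into logistic regression for three-way emotion classification, follows. The tiny inline samples are placeholders for the publicly released datasets, and the hyperparameters are defaults rather than the paper's settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder samples for the positive / negative / neutral classes.
texts = ["i love this so much", "this is wonderful news",
         "i hate everything today", "this is terrible and sad",
         "the meeting is at noon", "the report has ten pages"] * 30
labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"] * 30

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels)

vectorizer = TfidfVectorizer()            # TF-IDF text representation
clf = LogisticRegression(max_iter=1000)   # classical baseline classifier
clf.fit(vectorizer.fit_transform(X_train), y_train)
pred = clf.predict(vectorizer.transform(X_test))
print("accuracy:", accuracy_score(y_test, pred))
```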
Linguistic Analysis of Pretrained Sentence Encoders with Acceptability
Judgments | Recent work on evaluating grammatical knowledge in pretrained sentence
encoders gives a fine-grained view of a small number of phenomena. We introduce
a new analysis dataset that also has broad coverage of linguistic phenomena. We
annotate the development set of the Corpus of Linguistic Acceptability (CoLA;
Warstadt et al., 2018) for the presence of 13 classes of syntactic phenomena
including various forms of argument alternations, movement, and modification.
We use this analysis set to investigate the grammatical knowledge of three
pretrained encoders: BERT (Devlin et al., 2018), GPT (Radford et al., 2018),
and the BiLSTM baseline from Warstadt et al. We find that these models have a
strong command of complex or non-canonical argument structures like
ditransitives (Sue gave Dan a book) and passives (The book was read). Sentences
with long distance dependencies like questions (What do you think I ate?)
challenge all models, but for these, BERT and GPT have a distinct advantage
over the baseline. We conclude that recent sentence encoders, despite showing
near-human performance on acceptability classification overall, still fail to
make fine-grained grammaticality distinctions for many complex syntactic
structures.
| 2,020 | Computation and Language |
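The per-phenomenon analysis described above can be approximated by grouping model predictions according to the annotated phenomena and scoring each group; the phenomenon names and toy examples below are assumptions for illustration, not the released annotation schema.

```python
from collections import defaultdict
from sklearn.metrics import matthews_corrcoef

# (gold acceptability, model prediction, {phenomenon: present in the sentence?})
# Values are invented for illustration only.
examples = [
    (1, 1, {"ditransitive": True,  "long_distance": False}),
    (0, 0, {"ditransitive": True,  "long_distance": False}),
    (1, 0, {"ditransitive": False, "long_distance": True}),
    (0, 1, {"ditransitive": False, "long_distance": True}),
]

by_phenomenon = defaultdict(lambda: ([], []))
for gold, pred, phenomena in examples:
    for name, present in phenomena.items():
        if present:
            golds, preds = by_phenomenon[name]
            golds.append(gold)
            preds.append(pred)

# Matthews correlation per phenomenon class.
for name, (golds, preds) in by_phenomenon.items():
    print(name, matthews_corrcoef(golds, preds))
```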
From Plots to Endings: A Reinforced Pointer Generator for Story Ending
Generation | We introduce a new task named Story Ending Generation (SEG), which aims at
generating a coherent story ending from a sequence of story plots. We propose a
framework consisting of a Generator and a Reward Manager for this task. The
Generator follows the pointer-generator network with a coverage mechanism to
deal with out-of-vocabulary (OOV) and repetitive words. Moreover, a mixed loss
method is introduced to enable the Generator to produce story endings of high
semantic relevance with story plots. In the Reward Manager, the reward is
computed to fine-tune the Generator with policy-gradient reinforcement
learning (PGRL). We conduct experiments on the recently-introduced ROCStories
Corpus. We evaluate our model in both automatic evaluation and human
evaluation. Experimental results show that our model exceeds the
sequence-to-sequence baseline model by 15.75% and 13.57% in terms of CIDEr and
consistency score respectively.
| 2,018 | Computation and Language |
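The mixed training objective, a maximum-likelihood term plus a coverage penalty, combined with a policy-gradient term weighted by the Reward Manager's score, can be sketched as below; this is a conceptual PyTorch illustration under assumed tensor shapes, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mixed_loss(log_probs, targets, attention, coverage, sampled_log_prob, reward,
               cov_weight=1.0, rl_weight=0.5):
    """Combine likelihood, coverage and policy-gradient terms (conceptual sketch).

    log_probs: (batch, vocab) log-probabilities of the next reference token
    targets:   (batch,) reference token ids
    attention, coverage: (batch, src_len) current attention and accumulated coverage
    sampled_log_prob: (batch,) log-probability of a sampled ending
    reward:    (batch,) score assigned to the sampled ending by the Reward Manager
    """
    nll = F.nll_loss(log_probs, targets)                  # maximum-likelihood term
    cov = torch.sum(torch.minimum(attention, coverage))   # coverage penalty against repetition
    rl = -(reward * sampled_log_prob).mean()              # REINFORCE-style policy-gradient term
    return nll + cov_weight * cov + rl_weight * rl
```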
Dialog System Technology Challenge 7 | This paper introduces the Seventh Dialog System Technology Challenges (DSTC),
which use shared datasets to explore the problem of building dialog systems.
Recently, end-to-end dialog modeling approaches have been applied to various
dialog tasks. The seventh DSTC (DSTC7) focuses on developing technologies
related to end-to-end dialog systems for (1) sentence selection, (2) sentence
generation and (3) audio visual scene aware dialog. This paper summarizes the
overall setup and results of DSTC7, including detailed descriptions of the
different tracks and provided datasets. We also describe overall trends in the
submitted systems and the key results. Each track introduced new datasets and
participants achieved impressive results using state-of-the-art end-to-end
technologies.
| 2,019 | Computation and Language |
Advanced Rich Transcription System for Estonian Speech | This paper describes the current TTÜ speech transcription system for
Estonian speech. The system is designed to handle semi-spontaneous speech, such
as broadcast conversations, lecture recordings and interviews recorded in
diverse acoustic conditions. The system is based on the Kaldi toolkit.
Multi-condition training using background noise profiles extracted
automatically from untranscribed data is used to improve the robustness of the
system. Out-of-vocabulary words are recovered using a phoneme n-gram based
decoding subgraph and a FST-based phoneme-to-grapheme model. The system
achieves a word error rate of 8.1% on a test set of broadcast conversations.
The system also performs punctuation recovery and speaker identification.
Speaker identification models are trained using a recently proposed weakly
supervised training method.
| 2,018 | Computation and Language |
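For reference, the word error rate reported above is the word-level Levenshtein distance between reference and hypothesis divided by the reference length; a small self-contained implementation follows (not part of the described Kaldi system, and the Estonian example sentence is invented).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over words divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution in three reference words -> WER of about 0.33.
print(wer("tere tulemast saatesse", "tere tulemast stuudiosse"))
```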