Titles | Abstracts | Years | Categories
---|---|---|---|
Decomposing Generalization: Models of Generic, Habitual, and Episodic
Statements | We present a novel semantic framework for modeling linguistic expressions of
generalization---generic, habitual, and episodic statements---as combinations
of simple, real-valued referential properties of predicates and their
arguments. We use this framework to construct a dataset covering the entirety
of the Universal Dependencies English Web Treebank. We use this dataset to
probe the efficacy of type-level and token-level information---including
hand-engineered features and static (GloVe) and contextual (ELMo) word
embeddings---for predicting expressions of generalization. Data and code are
available at decomp.io.
| 2019 | Computation and Language |
Exploring the context of recurrent neural network based conversational
agents | Conversational agents have begun to rise both in the academic (in terms of
research) and commercial (in terms of applications) world. This paper
investigates the task of building a non-goal driven conversational agent, using
neural network generative models and analyzes how the conversation context is
handled. It compares a simpler Encoder-Decoder with a Hierarchical Recurrent
Encoder-Decoder architecture, which includes an additional module to model the
context of the conversation using information from previous utterances. We found
that the hierarchical model was able to extract relevant context information
and include it in the generation of the output. However, it performed worse
(35-40%) than the simple Encoder-Decoder model regarding both grammatically
correct output and meaningful responses. Despite these results, experiments
demonstrate how conversations about similar topics appear close to each other
in the context space due to the increased frequency of specific topic-related
words, leaving promising directions for future research on how the
context of a conversation can be exploited.
| 2019 | Computation and Language |
Towards Controlled Transformation of Sentiment in Sentences | An obstacle to the development of many natural language processing products
is the vast amount of training examples necessary to get satisfactory results.
The generation of these examples is often a tedious and time-consuming task.
This paper proposes a method to transform the sentiment of sentences
in order to limit the work necessary to generate more training data. This means
that one sentence can be transformed into a sentence of the opposite sentiment,
which should reduce by half the work required in the generation of text. The proposed
pipeline consists of a sentiment classifier with an attention mechanism to
highlight the short phrases that determine the sentiment of a sentence. Then,
these phrases are changed to phrases of the opposite sentiment using a baseline
model and an autoencoder approach. Experiments are run on both the separate
parts of the pipeline as well as on the end-to-end model. The sentiment
classifier is tested on its accuracy and is found to perform adequately. The
autoencoder is tested on how well it is able to change the sentiment of an
encoded phrase, and it is found that such a task is possible. We use human
evaluation to judge the performance of the full (end-to-end) pipeline, which
reveals that a model using word vectors outperforms the encoder model.
Numerical evaluation shows that a success rate of 54.7% is achieved on the
sentiment change.
| 2019 | Computation and Language |
Multi-Task Deep Neural Networks for Natural Language Understanding | In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for
learning representations across multiple natural language understanding (NLU)
tasks. MT-DNN not only leverages large amounts of cross-task data, but also
benefits from a regularization effect that leads to more general
representations in order to adapt to new tasks and domains. MT-DNN extends the
model proposed in Liu et al. (2015) by incorporating a pre-trained
bidirectional transformer language model, known as BERT (Devlin et al., 2018).
MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI,
SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7%
(2.2% absolute improvement). We also demonstrate using the SNLI and SciTail
datasets that the representations learned by MT-DNN allow domain adaptation
with substantially fewer in-domain labels than the pre-trained BERT
representations. The code and pre-trained models are publicly available at
https://github.com/namisan/mt-dnn.
| 2019 | Computation and Language |
Towards Generating Long and Coherent Text with Multi-Level Latent
Variable Models | Variational autoencoders (VAEs) have received much attention recently as an
end-to-end architecture for text generation with latent variables. In this
paper, we investigate several multi-level structures to learn a VAE model to
generate long and coherent text. In particular, we use a hierarchy of
stochastic layers between the encoder and decoder networks to generate more
informative latent codes. We also investigate a multi-level decoder structure
to learn a coherent long-term structure by generating intermediate sentence
representations as high-level plan vectors. Empirical results demonstrate that
a multi-level VAE model produces more coherent and less repetitive long text
compared to the standard VAE models and can further mitigate the
posterior-collapse issue.
| 2019 | Computation and Language |
DREAM: A Challenge Dataset and Models for Dialogue-Based Reading
Comprehension | We present DREAM, the first dialogue-based multiple-choice reading
comprehension dataset. Collected from English-as-a-foreign-language
examinations designed by human experts to evaluate the comprehension level of
Chinese learners of English, our dataset contains 10,197 multiple-choice
questions for 6,444 dialogues. In contrast to existing reading comprehension
datasets, DREAM is the first to focus on in-depth multi-turn multi-party
dialogue understanding. DREAM is likely to present significant challenges for
existing reading comprehension systems: 84% of answers are non-extractive, 85%
of questions require reasoning beyond a single sentence, and 34% of questions
also involve commonsense knowledge.
We apply several popular neural reading comprehension models that primarily
exploit surface information within the text and find them to, at best, just
barely outperform a rule-based approach. We next investigate the effects of
incorporating dialogue structure and different kinds of general world knowledge
into both rule-based and (neural and non-neural) machine learning-based reading
comprehension models. Experimental results on the DREAM dataset show the
effectiveness of dialogue structure and general world knowledge. DREAM will be
available at https://dataset.org/dream/.
| 2019 | Computation and Language |
Dating Documents using Graph Convolution Networks | Document date is essential for many important tasks, such as document
retrieval, summarization, event detection, etc. While existing approaches for
these tasks assume accurate knowledge of the document date, this is not always
available, especially for arbitrary documents from the Web. Document Dating is
a challenging problem which requires inference over the temporal structure of
the document. Prior document dating systems have largely relied on handcrafted
features while ignoring such document internal structures. In this paper, we
propose NeuralDater, a Graph Convolutional Network (GCN) based document dating
approach which jointly exploits syntactic and temporal graph structures of
the document in a principled way. To the best of our knowledge, this is the first
application of deep learning for the problem of document dating. Through
extensive experiments on real-world datasets, we find that NeuralDater
significantly outperforms the state-of-the-art baseline by 19% absolute (45%
relative) accuracy points.
| 2018 | Computation and Language |
A Simple Regularization-based Algorithm for Learning Cross-Domain Word
Embeddings | Learning word embeddings has received a significant amount of attention
recently. Often, word embeddings are learned in an unsupervised manner from a
large collection of text. The genre of the text typically plays an important
role in the effectiveness of the resulting embeddings. How to effectively train
word embedding models using data from different domains remains a problem that
is underexplored. In this paper, we present a simple yet effective method for
learning word embeddings based on text from different domains. We demonstrate
the effectiveness of our approach through extensive experiments on various
down-stream NLP tasks.
| 2017 | Computation and Language |
Massively Multilingual Transfer for NER | In cross-lingual transfer, NLP models over one or more source languages are
applied to a low-resource target language. While most prior work has used a
single source model or a few carefully selected models, here we consider a
`massive' setting with many such models. This setting raises the problem of
poor transfer, particularly from distant languages. We propose two techniques
for modulating the transfer, suitable for zero-shot or few-shot learning,
respectively. Evaluating on named entity recognition, we show that our
techniques are much more effective than strong baselines, including standard
ensembling, and our unsupervised method rivals oracle selection of the single
best individual model.
| 2019 | Computation and Language |
Joint Entity Linking with Deep Reinforcement Learning | Entity linking is the task of aligning mentions to corresponding entities in
a given knowledge base. Previous studies have highlighted the necessity for
entity linking systems to capture the global coherence. However, there are two
common weaknesses in previous global models. First, most of them calculate the
pairwise scores between all candidate entities and select the most relevant
group of entities as the final result. In this process, the consistency among
wrong entities as well as that among right ones is involved, which may
introduce noisy data and increase the model complexity. Second, the cues of
previously disambiguated entities, which could contribute to the disambiguation
of the subsequent mentions, are usually ignored by previous models. To address
these problems, we convert the global linking into a sequence decision problem
and propose a reinforcement learning model which makes decisions from a global
perspective. Our model makes full use of the previously referred entities and
explores the long-term influence of current selection on subsequent decisions.
We conduct experiments on different types of datasets; the results show that
our model outperforms state-of-the-art systems and has better generalization
performance.
| 2019 | Computation and Language |
tax2vec: Constructing Interpretable Features from Taxonomies for Short
Text Classification | The use of background knowledge is largely unexploited in text classification
tasks. This paper explores word taxonomies as means for constructing new
semantic features, which may improve the performance and robustness of the
learned classifiers. We propose tax2vec, a parallel algorithm for constructing
taxonomy-based features, and demonstrate its use on six short text
classification problems: prediction of gender, personality type, age, news
topics, drug side effects and drug effectiveness. The constructed semantic
features, in combination with fast linear classifiers, achieve classification
results on short text documents comparable to strong baselines such as
hierarchical attention neural networks. The algorithm's performance is
also tested in a few-shot learning setting, indicating that the inclusion of
semantic features can improve the performance in data-scarce situations. The
tax2vec capability to extract corpus-specific semantic keywords is also
demonstrated. Finally, we investigate the semantic space of potential features,
where we observe a similarity with the well known Zipf's law.
| 2020 | Computation and Language |
Human acceptability judgements for extractive sentence compression | Recent approaches to English-language sentence compression rely on parallel
corpora consisting of sentence-compression pairs. However, a sentence may be
shortened in many different ways, which each might be suited to the needs of a
particular application. Therefore, in this work, we collect and model
crowdsourced judgements of the acceptability of many possible sentence
shortenings. We then show how a model of such judgements can be used to support
a flexible approach to the compression task. We release our model and dataset
for future work.
| 2019 | Computation and Language |
Examining the Presence of Gender Bias in Customer Reviews Using Word
Embedding | Humans have entered the age of algorithms. Each minute, algorithms shape
countless preferences from suggesting a product to a potential life partner. In
the marketplace algorithms are trained to learn consumer preferences from
customer reviews because user-generated reviews are considered the voice of
customers and a valuable source of information to firms. Insights mined from
reviews play an indispensable role in several business activities, including
product recommendation, targeted advertising, promotions, and segmentation. In
this research, we question whether reviews might hold stereotypic gender bias
that algorithms learn and propagate. Utilizing data from millions of
observations and a word embedding approach, GloVe, we show that algorithms
designed to learn from human language output also learn gender bias. We also
examine why such biases occur: whether the bias is caused because of a negative
bias against females or a positive bias for males. We examine the impact of
gender bias in reviews on choice and conclude with policy implications for
female consumers, especially when they are unaware of the bias, and the ethical
implications for firms.
| 2019 | Computation and Language |
How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong
Baselines, Comparative Analyses, and Some Misconceptions | Cross-lingual word embeddings (CLEs) enable multilingual modeling of meaning
and facilitate cross-lingual transfer of NLP models. Despite their ubiquitous
usage in downstream tasks, recent increasingly popular projection-based CLE
models are almost exclusively evaluated on a single task only: bilingual
lexicon induction (BLI). Even BLI evaluations vary greatly, hindering our
ability to correctly interpret performance and properties of different CLE
models. In this work, we make the first step towards a comprehensive evaluation
of cross-lingual word embeddings. We thoroughly evaluate both supervised and
unsupervised CLE models on a large number of language pairs in the BLI task and
three downstream tasks, providing new insights concerning the ability of
cutting-edge CLE models to support cross-lingual NLP. We empirically
demonstrate that the performance of CLE models largely depends on the task at
hand and that optimizing CLE models for BLI can result in deteriorated
downstream performance. We indicate the most robust supervised and unsupervised
CLE models and emphasize the need to reassess existing baselines, which still
display competitive performance across the board. We hope that our work will
catalyze further work on CLE evaluation and model analysis.
| 2019 | Computation and Language |
Deconstructing Word Embeddings | A review of Word Embedding Models through a deconstructive approach reveals
several of their shortcomings and inconsistencies. These include instability of
the vector representations, distorted analogical reasoning, geometric
incompatibility with linguistic features, and inconsistencies in the corpus
data. A new theoretical embedding model, Derridian Embedding, is proposed in
this paper. Contemporary embedding models are evaluated qualitatively in terms
of how adequate they are in relation to the capabilities of a Derridian
Embedding.
| 2019 | Computation and Language |
Riconoscimento ortografico per apostrofo ed espressioni polirematiche | The work presents two algorithms for manipulating and comparing strings,
whose purpose is the orthographic recognition of the apostrophe and of
compound expressions. The theory supporting the general reasoning rests on the
basic concept of EditDistance; the improvements that ensure the objective is
achieved are obtained with the aid of tools borrowed from
techniques for processing large amounts of data on distributed platforms.
| 2019 | Computation and Language |
Character-based Surprisal as a Model of Reading Difficulty in the
Presence of Error | Intuitively, human readers cope easily with errors in text; typos,
misspelling, word substitutions, etc. do not unduly disrupt natural reading.
Previous work indicates that letter transpositions result in increased reading
times, but it is unclear if this effect generalizes to more natural errors. In
this paper, we report an eye-tracking study that compares two error types
(letter transpositions and naturally occurring misspelling) and two error rates
(10% or 50% of all words contain errors). We find that human readers show
unimpaired comprehension in spite of these errors, but error words cause more
reading difficulty than correct words. Also, transpositions are more difficult
than misspellings, and a high error rate increases difficulty for all words,
including correct ones. We then present a computational model that uses
character-based (rather than traditional word-based) surprisal to account for
these results. The model explains that transpositions are harder than
misspellings because they contain unexpected letter combinations. It also
explains the error rate effect: upcoming words are more difficult to predict
when the context is degraded, leading to increased surprisal.
| 2019 | Computation and Language |
Query-oriented text summarization based on hypergraph transversals | Existing graph- and hypergraph-based algorithms for document summarization
represent the sentences of a corpus as the nodes of a graph or a hypergraph in
which the edges represent relationships of lexical similarities between
sentences. Each sentence of the corpus is then scored individually, using
popular node ranking algorithms, and a summary is produced by extracting highly
scored sentences. This approach fails to select a subset of jointly relevant
sentences and it may produce redundant summaries that are missing important
topics of the corpus. To alleviate this issue, a new hypergraph-based
summarizer is proposed in this paper, in which each node is a sentence and each
hyperedge is a theme, namely a group of sentences sharing a topic. Themes are
weighted in terms of their prominence in the corpus and their relevance to a
user-defined query. It is further shown that the problem of identifying a
subset of sentences covering the relevant themes of the corpus is equivalent to
that of finding a hypergraph transversal in our theme-based hypergraph. Two
extensions of the notion of hypergraph transversal are proposed for the purpose
of summarization, and polynomial time algorithms building on the theory of
submodular functions are proposed for solving the associated discrete
optimization problems. The worst-case time complexity of the proposed
algorithms is quadratic in the number of terms, which makes them cheaper than the
existing hypergraph-based methods. A thorough comparative analysis with related
models on DUC benchmark datasets demonstrates the effectiveness of our
approach, which outperforms existing graph- or hypergraph-based methods by at
least 6% in ROUGE-SU4 score.
| 2019 | Computation and Language |
Natural Language Processing, Sentiment Analysis and Clinical Analytics | Recent advances in Big Data have prompted health care practitioners to utilize
the data available on social media to discern expressions of sentiment and
emotion. Health Informatics and Clinical Analytics depend heavily on
information gathered from diverse sources. Traditionally, a healthcare
practitioner will ask a patient to fill out a questionnaire that will form the
basis of diagnosing the medical condition. However, medical practitioners have
access to many sources of data, including the patients' writings on various
media. Natural Language Processing (NLP) allows researchers to gather such data
and analyze it to glean the underlying meaning of such writings. The field of
sentiment analysis (applied to many other domains) depends heavily on techniques
utilized by NLP. This work will look into various prevalent theories underlying
the NLP field and how they can be leveraged to gather users' sentiments on
social media. Such sentiments can be culled over a period of time thus
minimizing the errors introduced by data input and other stressors.
Furthermore, we look at some applications of sentiment analysis and application
of NLP to mental health. The reader will also learn about the NLTK toolkit that
implements various NLP theories and how they can make the data scavenging
process a lot easier.
| 2019 | Computation and Language |
Making a Case for Social Media Corpus for Detecting Depression | The social media platform provides an opportunity to gain valuable insights
into user behaviour. Users mimic their internal feelings and emotions in a
disinhibited fashion using natural language. Techniques in Natural Language
Processing have helped researchers decipher standard documents and cull
together inferences from massive amounts of data. A representative corpus is a
prerequisite for NLP and one of the challenges we face today is the
non-standard and noisy language that exists on the internet. Our work focuses
on building a corpus from social media that is focused on detecting mental
illness. We use depression as a case study and demonstrate the effectiveness of
using such a corpus for helping practitioners detect such cases. Our results
show a high correlation between our Social Media Corpus and the standard corpus
for depression.
| 2019 | Computation and Language |
How to Write High-quality News on Social Network? Predicting News
Quality by Mining Writing Style | The rapid development of Internet technologies has prompted traditional
newspapers to report news on social networks. However, people on social
networks may have different needs, which naturally raises the question: can we
automatically analyze the influence of writing style on news quality and assist
writers in improving it? This is challenging because writing style and
'quality' are hard to measure. First, we use 'popularity' as the measure of
'quality'. It is natural on social networks but brings new problems: popularity
is also influenced by the event and the publisher, so we design two methods to
alleviate their influence. Then, we propose eight types of linguistic features
(53 features in all) according to eight writing guidelines and analyze their
relationship with news quality. The experimental results show that these
linguistic features greatly influence news quality. Based on this, we design a
news quality assessment model for social networks (SNQAM). SNQAM performs
excellently at predicting quality, presenting interpretable quality scores, and
giving accessible suggestions on how to improve quality according to the
writing guidelines we referred to.
| 2021 | Computation and Language |
Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey | This work investigates the role of factors like training method, training
corpus size and thematic relevance of texts in the performance of word
embedding features on sentiment analysis of tweets, song lyrics, movie reviews
and item reviews. We also explore specific training or post-processing methods
that can be used to enhance the performance of word embeddings in certain tasks
or domains. Our empirical observations indicate that models trained with
multithematic texts that are large and rich in vocabulary are the best in
answering syntactic and semantic word analogy questions. We further observe
that influence of thematic relevance is stronger on movie and phone reviews,
but weaker on tweets and lyrics. These two latter domains are more sensitive to
corpus size and training method, with GloVe outperforming Word2vec. "Injecting"
extra intelligence from lexicons or generating sentiment specific word
embeddings are two prominent alternatives for increasing performance of word
embedding features.
| 2019 | Computation and Language |
Graph Neural Networks with Generated Parameters for Relation Extraction | Recently, progress has been made towards improving relational reasoning in
the machine learning field. Among existing models, graph neural networks (GNNs)
are one of the most effective approaches for multi-hop relational reasoning. In
fact, multi-hop relational reasoning is indispensable in many natural language
processing tasks such as relation extraction. In this paper, we propose to
generate the parameters of graph neural networks (GP-GNNs) according to natural
language sentences, which enables GNNs to perform relational reasoning on
unstructured text inputs. We verify GP-GNNs in relation extraction from text.
Experimental results on a human-annotated dataset and two distantly supervised
datasets show that our model achieves significant improvements compared to
baselines. We also perform a qualitative analysis to demonstrate that our model
could discover more accurate relations by multi-hop relational reasoning.
| 2019 | Computation and Language |
Review Conversational Reading Comprehension | Inspired by conversational reading comprehension (CRC), this paper studies a
novel task of leveraging reviews as a source to build an agent that can answer
multi-turn questions from potential consumers of online businesses. We first
build a review CRC dataset and then propose a novel task-aware pre-tuning step
running between language model (e.g., BERT) pre-training and domain-specific
fine-tuning. The proposed pre-tuning requires no data annotation, but can
greatly enhance the performance on our end task. Experimental results show that
the proposed approach is highly effective and achieves performance competitive
with the supervised approach. The dataset is available at
\url{https://github.com/howardhsu/RCRC}
| 2019 | Computation and Language |
Neural Extractive Text Summarization with Syntactic Compression | Recent neural network approaches to summarization are largely either
selection-based extraction or generation-based abstraction. In this work, we
present a neural model for single-document summarization based on joint
extraction and syntactic compression. Our model chooses sentences from the
document, identifies possible compressions based on constituency parses, and
scores those compressions with a neural model to produce the final summary. For
learning, we construct oracle extractive-compressive summaries, then learn both
of our components jointly with this supervision. Experimental results on the
CNN/Daily Mail and New York Times datasets show that our model achieves strong
performance (comparable to state-of-the-art systems) as evaluated by ROUGE.
Moreover, our approach outperforms an off-the-shelf compression module, and
human and manual evaluation shows that our model's output generally remains
grammatical.
| 2019 | Computation and Language |
Inferring Concept Hierarchies from Text Corpora via Hyperbolic
Embeddings | We consider the task of inferring is-a relationships from large text corpora.
For this purpose, we propose a new method combining hyperbolic embeddings and
Hearst patterns. This approach allows us to set appropriate constraints for
inferring concept hierarchies from distributional contexts while also being
able to predict missing is-a relationships and to correct wrong extractions.
Moreover -- and in contrast with other methods -- the hierarchical nature of
hyperbolic space allows us to learn highly efficient representations and to
improve the taxonomic consistency of the inferred hierarchies. Experimentally,
we show that our approach achieves state-of-the-art performance on several
commonly-used benchmarks.
| 2019 | Computation and Language |
Universal Lemmatizer: A Sequence to Sequence Model for Lemmatizing
Universal Dependencies Treebanks | In this paper we present a novel lemmatization method based on a
sequence-to-sequence neural network architecture and morphosyntactic context
representation. In the proposed method, our context-sensitive lemmatizer
generates the lemma one character at a time based on the surface form
characters and its morphosyntactic features obtained from a morphological
tagger. We argue that a sliding window context representation suffers from
sparseness, while in the majority of cases the morphosyntactic features of a word
bring enough information to resolve lemma ambiguities while keeping the context
representation dense and more practical for machine learning systems.
Additionally, we study two different data augmentation methods utilizing
autoencoder training and morphological transducers, which are especially
beneficial for low-resource languages. We evaluate our lemmatizer on 52
different languages and 76 different treebanks, showing that our system outperforms all the latest
baseline systems. Compared to the best overall baseline, UDPipe Future, our
system outperforms it on 62 out of 76 treebanks reducing errors on average by
19% relative. The lemmatizer together with all trained models is made available
as a part of the Turku-neural-parsing-pipeline under the Apache 2.0 license.
| 2020 | Computation and Language |
Improving Question Answering with External Knowledge | We focus on multiple-choice question answering (QA) tasks in subject areas
such as science, where we require both broad background knowledge and the facts
from the given subject-area reference corpus. In this work, we explore simple
yet effective methods for exploiting two sources of external knowledge for
subject-area QA. The first enriches the original subject-area reference corpus
with relevant text snippets extracted from an open-domain resource (i.e.,
Wikipedia) that cover potentially ambiguous concepts in the question and answer
options. As in other QA research, the second method simply increases the amount
of training data by appending additional in-domain subject-area instances.
Experiments on three challenging multiple-choice science QA tasks (i.e.,
ARC-Easy, ARC-Challenge, and OpenBookQA) demonstrate the effectiveness of our
methods: in comparison to the previous state-of-the-art, we obtain absolute
gains in accuracy of up to 8.1%, 13.0%, and 12.8%, respectively. While we
observe consistent gains when we introduce knowledge from Wikipedia, we find
that employing additional QA training instances is not uniformly helpful:
performance degrades when the added instances exhibit a higher level of
difficulty than the original training data. As one of the first studies on
exploiting unstructured external knowledge for subject-area QA, we hope our
methods, observations, and discussion of the exposed limitations may shed light
on further developments in the area.
| 2019 | Computation and Language |
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural
Language Inference | A machine learning system can score well on a given test set by relying on
heuristics that are effective for frequent example types but break down in more
challenging cases. We study this issue within natural language inference (NLI),
the task of determining whether one sentence entails another. We hypothesize
that statistical NLI models may adopt three fallible syntactic heuristics: the
lexical overlap heuristic, the subsequence heuristic, and the constituent
heuristic. To determine whether models have adopted these heuristics, we
introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI
Systems), which contains many examples where the heuristics fail. We find that
models trained on MNLI, including BERT, a state-of-the-art model, perform very
poorly on HANS, suggesting that they have indeed adopted these heuristics. We
conclude that there is substantial room for improvement in NLI systems, and
that the HANS dataset can motivate and measure progress in this area.
| 2019 | Computation and Language |
Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers | Most approaches to extracting multiple relations from a paragraph require
multiple passes over the paragraph. In practice, multiple passes are
computationally expensive, and this makes it difficult to scale to longer
paragraphs and larger text corpora. In this work, we focus on the task of
multiple relation extraction by encoding the paragraph only once (one-pass). We
build our solution on the pre-trained self-attentive (Transformer) models,
where we first add a structured prediction layer to handle extraction between
multiple entity pairs, then enhance the paragraph embedding to capture multiple
relational information associated with each entity with an entity-aware
attention technique. We show that our approach is not only scalable but can
also achieve state-of-the-art performance on the standard benchmark ACE 2005.
| 2019 | Computation and Language |
A Comprehensive Exploration on WikiSQL with Table-Aware Word
Contextualization | We present SQLova, the first Natural-language-to-SQL (NL2SQL) model to
achieve human performance on the WikiSQL dataset. We revisit and discuss diverse
popular methods in the NL2SQL literature, take full advantage of BERT (Devlin et
al., 2018) through an effective table contextualization method, and coherently
combine them, outperforming the previous state of the art by 8.2% and 2.5% in
logical form and execution accuracy, respectively. We particularly note that
BERT with a seq2seq decoder leads to a poor performance in the task, indicating
the importance of a careful design when using such large pretrained models. We
also provide a comprehensive analysis on the dataset and our model, which can
be helpful for designing future NL2SQL datasets and models. We especially show
that our model's performance is near the upper bound in WikiSQL, where we
observe that a large portion of the evaluation errors are due to wrong
annotations, and our model is already exceeding human performance by 1.3% in
execution accuracy.
| 2019 | Computation and Language |
Strategies for Structuring Story Generation | Writers generally rely on plans or sketches to write long stories, but most
current language models generate word by word from left to right. We explore
coarse-to-fine models for creating narrative texts of several hundred words,
and introduce new models which decompose stories by abstracting over actions
and entities. The model first generates the predicate-argument structure of the
text, where different mentions of the same entity are marked with placeholder
tokens. It then generates a surface realization of the predicate-argument
structure, and finally replaces the entity placeholders with context-sensitive
names and references. Human judges prefer the stories from our models to a wide
range of previous approaches to hierarchical text generation. Extensive
analysis shows that our methods can help improve the diversity and coherence of
events and entities in generated stories.
| 2019 | Computation and Language |
Unsupervised Clinical Language Translation | As patients' access to their doctors' clinical notes becomes common,
translating professional, clinical jargon to layperson-understandable language
is essential to improve patient-clinician communication. Such translation
yields better clinical outcomes by enhancing patients' understanding of their
own health conditions, and thus improving patients' involvement in their own
care. Existing research has used dictionary-based word replacement or
definition insertion to approach the need. However, these methods are limited
by expert curation, which is hard to scale and has trouble generalizing to
unseen datasets that do not share an overlapping vocabulary. In contrast, we
approach the clinical word and sentence translation problem in a completely
unsupervised manner. We show that a framework using representation learning,
bilingual dictionary induction and statistical machine translation yields the
best precision at 10 of 0.827 on professional-to-consumer word translation, and
mean opinion scores of 4.10 and 4.28 out of 5 for clinical correctness and
layperson readability, respectively, on sentence translation. Our
fully-unsupervised strategy overcomes the curation problem, and the clinically
meaningful evaluation reduces biases from inappropriate evaluators, which are
critical in clinical machine learning.
| 2019 | Computation and Language |
An Effective Approach to Unsupervised Machine Translation | While machine translation has traditionally relied on large amounts of
parallel corpora, a recent research line has managed to train both Neural
Machine Translation (NMT) and Statistical Machine Translation (SMT) systems
using monolingual corpora only. In this paper, we identify and address several
deficiencies of existing unsupervised SMT approaches by exploiting subword
information, developing a theoretically well founded unsupervised tuning
method, and incorporating a joint refinement procedure. Moreover, we use our
improved SMT system to initialize a dual NMT model, which is further fine-tuned
through on-the-fly back-translation. Together, we obtain large improvements
over the previous state-of-the-art in unsupervised machine translation. For
instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points
more than the previous best unsupervised system, and 0.5 points more than the
(supervised) shared task winner back in 2014.
| 2021 | Computation and Language |
An Argument-Marker Model for Syntax-Agnostic Proto-Role Labeling | Semantic proto-role labeling (SPRL) is an alternative to semantic role
labeling (SRL) that moves beyond a categorical definition of roles, following
Dowty's feature-based view of proto-roles. This theory determines agenthood vs.
patienthood based on a participant's instantiation of more or less typical
agent vs. patient properties, such as, for example, volition in an event. To
perform SPRL, we develop an ensemble of hierarchical models with self-attention
and concurrently learned predicate-argument-markers. Our method is competitive
with the state-of-the-art, overall outperforming previous work in two
formulations of the task (multi-label and multi-variate Likert scale
prediction). In contrast to previous work, our results do not depend on gold
argument heads derived from supplementary gold tree banks.
| 2019 | Computation and Language |
Insertion-based Decoding with automatically Inferred Generation Order | Conventional neural autoregressive decoding commonly assumes a fixed
left-to-right generation order, which may be sub-optimal. In this work, we
propose a novel decoding algorithm -- InDIGO -- which supports flexible
sequence generation in arbitrary orders through insertion operations. We extend
Transformer, a state-of-the-art sequence generation model, to efficiently
implement the proposed approach, enabling it to be trained with either a
pre-defined generation order or adaptive orders obtained from beam-search.
Experiments on four real-world tasks, including word order recovery, machine
translation, image captioning and code generation, demonstrate that our algorithm
can generate sequences following arbitrary orders, while achieving competitive
or even better performance compared to the conventional left-to-right
generation. The generated sequences show that InDIGO adopts adaptive generation
orders based on input information.
| 2019 | Computation and Language |
The FLoRes Evaluation Datasets for Low-Resource Machine Translation:
Nepali-English and Sinhala-English | For machine translation, a vast majority of language pairs in the world are
considered low-resource because they have little parallel data available.
Besides the technical challenges of learning with limited supervision, it is
difficult to evaluate methods trained on low-resource language pairs because of
the lack of freely and publicly available benchmarks. In this work, we
introduce the FLoRes evaluation datasets for Nepali-English and
Sinhala-English, based on sentences translated from Wikipedia. Compared to
English, these are languages with very different morphology and syntax, for
which little out-of-domain parallel data is available and for which relatively
large amounts of monolingual data are freely available. We describe our process
to collect and cross-check the quality of translations, and we report baseline
performance using several learning settings: fully supervised, weakly
supervised, semi-supervised, and fully unsupervised. Our experiments
demonstrate that current state-of-the-art methods perform rather poorly on this
benchmark, posing a challenge to the research community working on low-resource
MT. Data and code to reproduce our experiments are available at
https://github.com/facebookresearch/flores.
| 2019 | Computation and Language |
Fine-Grained Temporal Relation Extraction | We present a novel semantic framework for modeling temporal relations and
event durations that maps pairs of events to real-valued scales. We use this
framework to construct the largest temporal relations dataset to date, covering
the entirety of the Universal Dependencies English Web Treebank. We use this
dataset to train models for jointly predicting fine-grained temporal relations
and event durations. We report strong results on our data and show the efficacy
of a transfer-learning approach for predicting categorical relations.
| 2019 | Computation and Language |
Training on Synthetic Noise Improves Robustness to Natural Noise in
Machine Translation | We consider the problem of making machine translation more robust to
character-level variation at the source side, such as typos. Existing methods
achieve greater coverage by applying subword models such as byte-pair encoding
(BPE) and character-level encoders, but these methods are highly sensitive to
spelling mistakes. We show how training on a mild amount of random synthetic
noise can dramatically improve robustness to these variations, without
diminishing performance on clean text. We focus on translation performance on
natural noise, as captured by frequent corrections in Wikipedia edit logs, and
show that robustness to such noise can be achieved using a balanced diet of
simple synthetic noises at training time, without access to the natural noise
data or distribution.
| 2019 | Computation and Language |
An Ensemble Dialogue System for Facts-Based Sentence Generation | This study aims to generate responses based on real-world facts by
conditioning context and external facts extracted from information websites.
Our system is an ensemble system that combines three modules: a generation-based
module, a retrieval-based module, and a reranking module. Therefore, this system
can return diverse and meaningful responses from various perspectives. The
experiments and evaluations are conducted with the sentence generation task in
Dialog System Technology Challenges 7 (DSTC7-Task2). As a result, the proposed
system performed significantly better than the individual modules and performed
well in DSTC7-Task2, specifically on the objective evaluation.
| 2019 | Computation and Language |
The Referential Reader: A Recurrent Entity Network for Anaphora
Resolution | We present a new architecture for storing and accessing entity mentions
during online text processing. While reading the text, entity references are
identified, and may be stored by either updating or overwriting a cell in a
fixed-length memory. The update operation implies coreference with the other
mentions that are stored in the same cell; the overwrite operation causes these
mentions to be forgotten. By encoding the memory operations as differentiable
gates, it is possible to train the model end-to-end, using both a supervised
anaphora resolution objective as well as a supplementary language modeling
objective. Evaluation on a dataset of pronoun-name anaphora demonstrates strong
performance with purely incremental text processing.
| 2019 | Computation and Language |
Restructuring Conversations using Discourse Relations for Zero-shot
Abstractive Dialogue Summarization | Dialogue summarization is a challenging problem due to the informal and
unstructured nature of conversational data. Recent advances in abstractive
summarization have been focused on data-hungry neural models and adapting these
models to a new domain requires the availability of domain-specific manually
annotated corpus created by linguistic experts. We propose a zero-shot
abstractive dialogue summarization method that uses discourse relations to
provide structure to conversations, and then uses an out-of-the-box document
summarization model to create final summaries. Experiments on the AMI and ICSI
meeting corpora, with document summarization models like PGN and BART, show
that our method improves the ROUGE score by up to 3 points, and even performs
competitively against other state-of-the-art methods.
| 2020 | Computation and Language |
End-to-End Open-Domain Question Answering with BERTserini | We demonstrate an end-to-end question answering system that integrates BERT
with the open-source Anserini information retrieval toolkit. In contrast to
most question answering and reading comprehension models today, which operate
over small amounts of input text, our system integrates best practices from IR
with a BERT-based reader to identify answers from a large corpus of Wikipedia
articles in an end-to-end fashion. We report large improvements over previous
results on a standard benchmark test collection, showing that fine-tuning
pretrained BERT with SQuAD is sufficient to achieve high accuracy in
identifying answer spans.
| 2019 | Computation and Language |
On the Choice of Modeling Unit for Sequence-to-Sequence Speech
Recognition | In conventional speech recognition, phoneme-based models outperform
grapheme-based models for non-phonetic languages such as English. The
performance gap between the two typically reduces as the amount of training
data is increased. In this work, we examine the impact of the choice of
modeling unit for attention-based encoder-decoder models. We conduct
experiments on the LibriSpeech 100hr, 460hr, and 960hr tasks, using various
target units (phoneme, grapheme, and word-piece); across all tasks, we find
that grapheme or word-piece models consistently outperform phoneme-based
models, even though they are evaluated without a lexicon or an external
language model. We also investigate model complementarity: we find that we can
improve WERs by up to 9% relative by rescoring N-best lists generated from a
strong word-piece based baseline with either the phoneme or the grapheme model.
Rescoring an N-best list generated by the phonemic system, however, provides
limited improvements. Further analysis shows that the word-piece-based models
produce more diverse N-best hypotheses, and thus lower oracle WERs, than
phonemic models.
| 2019 | Computation and Language |
Word Embeddings for Entity-annotated Texts | Learned vector representations of words are useful tools for many information
retrieval and natural language processing tasks due to their ability to capture
lexical semantics. However, while many such tasks involve or even rely on named
entities as central components, popular word embedding models have so far
failed to include entities as first-class citizens. While it seems intuitive
that annotating named entities in the training corpus should result in more
intelligent word features for downstream tasks, performance issues arise when
popular embedding approaches are naively applied to entity annotated corpora.
Not only are the resulting entity embeddings less useful than expected, but one
also finds that the performance of the non-entity word embeddings degrades in
comparison to those trained on the raw, unannotated corpus. In this paper, we
investigate approaches to jointly train word and entity embeddings on a large
corpus with automatically annotated and linked entities. We discuss two
distinct approaches to the generation of such embeddings, namely the training
of state-of-the-art embeddings on raw-text and annotated versions of the
corpus, as well as node embeddings of a co-occurrence graph representation of
the annotated corpus. We compare the performance of annotated embeddings and
classical word embeddings on a variety of word similarity, analogy, and
clustering evaluation tasks, and investigate their performance in
entity-specific tasks. Our findings show that it takes more than training
popular word embedding models on an annotated corpus to create entity
embeddings with acceptable performance on common test cases. Based on these
results, we discuss how and when node embeddings of the co-occurrence graph
representation of the text can restore the performance.
| 2020 | Computation and Language |
Squared English Word: A Method of Generating Glyph to Use Super
Characters for Sentiment Analysis | The Super Characters method addresses sentiment analysis problems by first
converting the input text into images and then applying 2D-CNN models to
classify the sentiment. It achieves state of the art performance on many
benchmark datasets. However, it is not as straightforward to apply in Latin
languages as in Asian languages. Because the 2D-CNN model is designed to
recognize two-dimensional images, it is better if the inputs are in the form of
glyphs. In this paper, we propose the SEW (Squared English Word) method, which
generates a squared glyph for each English word by drawing Super Characters
images of each word at the alphabet level, combines the squared glyphs
into a whole Super Characters image at the sentence level, and then applies
the CNN model to classify the sentiment within the sentence. We applied the SEW
method to the Wikipedia dataset and obtained a 2.1% accuracy gain compared to the
original Super Characters method. For multi-modal data with both structured
tabular data and unstructured natural language text, the modified SEW method
integrates the data into a single image and classifies sentiment with one
unified CNN model.
| 2019 | Computation and Language |
AD3: Attentive Deep Document Dater | Knowledge of the creation date of documents facilitates several tasks such as
summarization, event extraction, temporally focused information extraction etc.
Unfortunately, for most of the documents on the Web, the time-stamp metadata is
either missing or can't be trusted. Thus, predicting creation time from
document content itself is an important task. In this paper, we propose
Attentive Deep Document Dater (AD3), an attention-based neural document dating
system which utilizes both context and temporal information in documents in a
flexible and principled manner. We perform extensive experimentation on
multiple real-world datasets to demonstrate the effectiveness of AD3 over
neural and non-neural baselines.
| 2018 | Computation and Language |
Adaptive Artificial Intelligent Q&A Platform | The paper presents an approach to build a question and answer system that is
capable of processing the information in a large dataset and allows the user to
gain knowledge from this dataset by asking questions in natural language form.
The key content of this research covers four dimensions: Corpus
Preprocessing, Question Preprocessing, Deep Neural Network for Answer
Extraction, and Answer Generation. The system is capable of understanding the
question and responds to the user's query in natural language form as well. The
goal is to make the user feel as if they were interacting with a person rather
than a machine.
| 2019 | Computation and Language |
Learning Taxonomies of Concepts and not Words using Contextualized Word
Representations: A Position Paper | Taxonomies are semantic hierarchies of concepts. One limitation of current
taxonomy learning systems is that they define concepts as single words. This
position paper argues that contextualized word representations, which recently
achieved state-of-the-art results on many competitive NLP tasks, are a
promising method to address this limitation. We outline a novel approach for
taxonomy learning that (1) defines concepts as synsets, (2) learns
density-based approximations of contextualized word representations, and (3)
can measure similarity and hypernymy among them.
| 2019 | Computation and Language |
Diseño de un espacio semántico sobre la base de la Wikipedia. Una
propuesta de análisis de la semántica latente para el idioma español | Latent Semantic Analysis (LSA) was initially conceived within cognitive
psychology in the 1990s. Since its emergence, LSA has been used to
model cognitive processes, score academic texts, compare literary
works and analyse political speeches, among other applications. Taking as a
starting point a multivariate method for dimensionality reduction, this paper
proposes a semantic space for the Spanish language. Our results include a
document-term matrix with dimensions 1.3x10^6 by 5.9x10^6, which is later
decomposed into singular values. Those singular values are used to semantically
represent words or texts.
| 2019 | Computation and Language |
A Linear-complexity Multi-biometric Forensic Document Analysis System,
by Fusing the Stylome and Signature Modalities | Forensic Document Analysis (FDA) addresses the problem of finding the
authorship of a given document. Identification of the document writer via a
number of its modalities (e.g. handwriting, signature, linguistic writing style
(i.e. stylome), etc.) has been studied in the FDA state-of-the-art. But, no
research is conducted on the fusion of stylome and signature modalities. In
this paper, we propose such a bimodal FDA system (which has vast applications
in judicial, police-related, and historical documents analysis) with a focus on
time-complexity. The proposed bimodal system can be trained and tested with
linear time complexity. For this purpose, we first revisit Multinomial Naïve
Bayes (MNB), as the best state-of-the-art linear-complexity authorship
attribution system and, then, prove its superior accuracy to the well-known
linear-complexity classifiers in the state-of-the-art. Then, we propose a fuzzy
version of MNB for being fused with a state-of-the-art well-known
linear-complexity fuzzy signature recognition system. For the evaluation
purposes, we construct a chimeric dataset, composed of signatures and textual
contents of different letters. Despite its linear-complexity, the proposed
multi-biometric system is proven to meaningfully improve its state-of-the-art
unimodal counterparts, regarding the accuracy, F-Score, Detection Error
Trade-off (DET), Cumulative Match Characteristics (CMC), and Match Score
Histograms (MSH) evaluation metrics.
| 2019 | Computation and Language |
Assessing Partisan Traits of News Text Attributions | On the topic of journalistic integrity, the current state of accurate,
impartial news reporting has garnered much debate in context to the 2016 US
Presidential Election. In pursuit of computational evaluation of news text, the
statements (attributions) ascribed by media outlets to sources provide a common
category of evidence on which to operate. In this paper, we develop an approach
to compare partisan traits of news text attributions and apply it to
characterize differences in statements ascribed to candidate Hillary Clinton
and incumbent President Donald Trump. In doing so, we present a model trained
on over 600 in-house annotated attributions to identify each candidate with
accuracy > 88%. Finally, we discuss insights from its performance for future
research.
| 2,019 | Computation and Language |
Attention in Natural Language Processing | Attention is an increasingly popular mechanism used in a wide range of neural
architectures. The mechanism itself has been realized in a variety of formats.
However, because of the fast-paced advances in this domain, a systematic
overview of attention is still missing. In this article, we define a unified
model for attention architectures in natural language processing, with a focus
on those designed to work with vector representations of the textual data. We
propose a taxonomy of attention models according to four dimensions: the
representation of the input, the compatibility function, the distribution
function, and the multiplicity of the input and/or output. We present
examples of how prior information can be exploited in attention models and
discuss ongoing research efforts and open challenges in the area, providing the
first extensive categorization of the vast body of literature in this exciting
domain.
| 2,021 | Computation and Language |
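To make the four dimensions of the taxonomy concrete, the following sketch implements one generic instance of the pattern the survey unifies: an additive (Bahdanau-style) compatibility function scores keys against a query, a softmax distribution function turns scores into weights, and the values are averaged; the dimensions and random parameters are illustrative assumptions.

```python
# Generic attention: compatibility function -> distribution function -> weighted average.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(query, keys, values, W_q, W_k, v):
    scores = np.tanh(keys @ W_k + query @ W_q) @ v      # additive compatibility, (seq_len,)
    weights = softmax(scores)                            # distribution function
    return weights @ values, weights                     # context vector, attention weights

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=d)
keys = rng.normal(size=(5, d))
values = rng.normal(size=(5, d))
W_q, W_k, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)

context, weights = additive_attention(query, keys, values, W_q, W_k, v)
print(weights.round(3), context.shape)
```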
Non-Monotonic Sequential Text Generation | Standard sequential generation methods assume a pre-specified generation
order, such as text generation methods which generate words from left to right.
In this work, we propose a framework for training models of text generation
that operate in non-monotonic orders; the model directly learns good orders,
without any additional annotation. Our framework operates by generating a word
at an arbitrary position, and then recursively generating words to its left and
then words to its right, yielding a binary tree. Learning is framed as
imitation learning, including a coaching method which moves from imitating an
oracle to reinforcing the policy's own preferences. Experimental results
demonstrate that using the proposed method, it is possible to learn policies
which generate text without pre-specifying a generation order, while achieving
competitive performance with conventional left-to-right generation.
| 2,019 | Computation and Language |
Extending a model for ontology-based Arabic-English machine translation | The acceleration in telecommunication needs has led to many lines of research,
especially in the fields of communication facilitation and Machine Translation.
When people communicate with others who have different languages and cultures,
they need instant translations. However, the available instant translators
still provide somewhat poor Arabic-English translations; for instance, when
translating books or articles, the meaning is not entirely accurate. Therefore,
using semantic web techniques to deal with homographs and homonyms
semantically, the aim of this research is to extend a model for
ontology-based Arabic-English Machine Translation, named NAN, which simulates
the human way of translating. The experimental results show that the NAN
translation is more similar to the Human Translation than that of the
other instant translators. The resulting translation will help Non-Arabic
natives and Non-English natives obtain texts in the target language that are
largely correct and semantically closer to human translations.
| 2,019 | Computation and Language |
Compression of Recurrent Neural Networks for Efficient Language Modeling | Recurrent neural networks have proved to be an effective method for
statistical language modeling. However, in practice their memory and run-time
complexity are usually too large to be implemented in real-time offline mobile
applications. In this paper we consider several compression techniques for
recurrent neural networks, including Long Short-Term Memory models. We pay
particular attention to the high-dimensional output problem caused by the very
large vocabulary size. We focus on effective compression methods in the context
of their exploitation on devices: pruning, quantization, and matrix
decomposition approaches (low-rank factorization and tensor train
decomposition, in particular). For each model we investigate the trade-off
between its size, suitability for fast inference and perplexity. We propose a
general pipeline for applying the most suitable methods to compress recurrent
neural networks for language modeling. It has been shown in the experimental
study with the Penn Treebank (PTB) dataset that the most efficient results in
terms of speed and compression-perplexity balance are obtained by matrix
decomposition techniques.
| 2,019 | Computation and Language |
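As a hedged illustration of the matrix decomposition family of methods discussed above, this sketch low-rank-factorizes a dense weight matrix with a truncated SVD and reports the resulting parameter compression and reconstruction error; the layer size and rank are arbitrary placeholders, not the paper's PTB models.

```python
# Low-rank factorization of a weight matrix W ~= U V, reducing parameters
# from m*n to r*(m+n).
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 512, 512, 32                      # layer dimensions and target rank (illustrative)
W = rng.normal(size=(m, n))

U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]                   # (m, r)
V = Vt[:r, :]                               # (r, n)

compression = (m * n) / (r * (m + n))
error = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
print(f"compression factor: {compression:.1f}x, relative error: {error:.3f}")
```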
End-to-end Anchored Speech Recognition | Voice-controlled household devices, like Amazon Echo or Google Home, face
the problem of performing speech recognition of device-directed speech in the
presence of interfering background speech, i.e., background noise and
interfering speech from another person or media device in proximity need to be
ignored. We propose two end-to-end models to tackle this problem with
information extracted from the "anchored segment". The anchored segment refers
to the wake-up word part of an audio stream, which contains valuable speaker
information that can be used to suppress interfering speech and background
noise. The first method is called "Multi-source Attention" where the attention
mechanism takes both the speaker information and decoder state into
consideration. The second method directly learns a frame-level mask on top of
the encoder output. We also explore a multi-task learning setup where we use
the ground truth of the mask to guide the learner. Given that audio data with
interfering speech is rare in our training data set, we also propose a way to
synthesize "noisy" speech from "clean" speech to mitigate the mismatch between
training and test data. Our proposed methods show up to 15% relative reduction
in WER for Amazon Alexa live data with interfering background speech without
significantly degrading on clean speech.
| 2,019 | Computation and Language |
Towards Autoencoding Variational Inference for Aspect-based Opinion
Summary | Aspect-based Opinion Summary (AOS), consisting of aspect discovery and
sentiment classification steps, has recently been emerging as one of the most
crucial data mining tasks in e-commerce systems. Along this direction, the
LDA-based model is considered as a notably suitable approach, since this model
offers both topic modeling and sentiment classification. However, unlike
traditional topic modeling, in the context of aspect discovery some initial
seed words are often required, and such prior knowledge is not easy to
incorporate into LDA models. Moreover, LDA approaches rely on sampling
methods, which need to load the whole corpus into memory, making them hardly
scalable. In this research, we study an alternative approach for AOS problem,
based on Autoencoding Variational Inference (AVI). Firstly, we introduce the
Autoencoding Variational Inference for Aspect Discovery (AVIAD) model, which
extends the previous work of Autoencoding Variational Inference for Topic
Models (AVITM) to embed prior knowledge of seed words. This work includes
enhancement of the previous AVI architecture and also modification of the loss
function. Ultimately, we present the Autoencoding Variational Inference for
Joint Sentiment/Topic (AVIJST) model. In this model, we substantially extend
the AVI model to support the JST model, which performs topic modeling for
corresponding sentiment. The experimental results show that our proposed models
achieve higher topic coherence, faster convergence and better accuracy on
sentiment classification, as compared to their LDA-based counterparts.
| 2,019 | Computation and Language |
Understanding Chat Messages for Sticker Recommendation in Messaging Apps | Stickers are popularly used in messaging apps such as Hike to visually
express a nuanced range of thoughts and utterances to convey exaggerated
emotions. However, discovering the right sticker from a large and ever
expanding pool of stickers while chatting can be cumbersome. In this paper, we
describe a system for recommending stickers in real time as the user is typing
based on the context of the conversation. We decompose the sticker
recommendation (SR) problem into two steps. First, we predict the message that
the user is likely to send in the chat. Second, we substitute the predicted
message with an appropriate sticker. The majority of Hike's messages are in the
form of text which is transliterated from users' native language to the Roman
script. This leads to numerous orthographic variations of the same message and
makes accurate message prediction challenging. To address this issue, we learn
dense representations of chat messages employing character level convolution
network in an unsupervised manner. We use them to cluster the messages that
have the same meaning. In the subsequent steps, we predict the message cluster
instead of the message. Our approach does not depend on human labelled data
(except for validation), leading to a fully automatic update and tuning
pipeline for the underlying models. We also propose a novel hybrid message
prediction model, which can run with low latency on low-end phones that have
severe computational limitations. Our described system has been deployed for
more than $6$ months and is being used by millions of users along with hundreds
of thousands of expressive stickers.
| 2,019 | Computation and Language |
Aspect Specific Opinion Expression Extraction using Attention based
LSTM-CRF Network | Opinion phrase extraction is one of the key tasks in fine-grained sentiment
analysis. While opinion expressions could be generic subjective expressions,
aspect specific opinion expressions contain both the aspect as well as the
opinion expression within the original sentence context. In this work, we
formulate the task as an instance of token-level sequence labeling. When
multiple aspects are present in a sentence, detection of the opinion phrase
boundary becomes difficult, and the label of each word depends not only on the
surrounding words but also on the aspect concerned. We propose a neural
network architecture with a bidirectional LSTM (Bi-LSTM) and a novel attention
mechanism. The Bi-LSTM layer learns the various sequential patterns among the words
without requiring any hand-crafted features. The attention mechanism captures
the importance of context words on a particular aspect opinion expression when
multiple aspects are present in a sentence via location and content based
memory. A Conditional Random Field (CRF) model is incorporated in the final
layer to explicitly model the dependencies among the output labels.
Experimental results on Hotel dataset from Tripadvisor.com showed that our
approach outperformed several state-of-the-art baselines.
| 2,019 | Computation and Language |
Humor in Word Embeddings: Cockamamie Gobbledegook for Nincompoops | While humor is often thought to be beyond the reach of Natural Language
Processing, we show that several aspects of single-word humor correlate with
simple linear directions in Word Embeddings. In particular: (a) the word
vectors capture multiple aspects discussed in humor theories from various
disciplines; (b) each individual's sense of humor can be represented by a
vector, which can predict differences in people's senses of humor on new,
unrated, words; and (c) upon clustering humor ratings of multiple demographic
groups, different humor preferences emerge across the different groups. Humor
ratings are taken from the work of Engelthaler and Hills (2017) as well as from
an original crowdsourcing study of 120,000 words. Our dataset further includes
annotations for the theoretically-motivated humor features we identify.
| 2,019 | Computation and Language |
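A hedged sketch of what a "simple linear direction" for humor can look like: a direction is built as the difference between the mean vectors of a few funny and plain seed words, and new words are scored by projection onto it. The seed lists are made up and the tiny random embeddings merely stand in for real word vectors such as GloVe; this is not the authors' exact protocol.

```python
# Score words along a "humor direction" defined by seed words (illustrative sketch).
import numpy as np

# `embeddings` is assumed to map words to vectors, e.g. loaded from GloVe; tiny random
# stand-ins are used here so the snippet runs without external files.
rng = np.random.default_rng(0)
vocab = ["wiggle", "nincompoop", "report", "table", "gobbledegook", "invoice"]
embeddings = {w: rng.normal(size=50) for w in vocab}

funny_seeds = ["wiggle", "nincompoop", "gobbledegook"]
plain_seeds = ["report", "table", "invoice"]

direction = (np.mean([embeddings[w] for w in funny_seeds], axis=0)
             - np.mean([embeddings[w] for w in plain_seeds], axis=0))
direction /= np.linalg.norm(direction)

for word in vocab:
    print(word, round(float(embeddings[word] @ direction), 3))
```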
Models of Visually Grounded Speech Signal Pay Attention To Nouns: a
Bilingual Experiment on English and Japanese | We investigate the behaviour of attention in neural models of visually
grounded speech trained on two languages: English and Japanese. Experimental
results show that attention focuses on nouns and this behaviour holds true for
two very typologically different languages. We also draw parallels between
artificial neural attention and human attention and show that neural attention
focuses on word endings as it has been theorised for human attention. Finally,
we investigate how two visually grounded monolingual models can be used to
perform cross-lingual speech-to-speech retrieval. For both languages, the
enriched bilingual (speech-image) corpora with part-of-speech tags and forced
alignments are distributed to the community for reproducible research.
| 2,019 | Computation and Language |
Speaker diarisation using 2D self-attentive combination of embeddings | Speaker diarisation systems often cluster audio segments using speaker
embeddings such as i-vectors and d-vectors. Since different types of embeddings
are often complementary, this paper proposes a generic framework to improve
performance by combining them into a single embedding, referred to as a
c-vector. This combination uses a 2-dimensional (2D) self-attentive structure,
which extends the standard self-attentive layer by averaging not only across
time but also across different types of embeddings. Two types of 2D
self-attentive structure in this paper are the simultaneous combination and the
consecutive combination, adopting a single and multiple self-attentive layers
respectively. The penalty term in the original self-attentive layer which is
jointly minimised with the objective function to encourage diversity of
annotation vectors is also modified to obtain not only different local peaks
but also the overall trends in the multiple annotation vectors. Experiments on
the AMI meeting corpus show that our modified penalty term improves the
relative speaker error rate (SER) by 6% and 21% for d-vector systems,
and a 10% further relative SER reduction can be obtained using the c-vector
from our best 2D self-attentive structure.
| 2,019 | Computation and Language |
Insertion Transformer: Flexible Sequence Generation via Insertion
Operations | We present the Insertion Transformer, an iterative, partially autoregressive
model for sequence generation based on insertion operations. Unlike typical
autoregressive models which rely on a fixed, often left-to-right ordering of
the output, our approach accommodates arbitrary orderings by allowing for
tokens to be inserted anywhere in the sequence during decoding. This
flexibility confers a number of advantages: for instance, not only can our
model be trained to follow specific orderings such as left-to-right generation
or a binary tree traversal, but it can also be trained to maximize entropy over
all valid insertions for robustness. In addition, our model seamlessly
accommodates both fully autoregressive generation (one insertion at a time) and
partially autoregressive generation (simultaneous insertions at multiple
locations). We validate our approach by analyzing its performance on the WMT
2014 English-German machine translation task under various settings for
training and decoding. We find that the Insertion Transformer outperforms many
prior non-autoregressive approaches to translation at comparable or better
levels of parallelism, and successfully recovers the performance of the
original Transformer while requiring only logarithmically many iterations
during decoding.
| 2,019 | Computation and Language |
Multilingual Neural Machine Translation With Soft Decoupled Encoding | Multilingual training of neural machine translation (NMT) systems has led to
impressive accuracy improvements on low-resource languages. However, there are
still significant challenges in efficiently learning word representations in
the face of paucity of data. In this paper, we propose Soft Decoupled Encoding
(SDE), a multilingual lexicon encoding framework specifically designed to share
lexical-level information intelligently without requiring heuristic
preprocessing such as pre-segmenting the data. SDE represents a word by its
spelling through a character encoding, and its semantic meaning through a
latent embedding space shared by all languages. Experiments on a standard
dataset of four low-resource languages show consistent improvements over strong
multilingual NMT baselines, with gains of up to 2 BLEU on one of the tested
languages, achieving the new state-of-the-art on all four language pairs.
| 2,019 | Computation and Language |
Word embeddings for idiolect identification | The term idiolect refers to the unique and distinctive use of language of an
individual and it is the theoretical foundation of Authorship Attribution. In
this paper we are focusing on learning distributed representations (embeddings)
of social media users that reflect their writing style. These representations
can be considered as stylistic fingerprints of the authors. We are exploring
the performance of the two main flavours of distributed representations, namely
embeddings produced by Neural Probabilistic Language models (such as word2vec)
and matrix factorization (such as GloVe).
| 2,019 | Computation and Language |
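One hedged way to turn such word-level embeddings into a per-user stylistic fingerprint, as the abstract suggests, is to train word2vec on the users' posts and average each user's word vectors; the toy posts and gensim hyperparameters below are placeholders, and the GloVe-style variant mentioned above would substitute a different embedding model.

```python
# Per-user "stylistic fingerprint" by averaging word2vec vectors over a user's posts.
import numpy as np
from gensim.models import Word2Vec

user_posts = {
    "user_a": [["honestly", "this", "is", "brilliant"], ["so", "so", "happy", "today"]],
    "user_b": [["the", "results", "indicate", "a", "trend"], ["data", "was", "collected"]],
}

sentences = [post for posts in user_posts.values() for post in posts]
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=50, seed=0)

def user_embedding(posts):
    vecs = [model.wv[w] for post in posts for w in post]
    return np.mean(vecs, axis=0)

fingerprints = {u: user_embedding(p) for u, p in user_posts.items()}
print({u: v.shape for u, v in fingerprints.items()})
```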
Neural embeddings for metaphor detection in a corpus of Greek texts | One of the major challenges that NLP faces is metaphor detection, especially
by automatic means, a task that becomes even more difficult for languages
lacking in linguistic resources and tools. Our purpose is the automatic
differentiation between literal and metaphorical meaning in authentic
non-annotated phrases from the Corpus of Greek Texts by means of computational
methods of machine learning. For this purpose the theoretical background of
distributional semantics is discussed and employed. Distributional Semantics
Theory develops concepts and methods for the quantification and classification
of semantic similarities displayed by linguistic elements in large amounts of
linguistic data according to their distributional properties. In accordance
with this model, the approach followed in the thesis takes into account the
linguistic context for the computation of the distributional representation of
phrases in geometrical space, as well as for their comparison with the
distributional representations of other phrases, whose function in speech is
already "known" with the objective to reach conclusions about their literal or
metaphorical function in the specific linguistic context. This procedure aims
at dealing with the lack of linguistic resources for the Greek language, as the
almost impossible up to now semantic comparison between "phrases", takes the
form of an arithmetical comparison of their distributional representations in
geometrical space.
| 2,019 | Computation and Language |
BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field
Language Model | We show that BERT (Devlin et al., 2018) is a Markov random field language
model. This formulation gives way to a natural procedure to sample sentences
from BERT. We generate from BERT and find that it can produce high-quality,
fluent generations. Compared to the generations of a traditional left-to-right
language model, BERT generates sentences that are more diverse but of slightly
worse quality.
| 2,019 | Computation and Language |
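A hedged sketch of the Gibbs-style sampling such a Markov-random-field view suggests: start from a fully masked sequence and repeatedly resample one position at a time from BERT's masked-LM distribution. The model name, sequence length, and number of sweeps are illustrative, and this is not necessarily the authors' exact sampler.

```python
# Gibbs-style sentence sampling from a masked language model (illustrative sketch).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

seq_len, sweeps = 8, 10                               # illustrative settings
ids = torch.full((1, seq_len), tokenizer.mask_token_id)
ids[0, 0], ids[0, -1] = tokenizer.cls_token_id, tokenizer.sep_token_id

with torch.no_grad():
    for _ in range(sweeps):
        for pos in range(1, seq_len - 1):             # resample each non-special position
            masked = ids.clone()
            masked[0, pos] = tokenizer.mask_token_id
            logits = model(masked).logits[0, pos]
            ids[0, pos] = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()

print(tokenizer.decode(ids[0, 1:-1].tolist()))
```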
Table2answer: Read the database and answer without SQL | Semantic parsing is the task of mapping natural language to logic form. In
question answering, semantic parsing can be used to map the question to logic
form and execute the logic form to get the answer. One key problem for semantic
parsing is the heavy labeling effort. We study this problem in another way: we do not
use the logic form any more. Instead we only use the schema and answer info. We
think that the logic form step can be injected into the deep model. The reason
why we think removing the logic form step is possible is that human can do the
task without explicit logic form. We use BERT-based model and do the experiment
in the WikiSQL dataset, which is a large natural language to SQL dataset. Our
experimental evaluations show that our model achieves the baseline
results on the WikiSQL dataset.
| 2,019 | Computation and Language |
Machine Reading Comprehension for Answer Re-Ranking in Customer Support
Chatbots | Recent advances in deep neural networks, language modeling and language
generation have introduced new ideas to the field of conversational agents. As
a result, deep neural models such as sequence-to-sequence, Memory Networks, and
the Transformer have become key ingredients of state-of-the-art dialog systems.
While those models are able to generate meaningful responses even in unseen
situations, they need a lot of training data to build a reliable model. Thus,
most real-world systems stick to traditional approaches based on information
retrieval and even hand-crafted rules, due to their robustness and
effectiveness, especially for narrow-focused conversations. Here, we present a
method that adapts a deep neural architecture from the domain of machine
reading comprehension to re-rank the suggested answers from different models
using the question as context. We train our model using negative sampling based
on question-answer pairs from the Twitter Customer Support Dataset. The
experimental results show that our re-ranking framework can improve the
performance in terms of word overlap and semantics both for individual models
as well as for model combinations.
| 2,019 | Computation and Language |
SECTOR: A Neural Model for Coherent Topic Segmentation and
Classification | When searching for information, a human reader first glances over a document,
spots relevant sections and then focuses on a few sentences for resolving her
intention. However, the high variance of document structure makes it difficult to
identify the salient topic of a given section at a glance. To tackle this
challenge, we present SECTOR, a model to support machine reading systems by
segmenting documents into coherent sections and assigning topic labels to each
section. Our deep neural network architecture learns a latent topic embedding
over the course of a document. This can be leveraged to classify local topics
from plain text and segment a document at topic shifts. In addition, we
contribute WikiSection, a publicly available dataset with 242k labeled sections
in English and German from two distinct domains: diseases and cities. From our
extensive evaluation of 20 architectures, we report a highest score of 71.6% F1
for the segmentation and classification of 30 topics from the English city
domain, scored by our SECTOR LSTM model with bloom filter embeddings and
bidirectional segmentation. This is a significant improvement of 29.5 points F1
compared to state-of-the-art CNN classifiers with baseline segmentation.
| 2,019 | Computation and Language |
Learning to Select Knowledge for Response Generation in Dialog Systems | End-to-end neural models for intelligent dialogue systems suffer from the
problem of generating uninformative responses. Various methods were proposed to
generate more informative responses by leveraging external knowledge. However,
little previous work has focused on selecting appropriate knowledge in the
learning process. The inappropriate selection of knowledge could prohibit the
model from learning to make full use of the knowledge. Motivated by this, we
propose an end-to-end neural model which employs a novel knowledge selection
mechanism where both prior and posterior distributions over knowledge are used
to facilitate knowledge selection. Specifically, a posterior distribution over
knowledge is inferred from both utterances and responses, and it ensures the
appropriate selection of knowledge during the training process. Meanwhile, a
prior distribution, which is inferred from utterances only, is used to
approximate the posterior distribution so that appropriate knowledge can be
selected even without responses during the inference process. Compared with the
previous work, our model can better incorporate appropriate knowledge in
response generation. Experiments on both automatic and human evaluation verify
the superiority of our model over previous baselines.
| 2,019 | Computation and Language |
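A hedged toy sketch of the central training signal described above: a posterior distribution over knowledge candidates is computed from both the utterance and the response, a prior from the utterance only, and the prior is pushed toward the posterior with a KL term. The random representations and dot-product scoring are stand-ins, not the paper's architecture.

```python
# KL(posterior || prior) over a discrete set of knowledge candidates (illustrative).
import torch
import torch.nn.functional as F

n_knowledge = 4
utterance_repr = torch.randn(1, 16)
response_repr = torch.randn(1, 16)
knowledge_repr = torch.randn(n_knowledge, 16)

prior_logits = utterance_repr @ knowledge_repr.T                        # utterance only
posterior_logits = (utterance_repr + response_repr) @ knowledge_repr.T  # uses the response too

# Minimizing this term during training makes the prior approximate the posterior,
# so appropriate knowledge can be selected at inference time without the response.
kl = F.kl_div(F.log_softmax(prior_logits, dim=-1),
              F.softmax(posterior_logits, dim=-1), reduction="batchmean")
print(float(kl))
```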
Explainable Text-Driven Neural Network for Stock Prediction | It has been shown that financial news leads to the fluctuation of stock
prices. However, previous work on news-driven financial market prediction
focused only on predicting stock price movement without providing an
explanation. In this paper, we propose a dual-layer attention-based neural
network to address this issue. In the initial stage, we introduce a
knowledge-based method to adaptively extract relevant financial news. Then, we
use input attention to pay more attention to the more influential news and
concatenate the day embeddings with the output of the news representation.
Finally, we use an output attention mechanism to allocate different weights to
different days in terms of their contribution to stock price movement. Thorough
empirical studies based upon historical prices of several individual stocks
demonstrate the superiority of our proposed method in stock price prediction
compared to state-of-the-art methods.
| 2,018 | Computation and Language |
Leveraging Newswire Treebanks for Parsing Conversational Data with
Argument Scrambling | We investigate the problem of parsing conversational data of
morphologically-rich languages such as Hindi where argument scrambling occurs
frequently. We evaluate a state-of-the-art non-linear transition-based parsing
system on a new dataset containing 506 dependency trees for sentences from
Bollywood (Hindi) movie scripts and Twitter posts of Hindi monolingual
speakers. We show that a dependency parser trained on a newswire treebank is
strongly biased towards the canonical structures and degrades when applied to
conversational data. Inspired by Transformational Generative Grammar, we
mitigate the sampling bias by generating all theoretically possible alternative
word orders of a clause from the existing (kernel) structures in the treebank.
Training our parser on canonical and transformed structures improves
performance on conversational data by around 9% LAS over the baseline newswire
parser.
| 2,017 | Computation and Language |
Categorical Metadata Representation for Customized Text Classification | The performance of text classification has improved tremendously using
intelligently engineered neural-based models, especially those injecting
categorical metadata as additional information, e.g., using user/product
information for sentiment classification. This information has been used to
modify parts of the model (e.g., word embeddings, attention mechanisms) such
that results can be customized according to the metadata. We observe that
current representation methods for categorical metadata, which are devised for
human consumption, are not as effective as claimed in popular classification
methods, outperformed even by simple concatenation of categorical features in
the final layer of the sentence encoder. We conjecture that categorical
features are harder to represent for machine use, as available context only
indirectly describes the category, and even such context is often scarce (for
tail categories). To this end, we propose to use basis vectors to effectively
incorporate categorical metadata on various parts of a neural-based model. This
additionally decreases the number of parameters dramatically, especially when
the number of categorical features is large. Extensive experiments on various
datasets with different properties are performed and show that through our
method, we can represent categorical metadata more effectively to customize
parts of the model, including unexplored ones, and increase the performance of
the model greatly.
| 2,019 | Computation and Language |
Transfer Learning for Sequence Labeling Using Source Model and Target
Data | In this paper, we propose an approach for transferring the knowledge of a
neural model for sequence labeling, learned from the source domain, to a new
model trained on a target domain, where new label categories appear. Our
transfer learning (TL) techniques enable adapting the source model using the
target data and new categories, without accessing the source data. Our
solution consists in adding new neurons in the output layer of the target model
and transferring parameters from the source model, which are then fine-tuned
with the target data. Additionally, we propose a neural adapter to learn the
difference between the source and the target label distribution, which provides
additional important information to the target model. Our experiments on Named
Entity Recognition show that (i) the learned knowledge in the source model can
be effectively transferred when the target data contains new categories and
(ii) our neural adapter further improves such transfer.
| 2,019 | Computation and Language |
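A hedged PyTorch sketch of the core parameter-transfer step: the target output layer is widened to cover the new label categories while the rows for the old labels are copied from the source model and later fine-tuned. Sizes and the randomly initialized "source" layer are illustrative; the paper's neural adapter is not shown.

```python
# Expand a source model's output layer to cover new target labels, keeping source weights.
import torch
import torch.nn as nn

hidden, n_src_labels, n_new_labels = 64, 5, 3          # illustrative sizes
source_out = nn.Linear(hidden, n_src_labels)            # stands in for the trained source output layer

target_out = nn.Linear(hidden, n_src_labels + n_new_labels)
with torch.no_grad():
    # Transfer parameters for the old labels; rows for new labels keep their fresh initialization.
    target_out.weight[:n_src_labels] = source_out.weight
    target_out.bias[:n_src_labels] = source_out.bias

# target_out is then fine-tuned on target-domain data containing both old and new categories.
print(target_out.weight.shape)
```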
Generating Natural Language Explanations for Visual Question Answering
using Scene Graphs and Visual Attention | In this paper, we present a novel approach for the task of eXplainable
Question Answering (XQA), i.e., generating natural language (NL) explanations
for the Visual Question Answering (VQA) problem. We generate NL explanations
comprising the evidence to support the answer to a question asked about an
image using two sources of information: (a) annotations of entities in an image
(e.g., object labels, region descriptions, relation phrases) generated from the
scene graph of the image, and (b) the attention map generated by a VQA model
when answering the question. We show how combining the visual attention map
with the NL representation of relevant scene graph entities, carefully selected
using a language model, can give reasonable textual explanations without the
need of any additional collected data (explanation captions, etc). We run our
algorithms on the Visual Genome (VG) dataset and conduct internal user-studies
to demonstrate the efficacy of our approach over a strong baseline. We have
also released a live web demo showcasing our VQA and textual explanation
generation using scene graphs and visual attention.
| 2,019 | Computation and Language |
Context-Aware Self-Attention Networks | Self-attention models have shown their flexibility in parallel computation and
their effectiveness in modeling both long- and short-term dependencies. However,
they calculate the dependencies between representations without considering the
contextual information, which has proven useful for modeling dependencies
among neural representations in various natural language tasks. In this work,
we focus on improving self-attention networks through capturing the richness of
context. To maintain the simplicity and flexibility of the self-attention
networks, we propose to contextualize the transformations of the query and key
layers, which are used to calculate the relevance between elements.
Specifically, we leverage the internal representations that embed both global
and deep contexts, thus avoiding reliance on external resources. Experimental
results on WMT14 English-German and WMT17 Chinese-English translation tasks
demonstrate the effectiveness and universality of the proposed methods.
Furthermore, we conducted extensive analyses to quantify how the context
vectors participate in the self-attention model.
| 2,019 | Computation and Language |
Dynamic Layer Aggregation for Neural Machine Translation with
Routing-by-Agreement | With the promising progress of deep neural networks, layer aggregation has
been used to fuse information across layers in various fields, such as computer
vision and machine translation. However, most of the previous methods combine
layers in a static fashion in that their aggregation strategy is independent of
specific hidden states. Inspired by recent progress on capsule networks, in
this paper we propose to use routing-by-agreement strategies to aggregate
layers dynamically. Specifically, the algorithm learns the probability of a
part (individual layer representations) assigned to a whole (aggregated
representations) in an iterative way and combines parts accordingly. We
implement our algorithm on top of the state-of-the-art neural machine
translation model TRANSFORMER and conduct experiments on the widely-used WMT14
English-German and WMT17 Chinese-English translation datasets. Experimental
results across language pairs show that the proposed approach consistently
outperforms the strong baseline model and a representative static aggregation
model.
| 2,019 | Computation and Language |
Improving Semantic Parsing for Task Oriented Dialog | Semantic parsing using hierarchical representations has recently been
proposed for task oriented dialog with promising results [Gupta et al 2018]. In
this paper, we present three different improvements to the model:
contextualized embeddings, ensembling, and pairwise re-ranking based on a
language model. We taxonomize the errors possible for the hierarchical
representation, such as wrong top intent, missing spans or split spans, and
show that the three approaches correct different kinds of errors. The best
model combines the three techniques and gives 6.4% better exact match accuracy
than the state-of-the-art, with an error reduction of 33%, resulting in a new
state-of-the-art result on the Task Oriented Parsing (TOP) dataset.
| 2,019 | Computation and Language |
Contextual Word Representations: A Contextual Introduction | This introduction aims to tell the story of how we put words into computers.
It is part of the story of the field of natural language processing (NLP), a
branch of artificial intelligence. It targets a wide audience with a basic
understanding of computer programming, but avoids a detailed mathematical
treatment, and it does not present any algorithms. It also does not focus on
any particular application of NLP such as translation, question answering, or
information extraction. The ideas presented here were developed by many
researchers over many decades, so the citations are not exhaustive but rather
direct the reader to a handful of papers that are, in the author's view,
seminal. After reading this document, you should have a general understanding
of word vectors (also known as word embeddings): why they exist, what problems
they solve, where they come from, how they have changed over time, and what
some of the open questions about them are. Readers already familiar with word
vectors are advised to skip to Section 5 for the discussion of the most recent
advance, contextual word vectors.
| 2,020 | Computation and Language |
A Fully Differentiable Beam Search Decoder | We introduce a new beam search decoder that is fully differentiable, making
it possible to optimize at training time through the inference procedure. Our
decoder allows us to combine models which operate at different granularities
(e.g. acoustic and language models). It can be used when target sequences are
not aligned to input sequences by considering all possible alignments between
the two. We demonstrate our approach scales by applying it to speech
recognition, jointly training acoustic and word-level language models. The
system is end-to-end, with gradients flowing through the whole architecture
from the word-level transcriptions. Recent research efforts have shown that
deep neural networks with attention-based mechanisms are powerful enough to
successfully train an acoustic model from the final transcription, while
implicitly learning a language model. Instead, we show that it is possible to
discriminatively train an acoustic model jointly with an explicit and possibly
pre-trained language model.
| 2,019 | Computation and Language |
CruzAffect at AffCon 2019 Shared Task: A feature-rich approach to
characterize happiness | We present our system, CruzAffect, for the CL-Aff Shared Task 2019.
CruzAffect consists of several types of robust and efficient models for
affective classification tasks. We utilize both traditional classifiers, such
as XGBoosted Forest, as well as a deep learning Convolutional Neural Network
(CNN) classifier. We explore rich feature sets such as syntactic features,
emotional features, and profile features, and utilize several sentiment
lexicons, to discover essential indicators of social involvement and control
that a subject might exercise in their happy moments, as described in textual
snippets from the HappyDB database. The data comes with a labeled set (10K),
and a larger unlabeled set (70K). We therefore use supervised methods on the
10K dataset, and a bootstrapped semi-supervised approach for the 70K. We
evaluate these models for binary classification of agency and social labels
(Task 1), as well as multi-class prediction for concepts labels (Task 2). We
obtain promising results on the held-out data, suggesting that the proposed
feature sets effectively represent the data for affective classification tasks.
We also build concepts models that discover general themes recurring in happy
moments. Our results indicate that generic characteristics are shared between
the classes of agency, social and concepts, suggesting it should be possible to
build general models for affective classification tasks.
| 2,019 | Computation and Language |
Combination of Domain Knowledge and Deep Learning for Sentiment Analysis
of Short and Informal Messages on Social Media | Sentiment analysis has been emerging recently as one of the major natural
language processing (NLP) tasks in many applications. Especially, as social
media channels (e.g. social networks or forums) have become significant sources
for brands to observe user opinions about their products, this task is thus
increasingly crucial. However, when applied with real data obtained from social
media, we notice that there is a high volume of short and informal messages
posted by users on those channels. This kind of data is difficult for existing
works to handle, especially for those using deep learning approaches. In this
paper, we propose an approach to handle this problem. This
work is extended from our previous work, in which we proposed to combine the
typical deep learning technique of Convolutional Neural Networks with domain
knowledge. The combination is used for acquiring additional training data
augmentation and a more reasonable loss function. In this work, we further
improve our architecture by various substantial enhancements, including
negation-based data augmentation, transfer learning for word embeddings, the
combination of word-level embeddings and character-level embeddings, and using
multitask learning technique for attaching domain knowledge rules in the
learning process. Those enhancements, specifically aiming to handle short and
informal messages, help us achieve a significant improvement in performance when
experimenting on real datasets.
| 2,019 | Computation and Language |
Exploring Language Similarities with Dimensionality Reduction Technique | In recent years several novel models were developed to process natural
language, and the development of accurate language translation systems has helped us
overcome geographical barriers and communicate ideas effectively. These models
are developed mostly for a few languages that are widely used while other
languages are ignored. Most of the languages that are spoken share lexical,
syntactic and semantic similarity with several other languages, and knowing this
can help us leverage the existing model to build more specific and accurate
models that can be used for other languages, so here I have explored the idea
of representing several known popular languages in a lower dimension such that
their similarities can be visualized using simple 2-dimensional plots. This can
even help us understand newly discovered languages that may not share its
vocabulary with any of the existing languages.
| 2,019 | Computation and Language |
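A hedged sketch of the kind of visualization described above: per-language feature vectors are projected to two dimensions with PCA and plotted. The language list and the random stand-in features are assumptions; any lexical or syntactic representation could be substituted.

```python
# Project per-language feature vectors to 2D with PCA and plot them.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

languages = ["English", "Spanish", "Portuguese", "German", "Dutch", "Hindi"]
rng = np.random.default_rng(0)
features = rng.normal(size=(len(languages), 100))   # stand-in language representations

coords = PCA(n_components=2).fit_transform(features)
plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), name in zip(coords, languages):
    plt.annotate(name, (x, y))
plt.savefig("language_similarity.png")
```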
Twitch Plays Pokemon, Machine Learns Twitch: Unsupervised Context-Aware
Anomaly Detection for Identifying Trolls in Streaming Data | With the increasing importance of online communities, discussion forums, and
customer reviews, Internet "trolls" have proliferated thereby making it
difficult for information seekers to find relevant and correct information. In
this paper, we consider the problem of detecting and identifying Internet
trolls, almost all of which are human agents. Identifying a human agent among a
human population presents significant challenges compared to detecting
automated spam or computerized robots. To learn a troll's behavior, we use
contextual anomaly detection to profile each chat user. Using clustering and
distance-based methods, we use contextual data such as the group's current
goal, the current time, and the username to classify each point as an anomaly.
A user whose features significantly differ from the norm will be classified as
a troll. We collected 38 million data points from the viral Internet fad,
Twitch Plays Pokemon. Using clustering and distance-based methods, we develop
heuristics for identifying trolls. Using MapReduce techniques for preprocessing
and user profiling, we are able to classify trolls based on 10 features
extracted from a user's lifetime history.
| 2,019 | Computation and Language |
A Comparative Study of Feature Selection Methods for Dialectal Arabic
Sentiment Classification Using Support Vector Machine | Unlike other languages, the Arabic language has a morphological complexity
which makes Arabic sentiment analysis a challenging task. Moreover, the
presence of dialects in Arabic texts has made the sentiment analysis
task even more challenging, due to the absence of specific rules that govern the
writing or speaking system. Generally, one of the problems of sentiment
analysis is the high dimensionality of the feature vector. To resolve this
problem, many feature selection methods have been proposed. In contrast to the
dialectal Arabic language, these selection methods have been investigated
widely for the English language. This work investigated the effect of feature
selection methods and their combinations on dialectal Arabic sentiment
classification. The feature selection methods are Information Gain (IG),
Correlation, Support Vector Machine (SVM), Gini Index (GI), and Chi-Square. A
number of experiments were carried out on dialectal Jordanian reviews
using an SVM classifier. Furthermore, the effect of different term weighting
schemes, stemmers, stop words removal, and feature models on the performance
were investigated. The experimental results showed that the best performance of
the SVM classifier was obtained after the SVM and correlation feature selection
methods had been combined with the uni-gram model.
| 2,019 | Computation and Language |
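A hedged sketch of a feature-selection-plus-SVM pipeline of the kind evaluated above, using chi-square selection over uni-gram TF-IDF features and a linear SVM; the toy English reviews and the choice of a single selector (rather than the combined SVM+correlation selection reported as best) are assumptions for illustration.

```python
# Uni-gram features -> chi-square feature selection -> linear SVM (illustrative pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = ["service was excellent", "terrible food never again",
           "really friendly staff", "rooms were dirty and noisy"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 1)),      # uni-gram model
    SelectKBest(chi2, k=5),                   # keep the 5 most informative features
    LinearSVC(),
)
clf.fit(reviews, labels)
print(clf.predict(["the staff was excellent"]))
```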
CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix
Space Model | Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to
its strong capabilities to encode word content, CBOW embeddings perform well on
a wide range of downstream tasks while being efficient to compute. However,
CBOW is not capable of capturing the word order. The reason is that the
computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ
and ZYX are the same. In order to address this shortcoming, we propose a
learning algorithm for the Compositional Matrix Space Model, which we call
Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of
word2vec, so that it can be trained on large quantities of unlabeled text. We
empirically show that CMOW better captures linguistic properties, but it is
inferior to CBOW in memorizing word content. Motivated by these findings, we
propose a hybrid model that combines the strengths of CBOW and CMOW. Our
results show that the hybrid CBOW-CMOW-model retains CBOW's strong ability to
memorize word content while at the same time substantially improving its
ability to encode other linguistic information by 8%. As a result, the hybrid
also performs better on 8 out of 11 supervised downstream tasks with an average
improvement of 1.2%.
| 2,019 | Computation and Language |
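A hedged numpy sketch of the key contrast drawn above: CBOW-style composition sums word vectors and is therefore order-insensitive, while CMOW-style composition multiplies word matrices and is not. The tiny random embeddings are illustrative only.

```python
# CBOW (commutative sum of vectors) vs CMOW (non-commutative product of matrices).
import numpy as np

rng = np.random.default_rng(0)
d = 4
words = ["X", "Y", "Z"]
vec = {w: rng.normal(size=d) for w in words}                            # CBOW-style word vectors
mat = {w: np.eye(d) + 0.1 * rng.normal(size=(d, d)) for w in words}     # CMOW-style word matrices

def cbow(seq):
    return sum(vec[w] for w in seq)

def cmow(seq):
    out = np.eye(d)
    for w in seq:
        out = out @ mat[w]
    return out.flatten()

print(np.allclose(cbow(["X", "Y", "Z"]), cbow(["Z", "Y", "X"])))   # True: order is ignored
print(np.allclose(cmow(["X", "Y", "Z"]), cmow(["Z", "Y", "X"])))   # False: order matters
```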
Self-Attention Aligner: A Latency-Control End-to-End Model for ASR Using
Self-Attention Network and Chunk-Hopping | Self-attention network, an attention-based feedforward neural network, has
recently shown the potential to replace recurrent neural networks (RNNs) in a
variety of NLP tasks. However, it is not clear if the self-attention network
could be a good alternative to RNNs in automatic speech recognition (ASR),
which processes longer speech sequences and may have online recognition
requirements. In this paper, we present a RNN-free end-to-end model:
self-attention aligner (SAA), which applies the self-attention networks to a
simplified recurrent neural aligner (RNA) framework. We also propose a
chunk-hopping mechanism, which enables the SAA model to encode on segmented
frame chunks one after another to support online recognition. Experiments on
two Mandarin ASR datasets show the replacement of RNNs by the self-attention
networks yields an 8.4%-10.2% relative character error rate (CER) reduction. In
addition, the chunk-hopping mechanism allows the SAA to have only a 2.5%
relative CER degradation with a 320ms latency. After jointly training with a
self-attention network language model, our SAA model obtains further error rate
reduction on multiple datasets. Especially, it achieves 24.12% CER on the
Mandarin ASR benchmark (HKUST), exceeding the best end-to-end model by over 2%
absolute CER.
| 2,019 | Computation and Language |
Investigating the Effect of Segmentation Methods on Neural Model based
Sentiment Analysis on Informal Short Texts in Turkish | This work investigates segmentation approaches for sentiment analysis on
informal short texts in Turkish. The two building blocks of the proposed work
are segmentation and deep neural network model. Segmentation focuses on
preprocessing of text with different methods. These methods are grouped in
four: morphological, sub-word, tokenization, and hybrid approaches. We analyzed
several variants for each of these four methods. The second stage focuses on
evaluation of the neural model for sentiment analysis. The performance of each
segmentation method is evaluated under Convolutional Neural Network (CNN) and
Recurrent Neural Network (RNN) models proposed in the literature for sentiment
classification.
| 2,019 | Computation and Language |
Author Profiling for Hate Speech Detection | The rapid growth of social media in recent years has fed into some highly
undesirable phenomena such as proliferation of abusive and offensive language
on the Internet. Previous research suggests that such hateful content tends to
come from users who share a set of common stereotypes and form communities
around them. The current state-of-the-art approaches to hate speech detection
are oblivious to user and community information and rely entirely on textual
(i.e., lexical and semantic) cues. In this paper, we propose a novel approach
to this problem that incorporates community-based profiling features of Twitter
users. Experimenting with a dataset of 16k tweets, we show that our methods
significantly outperform the current state of the art in hate speech detection.
Further, we conduct a qualitative analysis of model characteristics. We release
our code, pre-trained models and all the resources used in the public domain.
| 2,019 | Computation and Language |
Learned In Speech Recognition: Contextual Acoustic Word Embeddings | End-to-end acoustic-to-word speech recognition models have recently gained
popularity because they are easy to train, scale well to large amounts of
training data, and do not require a lexicon. In addition, word models may also
be easier to integrate with downstream tasks such as spoken language
understanding, because inference (search) is much simplified compared to
phoneme, character or any other sort of sub-word units. In this paper, we
describe methods to construct contextual acoustic word embeddings directly from
a supervised sequence-to-sequence acoustic-to-word speech recognition model
using the learned attention distribution. On a suite of 16 standard sentence
evaluation tasks, our embeddings show competitive performance against a
word2vec model trained on the speech transcriptions. In addition, we evaluate
these embeddings on a spoken language understanding task, and observe that our
embeddings match the performance of text-based embeddings in a pipeline of
first performing speech recognition and then constructing word embeddings from
transcriptions.
| 2,019 | Computation and Language |
A Walk-based Model on Entity Graphs for Relation Extraction | We present a novel graph-based neural network model for relation extraction.
Our model treats multiple pairs in a sentence simultaneously and considers
interactions among them. All the entities in a sentence are placed as nodes in
a fully-connected graph structure. The edges are represented with
position-aware contexts around the entity pairs. In order to consider different
relation paths between two entities, we construct up to l-length walks between
each pair. The resulting walks are merged and iteratively used to update the
edge representations into longer walks representations. We show that the model
achieves performance comparable to the state-of-the-art systems on the ACE 2005
dataset without using any external tools.
| 2,018 | Computation and Language |
Classifying textual data: shallow, deep and ensemble methods | This paper focuses on a comparative evaluation of the most common and modern
methods for text classification, including the recent deep learning strategies
and ensemble methods. The study is motivated by a challenging real data
problem, characterized by high-dimensional and extremely sparse data, deriving
from incoming calls to the customer care of an Italian phone company. We will
show that deep learning outperforms many classical (shallow) strategies but the
combination of shallow and deep learning methods in a unique ensemble
classifier may improve the robustness and the accuracy of "single"
classification methods.
| 2,019 | Computation and Language |
Predicting US State-Level Agricultural Sentiment as a Measure of Food
Security with Tweets from Farming Communities | The ability to obtain accurate food security metrics in developing areas
where relevant data can be sparse is critically important for policy makers
tasked with implementing food aid programs. As a result, a great deal of work
has been dedicated to predicting important food security metrics such as annual
crop yields using a variety of methods including simulation, remote sensing,
weather models, and human expert input. As a complement to existing techniques
in crop yield prediction, this work develops neural network models for
predicting the sentiment of Twitter feeds from farming communities.
Specifically, we investigate the potential of both direct learning on a small
dataset of agriculturally-relevant tweets and transfer learning from larger,
well-labeled sentiment datasets from other domains (e.g.~politics) to
accurately predict agricultural sentiment, which we hope would ultimately serve
as a useful crop yield predictor. We find that direct learning from small,
relevant datasets outperforms transfer learning from large, fully-labeled
datasets, that convolutional neural networks broadly outperform recurrent
neural networks on Twitter sentiment classification, and that these models
perform substantially less well on ternary sentiment problems characteristic of
practical settings than on binary problems often found in the literature.
| 2,019 | Computation and Language |
A novel repetition normalized adversarial reward for headline generation | While reinforcement learning can effectively improve language generation
models, it often suffers from generating incoherent and repetitive phrases
\cite{paulus2017deep}. In this paper, we propose a novel repetition normalized
adversarial reward to mitigate these problems. Our repetition penalized reward
can greatly reduce the repetition rate and adversarial training mitigates
generating incoherent phrases. Our model significantly outperforms the baseline
model on ROUGE-1\,(+3.24), ROUGE-L\,(+2.25), and a decreased repetition-rate
(-4.98\%).
| 2,019 | Computation and Language |
Sentence Compression via DC Programming Approach | Sentence compression is an important problem in natural language processing.
In this paper, we firstly establish a new sentence compression model based on
the probability model and the parse tree model. Our sentence compression model
is equivalent to an integer linear program (ILP) which can both guarantee the
syntax correctness of the compression and save the main meaning. We propose
using a DC (Difference of convex) programming approach (DCA) for finding local
optimal solution of our model. Combining DCA with a parallel-branch-and-bound
framework, we can find the global optimal solution. Numerical results demonstrate
the good quality of our sentence compression model and the excellent
performance of our proposed solution algorithm.
| 2,019 | Computation and Language |
Discovery of Natural Language Concepts in Individual Units of CNNs | Although deep convolutional networks have achieved improved performance in
many natural language tasks, they have been treated as black boxes because they
are difficult to interpret. Especially, little is known about how they
represent language in their intermediate layers. In an attempt to understand
the representations of deep convolutional networks trained on language tasks,
we show that individual units are selectively responsive to specific morphemes,
words, and phrases, rather than responding to arbitrary and uninterpretable
patterns. In order to quantitatively analyze such an intriguing phenomenon, we
propose a concept alignment method based on how units respond to the replicated
text. We conduct analyses with different architectures on multiple datasets for
classification and translation tasks and provide new insights into how deep
models understand natural language.
| 2,019 | Computation and Language |
Neural Machine Translation for Cebuano to Tagalog with Subword Unit
Translation | The Philippines is an archipelago composed of 7, 641 different islands with
more than 150 different languages. This linguistic differences and diversity,
though may be seen as a beautiful feature, have contributed to the difficulty
in the promotion of educational and cultural development of different domains
in the country. An effective machine translation system solely dedicated to
cater Philippine languages will surely help bridge this gap. In this research
work, a never before applied approach for language translation to a Philippine
language was used for a Cebuano to Tagalog translator. A Recurrent Neural
Network was used to implement the translator using OpenNMT sequence modeling
tool in TensorFlow. The performance of the translation was evaluated using the
BLEU Score metric. For the Cebuano to Tagalog translation, BLEU produced a
score of 20.01. A subword unit translation for verbs and copyable approach was
performed where commonly seen mistranslated words from the source to the target
were corrected. The BLEU score increased to 22.87. Though slightly higher, this
score still indicates that the translation is somehow understandable but is not
yet considered as a good translation.
| 2,019 | Computation and Language |
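A hedged sketch of computing a corpus-level BLEU score with NLTK, the metric used in this evaluation; the toy Tagalog reference/hypothesis pair and the smoothing choice are assumptions, not the study's data.

```python
# Corpus-level BLEU with NLTK (illustrative reference/hypothesis pair).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["kumain", "ako", "ng", "kanin", "kanina"]]]   # one list of references per hypothesis
hypotheses = [["kumain", "ako", "ng", "kanin"]]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {100 * score:.2f}")
```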
Semantic Neural Machine Translation using AMR | It is intuitive that semantic representations can be useful for machine
translation, mainly because they can help in enforcing meaning preservation and
handling data sparsity (many sentences correspond to one meaning) of machine
translation models. On the other hand, little work has been done on leveraging
semantics for neural machine translation (NMT). In this work, we study the
usefulness of AMR (short for abstract meaning representation) on NMT.
Experiments on a standard English-to-German dataset show that incorporating AMR
as additional knowledge can significantly improve a strong attention-based
sequence-to-sequence neural translation model.
| 2,019 | Computation and Language |