Titles | Abstracts | Years | Categories |
---|---|---|---|
Exploring and Improving Robustness of Multi Task Deep Neural Networks
via Domain Agnostic Defenses | In this paper, we explore the robustness of the Multi-Task Deep Neural
Networks (MT-DNN) against non-targeted adversarial attacks across Natural
Language Understanding (NLU) tasks as well as some possible ways to defend
against them. Liu et al. have shown that the Multi-Task Deep Neural Network, owing to the regularization effect produced by training on cross-task data, is more robust than a vanilla BERT model trained on only one task (a 1.1%-1.5% absolute difference). We further show that although the MT-DNN generalizes better, making it easily transferable across domains and tasks, it can still be compromised: after only two attacks (1-character and 2-character), accuracy drops by 42.05% and 32.24% on the SNLI and SciTail tasks. Finally, we propose a domain-agnostic defense which restores the model's accuracy (36.75% and 25.94%, respectively), in contrast to a general-purpose defense or an off-the-shelf spell checker.
| 2,020 | Computation and Language |
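As a rough illustration of the character-level perturbations this abstract refers to, the sketch below applies random 1- or 2-character substitutions; it is only a toy stand-in, since the paper's attacks choose edits adversarially rather than at random.

```python
import random

def char_attack(sentence, n_edits=1, seed=0):
    """Toy character-level perturbation: replace n_edits letters at random.

    A real adversarial attack would pick the substitutions that most degrade
    the target model, but the shape of the edit (1 or 2 characters) is the same.
    """
    rng = random.Random(seed)
    chars = list(sentence)
    letter_positions = [i for i, c in enumerate(chars) if c.isalpha()]
    k = min(n_edits, len(letter_positions))
    for i in rng.sample(letter_positions, k):
        chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

# Example: a 2-character attack on an SNLI-style premise.
print(char_attack("A man inspects the uniform of a figure.", n_edits=2))
```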
The empirical structure of word frequency distributions | The frequencies at which individual words occur across languages follow power
law distributions, a pattern of findings known as Zipf's law. A vast literature
argues over whether this serves to optimize the efficiency of human communication; however, this claim is necessarily post hoc, and it has been suggested that Zipf's law may in fact describe mixtures of other distributions.
From this perspective, recent findings that Sinosphere first (family) names are
geometrically distributed are notable, because this is actually consistent with
information theoretic predictions regarding optimal coding. First names form
natural communicative distributions in most languages, and I show that when
analyzed in relation to the communities in which they are used, first name
distributions across a diverse set of languages are both geometric and,
historically, remarkably similar, with power law distributions only emerging
when empirical distributions are aggregated. I then show this pattern of
findings replicates in communicative distributions of English nouns and verbs.
These results indicate that if lexical distributions support efficient
communication, they do so because their functional structures directly satisfy
the constraints described by information theory, and not because of Zipf's law.
Understanding the function of these information structures is likely to be key
to explaining humankind's remarkable communicative capacities.
| 2,020 | Computation and Language |
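For reference, the two distribution families contrasted in this abstract, written in a standard textbook parameterization (the exponent s and success probability p are generic symbols, not estimates from the paper):

```latex
% Zipf's law: relative frequency of the word of rank r follows a power law
P(r) \propto r^{-s}, \qquad s \approx 1
% Geometric distribution: probability that a name occupies rank k in its community
P(k) = (1-p)^{\,k-1}\, p, \qquad k = 1, 2, 3, \dots
```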
Language Models Are An Effective Patient Representation Learning
Technique For Electronic Health Record Data | Widespread adoption of electronic health records (EHRs) has fueled the
development of machine learning prediction models for various
clinical outcomes. This process is often constrained by having a relatively
small number of patient records for training the model. We demonstrate that
using patient representation schemes inspired by techniques in natural
language processing can increase the accuracy of clinical prediction models by
transferring information learned from the entire patient population to the task
of training a specific model, where only a subset of the population is
relevant. Such patient representation schemes enable a 3.5% mean improvement in
AUROC on five prediction tasks compared to standard baselines, with the average
improvement rising to 19% when only a small number of patient records are
available for training the clinical prediction model.
| 2,020 | Computation and Language |
Urdu-English Machine Transliteration using Neural Networks | Machine translation has gained much attention in recent years. It is a
sub-field of computational linguistics which focuses on translating text from one language to another. Among different translation techniques, neural networks currently lead the domain with their ability to provide a single large network combining attention mechanisms, sequence-to-sequence architectures and long short-term memory modelling. Despite significant progress in machine translation, the translation of out-of-vocabulary (OOV) words, which include technical terms, named entities and foreign words, is still a challenge for current state-of-the-art translation systems, and the situation becomes even worse when translating between low-resource languages or languages with different structures. Due to the morphological richness of a language, a word may have different meanings in different contexts. In such scenarios, word-level translation alone is not enough to provide a correct, high-quality translation. Transliteration is a way to take the context of a word or sentence into account during translation. For a low-resource language like Urdu, it is very difficult to find a parallel corpus for transliteration that is large enough to train a system. In this work, we present a transliteration technique based on Expectation Maximization (EM) which is unsupervised and language independent. The system learns patterns and out-of-vocabulary (OOV) words from a parallel corpus, and there is no need to train it on a transliteration corpus explicitly. The approach is tested on three statistical machine translation (SMT) models, namely phrase-based, hierarchical phrase-based and factored models, and on two neural machine translation models, namely an LSTM and a Transformer model.
| 2,020 | Computation and Language |
Dialectal Layers in West Iranian: a Hierarchical Dirichlet Process
Approach to Linguistic Relationships | This paper addresses a series of complex and unresolved issues in the
historical phonology of West Iranian languages. The West Iranian languages
(Persian, Kurdish, Balochi, and other languages) display a high degree of
non-Lautgesetzlich behavior. Most of this irregularity is undoubtedly due to
language contact; we argue, however, that an oversimplified view of the
processes at work has prevailed in the literature on West Iranian dialectology,
with specialists assuming that deviations from an expected outcome in a given
non-Persian language are due to lexical borrowing from some chronological stage
of Persian. It is demonstrated that this qualitative approach yields at times
problematic conclusions stemming from the lack of explicit probabilistic
inferences regarding the distribution of the data: Persian may not be the sole
donor language; additionally, borrowing at the lexical level is not always the
mechanism that introduces irregularity. In many cases, the possibility that
West Iranian languages show different reflexes in different conditioning
environments remains under-explored. We employ a novel Bayesian approach
designed to overcome these problems and tease apart the different determinants
of irregularity in patterns of West Iranian sound change. Our methodology
allows us to provisionally resolve a number of outstanding questions in the
literature on West Iranian dialectology concerning the dialectal affiliation of
certain sound changes. We outline future directions for work of this sort.
| 2,022 | Computation and Language |
Tensor Graph Convolutional Networks for Text Classification | Compared to sequential learning models, graph-based neural networks exhibit
some excellent properties, such as the ability to capture global information. In this paper, we investigate graph-based neural networks for the text classification problem. A new framework, TensorGCN (tensor graph convolutional networks), is presented for this task. A text graph tensor is first constructed to describe semantic, syntactic, and sequential contextual information. Then, two kinds of propagation learning are performed on the text graph tensor. The first is intra-graph
propagation used for aggregating information from neighborhood nodes in a
single graph. The second is inter-graph propagation used for harmonizing
heterogeneous information between graphs. Extensive experiments are conducted
on benchmark datasets, and the results illustrate the effectiveness of our
proposed framework. Our proposed TensorGCN presents an effective way to
harmonize and integrate heterogeneous information from different kinds of
graphs.
| 2,020 | Computation and Language |
Embedding Compression with Isotropic Iterative Quantization | Continuous representation of words is a standard component in deep
learning-based NLP models. However, representing a large vocabulary requires
significant memory, which can cause problems, particularly on
resource-constrained platforms. Therefore, in this paper we propose an
isotropic iterative quantization (IIQ) approach for compressing embedding
vectors into binary ones, leveraging the iterative quantization technique well
established for image retrieval, while satisfying the desired isotropic
property of PMI-based models. Experiments with pre-trained embeddings (i.e.,
GloVe and HDC) demonstrate a more than thirty-fold compression ratio with
comparable and sometimes even improved performance over the original
real-valued embedding vectors.
| 2,020 | Computation and Language |
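For background, here is a minimal sketch of the classic iterative quantization (ITQ) procedure from image retrieval that this abstract builds on; it is not the paper's isotropic IIQ variant, and the matrix names and iteration count are illustrative.

```python
import numpy as np

def itq_binarize(V, n_iter=50, seed=0):
    """Classic ITQ: learn a rotation R so that sign(V @ R) is a good binary code.

    V: (n_vectors, d) real-valued embeddings (e.g. mean-centered, PCA-reduced).
    Returns codes in {-1, +1} and the learned orthogonal rotation.
    """
    rng = np.random.default_rng(seed)
    d = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal start
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R, update binary codes
        U, _, Wt = np.linalg.svd(B.T @ V)  # fix B, solve orthogonal Procrustes
        R = (U @ Wt).T
    return np.sign(V @ R), R

# Usage: binarize 1000 toy 50-dimensional "embeddings".
codes, rotation = itq_binarize(np.random.randn(1000, 50))
```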
A Continuous Space Neural Language Model for Bengali Language | Language models are generally employed to estimate the probability
distribution of various linguistic units, making them one of the fundamental
parts of natural language processing. Applications of language models include a
wide spectrum of tasks such as text summarization, translation and
classification. For a low resource language like Bengali, the research in this
area so far can be considered to be narrow at the very least, with some
traditional count-based models having been proposed. This paper attempts to address
the issue and proposes a continuous-space neural language model, or more
specifically an ASGD weight dropped LSTM language model, along with techniques
to efficiently train it for the Bengali language. The performance analysis against some existing count-based models presented in this paper also shows that the proposed architecture outperforms its counterparts, achieving an inference perplexity as low as 51.2 on the held-out data set for Bengali.
| 2,020 | Computation and Language |
Authorship Attribution in Bangla literature using Character-level CNN | Characters are the smallest units of text from which stylometric signals can be extracted to determine the author of a text. In this paper, we investigate the
effectiveness of character-level signals in Authorship Attribution of Bangla
Literature and show that the results are promising but improvable. The time and
memory efficiency of the proposed model is much higher than the word level
counterparts but accuracy is 2-5% less than the best performing word-level
models. A comparison with various word-based models is performed, showing that the proposed model performs increasingly better with larger datasets. We also analyze the effect of pre-training character embeddings on the diverse Bangla character set for authorship attribution. We observe that performance improves by up to 10% with pre-training. We use two datasets with 6 to 14 authors, balancing them before training, and compare the results.
| 2,020 | Computation and Language |
A BERT based Sentiment Analysis and Key Entity Detection Approach for
Online Financial Texts | The emergence and rapid progress of the Internet have had an ever-increasing impact on the financial domain. How to rapidly and accurately mine key information from massive negative financial texts has become one of the key issues for investors and decision makers. To address this issue, we propose a sentiment analysis and key entity detection approach based on BERT, which is applied to online financial text mining and public opinion analysis in social media. Using a pre-trained model, we first study sentiment analysis, and then we treat key entity detection as a sentence matching or Machine Reading Comprehension (MRC) task at different granularities. Among them, we mainly focus
on negative sentimental information. We detect the specific entity by using our
approach, which is different from traditional Named Entity Recognition (NER).
In addition, we also use ensemble learning to improve the performance of
proposed approach. Experimental results show that the performance of our
approach is generally higher than that of SVM, LR, NBM, and BERT on two financial
sentiment analysis and key entity detection datasets.
| 2,020 | Computation and Language |
AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses | Many sequence-to-sequence dialogue models tend to generate safe,
uninformative responses. There have been various useful efforts on trying to
eliminate them. However, these approaches either improve decoding algorithms
during inference, rely on hand-crafted features, or employ complex models. In
our work, we build dialogue models that are dynamically aware of what
utterances or tokens are dull without any feature-engineering. Specifically, we
start with a simple yet effective automatic metric, AvgOut, which calculates
the average output probability distribution of all time steps on the decoder
side during training. This metric directly estimates which tokens are more
likely to be generated, thus making it a faithful evaluation of the model
diversity (i.e., for diverse models, the token probabilities should be more
evenly distributed rather than peaked at a few dull tokens). We then leverage
this novel metric to propose three models that promote diversity without losing
relevance. The first model, MinAvgOut, directly maximizes the diversity score
through the output distributions of each batch; the second model, Label
Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled
by the diversity score to control the diversity level; the third model, RL,
adopts Reinforcement Learning and treats the diversity score as a reward
signal. Moreover, we experiment with a hybrid model by combining the loss terms
of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on
both diversity and relevance by a large margin, and are comparable to or better
than competitive baselines (also verified via human evaluation). Moreover, our
approaches are orthogonal to the base model, making them applicable as an
add-on to other emerging better dialogue models in the future.
| 2,020 | Computation and Language |
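The averaging step of AvgOut is simple enough to sketch directly; the entropy-based diversity score at the end is only one illustrative way to turn the averaged distribution into a single number, not necessarily the paper's scoring.

```python
import numpy as np

def avgout(step_distributions):
    """Average the decoder's output distribution over all time steps.

    step_distributions: (T, V) array, each row a softmax over the vocabulary.
    The result estimates how strongly each token is favored overall.
    """
    return step_distributions.mean(axis=0)

def diversity_score(step_distributions, eps=1e-12):
    """Illustrative diversity score: entropy of AvgOut (higher = probability
    mass spread more evenly, i.e. fewer dull, always-favored tokens)."""
    p = avgout(step_distributions)
    return float(-(p * np.log(p + eps)).sum())

# Usage with random softmax outputs for a 10-step decoder and 1000-word vocabulary.
logits = np.random.randn(10, 1000)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(diversity_score(probs))
```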
A Unified System for Aggression Identification in English Code-Mixed and
Uni-Lingual Texts | Wide usage of social media platforms has increased the risk of aggression, which results in mental stress and negatively affects people's lives through psychological agony, fighting behavior, and disrespect towards others. The majority of such conversations contain code-mixed language [28]. Additionally, the way of expressing thoughts and the communication style change from one social media platform to another (e.g., communication styles differ between Twitter and Facebook). All of this increases the complexity of the problem. To solve these problems, we introduce a unified and robust multi-modal deep learning architecture which works for both an English code-mixed dataset and a uni-lingual English dataset. The devised system uses psycho-linguistic features and very basic linguistic features. Our multi-modal deep learning architecture contains a Deep Pyramid CNN, a Pooled BiLSTM, and a Disconnected RNN (with both GloVe and FastText embeddings). Finally, the system
takes the decision based on model averaging. We evaluated our system on English
Code-Mixed TRAC 2018 dataset and uni-lingual English dataset obtained from
Kaggle. Experimental results show that our proposed system outperforms all the
previous approaches on English code-mixed dataset and uni-lingual English
dataset.
| 2,020 | Computation and Language |
Stereotypical Bias Removal for Hate Speech Detection Task using
Knowledge-based Generalizations | With the ever-increasing cases of hate spread on social media platforms, it
is critical to design abuse detection mechanisms to proactively avoid and
control such incidents. While there exist methods for hate speech detection,
they stereotype words and hence suffer from inherently biased training. Bias
removal has been traditionally studied for structured datasets, but we aim at
bias mitigation from unstructured text data. In this paper, we make two
important contributions. First, we systematically design methods to quantify
the bias for any model and propose algorithms for identifying the set of words
which the model stereotypes. Second, we propose novel methods leveraging
knowledge-based generalizations for bias-free learning. Knowledge-based
generalizations provide an effective way to encode knowledge because the
abstraction they provide not only generalizes content but also facilitates
retraction of information from the hate speech detection classifier, thereby
reducing the imbalance. We experiment with multiple knowledge generalization
policies and analyze their effect on general performance and in mitigating
bias. Our experiments with two real-world datasets, a Wikipedia Talk Pages
dataset (WikiDetox) of size ~96k and a Twitter dataset of size ~24k, show that
the use of knowledge-based generalizations results in better performance by
forcing the classifier to learn from generalized content. Our methods utilize
existing knowledge-bases and can easily be extended to other tasks.
| 2,019 | Computation and Language |
Schema2QA: High-Quality and Low-Cost Q&A Agents for the Structured Web | Building a question-answering agent currently requires large annotated
datasets, which are prohibitively expensive. This paper proposes Schema2QA, an
open-source toolkit that can generate a Q&A system from a database schema
augmented with a few annotations for each field. The key concept is to cover
the space of possible compound queries on the database with a large number of
in-domain questions synthesized with the help of a corpus of generic query
templates. The synthesized data and a small paraphrase set are used to train a
novel neural network based on the BERT pretrained model. We use Schema2QA to
generate Q&A systems for five Schema.org domains (restaurants, people, movies, books, and music), and obtain an overall accuracy between 64% and 75% on
crowdsourced questions for these domains. Once annotations and paraphrases are
obtained for a Schema.org schema, no additional manual effort is needed to
create a Q&A agent for any website that uses the same schema. Furthermore, we
demonstrate that learning can be transferred from the restaurant to the hotel
domain, obtaining a 64% accuracy on crowdsourced questions with no manual
effort. Schema2QA achieves an accuracy of 60% on popular restaurant questions
that can be answered using Schema.org. Its performance is comparable to Google
Assistant, 7% lower than Siri, and 15% higher than Alexa. It outperforms all
these assistants by at least 18% on more complex, long-tail questions.
| 2,023 | Computation and Language |
AandP: Utilizing Prolog for converting between active sentence and
passive sentence with three-steps conversion | I introduce a simple but efficient method to handle one of the critical aspects of English grammar: the relationship between active and passive sentences. An active sentence and its corresponding passive sentence express the same meaning, but their structures are different. I utilized Prolog [4] along with Definite Clause Grammars (DCG) [5] to perform the conversion between active and passive sentences. Some advanced
techniques were also used such as Extra Arguments, Extra Goals, Lexicon, etc. I
tried to solve a variety of cases of active and passive sentences such as 12
English tenses, modal verbs, negative form, etc. More details and my
contributions will be presented in the following sections. The source code is
available at https://github.com/tqtrunghnvn/ActiveAndPassive.
| 2,020 | Computation and Language |
Enhancing lexical-based approach with external knowledge for Vietnamese
multiple-choice machine reading comprehension | Although Vietnamese is the 17th most popular native-speaker language in the
world, there are not many research studies on Vietnamese machine reading
comprehension (MRC), the task of understanding a text and answering questions
about it. One of the reasons is the lack of high-quality benchmark
datasets for this task. In this work, we construct a dataset which consists of
2,783 pairs of multiple-choice questions and answers based on 417 Vietnamese
texts which are commonly used for teaching reading comprehension for elementary
school pupils. In addition, we propose a lexical-based MRC method that utilizes
semantic similarity measures and external knowledge sources to analyze
questions and extract answers from the given text. We compare the performance
of the proposed model with several baseline lexical-based and neural
network-based models. Our proposed method achieves 61.81% by accuracy, which is
5.51% higher than the best baseline model. We also measure human performance on
our dataset and find that there is a big gap between machine-model and human
performances. This indicates that significant progress can be made on this
task. The dataset is freely available on our website for research purposes.
| 2,020 | Computation and Language |
Comparing Rule-based, Feature-based and Deep Neural Methods for
De-identification of Dutch Medical Records | Unstructured information in electronic health records provides an invaluable
resource for medical research. To protect the confidentiality of patients and
to conform to privacy regulations, de-identification methods automatically
remove personally identifying information from these medical records. However,
due to the unavailability of labeled data, most existing research is
constrained to English medical text and little is known about the
generalizability of de-identification methods across languages and domains. In
this study, we construct a varied dataset consisting of the medical records of
1260 patients by sampling data from nine institutes and three domains of Dutch
healthcare. We test the generalizability of three de-identification methods
across languages and domains. Our experiments show that an existing rule-based
method specifically developed for the Dutch language fails to generalize to
this new data. Furthermore, a state-of-the-art neural architecture performs
strongly across languages and domains, even with limited training data.
Compared to feature-based and rule-based methods, the neural method requires
significantly less configuration effort and domain-knowledge. We make all code
and pre-trained de-identification models available to the research community,
allowing practitioners to apply them to their datasets and to enable future
benchmarks.
| 2,020 | Computation and Language |
Speech Emotion Recognition Based on Multi-feature and Multi-lingual
Fusion | A speech emotion recognition algorithm based on multi-feature and multi-lingual fusion is proposed in order to address the low recognition accuracy caused by the lack of large speech datasets and the low robustness of acoustic features in speech emotion recognition. First, handcrafted and deep automatic features are extracted from existing Chinese and English speech emotion data. Then, the various features are fused for each language respectively. Finally, the fused features of the different languages are fused again and used to train a classification model. Comparing the fused features with the unfused ones, the results show that the fused features significantly enhance the accuracy of the speech emotion recognition algorithm. The proposed solution is evaluated on two Chinese corpora and two English corpora, and is shown to provide more accurate predictions compared to the original solution. As a result of this study, the multi-feature and multi-lingual fusion algorithm can significantly improve speech emotion recognition accuracy when the dataset is small.
| 2,020 | Computation and Language |
Lexical Sememe Prediction using Dictionary Definitions by Capturing
Local Semantic Correspondence | Sememes, defined as the minimum semantic units of human languages in
linguistics, have been proven useful in many NLP tasks. Since manual
construction and update of sememe knowledge bases (KBs) are costly, the task of
automatic sememe prediction has been proposed to assist sememe annotation. In
this paper, we explore the approach of applying dictionary definitions to
predicting sememes for unannotated words. We find that sememes of each word are
usually semantically matched to different words in its dictionary definition,
and we name this matching relationship local semantic correspondence.
Accordingly, we propose a Sememe Correspondence Pooling (SCorP) model, which is
able to capture this kind of matching to predict sememes. We evaluate our model
and baseline methods on HowNet, a well-known sememe KB, and find that our model
achieves state-of-the-art performance. Moreover, further quantitative analysis
shows that our model can properly learn the local semantic correspondence
between sememes and words in dictionary definitions, which explains the
effectiveness of our model. The source codes of this paper can be obtained from
https://github.com/thunlp/scorp.
| 2,020 | Computation and Language |
Multi-step Joint-Modality Attention Network for Scene-Aware Dialogue
System | Understanding dynamic scenes and dialogue contexts in order to converse with
users has been challenging for multimodal dialogue systems. The 8-th Dialog
System Technology Challenge (DSTC8) proposed an Audio Visual Scene-Aware Dialog
(AVSD) task, which contains multiple modalities including audio, vision, and
language, to evaluate how dialogue systems understand different modalities and respond to users. In this paper, we propose a multi-step joint-modality attention network (JMAN) based on a recurrent neural network (RNN) to reason over
videos. Our model performs a multi-step attention mechanism and jointly
considers both visual and textual representations in each reasoning process to
better integrate information from the two different modalities. Compared to the
baseline released by the AVSD organizers, our model achieves relative improvements of 12.1% and 22.4% on the ROUGE-L and CIDEr scores.
| 2,020 | Computation and Language |
RobBERT: a Dutch RoBERTa-based Language Model | Pre-trained language models have been dominating the field of natural
language processing in recent years, and have led to significant performance
gains for various complex natural language tasks. One of the most prominent
pre-trained language models is BERT, which was released as an English as well
as a multilingual version. Although multilingual BERT performs well on many
tasks, recent studies show that BERT models trained on a single language
significantly outperform the multilingual version. Training a Dutch BERT model
thus has a lot of potential for a wide range of Dutch NLP tasks. While previous
approaches have used earlier implementations of BERT to train a Dutch version
of BERT, we used RoBERTa, a robustly optimized BERT approach, to train a Dutch
language model called RobBERT. We measured its performance on various tasks as
well as the importance of the fine-tuning dataset size. We also evaluated the
importance of language-specific tokenizers and the model's fairness. We found
that RobBERT improves state-of-the-art results for various tasks, and
especially significantly outperforms other models when dealing with smaller
datasets. These results indicate that it is a powerful pre-trained model for a
large variety of Dutch language tasks. The pre-trained and fine-tuned models
are publicly available to support further downstream Dutch NLP applications.
| 2,020 | Computation and Language |
A Hybrid Solution to Learn Turn-Taking in Multi-Party Service-based Chat
Groups | Predicting the next most likely participant to interact in a multi-party conversation is a difficult problem. In a text-based chat group, the only information available is the sender, the content of the text and the dialogue history. In this paper we present our study on how this information can be used for the prediction task, through a corpus and an architecture that integrates
turn-taking classifiers based on Maximum Likelihood Expectation (MLE),
Convolutional Neural Networks (CNN) and Finite State Automata (FSA). The corpus
is a synthetic adaptation of the Multi-Domain Wizard-of-Oz dataset (MultiWOZ)
to a multiple travel service-based bots scenario with dialogue errors and was
created to simulate users' interactions and evaluate the architecture. We present experimental results which show that the CNN approach achieves better performance than the baseline, with an accuracy of 92.34%, while the integrated solution with MLE, CNN and FSA performs even better, reaching 95.65%.
| 2,020 | Computation and Language |
Modality-Balanced Models for Visual Dialogue | The Visual Dialog task requires a model to exploit both image and
conversational context information to generate the next response to the
dialogue. However, via manual analysis, we find that a large number of
conversational questions can be answered by only looking at the image without
any access to the context history, while others still need the conversation
context to predict the correct answers. We demonstrate that due to this reason,
previous joint-modality (history and image) models over-rely on and are more
prone to memorizing the dialogue history (e.g., by extracting certain keywords
or patterns in the context information), whereas image-only models are more
generalizable (because they cannot memorize or extract keywords from history)
and perform substantially better at the primary normalized discounted
cumulative gain (NDCG) task metric which allows multiple correct answers.
Hence, this observation encourages us to explicitly maintain two models, i.e.,
an image-only model and an image-history joint model, and combine their
complementary abilities for a more balanced multimodal model. We present
multiple methods for this integration of the two models, via ensemble and
consensus dropout fusion with shared parameters. Empirically, our models
achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and
high balance across metrics), and substantially outperform the winner of the
Visual Dialog challenge 2018 on most metrics.
| 2,020 | Computation and Language |
A Common Semantic Space for Monolingual and Cross-Lingual
Meta-Embeddings | This paper presents a new technique for creating monolingual and
cross-lingual meta-embeddings. Our method integrates multiple word embeddings
created from complementary techniques, textual sources, knowledge bases and
languages. Existing word vectors are projected to a common semantic space using
linear transformations and averaging. With our method the resulting
meta-embeddings maintain the dimensionality of the original embeddings without
losing information while dealing with the out-of-vocabulary problem. An
extensive empirical evaluation demonstrates the effectiveness of our technique
with respect to previous work on various intrinsic and extrinsic multilingual
evaluations, obtaining competitive results for Semantic Textual Similarity and
state-of-the-art performance for word similarity and POS tagging (English and
Spanish). The resulting cross-lingual meta-embeddings also exhibit excellent
cross-lingual transfer learning capabilities. In other words, we can leverage
pre-trained source embeddings from a resource-rich language in order to improve
the word representations for under-resourced languages.
| 2,021 | Computation and Language |
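A minimal sketch of the general recipe described here, projecting several embedding spaces into one with a linear (orthogonal Procrustes) map and then averaging; the use of the shared vocabulary as the alignment dictionary and all variable names are assumptions, not the paper's exact procedure.

```python
import numpy as np

def procrustes_map(src, tgt):
    """Orthogonal map W minimizing ||src @ W.T - tgt|| over aligned row pairs."""
    u, _, vt = np.linalg.svd(tgt.T @ src)
    return u @ vt

def meta_embed(spaces):
    """Project every embedding space onto the first one and average per word.

    spaces: list of dicts {word: np.ndarray}, all with the same dimensionality.
    """
    base = spaces[0]
    shared = sorted(set(base).intersection(*[set(s) for s in spaces[1:]]))
    tgt = np.stack([base[w] for w in shared])
    projected = [dict(base)]
    for s in spaces[1:]:
        W = procrustes_map(np.stack([s[w] for w in shared]), tgt)
        projected.append({w: v @ W.T for w, v in s.items()})
    vocab = set().union(*[set(p) for p in projected])
    return {w: np.mean([p[w] for p in projected if w in p], axis=0) for w in vocab}

# Usage with two tiny random "embedding spaces" sharing part of their vocabulary.
a = {w: np.random.randn(8) for w in ["cat", "dog", "car"]}
b = {w: np.random.randn(8) for w in ["cat", "dog", "bus"]}
meta = meta_embed([a, b])
print(sorted(meta), meta["cat"].shape)
```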
Adaptive Parameterization for Neural Dialogue Generation | Neural conversation systems generate responses based on the
sequence-to-sequence (SEQ2SEQ) paradigm. Typically, the model is equipped with
a single set of learned parameters to generate responses for given input
contexts. When confronting diverse conversations, its adaptability is rather
limited and the model is hence prone to generate generic responses. In this
work, we propose an {\bf Ada}ptive {\bf N}eural {\bf D}ialogue generation
model, \textsc{AdaND}, which manages various conversations with
conversation-specific parameterization. For each conversation, the model
generates parameters of the encoder-decoder by referring to the input context.
In particular, we propose two adaptive parameterization mechanisms: a
context-aware and a topic-aware parameterization mechanism. The context-aware
parameterization directly generates the parameters by capturing local semantics
of the given context. The topic-aware parameterization enables parameter
sharing among conversations with similar topics by first inferring the latent
topics of the given context and then generating the parameters with respect to
the distributional topics. Extensive experiments conducted on a large-scale
real-world conversational dataset show that our model achieves superior
performance in terms of both quantitative metrics and human evaluations.
| 2,020 | Computation and Language |
Capturing Evolution in Word Usage: Just Add More Clusters? | The way words are used evolves through time, mirroring the cultural or
technological evolution of society. Semantic change detection is the task of
detecting and analysing word evolution in textual data, even in short periods
of time. In this paper we focus on a new set of methods relying on
contextualised embeddings, a type of semantic modelling that revolutionised the
NLP field recently. We leverage the ability of the transformer-based BERT model
to generate contextualised embeddings capable of detecting semantic change of
words across time. Several approaches are compared in a common setting in order
to establish strengths and weaknesses for each of them. We also propose several
ideas for improvements, managing to drastically improve the performance of
existing approaches.
| 2,020 | Computation and Language |
Fair Transfer of Multiple Style Attributes in Text | To preserve anonymity and obfuscate their identity on online platforms users
may morph their text and portray themselves as a different gender or
demographic. Similarly, a chatbot may need to customize its communication style
to improve engagement with its audience. This manner of changing the style of
written text has gained significant attention in recent years. Yet these past
research works largely cater to the transfer of single style attributes. The
disadvantage of focusing on a single style alone is that this often results in
target text where other existing style attributes behave unpredictably or are
unfairly dominated by the new style. To counteract this behavior, it would be desirable to have a style transfer mechanism that can transfer or control multiple styles simultaneously and fairly. Through such an approach, one could obtain obfuscated or rewritten text that incorporates a desired degree of multiple soft styles such as female-quality, politeness, or formality.
In this work, we demonstrate that the transfer of multiple styles cannot be
achieved by sequentially performing multiple single-style transfers. This is
because each single style-transfer step often reverses or dominates over the
style incorporated by a previous transfer step. We then propose a neural
network architecture for fairly transferring multiple style attributes in a
given text. We test our architecture on the Yelp data set to demonstrate our
superior performance as compared to existing one-style transfer steps performed
in a sequence.
| 2,020 | Computation and Language |
From Speech-to-Speech Translation to Automatic Dubbing | We present enhancements to a speech-to-speech translation pipeline in order
to perform automatic dubbing. Our architecture features neural machine
translation generating output of preferred length, prosodic alignment of the
translation with the original speech segments, neural text-to-speech with fine
tuning of the duration of each utterance, and, finally, audio rendering that enriches the text-to-speech output with background noise and reverberation
extracted from the original audio. We report on a subjective evaluation of
automatic dubbing of excerpts of TED Talks from English into Italian, which
measures the perceived naturalness of automatic dubbing and the relative
importance of each proposed enhancement.
| 2,020 | Computation and Language |
A multimodal deep learning approach for named entity recognition from
social media | Named Entity Recognition (NER) from social media posts is a challenging task.
User-generated content, which forms the nature of social media, is noisy and contains grammatical and linguistic errors. This noisy content makes tasks such as named entity recognition much harder. We propose two novel deep
learning approaches utilizing multimodal deep learning and Transformers. Both
of our approaches use image features from short social media posts to provide
better results on the NER task. In the first approach, we extract image features using InceptionV3 and use fusion to combine textual and image features. This yields more reliable named entity recognition when the images related to the entities are provided by the user. In the second approach, we combine image features with the text and feed them into a BERT-like Transformer.
The experimental results, namely, the precision, recall and F1 score metrics
show the superiority of our work compared to other state-of-the-art NER
solutions.
| 2,021 | Computation and Language |
Nested-Wasserstein Self-Imitation Learning for Sequence Generation | Reinforcement learning (RL) has been widely studied for improving
sequence-generation models. However, the conventional rewards used for RL
training typically cannot capture sufficient semantic information and therefore
render model bias. Further, the sparse and delayed rewards make RL exploration
inefficient. To alleviate these issues, we propose the concept of
nested-Wasserstein distance for distributional semantic matching. To further
exploit it, a novel nested-Wasserstein self-imitation learning framework is
developed, encouraging the model to exploit historical high-rewarded sequences
for enhanced exploration and better semantic matching. Our solution can be
understood as approximately executing proximal policy optimization with
Wasserstein trust-regions. Experiments on a variety of unconditional and
conditional sequence-generation tasks demonstrate the proposed approach
consistently leads to improved performance.
| 2,020 | Computation and Language |
Audio Summarization with Audio Features and Probability Distribution
Divergence | The automatic summarization of multimedia sources is an important task that
facilitates an individual's understanding by condensing the source while maintaining relevant information. In this paper we focus on audio summarization based on audio features and probability distribution divergence. Our
method, based on an extractive summarization approach, aims to select the most
relevant segments until a time threshold is reached. It takes into account the
segment's length, position and informativeness value. Informativeness of each
segment is obtained by mapping a set of audio features issued from its
Mel-frequency Cepstral Coefficients and their corresponding Jensen-Shannon
divergence score. Results over a multi-evaluator scheme show that our approach
provides understandable and informative summaries.
| 2,020 | Computation and Language |
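An illustrative sketch of scoring a segment's informativeness with the Jensen-Shannon divergence between the distribution of its MFCC values and that of the full recording; the histogram binning and smoothing are assumptions, and the paper additionally weighs segment length and position.

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) = KL(p || q)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

def informativeness(segment_mfcc, full_mfcc, bins=32):
    """Compare histograms of a segment's MFCC values against the whole recording."""
    lo, hi = full_mfcc.min(), full_mfcc.max()
    p, _ = np.histogram(segment_mfcc, bins=bins, range=(lo, hi))
    q, _ = np.histogram(full_mfcc, bins=bins, range=(lo, hi))
    return js_divergence(p + 1e-9, q + 1e-9)  # smooth to avoid empty bins

# Usage with fake MFCC matrices (frames x 13 coefficients).
full = np.random.randn(5000, 13)
segment = full[1000:1200]
print(informativeness(segment, full))
```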
Recommending Themes for Ad Creative Design via Visual-Linguistic
Representations | There is a perennial need in the online advertising industry to refresh ad
creatives, i.e., images and text used for enticing online users towards a
brand. Such refreshes are required to reduce the likelihood of ad fatigue among
online users, and to incorporate insights from other successful campaigns in
related product categories. Given a brand, to come up with themes for a new ad
is a painstaking and time consuming process for creative strategists.
Strategists typically draw inspiration from the images and text used for past
ad campaigns, as well as world knowledge on the brands. To automatically infer
ad themes via such multimodal sources of information in past ad campaigns, we
propose a theme (keyphrase) recommender system for ad creative strategists. The
theme recommender is based on aggregating results from a visual question
answering (VQA) task, which ingests the following: (i) ad images, (ii) text
associated with the ads as well as Wikipedia pages on the brands in the ads,
and (iii) questions around the ad. We leverage transformer based cross-modality
encoders to train visual-linguistic representations for our VQA task. We study
two formulations for the VQA task along the lines of classification and
ranking; via experiments on a public dataset, we show that cross-modal
representations lead to significantly better classification accuracy and
ranking precision-recall metrics. Cross-modal representations show better
performance compared to separate image and text representations. In addition,
the use of multimodal information shows a significant lift over using only
textual or visual information.
| 2,020 | Computation and Language |
Text-based inference of moral sentiment change | We present a text-based framework for investigating moral sentiment change of
the public via longitudinal corpora. Our framework is based on the premise that
language use can inform people's moral perception toward right or wrong, and we
build our methodology by exploring moral biases learned from diachronic word
embeddings. We demonstrate how a parameter-free model supports inference of
historical shifts in moral sentiment toward concepts such as slavery and
democracy over centuries at three incremental levels: moral relevance, moral
polarity, and fine-grained moral dimensions. We apply this methodology to
visualizing moral time courses of individual concepts and analyzing the
relations between psycholinguistic variables and rates of moral sentiment
change at scale. Our work offers opportunities for applying natural language
processing toward characterizing moral sentiment change in society.
| 2,020 | Computation and Language |
Multi-level Head-wise Match and Aggregation in Transformer for Textual
Sequence Matching | Transformer has been successfully applied to many natural language processing
tasks. However, for textual sequence matching, simple matching between the
representation of a pair of sequences might bring in unnecessary noise. In this
paper, we propose a new approach to sequence pair matching with Transformer, by
learning head-wise matching representations on multiple levels. Experiments
show that our proposed approach can achieve new state-of-the-art performance on
multiple tasks that rely only on pre-computed sequence-vector-representation,
such as SNLI, MNLI-match, MNLI-mismatch, QQP, and SQuAD-binary.
| 2,020 | Computation and Language |
A Hierarchical Location Normalization System for Text | These days it is natural for people to learn about local events from massive collections of documents. Many texts contain location information, such as a city name or road name, which is often incomplete or latent. It is important to extract the administrative area of the text and organize the hierarchy of areas, a task called location normalization. Existing location detection systems either exclude
hierarchical normalization or present only a few specific regions. We propose a
system named ROIBase that normalizes the text by the Chinese hierarchical
administrative divisions. ROIBase adopts a co-occurrence constraint as the
basic framework to score the hit of the administrative area, achieves the
inference by special embeddings, and expands the recall by the ROI (region of
interest). It offers high efficiency and interpretability because it is mainly built on definite knowledge and has less complex logic than supervised models. We demonstrate that ROIBase achieves better performance
against feasible solutions and is useful as a strong support system for
location normalization.
| 2,020 | Computation and Language |
Length-controllable Abstractive Summarization by Guiding with Summary
Prototype | We propose a new length-controllable abstractive summarization model. Recent
state-of-the-art abstractive summarization models based on encoder-decoder
models generate only one summary per source text. However, controllable
summarization, especially of the length, is an important aspect for practical
applications. Previous studies on length-controllable abstractive summarization
incorporate length embeddings in the decoder module for controlling the summary
length. Although the length embeddings can control where to stop decoding, they
do not decide which information should be included in the summary within the
length constraint. Unlike the previous models, our length-controllable
abstractive summarization model incorporates a word-level extractive module in
the encoder-decoder model instead of length embeddings. Our model generates a
summary in two steps. First, our word-level extractor extracts a sequence of
important words (we call it the "prototype text") from the source text
according to the word-level importance scores and the length constraint.
Second, the prototype text is used as additional input to the encoder-decoder
model, which generates a summary by jointly encoding and copying words from
both the prototype text and source text. Since the prototype text is a guide to
both the content and length of the summary, our model can generate an
informative and length-controlled summary. Experiments with the CNN/Daily Mail
dataset and the NEWSROOM dataset show that our model outperformed previous
models in length-controlled settings.
| 2,020 | Computation and Language |
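A toy sketch of the first step described above, word-level extraction of a "prototype text" under a length budget; the importance scores are placeholders where the paper uses a learned word-level extractor.

```python
def extract_prototype(words, importance, max_len):
    """Keep the highest-scoring words (at most max_len of them) in source order.

    words: tokens of the source text.
    importance: one score per token (learned in the paper; given here).
    max_len: the length constraint on the prototype text, in tokens.
    """
    ranked = sorted(range(len(words)), key=lambda i: importance[i], reverse=True)
    keep = sorted(ranked[:max_len])  # restore original word order
    return [words[i] for i in keep]

# Usage: a 9-token "article" reduced to a 4-token prototype.
words = "the cat sat on the mat near the door".split()
scores = [0.1, 0.9, 0.7, 0.2, 0.1, 0.8, 0.3, 0.1, 0.6]
print(extract_prototype(words, scores, max_len=4))  # ['cat', 'sat', 'mat', 'door']
```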
A Physical Embedding Model for Knowledge Graphs | Knowledge graph embedding methods learn continuous vector representations for
entities in knowledge graphs and have been used successfully in a large number
of applications. We present a novel and scalable paradigm for the computation
of knowledge graph embeddings, which we dub PYKE . Our approach combines a
physical model based on Hooke's law and its inverse with ideas from simulated
annealing to compute embeddings for knowledge graphs efficiently. We prove that
PYKE achieves a linear space complexity. While the time complexity for the
initialization of our approach is quadratic, the time complexity of each of its
iterations is linear in the size of the input knowledge graph. Hence, PYKE's
overall runtime is close to linear. Consequently, our approach easily scales up
to knowledge graphs containing millions of triples. We evaluate our approach
against six state-of-the-art embedding approaches on the DrugBank and DBpedia
datasets in two series of experiments. The first series shows that the cluster
purity achieved by PYKE is up to 26% (absolute) better than that of the state
of art. In addition, PYKE is more than 22 times faster than existing embedding
solutions in the best case. The results of our second series of experiments
show that PYKE is up to 23% (absolute) better than the state of art on the task
of type prediction while maintaining its superior scalability. Our
implementation and results are open-source and are available at
http://github.com/dice-group/PYKE.
| 2,020 | Computation and Language |
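A highly simplified sketch of the physical intuition named in this abstract: Hooke's-law attraction between connected entities, inverse repulsion between unconnected ones, and a decaying step size in the spirit of simulated annealing. PYKE's actual update rules, neighborhood construction and linear-time iterations differ; everything below is illustrative.

```python
import numpy as np

def toy_physical_embedding(n, edges, dim=2, iters=200, seed=0):
    """Embed n graph nodes with spring-like attraction and inverse repulsion."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    edge_set = {(min(i, j), max(i, j)) for i, j in edges}
    step = 0.1
    for _ in range(iters):
        grad = np.zeros_like(X)
        for i in range(n):
            for j in range(i + 1, n):
                diff = X[i] - X[j]
                dist = np.linalg.norm(diff) + 1e-9
                if (i, j) in edge_set:
                    force = -diff              # Hooke's law: pull neighbors together
                else:
                    force = diff / dist ** 2   # inverse law: push strangers apart
                grad[i] += force
                grad[j] -= force
        X += step * grad
        step *= 0.98                           # anneal the step size
    return X

print(toy_physical_embedding(4, [(0, 1), (1, 2)]).round(2))
```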
Domain-Aware Dialogue State Tracker for Multi-Domain Dialogue Systems | In task-oriented dialogue systems the dialogue state tracker (DST) component
is responsible for predicting the state of the dialogue based on the dialogue
history. Current DST approaches rely on a predefined domain ontology, a fact
that limits their effective usage for large scale conversational agents, where
the DST constantly needs to be interfaced with ever-increasing services and
APIs. To overcome this drawback, we propose a domain-aware dialogue state tracker that is completely data-driven and is modeled to predict for dynamic service schemas. The proposed model utilizes domain and slot information to extract both domain-specific and slot-specific representations for a
given dialogue, and then uses such representations to predict the values of the
corresponding slot. Integrating this mechanism with a pretrained language model
(i.e. BERT), our approach can effectively learn semantic relations.
| 2,020 | Computation and Language |
Generating Sense Embeddings for Syntactic and Semantic Analogy for
Portuguese | Word embeddings are numerical vectors which can represent words or concepts
in a low-dimensional continuous space. These vectors are able to capture useful
syntactic and semantic information. The traditional approaches like Word2Vec,
GloVe and FastText have a significant drawback: they produce a single vector
representation per word ignoring the fact that ambiguous words can assume
different meanings. In this paper we use techniques to generate sense
embeddings and present the first experiments carried out for Portuguese. Our
experiments show that sense vectors outperform traditional word vectors in
syntactic and semantic analogy tasks, proving that the language resource
generated here can improve the performance of NLP tasks in Portuguese.
| 2,019 | Computation and Language |
Improving Interaction Quality Estimation with BiLSTMs and the Impact on
Dialogue Policy Learning | Learning suitable and well-performing dialogue behaviour in statistical
spoken dialogue systems has been a focus of research for many years. While
most work which is based on reinforcement learning employs an objective measure
like task success for modelling the reward signal, we use a reward based on
user satisfaction estimation. We propose a novel estimator and show that it
outperforms all previous estimators while learning temporal dependencies
implicitly. Furthermore, we apply this novel user satisfaction estimation model
live in simulated experiments where the satisfaction estimation model is
trained on one domain and applied in many other domains which cover a similar
task. We show that applying this model results in higher estimated
satisfaction, similar task success rates and a higher robustness to noise.
| 2,020 | Computation and Language |
Exploiting Cloze Questions for Few Shot Text Classification and Natural
Language Inference | Some NLP tasks can be solved in a fully unsupervised fashion by providing a
pretrained language model with "task descriptions" in natural language (e.g.,
Radford et al., 2019). While this approach underperforms its supervised
counterpart, we show in this work that the two ideas can be combined: We
introduce Pattern-Exploiting Training (PET), a semi-supervised training
procedure that reformulates input examples as cloze-style phrases to help
language models understand a given task. These phrases are then used to assign
soft labels to a large set of unlabeled examples. Finally, standard supervised
training is performed on the resulting training set. For several tasks and
languages, PET outperforms supervised training and strong semi-supervised
approaches in low-resource settings by a large margin.
| 2,021 | Computation and Language |
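A tiny illustration of the cloze-style reformulation PET relies on; the pattern, the verbalizer words and the way the masked-LM probabilities are turned into soft labels are generic examples, not the paper's exact patterns or training loop.

```python
# Reformulate a sentiment example as a cloze phrase a masked language model can fill.
def make_cloze(text):
    """Pattern: append a cloze question whose [MASK] a masked LM will complete."""
    return f"{text} All in all, it was [MASK]."

VERBALIZER = {"great": "positive", "terrible": "negative"}

def soft_label(mask_word_probs):
    """Turn the LM's probabilities for the verbalizer words into a soft label.

    mask_word_probs: dict word -> probability at the [MASK] position, as produced
    by any masked language model.
    """
    scores = {VERBALIZER[w]: mask_word_probs.get(w, 0.0) for w in VERBALIZER}
    total = sum(scores.values()) or 1.0
    return {label: s / total for label, s in scores.items()}

# Usage with made-up LM probabilities for the two label words.
print(make_cloze("The popcorn was stale and the plot made no sense."))
print(soft_label({"great": 0.03, "terrible": 0.41}))
```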
Where New Words Are Born: Distributional Semantic Analysis of Neologisms
and Their Semantic Neighborhoods | We perform statistical analysis of the phenomenon of neology, the process by
which new words emerge in a language, using large diachronic corpora of
English. We investigate the importance of two factors, semantic sparsity and
frequency growth rates of semantic neighbors, formalized in the distributional
semantics paradigm. We show that both factors are predictive of word emergence
although we find more support for the latter hypothesis. Besides presenting a
new linguistic application of distributional semantics, this study tackles the
linguistic question of the role of language-internal factors (in our case,
sparsity) in language change motivated by language-external factors (reflected
in frequency growth).
| 2,020 | Computation and Language |
Shared task: Lexical semantic change detection in German (Student
Project Report) | Recent NLP architectures have illustrated in various ways how semantic change
can be captured across time and domains. However, in terms of evaluation there
is a lack of benchmarks to compare the performance of these systems against
each other. We present the results of the first shared task on unsupervised
lexical semantic change detection (LSCD) in German based on the evaluation
framework proposed by Schlechtweg et al. (2019).
| 2,020 | Computation and Language |
Elephant in the Room: An Evaluation Framework for Assessing Adversarial
Examples in NLP | An adversarial example is an input transformed by small perturbations that
machine learning models consistently misclassify. While there are a number of
methods proposed to generate adversarial examples for text data, it is not
trivial to assess the quality of these adversarial examples, as minor
perturbations (such as changing a word in a sentence) can lead to a significant
shift in their meaning, readability and classification label. In this paper, we
propose an evaluation framework consisting of a set of automatic evaluation
metrics and human evaluation guidelines, to rigorously assess the quality of
adversarial examples based on the aforementioned properties. We experiment with six benchmark attacking methods and find that some methods generate adversarial examples with poor readability and content preservation. We also find that multiple factors can influence the attacking performance, such as the length of the text inputs and the architecture of the classifiers.
| 2,020 | Computation and Language |
Normalization of Input-output Shared Embeddings in Text Generation
Models | Neural network based models have been the state of the art for various Natural Language Processing tasks; however, the input and output dimension problem in these networks has still not been fully resolved, especially in text generation tasks (e.g. Machine Translation, Text Summarization) in which both input and output have huge vocabularies. Therefore, input-output embedding weight sharing has been introduced and widely adopted, yet it still leaves room for improvement. Based on linear algebra and statistical theory, this paper locates the shortcoming of the existing input-output embedding weight sharing method and then proposes methods for improving input-output shared embeddings, among which normalization of the embedding weight matrices shows the best performance. These methods are nearly free of computational cost, can be combined with other embedding techniques, and show good effectiveness when applied to state-of-the-art neural network models. For Transformer-big models, the normalization techniques yield at best a 0.6 BLEU improvement over the original model on the WMT'16 En-De dataset, and similar BLEU improvements on the IWSLT'14 datasets. For DynamicConv models, a 0.5 BLEU improvement is attained on the WMT'16 En-De dataset, and a 0.41 BLEU improvement on the IWSLT'14 De-En translation task.
| 2,020 | Computation and Language |
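The abstract does not spell out which normalization of the embedding weight matrix works best, so the sketch below shows one plausible form, row-wise L2 normalization of a weight matrix shared between the input embedding and the output projection; treat it purely as an assumption-labeled illustration.

```python
import numpy as np

def normalize_shared_embedding(W, eps=1e-8):
    """Row-normalize a shared input/output embedding weight matrix.

    W: (vocab_size, d_model) matrix used both to embed input tokens and,
    via a matrix product, to project decoder states back onto the vocabulary.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / (norms + eps)

# The same normalized matrix serves both roles of weight sharing.
vocab_size, d_model = 1000, 64
W = normalize_shared_embedding(np.random.randn(vocab_size, d_model))
token_ids = np.array([3, 17, 256])
input_vectors = W[token_ids]        # embedding lookup for input tokens
decoder_state = np.random.randn(d_model)
output_logits = W @ decoder_state   # tied output projection
print(input_vectors.shape, output_logits.shape)
```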
ARAACOM: ARAbic Algerian Corpus for Opinion Mining | Nowadays, it is no longer necessary to make an enormous effort to distribute forms to thousands of people, collect them, and then convert them into electronic format in order to track people's opinions about some subjects. Many web sites can today reach a large audience with less effort. The majority of web sites invite their visitors to leave feedback about their impressions of the site or of events. This provides us with a lot of data, which requires powerful means to exploit. Opinion mining on the web is becoming an increasingly attractive task, due to the growing need of individuals and societies to track people's mood towards several subjects of daily life (sports, politics, television, ...). A lot of work on opinion mining has been carried out for Western languages, especially English, while such work for the Arabic language is still very scarce. In this paper, we propose our approach for opinion mining in Arabic Algerian newspapers. CCS CONCEPTS: Information systems~Sentiment analysis; Computing methodologies~Natural language processing
| 2,017 | Computation and Language |
ManyModalQA: Modality Disambiguation and QA over Diverse Inputs | We present a new multimodal question answering challenge, ManyModalQA, in
which an agent must answer a question by considering three distinct modalities:
text, images, and tables. We collect our data by scraping Wikipedia and then
utilize crowdsourcing to collect question-answer pairs. Our questions are
ambiguous, in that the modality that contains the answer is not easily
determined based solely upon the question. To demonstrate this ambiguity, we
construct a modality selector (or disambiguator) network, and this model gets
substantially lower accuracy on our challenge set, compared to existing
datasets, indicating that our questions are more ambiguous. By analyzing this
model, we investigate which words in the question are indicative of the
modality. Next, we construct a simple baseline ManyModalQA model, which, based
on the prediction from the modality selector, fires a corresponding pre-trained
state-of-the-art unimodal QA model. We focus on providing the community with a
new manymodal evaluation set and only provide a fine-tuning set, with the
expectation that existing datasets and approaches will be transferred for most
of the training, to encourage low-resource generalization without large,
monolithic training sets for each new task. There is a significant gap between
our baseline models and human performance; therefore, we hope that this
challenge encourages research in end-to-end modality disambiguation and
multimodal QA models, as well as transfer learning. Code and data available at:
https://github.com/hannandarryl/ManyModalQA
| 2,020 | Computation and Language |
TLT-school: a Corpus of Non Native Children Speech | This paper describes "TLT-school", a corpus of speech utterances collected in
schools of northern Italy for assessing the performance of students learning
both English and German. The corpus was recorded in 2017 and 2018 from students
aged between nine and sixteen years, attending primary, middle and high school.
All utterances have been scored, in terms of some predefined proficiency
indicators, by human experts. In addition, most of the utterances recorded in
2017 have been carefully transcribed by hand. The guidelines and procedures
used for the manual transcription of utterances are described in detail, as
are the results achieved with an automatic speech recognition system that we
developed. Part of the corpus will be freely distributed to the scientific
community, in particular to researchers interested in non-native speech
recognition and in the automatic assessment of second language proficiency.
| 2,020 | Computation and Language |
Contextualized Embeddings in Named-Entity Recognition: An Empirical
Study on Generalization | Contextualized embeddings use unsupervised language model pretraining to
compute word representations depending on their context. This is intuitively
useful for generalization, especially in Named-Entity Recognition where it is
crucial to detect mentions never seen during training. However, standard
English benchmarks overestimate the importance of lexical over contextual
features because of an unrealistic lexical overlap between train and test
mentions. In this paper, we perform an empirical analysis of the generalization
capabilities of state-of-the-art contextualized embeddings by separating
mentions by novelty and with out-of-domain evaluation. We show that they are
particularly beneficial for unseen mentions detection, especially
out-of-domain. For models trained on CoNLL03, language model contextualization
leads to a maximal relative micro-F1 score increase of +1.2% in-domain,
compared to +13% out-of-domain on the WNUT dataset.
| 2,020 | Computation and Language |
A Simple Baseline to Semi-Supervised Domain Adaptation for Machine
Translation | State-of-the-art neural machine translation (NMT) systems are data-hungry and
perform poorly on new domains with no supervised data. As data collection is
expensive and infeasible in many cases, domain adaptation methods are needed.
In this work, we propose a simple but effective approach to the semi-supervised
domain adaptation scenario of NMT, where the aim is to improve the performance
of a translation model on the target domain consisting of only non-parallel
data with the help of supervised source domain data. This approach iteratively
trains a Transformer-based NMT model via three training objectives: language
modeling, back-translation, and supervised translation. We evaluate this method
on two adaptation settings: adaptation between specific domains and adaptation
from a general domain to specific domains, and on two language pairs: German to
English and Romanian to English. With substantial performance improvement
achieved---up to +19.31 BLEU over the strongest baseline, and +47.69 BLEU
improvement over the unadapted model---we present this method as a simple but
tough-to-beat baseline in the field of semi-supervised domain adaptation for
NMT.
| 2,020 | Computation and Language |
Multilingual Denoising Pre-training for Neural Machine Translation | This paper demonstrates that multilingual denoising pre-training produces
significant performance gains across a wide variety of machine translation (MT)
tasks. We present mBART -- a sequence-to-sequence denoising auto-encoder
pre-trained on large-scale monolingual corpora in many languages using the BART
objective. mBART is one of the first methods for pre-training a complete
sequence-to-sequence model by denoising full texts in multiple languages, while
previous approaches have focused only on the encoder, decoder, or
reconstructing parts of the text. Pre-training a complete model allows it to be
directly fine-tuned for supervised (both sentence-level and document-level) and
unsupervised machine translation, with no task-specific modifications. We
demonstrate that adding mBART initialization produces performance gains in all
but the highest-resource settings, including up to 12 BLEU points for
low-resource MT and over 5 BLEU points for many document-level and unsupervised
models. We also show that it enables new types of transfer to language pairs
with no bi-text or that were not in the pre-training corpus, and present
extensive analysis of which factors contribute the most to effective
pre-training.
| 2,020 | Computation and Language |
Transition-Based Dependency Parsing using Perceptron Learner | Syntactic parsing using dependency structures has become a standard technique
in natural language processing with many different parsing models, in
particular data-driven models that can be trained on syntactically annotated
corpora. In this paper, we tackle transition-based dependency parsing using a
Perceptron Learner. Our proposed model, which adds more relevant features to
the Perceptron Learner, outperforms a baseline arc-standard parser. We beat the
UAS of the MALT and LSTM parsers. We also give possible ways to address parsing
of non-projective trees.
| 2,020 | Computation and Language |
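As a rough illustration of the perceptron learner discussed in the abstract above, the sketch below shows a standard perceptron update for choosing among arc-standard transitions. The transition set and feature templates are hypothetical simplifications, not the ones used in the paper.

```python
from collections import defaultdict

TRANSITIONS = ["SHIFT", "LEFT-ARC", "RIGHT-ARC"]  # simplified arc-standard set

class PerceptronParser:
    """Greedy transition classifier trained with perceptron updates."""

    def __init__(self):
        # One sparse weight vector per transition.
        self.weights = {t: defaultdict(float) for t in TRANSITIONS}

    def features(self, stack, buffer):
        # Hypothetical feature templates: top-of-stack and front-of-buffer words.
        s0 = stack[-1] if stack else "<EMPTY>"
        b0 = buffer[0] if buffer else "<EMPTY>"
        return [f"s0={s0}", f"b0={b0}", f"s0|b0={s0}|{b0}"]

    def score(self, transition, feats):
        return sum(self.weights[transition][f] for f in feats)

    def predict(self, stack, buffer):
        feats = self.features(stack, buffer)
        return max(TRANSITIONS, key=lambda t: self.score(t, feats))

    def update(self, gold, predicted, stack, buffer):
        # Standard perceptron update: reward gold features, penalize predicted.
        if gold == predicted:
            return
        for f in self.features(stack, buffer):
            self.weights[gold][f] += 1.0
            self.weights[predicted][f] -= 1.0


# Minimal usage on a toy configuration.
parser = PerceptronParser()
stack, buffer = ["ROOT", "saw"], ["the", "dog"]
pred = parser.predict(stack, buffer)
parser.update("RIGHT-ARC", pred, stack, buffer)  # learn from the gold transition
```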
Pre-training via Leveraging Assisting Languages and Data Selection for
Neural Machine Translation | Sequence-to-sequence (S2S) pre-training using large monolingual data is known
to improve performance for various S2S NLP tasks in low-resource settings.
However, large monolingual corpora might not always be available for the
languages of interest (LOI). To this end, we propose to exploit monolingual
corpora of other languages to complement the scarcity of monolingual corpora
for the LOI. A case study of low-resource Japanese-English neural machine
translation (NMT) reveals that leveraging large Chinese and French monolingual
corpora can help overcome the shortage of Japanese and English monolingual
corpora, respectively, for S2S pre-training. We further show how to utilize
script mapping (Chinese to Japanese) to increase the similarity between the two
monolingual corpora leading to further improvements in translation quality.
Additionally, we propose simple data-selection techniques to be used prior to
pre-training that significantly impact the quality of S2S pre-training. An
empirical comparison of our proposed methods reveals that leveraging assisting
language monolingual corpora, data selection and script mapping are extremely
important for NMT pre-training in low-resource scenarios.
| 2,020 | Computation and Language |
CheckThat! at CLEF 2020: Enabling the Automatic Identification and
Verification of Claims in Social Media | We describe the third edition of the CheckThat! Lab, which is part of the
2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four
complementary tasks and a related task from previous lab editions, offered in
English, Arabic, and Spanish. Task 1 asks to predict which tweets in a Twitter
stream are worth fact-checking. Task 2 asks to determine whether a claim posted
in a tweet can be verified using a set of previously fact-checked claims. Task
3 asks to retrieve text snippets from a given set of Web pages that would be
useful for verifying a target tweet's claim. Task 4 asks to predict the
veracity of a target tweet's claim using a set of Web pages and potentially
useful snippets in them. Finally, the lab offers a fifth task that asks to
predict the check-worthiness of the claims made in English political debates
and speeches. CheckThat! features a full evaluation framework. The evaluation
is carried out using mean average precision or precision at rank k for ranking
tasks, and F1 for classification tasks.
| 2,018 | Computation and Language |
Variational Hierarchical Dialog Autoencoder for Dialog State Tracking
Data Augmentation | Recent works have shown that generative data augmentation, where synthetic
samples generated from deep generative models complement the training dataset,
benefits NLP tasks. In this work, we extend this approach to the task of dialog
state tracking for goal-oriented dialogs. Due to the inherent hierarchical
structure of goal-oriented dialogs over utterances and related annotations, the
deep generative model must be capable of capturing the coherence among
different hierarchies and types of dialog features. We propose the Variational
Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of
goal-oriented dialogs, including linguistic features and underlying structured
annotations, namely speaker information, dialog acts, and goals. The proposed
architecture is designed to model each aspect of goal-oriented dialogs using
inter-connected latent variables and learns to generate coherent goal-oriented
dialogs from the latent spaces. To overcome training issues that arise from
training complex variational models, we propose appropriate training
strategies. Experiments on various dialog datasets show that our model improves
the downstream dialog trackers' robustness via generative data augmentation. We
also discover additional benefits of our unified approach to modeling
goal-oriented dialogs: dialog response generation and user simulation, where
our model outperforms previous strong baselines.
| 2,020 | Computation and Language |
A Study of the Tasks and Models in Machine Reading Comprehension | To provide a survey on the existing tasks and models in Machine Reading
Comprehension (MRC), this report reviews: 1) the dataset collection and
performance evaluation of some representative simple-reasoning and
complex-reasoning MRC tasks; 2) the architecture designs, attention mechanisms,
and performance-boosting approaches for developing neural-network-based MRC
models; 3) some recently proposed transfer learning approaches to incorporating
text-style knowledge contained in external corpora into the neural networks of
MRC models; 4) some recently proposed knowledge base encoding approaches to
incorporating graph-style knowledge contained in external knowledge bases into
the neural networks of MRC models. Finally, based on what has been achieved
and what remains deficient, this report also proposes some open problems for
future research.
| 2,020 | Computation and Language |
Action Recognition and State Change Prediction in a Recipe Understanding
Task Using a Lightweight Neural Network Model | Consider a natural language sentence describing a specific step in a food
recipe. In such instructions, recognizing actions (such as press, bake, etc.)
and the resulting changes in the state of the ingredients (shape molded,
custard cooked, temperature hot, etc.) is a challenging task. One way to cope
with this challenge is to explicitly model a simulator module that applies
actions to entities and predicts the resulting outcome (Bosselut et al. 2018).
However, such a model can be unnecessarily complex. In this paper, we propose a
simplified neural network model that separates action recognition and state
change prediction, while coupling the two through a novel loss function. This
allows the two learning tasks to indirectly influence each other. Our model, although
simpler, achieves higher state change prediction performance (67% average
accuracy for ours vs. 55% in (Bosselut et al. 2018)) and takes fewer samples to
train (10K ours vs. 65K+ by (Bosselut et al. 2018)).
| 2,020 | Computation and Language |
Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment | Existing entity alignment methods mainly vary on the choices of encoding the
knowledge graph, but they typically use the same decoding method, which
independently chooses the local optimal match for each source entity. This
decoding method may not only cause the "many-to-one" problem but also neglect
the coordinated nature of this task, that is, each alignment decision may
highly correlate to the other decisions. In this paper, we introduce two
coordinated reasoning methods, i.e., the Easy-to-Hard decoding strategy and
joint entity alignment algorithm. Specifically, the Easy-to-Hard strategy first
retrieves the model-confident alignments from the predicted results and then
incorporates them as additional knowledge to resolve the remaining
model-uncertain alignments. To achieve this, we further propose an enhanced
alignment model that is built on the current state-of-the-art baseline. In
addition, to address the many-to-one problem, we propose to jointly predict
entity alignments so that the one-to-one constraint can be naturally
incorporated into the alignment prediction. Experimental results show that our
model achieves the state-of-the-art performance and our reasoning methods can
also significantly improve existing baselines.
| 2,020 | Computation and Language |
Reducing Non-Normative Text Generation from Language Models | Large-scale, transformer-based language models such as GPT-2 are pretrained
on diverse corpora scraped from the internet. Consequently, they are prone to
generating non-normative text (i.e. in violation of social norms). We introduce
a technique for fine-tuning GPT-2, using a policy gradient reinforcement
learning technique and a normative text classifier to produce reward and
punishment values. We evaluate our technique on five data sets using automated
and human participant experiments. The normative text classifier is 81-90%
accurate when compared to gold-standard human judgments of normative and
non-normative generated text. Our normative fine-tuning technique is able to
reduce non-normative text by 27-61%, depending on the data set.
| 2,020 | Computation and Language |
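The fine-tuning recipe described above pairs a normative text classifier with a policy-gradient objective. The snippet below is a minimal REINFORCE-style sketch of such an update; the mapping from classifier scores to rewards and the tiny stand-in "generator" in the demo are assumptions for illustration, not the paper's exact setup.

```python
import torch

def policy_gradient_step(log_probs: torch.Tensor,
                         normative_scores: torch.Tensor,
                         optimizer: torch.optim.Optimizer) -> float:
    """One reward-weighted policy-gradient update.

    log_probs:        sum of token log-probabilities per generated sample, shape (B,)
    normative_scores: classifier probability that each sample is normative, shape (B,)
    """
    # Map classifier scores to rewards/punishments centred at zero (assumption).
    rewards = 2.0 * normative_scores - 1.0
    loss = -(rewards.detach() * log_probs).mean()  # maximize reward-weighted log-prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Tiny demo with a single trainable logit vector standing in for a generator.
param = torch.zeros(4, requires_grad=True)
opt = torch.optim.SGD([param], lr=0.1)
fake_log_probs = torch.log_softmax(param, dim=-1)       # pseudo sample log-probs
fake_scores = torch.tensor([0.9, 0.1, 0.8, 0.2])        # pseudo classifier outputs
print(policy_gradient_step(fake_log_probs, fake_scores, opt))
```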
Semi-Autoregressive Training Improves Mask-Predict Decoding | The recently proposed mask-predict decoding algorithm has narrowed the
performance gap between semi-autoregressive machine translation models and the
traditional left-to-right approach. We introduce a new training method for
conditional masked language models, SMART, which mimics the semi-autoregressive
behavior of mask-predict, producing training examples that contain model
predictions as part of their inputs. Models trained with SMART produce
higher-quality translations when using mask-predict decoding, effectively
closing the remaining performance gap with fully autoregressive models.
| 2,020 | Computation and Language |
Linguistic Fingerprints of Internet Censorship: the Case of SinaWeibo | This paper studies how the linguistic components of blogposts collected from
Sina Weibo, a Chinese microblogging platform, might affect the blogposts'
likelihood of being censored. Our results go along with King et al. (2013)'s
Collective Action Potential (CAP) theory, which states that a blogpost's
potential of causing riot or assembly in real life is the key determinant of it
getting censored. Although there is not a definitive measure of this construct,
the linguistic features that we identify as discriminatory go along with the
CAP theory. We build a classifier that significantly outperforms non-expert
humans in predicting whether a blogpost will be censored. The crowdsourcing
results suggest that while humans tend to see censored blogposts as more
controversial and more likely to trigger action in real life than the
uncensored counterparts, they in general cannot make a better guess than our
model when it comes to `reading the mind' of the censors in deciding whether a
blogpost should be censored. We do not claim that censorship is only determined
by the linguistic features. There are many other factors contributing to
censorship decisions. The focus of the present paper is on the linguistic form
of blogposts. Our work suggests that it is possible to use linguistic
properties of social media posts to automatically predict if they are going to
be censored.
| 2,020 | Computation and Language |
Exploration Based Language Learning for Text-Based Games | This work presents an exploration and imitation-learning-based agent capable
of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural
language and expect the player to interact with the game using text. These
games are of interest as they can be seen as a testbed for language
understanding, problem-solving, and language generation by artificial agents.
Moreover, they provide a learning environment in which these skills can be
acquired through interactions with an environment rather than using fixed
corpora. One aspect that makes these games particularly challenging for
learning agents is the combinatorially large action space. Existing methods for
solving text-based games are limited to games that are either very simple or
have an action space restricted to a predetermined set of admissible actions.
In this work, we propose to use the exploration approach of Go-Explore for
solving text-based games. More specifically, in an initial exploration phase,
we first extract trajectories with high rewards, after which we train a policy
to solve the game by imitating these trajectories. Our experiments show that
this approach outperforms existing solutions in solving text-based games, and
it is more sample efficient in terms of the number of interactions with the
environment. Moreover, we show that the learned policy can generalize better
than existing solutions to unseen games without using any restriction on the
action space.
| 2,020 | Computation and Language |
MT-BioNER: Multi-task Learning for Biomedical Named Entity Recognition
using Deep Bidirectional Transformers | Conversational agents such as Cortana, Alexa and Siri are continuously
working on increasing their capabilities by adding new domains. The support of
a new domain includes the design and development of a number of NLU components
for domain classification, intents classification and slots tagging (including
named entity recognition). First, each component only performs well when trained
on a large amount of labeled data. Second, these components are deployed on
limited-memory devices, which requires some model compression. Third, for some
domains such as the health domain, it is hard to find a single training data
set that covers all the required slot types. To overcome these mentioned
problems, we present a multi-task transformer-based neural architecture for
slot tagging. We consider the training of a slot tagger using multiple data
sets covering different slot types as a multi-task learning problem. The
experimental results on the biomedical domain have shown that the proposed
approach outperforms the previous state-of-the-art systems for slot tagging on
the different benchmark biomedical datasets in terms of (time and memory)
efficiency and effectiveness. The output slot tagger can be used by the
conversational agent to better identify entities in the input utterances.
| 2,020 | Computation and Language |
An Iterative Approach for Identifying Complaint Based Tweets in Social
Media Platforms | Twitter is a social media platform where users express opinions over a
variety of issues. Posts voicing grievances or complaints can be utilized by
private and public organizations to improve their services and to obtain a
prompt, low-cost assessment. In this paper, we propose an iterative methodology
which aims to identify complaint-based posts pertaining to the transport
domain. We perform comprehensive evaluations and release a novel dataset for
research purposes.
| 2,020 | Computation and Language |
Learning To Detect Keyword Parts And Whole By Smoothed Max Pooling | We propose smoothed max pooling loss and its application to keyword spotting
systems. The proposed approach jointly trains an encoder (to detect keyword
parts) and a decoder (to detect the whole keyword) in a semi-supervised manner. The
proposed new loss function allows training a model to detect the parts and the whole
of a keyword, without strictly depending on frame-level labeling from LVCSR (Large
vocabulary continuous speech recognition), making further optimization
possible. The proposed system outperforms the baseline keyword spotting model
in [1] due to increased optimizability. Further, it can be more easily adapted
for on-device learning applications due to reduced dependency on LVCSR.
| 2,020 | Computation and Language |
BERT's output layer recognizes all hidden layers? Some Intriguing
Phenomena and a simple way to boost BERT | Although Bidirectional Encoder Representations from Transformers (BERT) have
achieved tremendous success in many natural language processing (NLP) tasks, it
remains a black box. A variety of previous works have tried to lift the veil of
BERT and understand each layer's functionality. In this paper, we find that,
surprisingly, the output layer of BERT can reconstruct the input sentence by
directly taking each layer of BERT as input, even though the output layer has
never seen the input other than the final hidden layer. This fact remains true
across a wide variety of BERT-based models, even when some layers are
duplicated. Based on this observation, we propose a quite simple method to
boost the performance of BERT. By duplicating some layers in BERT-based
models to make them deeper (no extra training required in this step), the models
obtain better performance in downstream tasks after fine-tuning.
| 2,021 | Computation and Language |
Intent Classification in Question-Answering Using LSTM Architectures | Question-answering (QA) is certainly the best known and probably also one of
the most complex problems within Natural Language Processing (NLP) and
artificial intelligence (AI). Since the complete solution to the problem of
finding a generic answer still seems far away, the wisest thing to do is to
break down the problem by solving single simpler parts. Assuming a modular
approach to the problem, we confine our research to intent classification for
an answer, given a question. Through the use of an LSTM network, we show how
this type of classification can be approached effectively and efficiently, and
how it can be properly used within a basic prototype responder.
| 2,020 | Computation and Language |
An Analysis of Word2Vec for the Italian Language | Word representation is fundamental in NLP tasks, because it is precisely from
the coding of semantic closeness between words that it is possible to think of
teaching a machine to understand text. Despite the spread of word embedding
concepts, few achievements have so far been reported for linguistic contexts
other than English. In this work, by analysing the semantic capacity of the
Word2Vec algorithm, an embedding for the Italian language is produced.
Parameter settings such as the number of epochs, the size of the context window
and the number of negative samples are explored.
| 2,020 | Computation and Language |
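For readers who want to reproduce a comparable setup, the snippet below sketches how such an Italian embedding could be trained with gensim's Word2Vec. The toy corpus and the hyperparameter values are placeholders, not the settings reported in the paper.

```python
from gensim.models import Word2Vec

# Placeholder corpus: in practice this would be a large tokenized Italian corpus.
sentences = [
    ["il", "gatto", "dorme", "sul", "divano"],
    ["la", "macchina", "corre", "veloce"],
]

# Hyperparameters (epochs, window, negative samples) are illustrative only.
model = Word2Vec(
    sentences,
    vector_size=100,   # embedding dimensionality
    window=5,          # context window size
    negative=10,       # negative samples per positive example
    epochs=20,
    min_count=1,
    sg=1,              # skip-gram variant
)

# Query the learned embedding space.
print(model.wv.most_similar("gatto", topn=3))
```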
Generating Representative Headlines for News Stories | Millions of news articles are published online every day, which can be
overwhelming for readers to follow. Grouping articles that are reporting the
same event into news stories is a common way of assisting readers in their news
consumption. However, it remains a challenging research problem to efficiently
and effectively generate a representative headline for each story. Automatic
summarization of a document set has been studied for decades, while few studies
have focused on generating representative headlines for a set of articles.
Unlike summaries, which aim to capture most information with least redundancy,
headlines aim to capture information jointly shared by the story articles in
short length, and exclude information that is too specific to each individual
article. In this work, we study the problem of generating representative
headlines for news stories. We develop a distant supervision approach to train
large-scale generation models without any human annotation. This approach
centers on two technical components. First, we propose a multi-level
pre-training framework that incorporates massive unlabeled corpus with
different quality-vs.-quantity balance at different levels. We show that models
trained within this framework outperform those trained with pure human curated
corpus. Second, we propose a novel self-voting-based article attention layer to
extract salient information shared by multiple articles. We show that models
that incorporate this layer are robust to potential noises in news stories and
outperform existing baselines with or without noises. We can further enhance
our model by incorporating human labels, and we show our distant supervision
approach significantly reduces the demand on labeled data.
| 2,020 | Computation and Language |
DUMA: Reading Comprehension with Transposition Thinking | Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the
correct answer from a set of answer options when given a passage and a
question. Thus in addition to a powerful Pre-trained Language Model (PrLM) as
encoder, multi-choice MRC especially relies on a matching network design which
is supposed to effectively capture the relationships among the triplet of
passage, question and answers. While the newer and more powerful PrLMs have
shown their mightiness even without the support from a matching network, we
propose a new DUal Multi-head Co-Attention (DUMA) model, which is inspired by
the human transposition thinking process for solving the multi-choice MRC problem:
respectively considering each other's focus from the standpoint of passage and
question. The proposed DUMA has been shown effective and is capable of
generally promoting PrLMs. Our proposed method is evaluated on two benchmark
multi-choice MRC tasks, DREAM and RACE, showing that in terms of powerful
PrLMs, DUMA can still boost the model to reach new state-of-the-art
performance.
| 2,022 | Computation and Language |
From Stock Prediction to Financial Relevance: Repurposing Attention
Weights to Assess News Relevance Without Manual Annotations | We present a method to automatically identify financially relevant news using
stock price movements and news headlines as input. The method repurposes the
attention weights of a neural network initially trained to predict stock prices
to assign a relevance score to each headline, eliminating the need for manually
labeled training data. Our experiments on the four most relevant US stock
indices and 1.5M news headlines show that the method ranks relevant news
highly, and that this ranking quality is positively correlated with the
accuracy of the initial stock price prediction task.
| 2,021 | Computation and Language |
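The core idea above is that attention weights learned for one task (price prediction) can be read off as relevance scores. A minimal PyTorch sketch of that pattern follows; the model architecture and embedding sizes are hypothetical, chosen only to show how the attention weights double as a per-headline score.

```python
import torch
import torch.nn as nn

class HeadlineAttentionScorer(nn.Module):
    """Price predictor whose attention over headlines doubles as a relevance score."""

    def __init__(self, d_emb: int = 64):
        super().__init__()
        self.attn = nn.Linear(d_emb, 1)  # scores each headline embedding
        self.out = nn.Linear(d_emb, 1)   # predicts the price movement

    def forward(self, headline_embs: torch.Tensor):
        # headline_embs: (num_headlines, d_emb) for one trading day
        weights = torch.softmax(self.attn(headline_embs).squeeze(-1), dim=0)
        pooled = weights @ headline_embs          # attention-weighted pooling
        return self.out(pooled), weights          # prediction + relevance scores


model = HeadlineAttentionScorer()
embs = torch.randn(5, 64)                         # 5 placeholder headline embeddings
prediction, relevance = model(embs)
print(relevance)  # after training on price movement, these rank news relevance
```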
TaxoExpan: Self-supervised Taxonomy Expansion with Position-Enhanced
Graph Neural Network | Taxonomies consist of machine-interpretable semantics and provide valuable
knowledge for many web applications. For example, online retailers (e.g.,
Amazon and eBay) use taxonomies for product recommendation, and web search
engines (e.g., Google and Bing) leverage taxonomies to enhance query
understanding. Enormous efforts have been made on constructing taxonomies
either manually or semi-automatically. However, with the fast-growing volume of
web content, existing taxonomies will become outdated and fail to capture
emerging knowledge. Therefore, in many applications, dynamic expansions of an
existing taxonomy are in great demand. In this paper, we study how to expand an
existing taxonomy by adding a set of new concepts. We propose a novel
self-supervised framework, named TaxoExpan, which automatically generates a set
of <query concept, anchor concept> pairs from the existing taxonomy as training
data. Using such self-supervision data, TaxoExpan learns a model to predict
whether a query concept is the direct hyponym of an anchor concept. We develop
two innovative techniques in TaxoExpan: (1) a position-enhanced graph neural
network that encodes the local structure of an anchor concept in the existing
taxonomy, and (2) a noise-robust training objective that enables the learned
model to be insensitive to the label noise in the self-supervision data.
Extensive experiments on three large-scale datasets from different domains
demonstrate both the effectiveness and the efficiency of TaxoExpan for taxonomy
expansion.
| 2,020 | Computation and Language |
Retrospective Reader for Machine Reading Comprehension | Machine reading comprehension (MRC) is an AI challenge that requires machine
to determine the correct answers to questions based on a given passage. MRC
systems must not only answer questions when necessary but also distinguish when
no answer is available according to the given passage and then tactfully
abstain from answering. When unanswerable questions are involved in the MRC
task, an essential verification module called verifier is especially required
in addition to the encoder, though the latest practice on MRC modeling still
most benefits from adopting well pre-trained language models as the encoder
block by only focusing on the "reading". This paper devotes itself to exploring
better verifier design for the MRC task with unanswerable questions. Inspired
by how humans solve reading comprehension questions, we propose a
retrospective reader (Retro-Reader) that integrates two stages of reading and
verification strategies: 1) sketchy reading that briefly investigates the
overall interactions of passage and question, and yields an initial judgment; 2)
intensive reading that verifies the answer and gives the final prediction. The
proposed reader is evaluated on two benchmark MRC challenge datasets SQuAD2.0
and NewsQA, achieving new state-of-the-art results. Significance tests show
that our model is significantly better than the strong ELECTRA and ALBERT
baselines. A series of analyses is also conducted to interpret the
effectiveness of the proposed reader.
| 2,020 | Computation and Language |
Scaling Up Online Speech Recognition Using ConvNets | We design an online end-to-end speech recognition system based on Time-Depth
Separable (TDS) convolutions and Connectionist Temporal Classification (CTC).
We improve the core TDS architecture in order to limit the future context and
hence reduce latency while maintaining accuracy. The system has almost three
times the throughput of a well tuned hybrid ASR baseline while also having
lower latency and a better word error rate. Also important to the efficiency of
the recognizer is our highly optimized beam search decoder. To show the impact
of our design choices, we analyze throughput, latency, accuracy, and discuss
how these metrics can be tuned based on the user requirements.
| 2,020 | Computation and Language |
The POLAR Framework: Polar Opposites Enable Interpretability of
Pre-Trained Word Embeddings | We introduce POLAR - a framework that adds interpretability to pre-trained
word embeddings via the adoption of semantic differentials. Semantic
differentials are a psychometric construct for measuring the semantics of a
word by analysing its position on a scale between two polar opposites (e.g.,
cold -- hot, soft -- hard). The core idea of our approach is to transform
existing, pre-trained word embeddings via semantic differentials to a new
"polar" space with interpretable dimensions defined by such polar opposites.
Our framework also allows for selecting the most discriminative dimensions from
a set of polar dimensions provided by an oracle, i.e., an external source. We
demonstrate the effectiveness of our framework by deploying it to various
downstream tasks, in which our interpretable word embeddings achieve a
performance that is comparable to the original word embeddings. We also show
that the interpretable dimensions selected by our framework align with human
judgement. Together, these results demonstrate that interpretability can be
added to word embeddings without compromising performance. Our work is relevant
for researchers and engineers interested in interpreting pre-trained word
embeddings.
| 2,020 | Computation and Language |
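The transformation described above can be pictured with a small numpy sketch: given pre-trained vectors and a handful of polar-opposite word pairs, embeddings are re-expressed in the basis spanned by the difference vectors of those pairs. The word vectors and pairs below are random/hypothetical, and the projection is a simplified least-squares variant rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pre-trained embeddings (in practice: word2vec/GloVe vectors).
vocab = ["ice", "lava", "pillow", "rock"]
emb = {w: rng.normal(size=50) for w in vocab}

# Polar opposite pairs defining the interpretable dimensions.
polar_pairs = [("cold", "hot"), ("soft", "hard")]
for w in {w for pair in polar_pairs for w in pair}:
    emb.setdefault(w, rng.normal(size=50))

# Each polar dimension is the direction from one pole to the other.
dir_matrix = np.stack([emb[b] - emb[a] for a, b in polar_pairs])  # (2, 50)

def to_polar_space(vec: np.ndarray) -> np.ndarray:
    """Express a word vector in the polar basis via least squares."""
    coords, *_ = np.linalg.lstsq(dir_matrix.T, vec, rcond=None)
    return coords  # one coordinate per polar dimension

for word in vocab:
    print(word, np.round(to_polar_space(emb[word]), 2))
```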
Towards Quantifying the Distance between Opinions | Increasingly, critical decisions in public policy, governance, and business
strategy rely on a deeper understanding of the needs and opinions of
constituent members (e.g. citizens, shareholders). While it has become easier
to collect a large number of opinions on a topic, there is a necessity for
automated tools to help navigate the space of opinions. In such contexts
understanding and quantifying the similarity between opinions is key. We find
that measures based solely on text similarity or on overall sentiment often
fail to effectively capture the distance between opinions. Thus, we propose a
new distance measure for capturing the similarity between opinions that
leverages the nuanced observation -- similar opinions express similar sentiment
polarity on specific relevant entities-of-interest. Specifically, in an
unsupervised setting, our distance measure achieves significantly better
Adjusted Rand Index scores (up to 56x) and Silhouette coefficients (up to 21x)
compared to existing approaches. Similarly, in a supervised setting, our
opinion distance measure achieves considerably better accuracy (up to a 20%
increase) compared to extant approaches that rely on text similarity, stance
similarity, and sentiment similarity.
| 2,020 | Computation and Language |
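To make the idea concrete, here is a hedged sketch of an entity-anchored opinion distance: each opinion is summarized by its sentiment toward a shared set of entities-of-interest, and the distance is computed between those sentiment profiles. The entity list, the dictionary input format, and the normalization are placeholders; the actual measure in the paper may weight or aggregate differently.

```python
import numpy as np

ENTITIES = ["tax", "healthcare"]  # hypothetical entities-of-interest

def entity_sentiment_profile(opinion: dict) -> np.ndarray:
    """Map an opinion to a vector of per-entity sentiment polarities in [-1, 1].

    `opinion` is assumed to be a dict {entity: polarity}; entities the opinion
    does not mention get a neutral 0.
    """
    return np.array([opinion.get(e, 0.0) for e in ENTITIES])

def opinion_distance(op_a: dict, op_b: dict) -> float:
    """Distance between two opinions based on their entity sentiment profiles."""
    a, b = entity_sentiment_profile(op_a), entity_sentiment_profile(op_b)
    return float(np.linalg.norm(a - b)) / (2 * np.sqrt(len(ENTITIES)))  # in [0, 1]

# Two opinions that agree on healthcare but disagree on tax.
print(opinion_distance({"tax": 0.9, "healthcare": 0.5},
                       {"tax": -0.8, "healthcare": 0.6}))
```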
PMIndia -- A Collection of Parallel Corpora of Languages of India | Parallel text is required for building high-quality machine translation (MT)
systems, as well as for other multilingual NLP applications. For many South
Asian languages, such data is in short supply. In this paper, we describe a
new publicly available corpus (PMIndia) consisting of parallel sentences which
pair 13 major languages of India with English. The corpus includes up to 56000
sentences for each language pair. We explain how the corpus was constructed,
including an assessment of two different automatic sentence alignment methods,
and present some initial NMT results on the corpus.
| 2,020 | Computation and Language |
Towards a Human-like Open-Domain Chatbot | We present Meena, a multi-turn open-domain chatbot trained end-to-end on data
mined and filtered from public domain social media conversations. This 2.6B
parameter neural network is simply trained to minimize perplexity of the next
token. We also propose a human evaluation metric called Sensibleness and
Specificity Average (SSA), which captures key elements of a human-like
multi-turn conversation. Our experiments show strong correlation between
perplexity and SSA. The fact that the best perplexity end-to-end trained Meena
scores high on SSA (72% on multi-turn evaluation) suggests that a human-level
SSA of 86% is potentially within reach if we can better optimize perplexity.
Additionally, the full version of Meena (with a filtering mechanism and tuned
decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots
we evaluated.
| 2,020 | Computation and Language |
SemClinBr -- a multi institutional and multi specialty semantically
annotated corpus for Portuguese clinical NLP tasks | The high volume of research focusing on extracting patient's information from
electronic health records (EHR) has led to an increase in the demand for
annotated corpora, which are a very valuable resource for both the development
and evaluation of natural language processing (NLP) algorithms. The absence of
a multi-purpose clinical corpus outside the scope of the English language,
especially in Brazilian Portuguese, is glaring and severely impacts scientific
progress in the biomedical NLP field. In this study, we developed a
semantically annotated corpus using clinical texts from multiple medical
specialties, document types, and institutions. We present the following: (1) a
survey listing common aspects and lessons learned from previous research, (2) a
fine-grained annotation schema which could be replicated and guide other
annotation initiatives, (3) a web-based annotation tool focusing on an
annotation suggestion feature, and (4) both intrinsic and extrinsic evaluation
of the annotations. The result of this work is the SemClinBr, a corpus that has
1,000 clinical notes, labeled with 65,117 entities and 11,263 relations, and
can support a variety of clinical NLP tasks and boost the EHR's secondary use
for the Portuguese language.
| 2,022 | Computation and Language |
Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and
Co-Expansion | Given a small set of seed entities (e.g., ``USA'', ``Russia''), corpus-based
set expansion is to induce an extensive set of entities which share the same
semantic class (Country in this example) from a given corpus. Set expansion
benefits a wide range of downstream applications in knowledge discovery, such
as web search, taxonomy construction, and query suggestion. Existing
corpus-based set expansion algorithms typically bootstrap the given seeds by
incorporating lexical patterns and distributional similarity. However, because
no negative sets are provided explicitly, these methods suffer from semantic
drift caused by expanding the seed set freely without guidance. We propose a new
framework, Set-CoExpan, that automatically generates auxiliary sets as negative
sets that are closely related to the target set of the user's interest, and then
performs multiple sets co-expansion that extracts discriminative features by
comparing target set with auxiliary sets, to form multiple cohesive sets that
are distinctive from one another, thus resolving the semantic drift issue. In
this paper we demonstrate that by generating auxiliary sets, we can guide the
expansion process of target set to avoid touching those ambiguous areas around
the border with auxiliary sets, and we show that Set-CoExpan outperforms strong
baseline methods significantly.
| 2,020 | Computation and Language |
A Deep Neural Framework for Contextual Affect Detection | A short and simple text carrying no emotion can represent some strong
emotions when read along with its context, i.e., the same sentence can
express extreme anger as well as happiness depending on its context. In this
paper, we propose a Contextual Affect Detection (CAD) framework which learns
the inter-dependence of words in a sentence, and at the same time the
inter-dependence of sentences in a dialogue. Our proposed CAD framework is
based on a Gated Recurrent Unit (GRU), which is further assisted by contextual
word embeddings and other diverse hand-crafted feature sets. Evaluation and
analysis suggest that our model outperforms the state-of-the-art methods by
5.49% and 9.14% on Friends and EmotionPush dataset, respectively.
| 2,019 | Computation and Language |
Extraction of Templates from Phrases Using Sequence Binary Decision
Diagrams | The extraction of templates such as ``regard X as Y'' from a set of related
phrases requires the identification of their internal structures. This paper
presents an unsupervised approach for extracting templates on-the-fly from only
tagged text by using a novel relaxed variant of the Sequence Binary Decision
Diagram (SeqBDD). A SeqBDD can compress a set of sequences into a graphical
structure equivalent to a minimal DFA, but more compact and better suited to
the task of template extraction. The main contribution of this paper is a
relaxed form of the SeqBDD construction algorithm that enables it to form
general representations from a small amount of data. The process of compression
of shared structures in the text during Relaxed SeqBDD construction naturally
induces the templates we wish to extract. Experiments show that the method is
capable of high-quality extraction on tasks based on verb+preposition templates
from corpora and phrasal templates from short messages from social media.
| 2,018 | Computation and Language |
Multi-modal Sentiment Analysis using Super Characters Method on
Low-power CNN Accelerator Device | In recent years, NLP research has witnessed record-breaking accuracy
improvements from DNN models. However, power consumption is one of the practical
concerns for deploying NLP systems. Most of the current state-of-the-art
algorithms are implemented on GPUs, which is not power-efficient and the
deployment cost is also very high. On the other hand, the CNN Domain Specific
Accelerator (CNN-DSA) has been in mass production, providing low-power and
low-cost computation. In this paper, we implement the Super Characters
method on the CNN-DSA. In addition, we modify the Super Characters method to
utilize the multi-modal data, i.e. text plus tabular data, in the CL-Aff
shared task.
| 2,020 | Computation and Language |
Incorporating Joint Embeddings into Goal-Oriented Dialogues with
Multi-Task Learning | Attention-based encoder-decoder neural network models have recently shown
promising results in goal-oriented dialogue systems. However, these models
struggle to reason over and incorporate stateful knowledge while preserving
their end-to-end text generation functionality. Since such models can greatly
benefit from user intent and knowledge graph integration, in this paper we
propose an RNN-based end-to-end encoder-decoder architecture which is trained
with joint embeddings of the knowledge graph and the corpus as input. The model
provides an additional integration of user intent along with text generation,
trained with a multi-task learning paradigm along with an additional
regularization technique to penalize generating the wrong entity as output. The
model further incorporates a Knowledge Graph entity lookup during inference to
guarantee that the generated output is stateful with respect to the local
knowledge graph provided. Finally, we evaluate the model using the BLEU score;
the empirical evaluation shows that our proposed architecture can improve the
performance of task-oriented dialogue systems.
| 2,020 | Computation and Language |
Interpretable Rumor Detection in Microblogs by Attending to User
Interactions | We address rumor detection by learning to differentiate between the
community's response to real and fake claims in microblogs. Existing
state-of-the-art models are based on tree models that model conversational
trees. However, in social media, a user posting a reply might be replying to
the entire thread rather than to a specific user. We propose a post-level
attention model (PLAN) to model long distance interactions between tweets with
the multi-head attention mechanism in a transformer network. We investigated
variants of this model: (1) a structure aware self-attention model (StA-PLAN)
that incorporates tree structure information in the transformer network, and
(2) a hierarchical token and post-level attention model (StA-HiTPLAN) that
learns a sentence representation with token-level self-attention. To the best
of our knowledge, we are the first to evaluate our models on two rumor
detection data sets: the PHEME data set as well as the Twitter15 and Twitter16
data sets. We show that our best models outperform current state-of-the-art
models for both data sets. Moreover, the attention mechanism allows us to
explain rumor detection predictions at both token-level and post-level.
| 2,020 | Computation and Language |
AMR Similarity Metrics from Principles | Different metrics have been proposed to compare Abstract Meaning
Representation (AMR) graphs. The canonical Smatch metric (Cai and Knight, 2013)
aligns the variables of two graphs and assesses triple matches. The recent
SemBleu metric (Song and Gildea, 2019) is based on the machine-translation
metric Bleu (Papineni et al., 2002) and increases computational efficiency by
ablating the variable-alignment.
In this paper, i) we establish criteria that enable researchers to perform a
principled assessment of metrics comparing meaning representations like AMR;
ii) we undertake a thorough analysis of Smatch and SemBleu where we show that
the latter exhibits some undesirable properties. For example, it does not
conform to the identity of indiscernibles rule and introduces biases that are
hard to control; iii) we propose a novel metric S$^2$match that is more
benevolent to only very slight meaning deviations and targets the fulfilment of
all established criteria. We assess its suitability and show its advantages
over Smatch and SemBleu.
| 2,020 | Computation and Language |
Multimodal Story Generation on Plural Images | Traditionally, text generation models take in a sequence of text as input,
and iteratively generate the next most probable word using pre-trained
parameters. In this work, we propose an architecture, called StoryGen, that uses
images instead of text as the input to the text generation model. In the
architecture, we design a Relational Text Data Generator algorithm that relates
different features from multiple images. The output samples from the model
demonstrate the ability to generate meaningful paragraphs of text containing
the extracted features from the input images. This is an undergraduate project
report. Completed Dec. 2019 at the Cooper Union.
| 2,021 | Computation and Language |
Modeling Global and Local Node Contexts for Text Generation from
Knowledge Graphs | Recent graph-to-text models generate text from graph-based data using either
global or local aggregation to learn node representations. Global node encoding
allows explicit communication between two distant nodes, thereby neglecting
graph topology as all nodes are directly connected. In contrast, local node
encoding considers the relations between neighbor nodes capturing the graph
structure, but it can fail to capture long-range relations. In this work, we
gather both encoding strategies, proposing novel neural models which encode an
input graph combining both global and local node contexts, in order to learn
better contextualized node embeddings. In our experiments, we demonstrate that
our approaches lead to significant improvements on two graph-to-text datasets
achieving BLEU scores of 18.01 on AGENDA dataset, and 63.69 on the WebNLG
dataset for seen categories, outperforming state-of-the-art models by 3.7 and
3.1 points, respectively.
| 2,020 | Computation and Language |
ABSent: Cross-Lingual Sentence Representation Mapping with Bidirectional
GANs | A number of cross-lingual transfer learning approaches based on neural
networks have been proposed for the case when large amounts of parallel text
are at our disposal. However, in many real-world settings, the size of parallel
annotated training data is restricted. Additionally, prior cross-lingual
mapping research has mainly focused on the word level. This raises the question
of whether such techniques can also be applied to effortlessly obtain
cross-lingually aligned sentence representations. To this end, we propose an
Adversarial Bi-directional Sentence Embedding Mapping (ABSent) framework, which
learns mappings of cross-lingual sentence representations from limited
quantities of parallel data.
| 2,020 | Computation and Language |
Learning Robust and Multilingual Speech Representations | Unsupervised speech representation learning has shown remarkable success at
finding representations that correlate with phonetic structures and improve
downstream speech recognition performance. However, most research has been
focused on evaluating the representations in terms of their ability to improve
the performance of speech recognition systems on read English (e.g. Wall Street
Journal and LibriSpeech). This evaluation methodology overlooks two important
desiderata that speech representations should have: robustness to domain shifts
and transferability to other languages. In this paper we learn representations
from up to 8000 hours of diverse and noisy speech data and evaluate the
representations by looking at their robustness to domain shifts and their
ability to improve recognition performance in many languages. We find that our
representations confer significant robustness advantages to the resulting
recognition systems: we see significant improvements in out-of-domain transfer
relative to baseline feature sets and the features likewise provide
improvements in 25 phonetically diverse languages including tonal languages and
low-resource languages.
| 2,020 | Computation and Language |
The Secret is in the Spectra: Predicting Cross-lingual Task Performance
with Spectral Similarity Measures | Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of
languages at hand: e.g., previous work has suggested there is a connection
between the expected success of bilingual lexicon induction (BLI) and the
assumption of (approximate) isomorphism between monolingual embedding spaces.
In this work we present a large-scale study focused on the correlations between
monolingual embedding space similarity and task performance, covering thousands
of language pairs and four different tasks: BLI, parsing, POS tagging and MT.
We hypothesize that statistics of the spectrum of each monolingual embedding
space indicate how well they can be aligned. We then introduce several
isomorphism measures between two embedding spaces, based on the relevant
statistics of their individual spectra. We empirically show that 1) language
similarity scores derived from such spectral isomorphism measures are strongly
associated with performance observed in different cross-lingual tasks, and 2)
our spectral-based measures consistently outperform previous standard
isomorphism measures, while being computationally more tractable and easier to
interpret. Finally, our measures capture complementary information to
typologically driven language distance measures, and the combination of
measures from the two families yields even higher task performance
correlations.
| 2,020 | Computation and Language |
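A minimal illustration of a spectral isomorphism measure for the setting above: compute the singular value spectra of two monolingual embedding matrices and compare the normalized spectra with a simple sorted-spectrum distance. The specific statistic used in the paper may differ; this sketch, with random matrices standing in for real embeddings, is only meant to convey the idea.

```python
import numpy as np

def spectrum(embeddings: np.ndarray, k: int = 50) -> np.ndarray:
    """Top-k singular values of an embedding matrix, normalized to sum to 1."""
    s = np.linalg.svd(embeddings, compute_uv=False)[:k]
    return s / s.sum()

def spectral_distance(emb_l1: np.ndarray, emb_l2: np.ndarray, k: int = 50) -> float:
    """Smaller values suggest the two spaces are easier to align (assumption)."""
    s1, s2 = spectrum(emb_l1, k), spectrum(emb_l2, k)
    m = min(len(s1), len(s2))
    return float(np.abs(s1[:m] - s2[:m]).sum())

# Toy example with random matrices standing in for monolingual embeddings.
rng = np.random.default_rng(0)
en = rng.normal(size=(1000, 300))
de = rng.normal(size=(1200, 300))
print(spectral_distance(en, de, k=20))
```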
On the Importance of Word Order Information in Cross-lingual Sequence
Labeling | Word order variance generally exists across different languages. In this paper,
we hypothesize that cross-lingual models that fit into the word order of the
source language might fail to handle target languages. To verify this
hypothesis, we investigate whether making models insensitive to the word order
of the source language can improve the adaptation performance in target
languages. To do so, we reduce the source language word order information
fitted to sequence encoders and observe the performance changes. In addition,
based on this hypothesis, we propose a new method for fine-tuning multilingual
BERT in downstream cross-lingual sequence labeling tasks. Experimental results
on dialogue natural language understanding, part-of-speech tagging, and named
entity recognition tasks show that reducing word order information fitted to
the model can achieve better zero-shot cross-lingual performance. Furthermore,
our proposed methods can also be applied to strong cross-lingual baselines and
improve their performance.
| 2,020 | Computation and Language |
Introducing the diagrammatic semiotic mode | As the use and diversity of diagrams across many disciplines grows, there is
an increasing interest in the diagrams research community concerning how such
diversity might be documented and explained. In this article, we argue that one
way of achieving increased reliability, coverage, and utility for a general
classification of diagrams is to draw on semiotic principles recently developed
within the field of multimodality. To this end, we sketch out the
internal details of what may tentatively be termed the diagrammatic semiotic
mode. This provides a natural account of how diagrammatic representations may
integrate natural language, various forms of graphics, diagrammatic elements
such as arrows, lines and other expressive resources into coherent
organisations, while still respecting the crucial diagrammatic contributions of
visual organisation. We illustrate the proposed approach using two recent
diagram corpora and show how a multimodal approach supports the empirical
analysis of diagrammatic representations, especially in identifying
diagrammatic constituents and describing their interrelations in a manner that
may be generalised across diagram types and be used to characterise distinct
kinds of functionality.
| 2,022 | Computation and Language |
Harnessing Code Switching to Transcend the Linguistic Barrier | Code mixing (or code switching) is a common phenomenon observed in
social-media content generated by a linguistically diverse user-base. Studies
show that in the Indian sub-continent, a substantial fraction of social media
posts exhibit code switching. While the difficulties posed by code mixed
documents to further downstream analyses are well-understood, lending
visibility to code mixed documents under certain scenarios may have utility
that has been previously overlooked. For instance, a document written in a
mixture of multiple languages can be partially accessible to a wider audience;
this could be particularly useful if a considerable fraction of the audience
lacks fluency in one of the component languages. In this paper, we provide a
systematic approach to sample code mixed documents leveraging a polyglot
embedding based method that requires minimal supervision. In the context of the
2019 India-Pakistan conflict triggered by the Pulwama terror attack, we
demonstrate an untapped potential of harnessing code mixing for human
well-being: starting from an existing hostility diffusing \emph{hope speech}
classifier solely trained on English documents, code mixed documents are
utilized as a bridge to retrieve \emph{hope speech} content written in a
low-resource but widely used language - Romanized Hindi. Our proposed pipeline
requires minimal supervision and holds promise in substantially reducing web
moderation efforts.
| 2,020 | Computation and Language |
Data Mining in Clinical Trial Text: Transformers for Classification and
Question Answering Tasks | This research on data extraction methods applies recent advances in natural
language processing to evidence synthesis based on medical texts. Texts of
interest include abstracts of clinical trials in English and in multilingual
contexts. The main focus is on information characterized via the Population,
Intervention, Comparator, and Outcome (PICO) framework, but data extraction is
not limited to these fields. Recent neural network architectures based on
transformers show capacities for transfer learning and increased performance on
downstream natural language processing tasks such as universal reading
comprehension, brought forward by this architecture's use of contextualized
word embeddings and self-attention mechanisms. This paper contributes to
solving problems related to ambiguity in PICO sentence prediction tasks, as
well as highlighting how annotations for training named entity recognition
systems are used to train a high-performing, but nevertheless flexible
architecture for question answering in systematic review automation.
Additionally, it demonstrates how the problem of insufficient amounts of
training annotations for PICO entity extraction is tackled by augmentation. All
models in this paper were created with the aim to support systematic review
(semi)automation. They achieve high F1 scores, and demonstrate the feasibility
of applying transformer-based classification methods to support data mining in
the biomedical literature.
| 2,020 | Computation and Language |
LowResourceEval-2019: a shared task on morphological analysis for
low-resource languages | The paper describes the results of the first shared task on morphological
analysis for the languages of Russia, namely, Evenki, Karelian, Selkup, and
Veps. For the languages in question, only small-sized corpora are available.
The tasks include morphological analysis, word form generation and morpheme
segmentation. Four teams participated in the shared task. Most of them use
machine-learning approaches, outperforming the existing rule-based ones. The
article describes the datasets prepared for the shared tasks and contains an
analysis of the participants' solutions. Language corpora in different
formats were transformed into the CoNLL-U format. This universal format makes the
datasets comparable to other language corpora and facilitates their use in
other NLP tasks.
| 2,019 | Computation and Language |
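For readers unfamiliar with the target format, the sketch below converts a hypothetical word/lemma/tag analysis into CoNLL-U rows (ten tab-separated columns, underscores for unknown fields). The input tuples and example tokens are invented; the shared-task corpora have their own source formats and richer annotation.

```python
# A minimal sketch of converting a simple (form, lemma, upos, feats) analysis into
# CoNLL-U rows. The input structure and example tokens are hypothetical; a full
# converter must also handle sentence metadata, multiword tokens, and dependencies.

def to_conllu(sentence_rows):
    """sentence_rows: list of (form, lemma, upos, feats) tuples for one sentence."""
    lines = []
    for i, (form, lemma, upos, feats) in enumerate(sentence_rows, start=1):
        # Columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
        lines.append("\t".join([str(i), form, lemma, upos, "_", feats or "_",
                                "_", "_", "_", "_"]))
    return "\n".join(lines) + "\n"

example = [
    ("talo", "talo", "NOUN", "Case=Nom|Number=Sing"),
    ("palaa", "palaa", "VERB", "Mood=Ind|Tense=Pres"),
]
print(to_conllu(example))
```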
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework
for Natural Language Generation | Current pre-training work in natural language generation pays little
attention to the problem of exposure bias on downstream tasks. To address this
issue, we propose an enhanced multi-flow sequence to sequence pre-training and
fine-tuning framework named ERNIE-GEN, which bridges the discrepancy between
training and inference with an infilling generation mechanism and a noise-aware
generation method. To make generation closer to human writing patterns, this
framework introduces a span-by-span generation flow that trains the model to
predict semantically-complete spans consecutively rather than predicting word
by word. Unlike existing pre-training methods, ERNIE-GEN incorporates
multi-granularity target sampling to construct pre-training data, which
enhances the correlation between encoder and decoder. Experimental results
demonstrate that ERNIE-GEN achieves state-of-the-art results with far less
pre-training data and far fewer parameters on a range of language
generation tasks, including abstractive summarization (Gigaword and
CNN/DailyMail), question generation (SQuAD), dialogue generation (Persona-Chat)
and generative question answering (CoQA).
| 2,020 | Computation and Language |
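A rough sketch of span-level infilling target construction, in the spirit of the span-by-span generation flow: contiguous spans are removed from the input and kept as decoder targets to be predicted as whole units. This is an illustrative simplification, not the ERNIE-GEN multi-flow implementation, and the masking policy here is arbitrary.

```python
# Illustrative span-infilling example builder: spans are cut from the source and
# the decoder is trained to regenerate each span as a unit. Not the actual
# ERNIE-GEN multi-flow pre-training code.
import random

def make_infilling_example(tokens, span_len=3, num_spans=1, mask_token="[MASK]"):
    tokens = list(tokens)
    targets = []
    for _ in range(num_spans):
        if len(tokens) <= span_len:
            break
        start = random.randrange(0, len(tokens) - span_len)
        span = tokens[start:start + span_len]
        targets.append(span)
        # Replace the whole span with a single mask so it is predicted span-by-span.
        tokens[start:start + span_len] = [mask_token]
    return tokens, targets

src, tgt = make_infilling_example(
    "the model predicts semantically complete spans consecutively".split()
)
print("encoder input:", src)
print("decoder target spans:", tgt)
```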
Iterative Batch Back-Translation for Neural Machine Translation: A
Conceptual Model | An effective method to generate a large number of parallel sentences for
training improved neural machine translation (NMT) systems is the use of
back-translations of the target-side monolingual data. Recently, iterative
back-translation has been shown to outperform standard back-translation, albeit
only on some language pairs. This work proposes iterative batch back-translation,
which aims to enhance standard iterative back-translation and enable
the efficient utilization of more monolingual data. After each iteration,
improved back-translations of new sentences are added to the parallel data that
will be used to train the final forward model. The work presents a conceptual
model of the proposed approach.
| 2,020 | Computation and Language |
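Since the paper presents a conceptual model, the following Python sketch renders that loop schematically: after each iteration a new monolingual batch is back-translated with the current backward model and appended to the parallel data, and the final forward model is trained on everything. The train and translate callables are hypothetical placeholders for a real NMT toolkit (e.g. fairseq or OpenNMT); this is not the authors' code.

```python
# Schematic rendering of iterative batch back-translation. `train` and `translate`
# are hypothetical placeholders supplied by the caller, not a real toolkit API.

def iterative_batch_back_translation(parallel, mono_batches, train, translate, iterations=3):
    """parallel: list of (src, tgt) pairs; mono_batches: iterator over target-side batches."""
    # Train an initial backward (target->source) model on the reversed parallel data.
    backward = train(direction="tgt->src", data=[(t, s) for s, t in parallel])
    for _ in range(iterations):
        batch = next(mono_batches, None)
        if batch is None:
            break
        # Back-translate the new monolingual batch with the current backward model.
        synthetic = [(translate(backward, t), t) for t in batch]
        parallel = parallel + synthetic
        # Refresh the backward model on the growing corpus before the next batch.
        backward = train(direction="tgt->src", data=[(t, s) for s, t in parallel])
    # Train the final forward model on original plus all synthetic pairs.
    return train(direction="src->tgt", data=parallel)
```

The design choice the sketch highlights is that the backward model is refreshed on the growing corpus between batches, so later back-translations are produced by a stronger model than earlier ones.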
Generaci\'on autom\'atica de frases literarias en espa\~nol | In this work we present the state of the art in the area of Computational
Creativity (CC). In particular, we address the automatic generation of literary
sentences in Spanish. We propose three models of text generation based mainly
on statistical algorithms and shallow parsing analysis. We also present some
rather encouraging preliminary results.
| 2,020 | Computation and Language |
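As a toy point of reference only, the sketch below generates sentences from a bigram model estimated on a tiny invented Spanish corpus. It is one plausible statistical baseline, not any of the three models (statistical algorithms plus shallow parsing) proposed in the paper.

```python
# Toy bigram sentence generator: an illustrative statistical baseline, not the
# paper's models. The three-sentence "corpus" is invented for demonstration.
import random
from collections import defaultdict

corpus = [
    "la luna ilumina el mar en silencio",
    "el viento canta entre los arboles",
    "la noche guarda el secreto del mar",
]

bigrams = defaultdict(list)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

def generate(max_len=12):
    word, out = "<s>", []
    while len(out) < max_len:
        word = random.choice(bigrams[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```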
Intweetive Text Summarization | The amount of user-generated content from various social media platforms gives
analysts a broad view of conversations on several topics related to
their business. Nevertheless, keeping up to date with this volume of information
is not humanly feasible. Automatic summarization therefore provides an attractive
means of digesting the dynamics and sheer volume of content. In this paper, we
address the issue of tweet summarization, which remains scarcely explored. We
propose to automatically generate summaries of micro-blog conversations
dealing with the e-reputation of public figures. These summaries are generated from
keyword queries or a sample tweet and offer a focused view of the whole
micro-blog network. Since prior work is lacking on this point, we conduct
and evaluate our experiments on the multilingual CLEF RepLab Topic-Detection
dataset following an experimental evaluation protocol.
| 2,016 | Computation and Language |
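A minimal sketch of query-focused extractive summarization over tweets, assuming scikit-learn: tweets are ranked by TF-IDF cosine similarity to a keyword query and the top-k are kept. The tweets and query are invented, and the paper's pipeline over the CLEF RepLab data is considerably richer than this centroid-style scoring.

```python
# Query-focused extractive tweet summarization sketch using scikit-learn.
# Tweets, query, and k are illustrative placeholders, not the paper's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "The CEO announced a new sustainability plan today.",
    "Great concert last night, totally unrelated tweet.",
    "Analysts praise the sustainability plan as a turning point for the brand.",
    "Customers complain about late deliveries again.",
]
query = "company sustainability reputation"

# Vectorize tweets and query in the same TF-IDF space, then score by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(tweets + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Keep the top-k tweets most similar to the query as the focused summary.
k = 2
summary = [tweets[i] for i in scores.argsort()[::-1][:k]]
print("\n".join(summary))
```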