Titles (string, 6 to 220 chars) | Abstracts (string, 37 to 3.26k chars) | Years (int64, 1.99k to 2.02k) | Categories (1 class)
---|---|---|---|
Rumor Detection on Twitter Using Multiloss Hierarchical BiLSTM with an
Attenuation Factor
|
Social media platforms such as Twitter have become a breeding ground for
unverified information or rumors. These rumors can threaten people's health,
endanger the economy, and affect the stability of a country. Many researchers
have developed models to classify rumors using traditional machine learning or
vanilla deep learning models. However, previous studies on rumor detection have
achieved low precision and are time-consuming. Inspired by the hierarchical
model and multitask learning, a multiloss hierarchical BiLSTM model with an
attenuation factor is proposed in this paper. The model is divided into two
BiLSTM modules: post level and event level. By means of this hierarchical
structure, the model can extract deep information from limited quantities of
text. Each module has a loss function that helps to learn bilateral features
and reduce the training time. An attenuation factor is added at the post level
to increase the accuracy. The results on two rumor datasets demonstrate that
our model achieves better performance than that of state-of-the-art machine
learning and vanilla deep learning models.
| 2,020 |
Computation and Language
|
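The hierarchical, multi-loss design in the abstract above can be made concrete with a short sketch. The PyTorch snippet below is a minimal reading of it: a post-level and an event-level BiLSTM, a classification loss at each level, and an attenuation factor `alpha` down-weighting the post-level loss. The layer sizes and the exact way the losses are combined are assumptions, not the paper's specification.

```python
# Minimal sketch of a two-level BiLSTM with a loss at each level.
import torch
import torch.nn as nn

class HierarchicalBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Post-level BiLSTM: encodes the tokens of each post.
        self.post_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # Event-level BiLSTM: encodes the sequence of post vectors in an event.
        self.event_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.post_head = nn.Linear(2 * hidden, num_classes)   # per-post prediction
        self.event_head = nn.Linear(2 * hidden, num_classes)  # per-event prediction

    def forward(self, token_ids):
        # token_ids: (events, posts_per_event, tokens_per_post)
        e, p, t = token_ids.shape
        x = self.embed(token_ids.view(e * p, t))
        _, (h, _) = self.post_lstm(x)                  # h: (2, e*p, hidden)
        post_vec = torch.cat([h[0], h[1]], dim=-1)     # (e*p, 2*hidden)
        post_logits = self.post_head(post_vec)
        _, (h2, _) = self.event_lstm(post_vec.view(e, p, -1))
        event_vec = torch.cat([h2[0], h2[1]], dim=-1)  # (e, 2*hidden)
        return post_logits, self.event_head(event_vec)

def multiloss(post_logits, event_logits, post_labels, event_labels, alpha=0.5):
    # post_labels: (events * posts,), event_labels: (events,)
    # The attenuation factor alpha scales the auxiliary post-level loss (assumption).
    ce = nn.CrossEntropyLoss()
    return ce(event_logits, event_labels) + alpha * ce(post_logits, post_labels)
```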
Neural Coreference Resolution for Arabic
|
No neural coreference resolver exists for Arabic; in fact, we are not aware of
any learning-based coreference resolver for Arabic since Bjorkelund and Kuhn
(2014). In this paper, we introduce a coreference resolution system for Arabic
based on Lee et al.'s end-to-end architecture combined with the Arabic version
of BERT and an external mention detector. As far as we know, this is the first
neural coreference resolution system aimed specifically at Arabic, and it
substantially outperforms the existing state of the art on OntoNotes 5.0 with a
gain of 15.2 CoNLL F1 points. We also discuss the current limitations of the
task for Arabic and possible approaches to tackle these challenges.
| 2,020 |
Computation and Language
|
Method of the coherence evaluation of Ukrainian text
|
Given the growing role of SEO technologies, automated analysis of article
quality has become necessary. Such analysis helps both to return the most
intelligible pages for a user's query and to raise a website's position in the
query results. Automated assessment of coherence is one part of this complex
analysis of a text. In this article, the main methods for measuring text
coherence in Ukrainian are analyzed, and the case for using the semantic
similarity graph method over the alternatives is explained. We suggest
improving that method by pre-training the neural network that produces vector
representations of sentences. The original method and its modifications are
examined experimentally. Training and evaluation are performed on a corpus of
Ukrainian texts retrieved from abstracts and full texts of Ukrainian scientific
articles. Testing is carried out on two tasks typical for text coherence
assessment: the document discrimination task and the insertion task. Based on
this analysis, the most effective combination of the method's modification and
its parameters for measuring text coherence is identified.
| 2,018 |
Computation and Language
|
Effective Approach to Develop a Sentiment Annotator For Legal Domain in
a Low Resource Setting
|
Analyzing the sentiments of legal opinions available in Legal Opinion Texts
can facilitate several use cases such as legal judgement prediction,
contradictory statements identification and party-based sentiment analysis.
However, developing a legal-domain-specific sentiment annotator is challenging
due to resource constraints such as the lack of domain-specific labelled data
and domain expertise. In this study, we propose novel techniques
that can be used to develop a sentiment annotator for the legal domain while
minimizing the need for manual annotations of data.
| 2,020 |
Computation and Language
|
Pick a Fight or Bite your Tongue: Investigation of Gender Differences in
Idiomatic Language Usage
|
A large body of research on gender-linked language has established
foundations regarding cross-gender differences in lexical, emotional, and
topical preferences, along with their sociological underpinnings. We compile a
novel, large and diverse corpus of spontaneous linguistic productions annotated
with speakers' gender, and perform a first large-scale empirical study of
distinctions in the usage of figurative language between male and
female authors. Our analyses suggest that (1) idiomatic choices reflect
gender-specific lexical and semantic preferences in general language, (2) men's
and women's idiomatic usages express higher emotion than their literal
language, with detectable, albeit more subtle, differences between male and
female authors along the dimension of dominance compared to similar
distinctions in their literal utterances, and (3) contextual analysis of
idiomatic expressions reveals considerable differences, reflecting subtle
divergences in usage environments, shaped by cross-gender communication styles
and semantic biases.
| 2,020 |
Computation and Language
|
Aspectuality Across Genre: A Distributional Semantics Approach
|
The interpretation of the lexical aspect of verbs in English plays a crucial
role for recognizing textual entailment and learning discourse-level
inferences. We show that two elementary dimensions of aspectual class, states
vs. events, and telic vs. atelic events, can be modelled effectively with
distributional semantics. We find that a verb's local context is most
indicative of its aspectual class, and demonstrate that closed class words tend
to be stronger discriminating contexts than content words. Our approach
outperforms previous work on three datasets. Lastly, we contribute a dataset of
human-human conversations annotated with lexical aspect and present
experiments that show the correlation of telicity with genre and discourse
goals.
| 2,020 |
Computation and Language
|
Efficient Arabic emotion recognition using deep neural networks
|
Emotion recognition from speech signal based on deep learning is an active
research area. Convolutional neural networks (CNNs) are arguably the dominant
method in this area. In this paper, we implement two neural architectures to address
this problem. The first architecture is an attention-based CNN-LSTM-DNN model.
In this novel architecture, the convolutional layers extract salient features
and the bi-directional long short-term memory (BLSTM) layers handle the
sequential phenomena of the speech signal. This is followed by an attention
layer, which extracts a summary vector that is fed to the fully connected dense
layer (DNN), which finally connects to a softmax output layer. The second
architecture is based on a deep CNN model. The results on an Arabic speech
emotion recognition task show that our approach can lead to significant
improvements (a 2.2% absolute improvement) over a strong deep CNN baseline
system. On the other hand, the deep CNN models are significantly faster than
the attention-based CNN-LSTM-DNN models in training and
classification.
| 2,020 |
Computation and Language
|
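The first architecture described above (convolutional feature extraction, a bidirectional LSTM, an attention layer producing a summary vector, and a dense softmax classifier) can be sketched compactly. The PyTorch snippet below is an illustrative reconstruction; the layer sizes, kernel widths, input features, and number of emotion classes are assumptions.

```python
# Rough sketch of an attention-based CNN-BLSTM-DNN emotion classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CnnBlstmAttn(nn.Module):
    def __init__(self, n_mels=40, n_classes=4, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.blstm = nn.LSTM(64, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scalar score per frame
        self.dnn = nn.Sequential(nn.Linear(2 * hidden, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))

    def forward(self, feats):                        # feats: (batch, frames, n_mels)
        x = self.conv(feats.transpose(1, 2)).transpose(1, 2)   # (batch, frames, 64)
        seq, _ = self.blstm(x)                       # (batch, frames, 2*hidden)
        weights = F.softmax(self.attn(seq), dim=1)   # attention over frames
        summary = (weights * seq).sum(dim=1)         # (batch, 2*hidden)
        return self.dnn(summary)                     # logits; softmax applied in the loss
```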
Be More with Less: Hypergraph Attention Networks for Inductive Text
Classification
|
Text classification is a critical research topic with broad applications in
natural language processing. Recently, graph neural networks (GNNs) have
received increasing attention in the research community and demonstrated their
promising results on this canonical task. Despite this success, their
performance can be largely compromised in practice because they are (1) unable
to capture high-order interactions between words and (2) inefficient at
handling large datasets and new documents. To address those issues, in this paper, we
propose a principled model -- hypergraph attention networks (HyperGAT), which
can obtain more expressive power with less computational consumption for text
representation learning. Extensive experiments on various benchmark datasets
demonstrate the efficacy of the proposed approach on the text classification
task.
| 2,020 |
Computation and Language
|
Investigation of BERT Model on Biomedical Relation Extraction Based on
Revised Fine-tuning Mechanism
|
With the explosive growth of biomedical literature, designing automatic tools
to extract information from the literature has great significance in biomedical
research. Recently, transformer-based BERT models adapted to the biomedical
domain have produced leading results. However, all the existing BERT models for
relation classification only utilize partial knowledge from the last layer. In
this paper, we will investigate the method of utilizing the entire layer in the
fine-tuning process of BERT model. To the best of our knowledge, we are the
first to explore this method. The experimental results illustrate that our
method improves the BERT model performance and outperforms the state-of-the-art
methods on three benchmark datasets for different relation extraction tasks. In
addition, further analysis indicates that the key knowledge about the relations
can be learned from the last layer of the BERT model.
| 2,020 |
Computation and Language
|
Towards A Friendly Online Community: An Unsupervised Style Transfer
Framework for Profanity Redaction
|
Offensive and abusive language is a pressing problem on social media
platforms. In this work, we propose a method for transforming offensive
comments, statements containing profanity or offensive language, into
non-offensive ones. We design a RETRIEVE, GENERATE and EDIT unsupervised style
transfer pipeline to redact the offensive comments in a word-restricted manner
while maintaining a high level of fluency and preserving the content of the
original text. We extensively evaluate our method's performance and compare it
to previous style transfer models using both automatic metrics and human
evaluations. Experimental results show that our method outperforms other models
on human evaluations and is the only approach that consistently performs well
on all automatic evaluation metrics.
| 2,020 |
Computation and Language
|
Non-Autoregressive Predictive Coding for Learning Speech Representations
from Local Dependencies
|
Self-supervised speech representations have been shown to be effective in a
variety of speech applications. However, existing representation learning
methods generally rely on the autoregressive model and/or observed global
dependencies while generating the representation. In this work, we propose
Non-Autoregressive Predictive Coding (NPC), a self-supervised method, to learn
a speech representation in a non-autoregressive manner by relying only on local
dependencies of speech. NPC has a conceptually simple objective and can be
implemented easily with the introduced Masked Convolution Blocks. NPC offers a
significant speedup for inference since it is parallelizable in time and has a
fixed inference time for each time step regardless of the input sequence
length. We discuss and verify the effectiveness of NPC by theoretically and
empirically comparing it with other methods. We show that the NPC
representation is comparable to other methods in speech experiments on phonetic
and speaker classification while being more efficient.
| 2,020 |
Computation and Language
|
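The Masked Convolution Block mentioned above is what keeps the NPC objective local and non-autoregressive: each frame is predicted from nearby frames, with the target region hidden from the receptive field. The sketch below illustrates that idea with a convolution whose central kernel taps are zeroed; the kernel size, mask width, and reconstruction loss are assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch of a masked convolution for local, non-autoregressive
# self-supervised prediction.
import torch
import torch.nn as nn

class MaskedConv1d(nn.Conv1d):
    def __init__(self, channels, kernel_size=15, mask_width=3):
        super().__init__(channels, channels, kernel_size, padding=kernel_size // 2)
        mask = torch.ones(1, 1, kernel_size)
        center, half = kernel_size // 2, mask_width // 2
        mask[..., center - half:center + half + 1] = 0.0   # hide the target region
        self.register_buffer("mask", mask)

    def forward(self, x):                                  # x: (batch, channels, time)
        return nn.functional.conv1d(x, self.weight * self.mask, self.bias,
                                    padding=self.padding[0])

# Self-supervised objective: reconstruct the input frames from masked local context.
feats = torch.randn(8, 80, 200)            # (batch, feature_dim, frames)
block = MaskedConv1d(80)
loss = nn.functional.l1_loss(block(feats), feats)
```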
Deep Learning for Text Style Transfer: A Survey
|
Text style transfer is an important task in natural language generation,
which aims to control certain attributes in the generated text, such as
politeness, emotion, humor, and many others. It has a long history in the field
of natural language processing, and has recently regained significant
attention thanks to the promising performance brought by deep neural models. In
this paper, we present a systematic survey of the research on neural text style
transfer, spanning over 100 representative articles since the first neural text
style transfer work in 2017. We discuss the task formulation, existing datasets
and subtasks, evaluation, as well as the rich methodologies in the presence of
parallel and non-parallel data. We also provide discussions on a variety of
important topics regarding the future development of this task. Our curated
paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey
| 2,021 |
Computation and Language
|
Analyzing the Effect of Multi-task Learning for Biomedical Named Entity
Recognition
|
Developing high-performing systems for detecting biomedical named entities
has major implications. State-of-the-art deep-learning-based solutions for
entity recognition often require large annotated datasets, which are not
available in the biomedical domain. Transfer learning and multi-task learning
have been shown to improve performance for low-resource domains. However, the
applications of these methods are relatively scarce in the biomedical domain,
and a theoretical understanding of why these methods improve the performance is
lacking. In this study, we performed an extensive analysis to understand the
transferability between different biomedical entity datasets. We found useful
measures to predict transferability between these datasets. In addition, we
propose combining transfer learning and multi-task learning to improve the
performance of biomedical named entity recognition systems, an approach that,
to the best of our knowledge, has not been applied before.
| 2,020 |
Computation and Language
|
Improving Cyberbully Detection with User Interaction
|
Cyberbullying, identified as intended and repeated online bullying behavior,
has become increasingly prevalent in the past few decades. Despite the
significant progress made thus far, the focus of most existing work on
cyberbullying detection lies in the independent content analysis of different
comments within a social media session. We argue that such leading notions of
analysis suffer from three key limitations: they overlook the temporal
correlations among different comments; they only consider the content within a
single comment rather than the topic coherence across comments; they remain
generic and exploit limited interactions between social media users. In this
work, we observe that user comments in the same session may be inherently
related, e.g., discussing similar topics, and their interaction may evolve over
time. We also show that modeling such topic coherence and temporal interaction
is critical to capturing the repetitive characteristics of bullying behavior,
thus leading to better predictive performance. To achieve this goal, we first
construct a unified temporal graph for each social media session. Drawing on
recent advances in graph neural networks, we then propose a principled
graph-based approach for modeling the temporal dynamics and topic coherence
throughout user interactions. We empirically evaluate the effectiveness of our
approach with the tasks of session-level bullying detection and comment-level
case study. Our code is publicly released.
| 2,021 |
Computation and Language
|
Fake or Real? A Study of Arabic Satirical Fake News
|
One very common type of fake news is satire, which comes in the form of a news
website or an online platform that parodies reputable real news agencies to
create a sarcastic version of reality. This type of fake news is often
disseminated by individuals on their online platforms as it has a much stronger
effect in delivering criticism than through a straightforward message. However,
when the satirical text is disseminated via social media without mention of its
source, it can be mistaken for real news. This study conducts several
exploratory analyses to identify the linguistic properties of Arabic fake news
with satirical content. We exploit these features to build a number of machine
learning models capable of identifying satirical fake news with an accuracy of
up to 98.6%.
| 2,020 |
Computation and Language
|
Seeing Both the Forest and the Trees: Multi-head Attention for Joint
Classification on Different Compositional Levels
|
In natural languages, words are used in association to construct sentences.
It is not words in isolation, but the appropriate combination of hierarchical
structures that conveys the meaning of the whole sentence. Neural networks can
capture expressive language features; however, insights into the link between
words and sentences are difficult to acquire automatically. In this work, we
design a deep neural network architecture that explicitly wires lower and
higher linguistic components; we then evaluate its ability to perform the same
task at different hierarchical levels. Settling on broad text classification
tasks, we show that our model, MHAL, learns to simultaneously solve them at
different levels of granularity by fluidly transferring knowledge between
hierarchies. Using a multi-head attention mechanism to tie the representations
between single words and full sentences, MHAL systematically outperforms
equivalent models that are not incentivized towards developing compositional
representations. Moreover, we demonstrate that, with the proposed architecture,
the sentence information flows naturally to individual words, allowing the
model to behave like a sequence labeller (which is a lower, word-level task)
even without any word supervision, in a zero-shot fashion.
| 2,020 |
Computation and Language
|
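One simplified reading of the word-to-sentence tying described above: a shared encoder produces word-level states for sequence labelling, and a multi-head attention pooling over those same states produces the sentence representation used for sentence classification, so supervision at either level shapes both. The PyTorch sketch below follows that reading; the encoder choice, sizes, and label sets are assumptions, not the authors' implementation.

```python
# Minimal sketch of jointly supervised word-level and sentence-level heads
# tied by multi-head attention pooling.
import torch
import torch.nn as nn

class WordSentenceModel(nn.Module):
    def __init__(self, vocab, emb=100, hidden=128, word_tags=5, sent_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, 2 * hidden))  # learned sentence query
        self.word_head = nn.Linear(2 * hidden, word_tags)
        self.sent_head = nn.Linear(2 * hidden, sent_classes)

    def forward(self, token_ids):                     # (batch, seq_len)
        states, _ = self.encoder(self.embed(token_ids))
        word_logits = self.word_head(states)          # word-level (lower) task
        q = self.query.expand(token_ids.size(0), -1, -1)
        sent_vec, _ = self.attn(q, states, states)    # attention ties words to the sentence
        sent_logits = self.sent_head(sent_vec.squeeze(1))
        return word_logits, sent_logits                # train jointly with two losses
```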
Opinion Transmission Network for Jointly Improving Aspect-oriented
Opinion Words Extraction and Sentiment Classification
|
Aspect-level sentiment classification (ALSC) and aspect-oriented opinion
words extraction (AOWE) are two highly relevant aspect-based sentiment analysis
(ABSA) subtasks. They respectively aim to detect the sentiment polarity and
extract the corresponding opinion words toward a given aspect in a sentence.
Previous works separate them and focus on one of them by training neural models
on small-scale labeled data, while neglecting the connections between them. In
this paper, we propose a novel joint model, Opinion Transmission Network (OTN),
to exploit the potential bridge between ALSC and AOWE to achieve the goal of
facilitating them simultaneously. Specifically, we design two tailor-made
opinion transmission mechanisms that control the flow of opinion clues
bidirectionally, from ALSC to AOWE and from AOWE to ALSC. Experimental results on two
benchmark datasets show that our joint model outperforms strong baselines on
the two tasks. Further analysis also validates the effectiveness of opinion
transmission mechanisms.
| 2,020 |
Computation and Language
|
Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment
Analysis
|
Aspect-based sentiment analysis (ABSA) aims at analyzing the sentiment of a
given aspect in a sentence. Recently, neural network-based methods have
achieved promising results in existing ABSA datasets. However, these datasets
tend to degenerate to sentence-level sentiment analysis because most sentences
contain only one aspect or multiple aspects with the same sentiment polarity.
To facilitate the research of ABSA, NLPCC 2020 Shared Task 2 releases a new
large-scale Multi-Aspect Multi-Sentiment (MAMS) dataset. In the MAMS dataset,
each sentence contains at least two different aspects with different sentiment
polarities, which makes ABSA more complex and challenging. To address the
challenging dataset, we re-formalize ABSA as a problem of multi-aspect
sentiment analysis, and propose a novel Transformer-based Multi-aspect Modeling
scheme (TMM), which can capture potential relations between multiple aspects
and simultaneously detect the sentiment of all aspects in a sentence.
Experimental results on the MAMS dataset show that our method achieves noticeable
improvements compared with strong baselines such as BERT and RoBERTa, and
ranks 2nd in the NLPCC 2020 Shared Task 2 evaluation.
| 2,020 |
Computation and Language
|
Deconstruct to Reconstruct a Configurable Evaluation Metric for
Open-Domain Dialogue Systems
|
Many automatic evaluation metrics have been proposed to score the overall
quality of a response in open-domain dialogue. Generally, the overall quality
comprises various aspects, such as relevancy, specificity, and empathy,
and the importance of each aspect differs according to the task. For instance,
specificity is mandatory in a food-ordering dialogue task, whereas fluency is
preferred in a language-teaching dialogue system. However, existing metrics are
not designed to cope with such flexibility. For example, the BLEU score
fundamentally relies only on word overlap, whereas BERTScore relies on
semantic similarity between the reference and candidate response. Thus, they are
not guaranteed to capture the required aspects, i.e., specificity. To design a
metric that is flexible to a task, we first propose making these qualities
manageable by grouping them into three groups: understandability, sensibleness,
and likability, where likability is a combination of qualities that are
essential for a task. We also propose a simple method to composite metrics of
each aspect to obtain a single metric called USL-H, which stands for
Understandability, Sensibleness, and Likability in Hierarchy. We demonstrate
that the USL-H score achieves good correlations with human judgment and maintains
its configurability towards different aspects and metrics.
| 2,020 |
Computation and Language
|
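The hierarchical composition described above can be illustrated with a tiny scoring function: per-aspect scorers for understandability, sensibleness, and likability are combined so that the higher levels only matter once the lower ones are satisfied. The sketch below uses stub scorers and a simple multiplicative combination as an assumption; it is not the paper's exact USL-H formula.

```python
# Hedged sketch of composing per-aspect dialogue metrics into a single score.
def usl_h_score(response: str, context: str,
                understandable, sensible, likable) -> float:
    """Each scorer is a callable returning a value in [0, 1]."""
    u = understandable(response)        # e.g., a validity / grammaticality model
    s = sensible(context, response)     # e.g., a next-utterance prediction model
    lk = likable(context, response)     # task-specific mix (specificity, empathy, ...)
    # Hierarchy (assumption): likability only counts once the response is
    # understandable and sensible.
    return u * s * lk

# Example with trivial stand-in scorers:
score = usl_h_score("Sure, a table for two at 7 pm.", "Can I book a table?",
                    understandable=lambda r: 1.0,
                    sensible=lambda c, r: 0.9,
                    likable=lambda c, r: 0.8)
print(round(score, 3))   # 0.72
```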
CHIME: Cross-passage Hierarchical Memory Network for Generative Review
Question Answering
|
We introduce CHIME, a cross-passage hierarchical memory network for question
answering (QA) via text generation. It extends XLNet by introducing an auxiliary
memory module consisting of two components: a context memory that collects
cross-passage evidence, and an answer memory that works as a buffer, continually
refining the generated answers. Empirically, we show the efficacy of the
proposed architecture in multi-passage generative QA, outperforming the
state-of-the-art baselines with more syntactically well-formed answers and
increased precision in addressing the questions of the AmazonQA review dataset.
An additional qualitative analysis revealed the interpretability introduced by
the memory module.
| 2,020 |
Computation and Language
|
Deep Diacritization: Efficient Hierarchical Recurrence for Improved
Arabic Diacritization
|
We propose a novel architecture for labelling character sequences that
achieves state-of-the-art results on the Tashkeela Arabic diacritization
benchmark. The core is a two-level recurrence hierarchy that operates on the
word and character levels separately, enabling faster training and inference
than comparable traditional models. A cross-level attention module further
connects the two, and opens the door for network interpretability. The task
module is a softmax classifier that enumerates valid combinations of
diacritics. This architecture can be extended with a recurrent decoder that
optionally accepts priors from partially diacritized text, which improves
results. We employ extra tricks such as sentence dropout and majority voting to
further boost the final result. Our best model achieves a WER of 5.34%,
outperforming the previous state-of-the-art with a 30.56% relative error
reduction.
| 2,020 |
Computation and Language
|
Semantic coordinates analysis reveals language changes in the AI field
|
Semantic shifts can reflect changes in beliefs across hundreds of years, but
it is less clear whether trends in fast-changing communities across a short
time can be detected. We propose semantic coordinates analysis, a method based
on semantic shifts, that reveals changes in language within publications of a
field (we use AI as example) across a short time span. We use GloVe-style
probability ratios to quantify the shifting directions and extents from
multiple viewpoints. We show that semantic coordinates analysis can detect
shifts echoing changes of research interests (e.g., "deep" shifted further from
"rigorous" to "neural"), and developments of research activities (e.g.,
"collaboration" contains less "competition" than "collaboration"), based on
publications spanning as short as 10 years.
| 2,020 |
Computation and Language
|
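The GloVe-style probability ratios mentioned above compare how often a context word co-occurs with one target word versus another, so ratios far from 1 indicate a direction of semantic shift. The snippet below shows that computation from raw co-occurrence counts; the windowing and smoothing choices are illustrative assumptions, and the mini-corpus is hypothetical.

```python
# GloVe-style probability ratio P(k | w1) / P(k | w2) from co-occurrence counts.
from collections import Counter
from typing import Iterable, List

def cooccurrence(corpus: Iterable[List[str]], window: int = 5) -> Counter:
    """Count (word, context-word) pairs within a symmetric window."""
    counts: Counter = Counter()
    for tokens in corpus:
        for i, w in enumerate(tokens):
            start, end = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(start, end):
                if i != j:
                    counts[(w, tokens[j])] += 1
    return counts

def probability_ratio(counts: Counter, w1: str, w2: str, k: str,
                      eps: float = 1e-8) -> float:
    """Ratio >> 1 means context k is much more typical of w1 than of w2."""
    total1 = sum(c for (w, _), c in counts.items() if w == w1)
    total2 = sum(c for (w, _), c in counts.items() if w == w2)
    p1 = (counts[(w1, k)] + eps) / (total1 + eps)
    p2 = (counts[(w2, k)] + eps) / (total2 + eps)
    return p1 / p2

corpus = [["deep", "neural", "networks", "for", "parsing"],
          ["rigorous", "proofs", "for", "deep", "results"]]
print(probability_ratio(cooccurrence(corpus), "deep", "rigorous", "neural"))
```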
SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated
Multiple Reference Training
|
Non-task-oriented dialog models suffer from poor quality and non-diverse
responses. To overcome limited conversational data, we apply Simulated Multiple
Reference Training (SMRT; Khayrallah et al., 2020), and use a paraphraser to
simulate multiple responses per training prompt. We find SMRT improves over a
strong Transformer baseline as measured by human and automatic quality scores
and lexical diversity. We also find SMRT is comparable to pretraining in human
evaluation quality, and outperforms pretraining on automatic quality and
lexical diversity, without requiring related-domain dialog data.
| 2,021 |
Computation and Language
|
WLV-RIT at HASOC-Dravidian-CodeMix-FIRE2020: Offensive Language
Identification in Code-switched YouTube Comments
|
This paper describes the WLV-RIT entry to the Hate Speech and Offensive
Content Identification in Indo-European Languages (HASOC) shared task 2020. The
HASOC 2020 organizers provided participants with annotated datasets containing
code-mixed social media posts in Dravidian languages (Malayalam-English and
Tamil-English). We participated in Task 1: offensive comment identification in
code-mixed Malayalam YouTube comments. In our methodology, we take advantage of
available English data by applying cross-lingual contextual word embeddings and
transfer learning to make predictions on Malayalam data. We further improve the
results using various fine-tuning strategies. Our system achieved a weighted
average F1 score of 0.89 on the test set and ranked 5th out of 12
participants.
| 2,020 |
Computation and Language
|
Recent Neural Methods on Slot Filling and Intent Classification for
Task-Oriented Dialogue Systems: A Survey
|
In recent years, fostered by deep learning technologies and by the high
demand for conversational AI, various approaches have been proposed that
address the capacity to elicit and understand user's needs in task-oriented
dialogue systems. We focus on two core tasks, slot filling (SF) and intent
classification (IC), and survey how neural-based models have rapidly evolved to
address natural language understanding in dialogue systems. We introduce three
neural architectures: independent models, which model SF and IC separately;
joint models, which exploit the mutual benefit of the two tasks simultaneously;
and transfer learning models, which scale the model to new domains. We discuss
the current state of the research in SF and IC and highlight challenges that
still require attention.
| 2,020 |
Computation and Language
|
ASAD: A Twitter-based Benchmark Arabic Sentiment Analysis Dataset
|
This paper provides a detailed description of a new Twitter-based benchmark
dataset for Arabic Sentiment Analysis (ASAD), which is launched in a
competition sponsored by KAUST, awarding 10,000 USD, 5,000 USD, and 2,000 USD
to the first-, second-, and third-place winners, respectively. Compared to other
publicly released Arabic datasets, ASAD is a large, high-quality annotated
dataset (including 95K tweets) with three-class sentiment labels (positive,
negative, and neutral). We present the details of the data collection and
annotation processes. In addition, we implement several baseline models for
the competition task and report the results as a reference for the participants
in the competition.
| 2,021 |
Computation and Language
|
A Unifying Theory of Transition-based and Sequence Labeling Parsing
|
We define a mapping from transition-based parsing algorithms that read
sentences from left to right to sequence labeling encodings of syntactic trees.
This not only establishes a theoretical relation between transition-based
parsing and sequence-labeling parsing, but also provides a method to obtain new
encodings for fast and simple sequence labeling parsing from the many existing
transition-based parsers for different formalisms. Applying it to dependency
parsing, we implement sequence labeling versions of four algorithms, showing
that they are learnable and obtain comparable performance to existing
encodings.
| 2,020 |
Computation and Language
|
Vec2Sent: Probing Sentence Embeddings with Natural Language Generation
|
We introspect black-box sentence embeddings by conditionally generating from
them with the objective of retrieving the underlying discrete sentence. We
view this as a new unsupervised probing task and show that it correlates
well with downstream task performance. We also illustrate how the language
generated from different encoders differs. We apply our approach to generate
sentence analogies from sentence embeddings.
| 2,020 |
Computation and Language
|
MixKD: Towards Efficient Distillation of Large-scale Language Models
|
Large-scale language models have recently demonstrated impressive empirical
performance. Nevertheless, the improved results are attained at the price of
bigger models, more power consumption, and slower inference, which hinder their
applicability to low-resource (both memory and computation) platforms.
Knowledge distillation (KD) has been demonstrated as an effective framework for
compressing such big models. However, large-scale neural network systems are
prone to memorize training instances, and thus tend to make inconsistent
predictions when the data distribution is altered slightly. Moreover, the
student model has few opportunities to request useful information from the
teacher model when there is limited task-specific data available. To address
these issues, we propose MixKD, a data-agnostic distillation framework that
leverages mixup, a simple yet efficient data augmentation approach, to endow
the resulting model with stronger generalization ability. Concretely, in
addition to the original training examples, the student model is encouraged to
mimic the teacher's behavior on the linear interpolation of example pairs as
well. We prove from a theoretical perspective that under reasonable conditions
MixKD gives rise to a smaller gap between the generalization error and the
empirical error. To verify its effectiveness, we conduct experiments on the
GLUE benchmark, where MixKD consistently leads to significant gains over the
standard KD training, and outperforms several competitive baselines.
Experiments under a limited-data setting and ablation studies further
demonstrate the advantages of the proposed approach.
| 2,021 |
Computation and Language
|
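The core of the approach above is asking the student to match the teacher not only on real examples but also on linear interpolations of example pairs. The sketch below implements one training step in that spirit, interpolating input embeddings (one plausible reading) and combining a supervised loss with distillation losses on both the original and mixed inputs; the loss weighting and temperature are assumptions.

```python
# Simplified sketch of a mixup-augmented knowledge-distillation step.
import torch
import torch.nn.functional as F

def mixkd_step(student, teacher, embeds, labels, alpha=0.4, temperature=2.0):
    # embeds: (batch, seq_len, dim) input embeddings; labels: (batch,)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(embeds.size(0))
    mixed = lam * embeds + (1.0 - lam) * embeds[perm]      # mixup in embedding space

    with torch.no_grad():
        t_orig, t_mix = teacher(embeds), teacher(mixed)
    s_orig, s_mix = student(embeds), student(mixed)

    def kd(s, t):
        # Soft-label distillation loss with temperature scaling.
        return F.kl_div(F.log_softmax(s / temperature, dim=-1),
                        F.softmax(t / temperature, dim=-1),
                        reduction="batchmean") * temperature ** 2

    loss = (F.cross_entropy(s_orig, labels)   # supervised loss on real examples
            + kd(s_orig, t_orig)              # distillation on real examples
            + kd(s_mix, t_mix))               # distillation on mixed examples
    return loss
```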
Bracketing Encodings for 2-Planar Dependency Parsing
|
We present a bracketing-based encoding that can be used to represent any
2-planar dependency tree over a sentence of length n as a sequence of n labels,
hence providing almost total coverage of crossing arcs in sequence labeling
parsing. First, we show that existing bracketing encodings for parsing as
labeling can only handle a very mild extension of projective trees. Second, we
overcome this limitation by taking into account the well-known property of
2-planarity, which is present in the vast majority of dependency syntactic
structures in treebanks, i.e., the arcs of a dependency tree can be split into
two planes such that arcs in a given plane do not cross. We take advantage of
this property to design a method that balances the brackets and that encodes
the arcs belonging to each of those planes, allowing for almost unrestricted
non-projectivity (around 99.9% coverage) in sequence labeling parsing. The
experiments show that our linearizations improve over the accuracy of the
original bracketing encoding in highly non-projective treebanks (on average by
0.4 LAS), while achieving a similar speed. Also, they are especially suitable
when PoS tags are not used as input parameters to the models.
| 2,021 |
Computation and Language
|
Improving Conversational Question Answering Systems after Deployment
using Feedback-Weighted Learning
|
The interaction of conversational systems with users poses an exciting
opportunity for improving them after deployment, but little evidence has been
provided of its feasibility. In most applications, users are not able to
provide the correct answer to the system, but they are able to provide binary
(correct, incorrect) feedback. In this paper we propose feedback-weighted
learning based on importance sampling to improve upon an initial supervised
system using binary user feedback. We perform simulated experiments on document
classification (for development) and Conversational Question Answering datasets
like QuAC and DoQA, where binary user feedback is derived from gold
annotations. The results show that our method is able to improve over the
initial supervised system, getting close to a fully-supervised system that has
access to the same labeled examples in in-domain experiments (QuAC), and even
matching it in out-of-domain experiments (DoQA). Our work opens up the
prospect of exploiting interactions with real users to improve conversational
systems after deployment.
| 2,020 |
Computation and Language
|
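A schematic view of the method above: the deployed system's answers receive binary user feedback, and answers marked correct are reinforced with an importance-sampling weight that corrects for the probability under which they were originally sampled. The sketch below shows such a loss; the exact weighting scheme in the paper may differ.

```python
# Hedged sketch of feedback-weighted learning with importance sampling.
import torch
import torch.nn.functional as F

def feedback_weighted_loss(logits, sampled_answer, feedback, behaviour_prob, eps=1e-6):
    """
    logits:         (batch, num_answers) scores of the model being improved
    sampled_answer: (batch,) long tensor, the answer the deployed system returned
    feedback:       (batch,) float tensor, 1.0 if the user marked it correct, else 0.0
    behaviour_prob: (batch,) probability of that answer under the system that
                    collected the feedback (used for importance weighting)
    """
    log_p = F.log_softmax(logits, dim=-1)
    chosen = log_p.gather(1, sampled_answer.unsqueeze(1)).squeeze(1)
    weights = feedback / (behaviour_prob + eps)   # importance-sampling weight
    return -(weights * chosen).mean()
```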
Social Chemistry 101: Learning to Reason about Social and Moral Norms
|
Social norms -- the unspoken commonsense rules about acceptable social
behavior -- are crucial in understanding the underlying causes and intents of
people's actions in narratives. For example, underlying an action such as
"wanting to call cops on my neighbors" are social norms that inform our
conduct, such as "It is expected that you report crimes."
We present Social Chemistry, a new conceptual formalism to study people's
everyday social norms and moral judgments over a rich spectrum of real life
situations described in natural language. We introduce Social-Chem-101, a
large-scale corpus that catalogs 292k rules-of-thumb such as "it is rude to run
a blender at 5am" as the basic conceptual units. Each rule-of-thumb is further
broken down with 12 different dimensions of people's judgments, including
social judgments of good and bad, moral foundations, expected cultural
pressure, and assumed legality, which together amount to over 4.5 million
annotations of categorical labels and free-text descriptions.
Comprehensive empirical results based on state-of-the-art neural models
demonstrate that computational modeling of social norms is a promising research
direction. Our model framework, Neural Norm Transformer, learns and generalizes
Social-Chem-101 to successfully reason about previously unseen situations,
generating relevant (and potentially novel) attribute-aware social
rules-of-thumb.
| 2,021 |
Computation and Language
|
Aspect-Based Argument Mining
|
Computational Argumentation in general and Argument Mining in particular are
important research fields. In previous works, many of the challenges to
automatically extract and to some degree reason over natural language arguments
were addressed. The tools to extract argument units are increasingly available
and further open problems can be addressed. In this work, we present the
task of Aspect-Based Argument Mining (ABAM), with the essential subtasks of
Aspect Term Extraction (ATE) and Nested Segmentation (NS). As a first
step, we create and release an annotated corpus with aspect information at
the token level. We consider aspects as the main point(s) argument units are
addressing. This information is important for further downstream tasks such as
argument ranking, argument summarization and generation, as well as the search
for counter-arguments on the aspect-level. We present several experiments using
state-of-the-art supervised architectures and demonstrate their performance for
both of the subtasks. The annotated benchmark is available at
https://github.com/trtm/ABAM.
| 2,020 |
Computation and Language
|
Reasoning Over History: Context Aware Visual Dialog
|
While neural models have been shown to exhibit strong performance on
single-turn visual question answering (VQA) tasks, extending VQA to a
multi-turn, conversational setting remains a challenge. One way to address this
challenge is to augment existing strong neural VQA models with the mechanisms
that allow them to retain information from previous dialog turns. One strong
VQA model is the MAC network, which decomposes a task into a series of
attention-based reasoning steps. However, since the MAC network is designed for
single-turn question answering, it is not capable of referring to past dialog
turns. More specifically, it struggles with tasks that require reasoning over
the dialog history, particularly coreference resolution. We extend the MAC
network architecture with Context-aware Attention and Memory (CAM), which
attends over control states in past dialog turns to determine the necessary
reasoning operations for the current question. MAC nets with CAM achieve up to
98.25% accuracy on the CLEVR-Dialog dataset, beating the existing
state-of-the-art by 30% (absolute). Our error analysis indicates that with CAM,
the model's performance particularly improved on questions that required
coreference resolution.
| 2,020 |
Computation and Language
|
A Targeted Attack on Black-Box Neural Machine Translation with Parallel
Data Poisoning
|
As modern neural machine translation (NMT) systems have been widely deployed,
their security vulnerabilities require close scrutiny. Most recently, NMT
systems have been found vulnerable to targeted attacks which cause them to
produce specific, unsolicited, and even harmful translations. These attacks are
usually exploited in a white-box setting, where adversarial inputs causing
targeted translations are discovered for a known target system. However, this
approach is less viable when the target system is black-box and unknown to the
adversary (e.g., secured commercial systems). In this paper, we show that
targeted attacks on black-box NMT systems are feasible, based on poisoning a
small fraction of their parallel training data. We show that this attack can be
realised practically via targeted corruption of web documents crawled to form
the system's training data. We then analyse the effectiveness of the targeted
poisoning in two common NMT training scenarios: the from-scratch training and
the pre-train & fine-tune paradigm. Our results are alarming: even on the
state-of-the-art systems trained with massive parallel data (tens of millions),
the attacks are still successful (over 50% success rate) under surprisingly low
poisoning budgets (e.g., 0.006%). Lastly, we discuss potential defences to
counter such attacks.
| 2,021 |
Computation and Language
|
IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model
for Indonesian NLP
|
Although the Indonesian language is spoken by almost 200 million people and
is the 10th most spoken language in the world, it is under-represented in NLP
research. Previous work on Indonesian has been hampered by a lack of annotated
datasets, a sparsity of language resources, and a lack of resource
standardization. In this work, we release the IndoLEM dataset comprising seven
tasks for the Indonesian language, spanning morpho-syntax, semantics, and
discourse. We additionally release IndoBERT, a new pre-trained language model
for Indonesian, and evaluate it over IndoLEM, in addition to benchmarking it
against existing resources. Our experiments show that IndoBERT achieves
state-of-the-art performance over most of the tasks in IndoLEM.
| 2,020 |
Computation and Language
|
Investigating Catastrophic Forgetting During Continual Training for
Neural Machine Translation
|
Neural machine translation (NMT) models usually suffer from catastrophic
forgetting during continual training where the models tend to gradually forget
previously learned knowledge and swing to fit the newly added data which may
have a different distribution, e.g., a different domain. Although many methods
have been proposed to solve this problem, the cause of this phenomenon remains
unclear. In the context of domain adaptation, we investigate the cause of
catastrophic forgetting from the perspectives of modules and parameters
(neurons). The investigation of the NMT model's modules shows that some modules
are closely tied to general-domain knowledge, while others are more essential
for domain adaptation. The investigation of the parameters shows that some
parameters are important for both general-domain and in-domain translation, and
that large changes to them during continual training bring about the
performance decline on the general domain. We conduct experiments across
different language pairs and
domains to ensure the validity and reliability of our findings.
| 2,020 |
Computation and Language
|
Liputan6: A Large-scale Indonesian Dataset for Text Summarization
|
In this paper, we introduce a large-scale Indonesian summarization dataset.
We harvest articles from Liputan6.com, an online news portal, and obtain
215,827 document-summary pairs. We leverage pre-trained language models to
develop benchmark extractive and abstractive summarization methods over the
dataset with multilingual and monolingual BERT-based models. We include a
thorough error analysis by examining machine-generated summaries that have low
ROUGE scores, and expose issues both with ROUGE itself and with the extractive
and abstractive summarization models.
| 2,020 |
Computation and Language
|
Event-Related Bias Removal for Real-time Disaster Events
|
Social media has become an important tool to share information about crisis
events such as natural disasters and mass attacks. Detecting actionable posts
that contain useful information requires rapid analysis of huge volumes of data
in real time. This poses a complex problem due to the large number of posts
that do not contain any actionable information. Furthermore, the classification
of information in real-time systems requires training on out-of-domain data, as
we do not have any data from a new emerging crisis. Prior work focuses on
models pre-trained on similar event types. However, those models capture
unnecessary event-specific biases, like the location of the event, which affect
the generalizability and performance of the classifiers on new unseen data from
an emerging new event. In our work, we train an adversarial neural model to
remove latent event-specific biases and improve the performance on tweet
importance classification.
| 2,020 |
Computation and Language
|
Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora
|
Reflexive anaphora present a challenge for semantic interpretation: their
meaning varies depending on context in a way that appears to require abstract
variables. Past work has raised doubts about the ability of recurrent networks
to meet this challenge. In this paper, we explore this question in the context
of a fragment of English that incorporates the relevant sort of contextual
variability. We consider sequence-to-sequence architectures with recurrent
units and show that such networks are capable of learning semantic
interpretations for reflexive anaphora which generalize to novel antecedents.
We explore the effect of attention mechanisms and different recurrent unit
types on the type of training data that is needed for success as measured in
two ways: how much lexical support is needed to induce an abstract reflexive
meaning (i.e., how many distinct reflexive antecedents must occur during
training), and in what contexts a noun phrase must occur to support
generalization of reflexive interpretation to that noun phrase.
| 2,020 |
Computation and Language
|
How Domain Terminology Affects Meeting Summarization Performance
|
Meetings are essential to modern organizations. Numerous meetings are held
and recorded daily, more than can ever be comprehended. A meeting summarization
system that identifies salient utterances from the transcripts to automatically
generate meeting minutes can help. It empowers users to rapidly search and sift
through large meeting collections. To date, the impact of domain terminology on
the performance of meeting summarization remains understudied, despite the fact
that meetings are rich in domain knowledge. In this paper, we create gold-standard
annotations for domain terminology on a sizable meeting corpus; they are known
as jargon terms. We then analyze the performance of a meeting summarization
system with and without jargon terms. Our findings reveal that domain
terminology can have a substantial impact on summarization performance. We
publicly release all domain terminology to advance research in meeting
summarization.
| 2,020 |
Computation and Language
|
ABNIRML: Analyzing the Behavior of Neural IR Models
|
Pretrained contextualized language models such as BERT and T5 have
established a new state-of-the-art for ad-hoc search. However, it is not yet
well-understood why these methods are so effective, what makes some variants
more effective than others, and what pitfalls they may have. We present a new
comprehensive framework for Analyzing the Behavior of Neural IR ModeLs
(ABNIRML), which includes new types of diagnostic probes that allow us to test
several characteristics -- such as writing styles, factuality, sensitivity to
paraphrasing and word order -- that are not addressed by previous techniques.
To demonstrate the value of the framework, we conduct an extensive empirical
study that yields insights into the factors that contribute to the neural
model's gains, and identify potential unintended biases the models exhibit.
Some of our results confirm conventional wisdom, for example that recent neural
ranking models rely less on exact term overlap with the query, and instead
leverage richer linguistic information, evidenced by their higher sensitivity
to word and sentence order. Other results are more surprising, such as that
some models (e.g., T5 and ColBERT) are biased towards factually correct (rather
than simply relevant) texts. Further, some characteristics vary even for the
same base language model, and other characteristics can appear due to random
variations during model training.
| 2,023 |
Computation and Language
|
Semi-supervised Autoencoding Projective Dependency Parsing
|
We describe two end-to-end autoencoding models for semi-supervised
graph-based projective dependency parsing. The first model is a Locally
Autoencoding Parser (LAP) encoding the input using continuous latent variables
in a sequential manner; The second model is a Globally Autoencoding Parser
(GAP) encoding the input into dependency trees as latent variables, with exact
inference. Both models consist of two parts: an encoder enhanced by deep neural
networks (DNN) that can utilize the contextual information to encode the input
into latent variables, and a decoder which is a generative model able to
reconstruct the input. Both LAP and GAP admit a unified structure with
different loss functions for labeled and unlabeled data with shared parameters.
We conducted experiments on WSJ and UD dependency parsing data sets, showing
that our models can exploit the unlabeled data to improve the performance given
a limited amount of labeled data, and outperform a previously proposed
semi-supervised model.
| 2,020 |
Computation and Language
|
Influence Patterns for Explaining Information Flow in BERT
|
While "attention is all you need" may be proving true, we do not know why:
attention-based transformer models such as BERT are superior, but how
information flows from input tokens to output predictions is unclear. We
introduce influence patterns, abstractions of sets of paths through a
transformer model. Patterns quantify and localize the flow of information to
paths passing through a sequence of model nodes. Experimentally, we find that a
significant portion of information flow in BERT goes through skip connections
instead of attention heads. We further show that consistency of patterns across
instances is an indicator of BERT's performance. Finally, we demonstrate that
patterns account for far more model performance than previous attention-based
and layer-based methods.
| 2,021 |
Computation and Language
|
Dual-decoder Transformer for Joint Automatic Speech Recognition and
Multilingual Speech Translation
|
We introduce dual-decoder Transformer, a new model architecture that jointly
performs automatic speech recognition (ASR) and multilingual speech translation
(ST). Our models are based on the original Transformer architecture (Vaswani et
al., 2017) but consist of two decoders, each responsible for one task (ASR or
ST). Our major contribution lies in how these decoders interact with each
other: one decoder can attend to different information sources from the other
via a dual-attention mechanism. We propose two variants of these architectures
corresponding to two different levels of dependencies between the decoders,
called the parallel and cross dual-decoder Transformers, respectively.
Extensive experiments on the MuST-C dataset show that our models outperform the
previously-reported highest translation performance in the multilingual
settings, and outperform as well bilingual one-to-one results. Furthermore, our
parallel models demonstrate no trade-off between ASR and ST compared to the
vanilla multi-task architecture. Our code and pre-trained models are available
at https://github.com/formiel/speech-translation.
| 2,020 |
Computation and Language
|
ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN
|
We present PERIN, a novel permutation-invariant approach to sentence-to-graph
semantic parsing. PERIN is a versatile, cross-framework and language
independent architecture for universal modeling of semantic structures. Our
system participated in the CoNLL 2020 shared task, Cross-Framework Meaning
Representation Parsing (MRP 2020), where it was evaluated on five different
frameworks (AMR, DRG, EDS, PTG and UCCA) across four languages. PERIN was one
of the winners of the shared task. The source code and pretrained models are
available at https://github.com/ufal/perin.
| 2,020 |
Computation and Language
|
I Know What You Asked: Graph Path Learning using AMR for Commonsense
Reasoning
|
CommonsenseQA is a task in which a correct answer is predicted through
commonsense reasoning with pre-defined knowledge. Most previous works have
aimed to improve the performance with distributed representation without
considering the process of predicting the answer from the semantic
representation of the question. To shed light upon the semantic interpretation
of the question, we propose an AMR-ConceptNet-Pruned (ACP) graph. The ACP graph
is pruned from a fully integrated graph encompassing the Abstract Meaning
Representation (AMR) graph generated from the input question and an external
commonsense knowledge graph, ConceptNet (CN). Then the ACP graph is exploited
to interpret the reasoning path as well as to predict the correct answer on the
CommonsenseQA task. This paper presents the manner in which the commonsense
reasoning process can be interpreted with the relations and concepts provided
by the ACP graph. Moreover, ACP-based models are shown to outperform the
baselines.
| 2,020 |
Computation and Language
|
Reducing Confusion in Active Learning for Part-Of-Speech Tagging
|
Active learning (AL) uses a data selection algorithm to select useful
training samples to minimize annotation cost. This is now an essential tool for
building low-resource syntactic analyzers such as part-of-speech (POS) taggers.
Existing AL heuristics are generally designed on the principle of selecting
uncertain yet representative training instances, where annotating these
instances may reduce a large number of errors. However, in an empirical study
across six typologically diverse languages (German, Swedish, Galician, North
Sami, Persian, and Ukrainian), we found the surprising result that even in an
oracle scenario where we know the true uncertainty of predictions, these
current heuristics are far from optimal. Based on this analysis, we pose the
problem of AL as selecting instances which maximally reduce the confusion
between particular pairs of output tags. Extensive experimentation on the
aforementioned languages shows that our proposed AL strategy outperforms other
AL strategies by a significant margin. We also present auxiliary results
demonstrating the importance of proper calibration of models, which we ensure
through cross-view training, and analysis demonstrating how our proposed
strategy selects examples that more closely follow the oracle data
distribution.
| 2,020 |
Computation and Language
|
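The selection principle above, picking instances that maximally reduce confusion between particular tag pairs, can be approximated with a simple heuristic: estimate which tag pairs the current model confuses most, then prefer sentences whose tokens fall into those pairs. The sketch below follows that simplification; the confusion estimate and scoring rule are assumptions made for illustration, not the authors' strategy.

```python
# Confusion-aware active learning selection, sketched with numpy.
import numpy as np

def most_confused_pairs(heldout_probs, top_k=3):
    """heldout_probs: list of (tokens, num_tags) arrays of model probabilities."""
    num_tags = heldout_probs[0].shape[1]
    confusion = np.zeros((num_tags, num_tags))
    for p in heldout_probs:
        best2 = np.argsort(-p, axis=1)[:, :2]          # top-2 tags per token
        for a, b in best2:
            confusion[min(a, b), max(a, b)] += 1
    pairs = [(i, j) for i in range(num_tags) for j in range(i + 1, num_tags)]
    pairs.sort(key=lambda ij: -confusion[ij])
    return pairs[:top_k]

def sentence_score(p, confused_pairs):
    """Count tokens whose top-2 predictions form one of the confused tag pairs."""
    best2 = np.argsort(-p, axis=1)[:, :2]
    targets = set(confused_pairs)
    return sum(1 for a, b in best2 if (min(a, b), max(a, b)) in targets)

def select_for_annotation(unlabeled_probs, confused_pairs, budget=50):
    """Return indices of the sentences most likely to reduce tag confusion."""
    ranked = sorted(range(len(unlabeled_probs)),
                    key=lambda i: -sentence_score(unlabeled_probs[i], confused_pairs))
    return ranked[:budget]
```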
Context-Aware Cross-Attention for Non-Autoregressive Translation
|
Non-autoregressive translation (NAT) significantly accelerates the inference
process by predicting the entire target sequence. However, due to the lack of
target dependency modelling in the decoder, the conditional generation process
heavily depends on the cross-attention. In this paper, we reveal a localness
perception problem in NAT cross-attention, for which it is difficult to
adequately capture source context. To alleviate this problem, we propose to
enhance conventional cross-attention with signals from neighbouring source tokens.
Experimental results on several representative datasets show that our approach
can consistently improve translation quality over strong NAT baselines.
Extensive analyses demonstrate that the enhanced cross-attention achieves
better exploitation of source contexts by leveraging both local and global
information.
| 2,020 |
Computation and Language
|
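One way to realize the idea above, injecting neighbouring source tokens into cross-attention, is to smooth the encoder states with a small depthwise convolution before they are used as keys and values, so every attended position also carries local context. The PyTorch sketch below shows that variant; the window size and the use of a convolution (rather than the paper's exact mechanism) are assumptions.

```python
# Sketch of cross-attention whose keys/values carry neighbouring-token context.
import torch
import torch.nn as nn

class LocallyEnhancedCrossAttention(nn.Module):
    def __init__(self, dim=512, heads=8, window=3):
        super().__init__()
        self.local = nn.Conv1d(dim, dim, kernel_size=window,
                               padding=window // 2, groups=dim)   # depthwise conv
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, decoder_states, encoder_states):
        # encoder_states: (batch, src_len, dim); decoder_states: (batch, tgt_len, dim)
        local = self.local(encoder_states.transpose(1, 2)).transpose(1, 2)
        keys_values = encoder_states + local           # global plus neighbouring signal
        out, _ = self.attn(decoder_states, keys_values, keys_values)
        return out
```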
COSMO: Conditional SEQ2SEQ-based Mixture Model for Zero-Shot Commonsense
Question Answering
|
Commonsense reasoning refers to the ability of evaluating a social situation
and acting accordingly. Identification of the implicit causes and effects of a
social context is the driving capability which can enable machines to perform
commonsense reasoning. The dynamic world of social interactions requires
context-dependent on-demand systems to infer such underlying information.
However, current approaches in this realm lack the ability to perform
commonsense reasoning upon facing an unseen situation, mostly due to
incapability of identifying a diverse range of implicit social relations. Hence
they fail to estimate the correct reasoning path. In this paper, we present
Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the
capabilities of dynamic and diverse content generation. We use COSMO to
generate context-dependent clauses, which form a dynamic Knowledge Graph (KG)
on-the-fly for commonsense reasoning. To show the adaptability of our model to
context-dependent knowledge generation, we address the task of zero-shot
commonsense question answering. The empirical results indicate an improvement
of up to +5.2% over the state-of-the-art models.
| 2,020 |
Computation and Language
|
Adapting Pretrained Transformer to Lattices for Spoken Language
Understanding
|
Lattices are compact representations that encode multiple hypotheses, such as
speech recognition results or different word segmentations. It is shown that
encoding lattices, as opposed to 1-best results generated by an automatic speech
recognizer (ASR), boosts the performance of spoken language understanding (SLU).
Recently, pretrained language models with the transformer architecture have
achieved the state-of-the-art results on natural language understanding, but
their ability of encoding lattices has not been explored. Therefore, this paper
aims at adapting pretrained transformers to lattice inputs in order to perform
understanding tasks specifically for spoken language. Our experiments on the
benchmark ATIS dataset show that fine-tuning pretrained transformers with
lattice inputs yields clear improvement over fine-tuning with 1-best results.
Further evaluation demonstrates the effectiveness of our methods under
different acoustic conditions. Our code is available at
https://github.com/MiuLab/Lattice-SLU
| 2,020 |
Computation and Language
|
Context Dependent Semantic Parsing: A Survey
|
Semantic parsing is the task of translating natural language utterances into
machine-readable meaning representations. Currently, most semantic parsing
methods are not able to utilize contextual information (e.g. dialogue and
comments history), which has a great potential to boost semantic parsing
performance. To address this issue, context dependent semantic parsing has
recently drawn a lot of attention. In this survey, we investigate progress on
the methods for the context dependent semantic parsing, together with the
current datasets and tasks. We then point out open problems and challenges for
future research in this area. The collected resources for this topic are
available
at: https://github.com/zhuang-li/Contextual-Semantic-Parsing-Paper-List.
| 2,020 |
Computation and Language
|
Hierarchical Bi-Directional Self-Attention Networks for Paper Review
Rating Recommendation
|
Review rating prediction of text reviews is a rapidly growing technology with
a wide range of applications in natural language processing. However, most
existing methods either use hand-crafted features or learn features using deep
learning with a simple text corpus as input for review rating prediction,
ignoring the hierarchies among data. In this paper, we propose a Hierarchical
bi-directional self-attention Network framework (HabNet) for paper review
rating prediction and recommendation, which can serve as an effective
decision-making tool for the academic paper review process. Specifically, we
leverage the hierarchical structure of the paper reviews with three levels of
encoders: sentence encoder (level one), intra-review encoder (level two) and
inter-review encoder (level three). Each encoder first derives a contextual
representation of each level, then generates a higher-level representation, and
after the learning process, we are able to identify useful predictors to make
the final acceptance decision, as well as to help discover the inconsistency
between numerical review ratings and text sentiment conveyed by reviewers.
Furthermore, we introduce two new metrics to evaluate models in data imbalance
situations. Extensive experiments on a publicly available dataset (PeerRead)
and our own collected dataset (OpenReview) demonstrate the superiority of the
proposed approach compared with state-of-the-art methods.
| 2,020 |
Computation and Language
|
Comparison by Conversion: Reverse-Engineering UCCA from Syntax and
Lexical Semantics
|
Building robust natural language understanding systems will require a clear
characterization of whether and how various linguistic meaning representations
complement each other. To perform a systematic comparative analysis, we
evaluate the mapping between meaning representations from different frameworks
using two complementary methods: (i) a rule-based converter, and (ii) a
supervised delexicalized parser that parses to one framework using only
information from the other as features. We apply these methods to convert the
STREUSLE corpus (with syntactic and lexical semantic annotations) to UCCA (a
graph-structured full-sentence meaning representation). Both methods yield
surprisingly accurate target representations, close to fully supervised UCCA
parser quality---indicating that UCCA annotations are partially redundant with
STREUSLE annotations. Despite this substantial convergence between frameworks,
we find several important areas of divergence.
| 2,020 |
Computation and Language
|
Emergent Communication Pretraining for Few-Shot Machine Translation
|
While state-of-the-art models that rely upon massively multilingual
pretrained encoders achieve sample efficiency in downstream applications, they
still require abundant amounts of unlabelled text. Nevertheless, most of the
world's languages lack such resources. Hence, we investigate a more radical
form of unsupervised knowledge transfer in the absence of linguistic data. In
particular, for the first time we pretrain neural networks via emergent
communication from referential games. Our key assumption is that grounding
communication on images---as a crude approximation of real-world
environments---inductively biases the model towards learning natural languages.
On the one hand, we show that this substantially benefits machine translation
in few-shot settings. On the other hand, this also provides an extrinsic
evaluation protocol to probe the properties of emergent languages ex vitro.
Intuitively, the closer they are to natural languages, the higher the gains
from pretraining on them should be. For instance, in this work we measure the
influence of communication success and maximum sequence length on downstream
performances. Finally, we introduce a customised adapter layer and annealing
strategies for the regulariser of maximum-a-posteriori inference during
fine-tuning. These turn out to be crucial to facilitate knowledge transfer and
prevent catastrophic forgetting. Compared to a recurrent baseline, our method
yields gains of $59.0\%$$\sim$$147.6\%$ in BLEU score with only $500$ NMT
training instances and $65.1\%$$\sim$$196.7\%$ with $1,000$ NMT training
instances across four language pairs. These proof-of-concept results reveal the
potential of emergent communication pretraining for both natural language
processing tasks in resource-poor settings and extrinsic evaluation of
artificial languages.
| 2,020 |
Computation and Language
|
How Far Does BERT Look At:Distance-based Clustering and Analysis of
BERT$'$s Attention
|
Recent research on the multi-head attention mechanism, especially that in
pre-trained models such as BERT, has shown us heuristics and clues in analyzing
various aspects of the mechanism. As most of the research focuses on probing
tasks or hidden states, previous works have found some primitive patterns of
attention head behavior through heuristic analytical methods, but a systematic
analysis specific to the attention patterns is still lacking. In this
work, we clearly cluster the attention heatmaps into significantly different
patterns through unsupervised clustering on top of a set of proposed features,
which corroborates previous observations. We further examine their
corresponding functions through analytical studies. In addition, our proposed
features can be used to explain and calibrate different attention heads in
Transformer models.
| 2,020 |
Computation and Language
|
An Empirical Study of Contextual Data Augmentation for Japanese Zero
Anaphora Resolution
|
One critical issue of zero anaphora resolution (ZAR) is the scarcity of
labeled data. This study explores how effectively this problem can be
alleviated by data augmentation. We adopt a state-of-the-art data augmentation
method, called the contextual data augmentation (CDA), that generates labeled
training instances using a pretrained language model. The CDA has been reported
to work well for several other natural language processing tasks, including
text classification and machine translation. This study addresses two
underexplored issues of CDA: how to reduce the computational cost of
data augmentation and how to ensure the quality of the generated data. We also
propose two methods to adapt CDA to ZAR: [MASK]-based augmentation and
linguistically-controlled masking. The experimental results on
Japanese ZAR show that our methods contribute to both the accuracy gain and the
computation cost reduction. Our closer analysis reveals that the proposed
method can improve the quality of the augmented training data when compared to
the conventional CDA.
| 2,020 |
Computation and Language
|
A Closer Look at Linguistic Knowledge in Masked Language Models: The
Case of Relative Clauses in American English
|
Transformer-based language models achieve high performance on various tasks,
but we still lack understanding of the kind of linguistic knowledge they learn
and rely on. We evaluate three models (BERT, RoBERTa, and ALBERT), testing
their grammatical and semantic knowledge by sentence-level probing, diagnostic
cases, and masked prediction tasks. We focus on relative clauses (in American
English) as a complex phenomenon needing contextual information and antecedent
identification to be resolved. Based on a naturalistic dataset, probing shows
that all three models indeed capture linguistic knowledge about grammaticality,
achieving high performance. Evaluation on diagnostic cases and masked
prediction tasks considering fine-grained linguistic knowledge, however, shows
pronounced model-specific weaknesses especially on semantic knowledge, strongly
impacting models' performance. Our results highlight the importance of (a) model
comparison in evaluation tasks and (b) building claims about model performance
and the linguistic knowledge they capture on more than purely probing-based
evaluations.
| 2,020 |
Computation and Language
|
Combining Event Semantics and Degree Semantics for Natural Language
Inference
|
In formal semantics, there are two well-developed semantic frameworks: event
semantics, which treats verbs and adverbial modifiers using the notion of
event, and degree semantics, which analyzes adjectives and comparatives using
the notion of degree. However, it is not obvious whether these frameworks can
be combined to handle cases in which the phenomena in question are interacting
with each other. Here, we study this issue by focusing on natural language
inference (NLI). We implement a logic-based NLI system that combines event
semantics and degree semantics and their interaction with lexical knowledge. We
evaluate the system on various NLI datasets containing linguistically
challenging problems. The results show that the system achieves high accuracies
on these datasets in comparison with previous logic-based systems and
deep-learning-based systems. This suggests that the two semantic frameworks can
be combined consistently to handle various combinations of linguistic phenomena
without compromising the advantage of either framework.
| 2,020 |
Computation and Language
|
DNN-Based Semantic Model for Rescoring N-best Speech Recognition List
|
The word error rate (WER) of an automatic speech recognition (ASR) system
increases when a mismatch occurs between the training and the testing
conditions, e.g., due to noise. In this case, the acoustic information can be
less reliable. This work aims to improve ASR by modeling long-term semantic
relations to compensate for distorted acoustic features. We propose to perform
this through rescoring of the ASR N-best hypotheses list. To achieve this, we
train a deep neural network (DNN). Our DNN rescoring model is aimed at
selecting hypotheses that have better semantic consistency and therefore lower
WER. We investigate two types of representations as part of input features to
our DNN model: static word embeddings (from word2vec) and dynamic contextual
embeddings (from BERT). Acoustic and linguistic features are also included. We
perform experiments on the publicly available dataset TED-LIUM mixed with real
noise. The proposed rescoring approaches give a significant improvement in
WER over the ASR system without rescoring in two noisy conditions, with both
n-gram and RNNLM language models.
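
A hedged sketch of the rescoring step: linearly combine the ASR score with a semantic-consistency score over the N-best list. The toy lexical scorer and the 0.7 interpolation weight below are illustrative assumptions, not the paper's DNN model.

```python
# Hedged sketch only: combine the ASR score with a semantic-consistency score.
def rescore_nbest(hypotheses, semantic_scorer, lam=0.7):
    """hypotheses: list of (text, asr_score); higher scores are better."""
    rescored = [(text, lam * asr + (1 - lam) * semantic_scorer(text))
                for text, asr in hypotheses]
    return max(rescored, key=lambda pair: pair[1])

# Toy semantic scorer: reward overlap with an assumed topical lexicon.
topic = {"flight", "boston", "ticket"}

def semantic_scorer(text):
    words = set(text.lower().split())
    return len(words & topic) / max(len(words), 1)

nbest = [("book a flight to boston", -3.2), ("look a fright to bostin", -3.1)]
print(rescore_nbest(nbest, semantic_scorer))  # picks the semantically consistent hypothesis
```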
| 2,020 |
Computation and Language
|
Biased TextRank: Unsupervised Graph-Based Content Extraction
|
We introduce Biased TextRank, a graph-based content extraction method
inspired by the popular TextRank algorithm that ranks text spans according to
their importance for language processing tasks and according to their relevance
to an input "focus." Biased TextRank enables focused content extraction for
text by modifying the random restarts in the execution of TextRank. The random
restart probabilities are assigned based on the relevance of the graph nodes to
the focus of the task. We present two applications of Biased TextRank: focused
summarization and explanation extraction, and show that our algorithm leads to
improved performance on two different datasets by significant ROUGE-N score
margins. Much like its predecessor, Biased TextRank is unsupervised, easy to
implement and orders of magnitude faster and lighter than current
state-of-the-art Natural Language Processing methods for similar tasks.
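
For illustration only, a minimal sketch of the biased-restart idea, assuming a precomputed sentence-similarity matrix and per-sentence relevance scores to the focus; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): ranking with biased random restarts.
import numpy as np

def biased_textrank(similarity, relevance, damping=0.85, iters=100, tol=1e-6):
    sim = np.array(similarity, dtype=float)
    np.fill_diagonal(sim, 0.0)
    col_sums = sim.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    transition = sim / col_sums                     # column-stochastic transitions
    restart = np.asarray(relevance, dtype=float)
    restart = restart / restart.sum()               # restart mass follows focus relevance
    scores = np.full(sim.shape[0], 1.0 / sim.shape[0])
    for _ in range(iters):
        updated = damping * transition @ scores + (1 - damping) * restart
        if np.abs(updated - scores).sum() < tol:
            return updated
        scores = updated
    return scores

similarity = [[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.4],
              [0.2, 0.4, 0.0]]
relevance = [0.1, 0.8, 0.1]                          # sentence 2 is closest to the focus
print(biased_textrank(similarity, relevance))
```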
| 2,020 |
Computation and Language
|
Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of
Reasoning Steps
|
A multi-hop question answering (QA) dataset aims to test reasoning and
inference skills by requiring a model to read multiple paragraphs to answer a
given question. However, current datasets do not provide a complete explanation
for the reasoning process from the question to the answer. Further, previous
studies revealed that many examples in existing multi-hop datasets do not
require multi-hop reasoning to answer a question. In this study, we present a
new multi-hop QA dataset, called 2WikiMultiHopQA, which uses structured and
unstructured data. In our dataset, we introduce the evidence information
containing a reasoning path for multi-hop questions. The evidence information
has two benefits: (i) providing a comprehensive explanation for predictions and
(ii) evaluating the reasoning skills of a model. We carefully design a pipeline
and a set of templates for generating question-answer pairs, which guarantees
the multi-hop steps and the quality of the questions. We also exploit the
structured format in Wikidata and use logical rules to create questions that
are natural but still require multi-hop reasoning. Through experiments, we
demonstrate that our dataset is challenging for multi-hop models and it ensures
that multi-hop reasoning is required.
| 2,020 |
Computation and Language
|
Enabling Zero-shot Multilingual Spoken Language Translation with
Language-Specific Encoders and Decoders
|
Current end-to-end approaches to Spoken Language Translation (SLT) rely on
limited training resources, especially for multilingual settings. On the other
hand, Multilingual Neural Machine Translation (MultiNMT) approaches rely on
higher-quality and more massive data sets. Our proposed method extends a
MultiNMT architecture based on language-specific encoders-decoders to the task
of Multilingual SLT (MultiSLT). Our method entirely eliminates the dependency
on MultiSLT data and is able to translate while training only on ASR and
MultiNMT data.
Our experiments on four different languages show that coupling the speech
encoder to the MultiNMT architecture produces similar quality translations
compared to a bilingual baseline ($\pm 0.2$ BLEU) while effectively allowing
for zero-shot MultiSLT. Additionally, we propose using an Adapter module for
coupling the speech inputs. This Adapter module produces consistent
improvements up to +6 BLEU points on the proposed architecture and +1 BLEU
point on the end-to-end baseline.
| 2,021 |
Computation and Language
|
Exploring Question-Specific Rewards for Generating Deep Questions
|
Recent question generation (QG) approaches often utilize the
sequence-to-sequence framework (Seq2Seq) to optimize the log-likelihood of
ground-truth questions using teacher forcing. However, this training objective
is inconsistent with actual question quality, which is often reflected by
certain global properties such as whether the question can be answered by the
document. As such, we directly optimize for QG-specific objectives via
reinforcement learning to improve question quality. We design three different
rewards that aim to improve the fluency, relevance, and answerability of
generated questions. We conduct both automatic and human evaluations in
addition to a thorough analysis to explore the effect of each QG-specific
reward. We find that optimizing question-specific rewards generally leads to
better performance in automatic evaluation metrics. However, only the rewards
that correlate well with human judgement (e.g., relevance) lead to real
improvement in question quality. Optimizing for the others, especially
answerability, introduces incorrect bias to the model, resulting in poor
question quality. Our code is publicly available at
https://github.com/YuxiXie/RL-for-Question-Generation.
| 2,020 |
Computation and Language
|
Generating Knowledge Graphs by Employing Natural Language Processing and
Machine Learning Techniques within the Scholarly Domain
|
The continuous growth of scientific literature brings innovations and, at the
same time, raises new challenges. One of them is related to the fact that its
analysis has become difficult due to the high volume of published papers for
which manual effort for annotations and management is required. Novel
technological infrastructures are needed to help researchers, research policy
makers, and companies to time-efficiently browse, analyse, and forecast
scientific research. Knowledge graphs, i.e., large networks of entities and
relationships, have proved to be an effective solution in this space. Scientific
knowledge graphs focus on the scholarly domain and typically contain metadata
describing research publications such as authors, venues, organizations,
research topics, and citations. However, the current generation of knowledge
graphs lacks an explicit representation of the knowledge presented in the
research papers. As such, in this paper, we present a new architecture that
takes advantage of Natural Language Processing and Machine Learning methods for
extracting entities and relationships from research publications and integrates
them in a large-scale knowledge graph. Within this research work, we i) tackle
the challenge of knowledge extraction by employing several state-of-the-art
Natural Language Processing and Text Mining tools, ii) describe an approach for
integrating entities and relationships generated by these tools, iii) show the
advantage of such a hybrid system over alternative approaches, and iv) as a
chosen use case, we generated a scientific knowledge graph including 109,105
triples, extracted from 26,827 abstracts of papers within the Semantic Web
domain. As our approach is general and can be applied to any domain, we expect
that it can facilitate the management, analysis, dissemination, and processing
of scientific knowledge.
| 2,020 |
Computation and Language
|
Improving Variational Autoencoder for Text Modelling with Timestep-Wise
Regularisation
|
The Variational Autoencoder (VAE) is a popular and powerful model applied to
text modelling to generate diverse sentences. However, an issue known as
posterior collapse (or KL loss vanishing) happens when the VAE is used in text
modelling, where the approximate posterior collapses to the prior, and the
model will totally ignore the latent variables and degenerate into a plain
language model during text generation. Such an issue is particularly prevalent
when RNN-based VAE models are employed for text modelling. In this paper, we
propose a simple, generic architecture called Timestep-Wise Regularisation VAE
(TWR-VAE), which can effectively avoid posterior collapse and can be applied to
any RNN-based VAE models. The effectiveness and versatility of our model are
demonstrated in different tasks, including language modelling and dialogue
response generation.
| 2,020 |
Computation and Language
|
Automated Transcription of Non-Latin Script Periodicals: A Case Study in
the Ottoman Turkish Print Archive
|
Our study utilizes deep learning methods for the automated transcription of
late nineteenth- and early twentieth-century periodicals written in Arabic
script Ottoman Turkish (OT) using the Transkribus platform. We discuss the
historical situation of OT text collections and how they were excluded for the
most part from the late twentieth century corpora digitization that took place
in many Latin script languages. This exclusion has two basic reasons: the
technical challenges of OCR for Arabic script languages, and the rapid
abandonment of that very script in the Turkish historical context. In the
specific case of OT, opening periodical collections to digital tools requires
training HTR models to generate transcriptions in the Latin writing system of
contemporary readers of Turkish, and not, as some may expect, in right-to-left
Arabic script text. In the paper we discuss the challenges of training such
models where a one-to-one correspondence between the writing systems does not
exist, and we report results based on our HTR experiments with two OT
periodicals from the early twentieth century. Finally, we reflect on potential
domain bias of HTR models in historical languages exhibiting spatio-temporal
variance as well as the significance of working between writing systems for
language communities that have experienced language reform and script change.
| 2,020 |
Computation and Language
|
Introducing various Semantic Models for Amharic: Experimentation and
Evaluation with multiple Tasks and Datasets
|
The availability of different pre-trained semantic models enabled the quick
development of machine learning components for downstream applications. Despite
the availability of abundant text data for low resource languages, only a few
semantic models are publicly available. Publicly available pre-trained models
are usually built as multilingual versions of semantic models that cannot fit
each language well due to context variations. In this work, we introduce
different semantic models for Amharic. After we experiment with the existing
pre-trained semantic models, we trained and fine-tuned nine new different
models using a monolingual text corpus. The models are built using word2Vec
embeddings, distributional thesaurus (DT), contextual embeddings, and DT
embeddings obtained via network embedding algorithms. Moreover, we employ these
models for different NLP tasks and investigate their impact. We find that newly
trained models perform better than pre-trained multilingual models.
Furthermore, models based on contextual embeddings from RoBERTa perform better
than the word2Vec models.
| 2,021 |
Computation and Language
|
QMUL-SDS @ SardiStance: Leveraging Network Interactions to Boost
Performance on Stance Detection using Knowledge Graphs
|
This paper presents our submission to the SardiStance 2020 shared task,
describing the architecture used for Task A and Task B. While our submission
for Task A did not exceed the baseline, retraining our model using all the
training tweets showed promising results (f-avg 0.601) using a
bidirectional LSTM with multilingual BERT embeddings for Task A. For our
submission for Task B, we ranked 6th (f-avg 0.709). With further investigation,
our best experimental settings increased performance from (f-avg 0.573) to
(f-avg 0.733) with the same architecture and parameter settings, after only
incorporating social interaction features -- highlighting the impact of social
interaction on the model's performance.
| 2,020 |
Computation and Language
|
The Devil is in the Details: Evaluating Limitations of Transformer-based
Methods for Granular Tasks
|
Contextual embeddings derived from transformer-based neural language models
have shown state-of-the-art performance for various tasks such as question
answering, sentiment analysis, and textual similarity in recent years.
Extensive work shows how accurately such models can represent abstract,
semantic information present in text. In this expository work, we explore a
tangent direction and analyze such models' performance on tasks that require a
more granular level of representation. We focus on the problem of textual
similarity from two perspectives: matching documents on a granular level
(requiring embeddings to capture fine-grained attributes in the text), and an
abstract level (requiring embeddings to capture overall textual semantics). We
empirically demonstrate, across two datasets from different domains, that
despite high performance in abstract document matching as expected, contextual
embeddings are consistently (and at times, vastly) outperformed by simple
baselines like TF-IDF for more granular tasks. We then propose a simple but
effective method to incorporate TF-IDF into models that use contextual
embeddings, achieving relative improvements of up to 36% on granular tasks.
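
A rough sketch of one possible way to combine the two signals, interpolating TF-IDF similarity with a contextual-embedding similarity; the `embed` function and the 0.5 weight are placeholders, not the paper's method.

```python
# Rough sketch only: interpolate TF-IDF and contextual-embedding similarities.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def combined_similarity(docs_a, docs_b, embed, alpha=0.5):
    tfidf = TfidfVectorizer().fit(docs_a + docs_b)
    s_tfidf = cosine_similarity(tfidf.transform(docs_a), tfidf.transform(docs_b))
    s_contextual = cosine_similarity(embed(docs_a), embed(docs_b))
    return alpha * s_tfidf + (1 - alpha) * s_contextual

# Dummy "embedding" standing in for a real contextual model.
def embed(docs):
    def word_vec(word):
        return np.random.default_rng(abs(hash(word)) % (2**32)).normal(size=16)
    return np.array([np.mean([word_vec(w) for w in d.split()], axis=0) for d in docs])

print(combined_similarity(["nlp model evaluation"], ["evaluation of nlp models"], embed))
```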
| 2,020 |
Computation and Language
|
Automatic Detection of Machine Generated Text: A Critical Survey
|
Text generative models (TGMs) excel in producing text that matches the style
of human language reasonably well. Such TGMs can be misused by adversaries,
e.g., by automatically generating fake news and fake product reviews that can
look authentic and fool humans. Detectors that can distinguish text generated
by TGM from human written text play a vital role in mitigating such misuse of
TGMs. Recently, there has been a flurry of works from both natural language
processing (NLP) and machine learning (ML) communities to build accurate
detectors for English. Despite the importance of this problem, there is
currently no work that surveys this fast-growing literature and introduces
newcomers to important research challenges. In this work, we fill this void by
providing a critical survey and review of this literature to facilitate a
comprehensive understanding of this problem. We conduct an in-depth error
analysis of the state-of-the-art detector and discuss research directions to
guide future work in this exciting area.
| 2,020 |
Computation and Language
|
Supervised Contrastive Learning for Pre-trained Language Model
Fine-tuning
|
State-of-the-art natural language understanding classification models follow
two stages: pre-training a large language model on an auxiliary task, and then
fine-tuning the model on a task-specific labeled dataset using cross-entropy
loss. However, the cross-entropy loss has several shortcomings that can lead to
sub-optimal generalization and instability. Driven by the intuition that good
generalization requires capturing the similarity between examples in one class
and contrasting them with examples in other classes, we propose a supervised
contrastive learning (SCL) objective for the fine-tuning stage. Combined with
cross-entropy, our proposed SCL loss obtains significant improvements over a
strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in
few-shot learning settings, without requiring specialized architecture, data
augmentations, memory banks, or additional unsupervised data. Our proposed
fine-tuning objective leads to models that are more robust to different levels
of noise in the fine-tuning training data, and can generalize better to related
tasks with limited labeled data.
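
A minimal sketch of mixing a supervised contrastive term with cross-entropy during fine-tuning, assuming per-example encoder features; the temperature and mixing weight below are assumptions, not the authors' settings.

```python
# Minimal sketch: supervised contrastive term combined with cross-entropy.
import torch
import torch.nn.functional as F

def scl_plus_ce(features, logits, labels, tau=0.3, lam=0.5):
    ce = F.cross_entropy(logits, labels)
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / tau                                # pairwise similarities
    n = labels.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=labels.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, -1e9)                       # exclude self pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts.clamp(min=1)
    has_pos = pos_counts > 0
    scl = per_anchor[has_pos].mean() if has_pos.any() else torch.zeros(())
    return (1 - lam) * ce + lam * scl

features = torch.randn(8, 16)
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(scl_plus_ce(features, logits, labels))
```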
| 2,021 |
Computation and Language
|
WSL-DS: Weakly Supervised Learning with Distant Supervision for Query
Focused Multi-Document Abstractive Summarization
|
In the Query Focused Multi-Document Summarization (QF-MDS) task, a set of
documents and a query are given where the goal is to generate a summary from
these documents based on the given query. However, one major challenge for this
task is the lack of availability of labeled training datasets. To overcome this
issue, in this paper, we propose a novel weakly supervised learning approach
via utilizing distant supervision. In particular, we use datasets similar to
the target dataset as the training data where we leverage pre-trained sentence
similarity models to generate the weak reference summary of each individual
document in a document set from the multi-document gold reference summaries.
Then, we iteratively train our summarization model on each single document to
alleviate the computational complexity issue that occurs when training neural
summarization models on multiple documents (i.e., long sequences) at once.
Experimental results in Document Understanding Conferences (DUC) datasets show
that our proposed approach sets a new state-of-the-art result in terms of
various evaluation metrics.
| 2,020 |
Computation and Language
|
Meta-Learning for Natural Language Understanding under Continual
Learning Framework
|
Neural networks have been recognized for their accomplishments in tackling
various natural language understanding (NLU) tasks. Methods have been developed
to train a robust model to handle multiple tasks to gain a general
representation of text. In this paper, we implement the model-agnostic
meta-learning (MAML) and Online aware Meta-learning (OML) meta-objective under
the continual framework for NLU tasks. We validate our methods on selected
SuperGLUE and GLUE benchmarks.
| 2,020 |
Computation and Language
|
Weakly- and Semi-supervised Evidence Extraction
|
For many prediction tasks, stakeholders desire not only predictions but also
supporting evidence that a human can use to verify its correctness. However, in
practice, additional annotations marking supporting evidence may only be
available for a minority of training examples (if available at all). In this
paper, we propose new methods to combine few evidence annotations (strong
semi-supervision) with abundant document-level labels (weak supervision) for
the task of evidence extraction. Evaluating on two classification tasks that
feature evidence annotations, we find that our methods outperform baselines
adapted from the interpretability literature to our task. Our approach yields
substantial gains with as few as a hundred evidence annotations. Code and
datasets to reproduce our work are available at
https://github.com/danishpruthi/evidence-extraction.
| 2,020 |
Computation and Language
|
Layer-Wise Multi-View Learning for Neural Machine Translation
|
Traditional neural machine translation is limited to the topmost encoder
layer's context representation and cannot directly perceive the lower encoder
layers. Existing solutions usually rely on the adjustment of network
architecture, making the calculation more complicated or introducing additional
structural restrictions. In this work, we propose layer-wise multi-view
learning to solve this problem, circumventing the necessity to change the model
structure. We regard each encoder layer's off-the-shelf output, a by-product in
layer-by-layer encoding, as a redundant view of the input sentence. In this
way, in addition to the topmost encoder layer (referred to as the primary
view), we also incorporate an intermediate encoder layer as the auxiliary view.
We feed the two views to a partially shared decoder to maintain independent
prediction. Consistency regularization based on KL divergence is used to
encourage the two views to learn from each other. Extensive experimental
results on five translation tasks show that our approach yields stable
improvements over multiple strong baselines. As another bonus, our method is
agnostic to network architectures and can maintain the same inference speed as
the original model.
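
A small sketch of the consistency-regularisation idea: both views predict independently, and a symmetric KL term encourages agreement. The exact form and weight are assumptions rather than the paper's formulation.

```python
# Small sketch: symmetric KL consistency between primary and auxiliary views.
import torch
import torch.nn.functional as F

def multiview_loss(logits_primary, logits_auxiliary, targets, beta=1.0):
    ce = F.cross_entropy(logits_primary, targets) + F.cross_entropy(logits_auxiliary, targets)
    log_p = F.log_softmax(logits_primary, dim=-1)
    log_q = F.log_softmax(logits_auxiliary, dim=-1)
    consistency = (F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
                   + F.kl_div(log_q, log_p, log_target=True, reduction="batchmean"))
    return ce + beta * consistency

logits_top = torch.randn(4, 10)   # primary view: topmost encoder layer
logits_mid = torch.randn(4, 10)   # auxiliary view: an intermediate encoder layer
targets = torch.randint(0, 10, (4,))
print(multiview_loss(logits_top, logits_mid, targets))
```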
| 2,020 |
Computation and Language
|
BioNerFlair: biomedical named entity recognition using flair embedding
and sequence tagger
|
Motivation: The proliferation of Biomedical research articles has made the
task of information retrieval more important than ever. Scientists and
researchers have difficulty finding articles that contain information
relevant to them. Proper extraction of biomedical entities like Disease,
Drug/chem, Species, Gene/protein, can considerably improve the filtering of
articles resulting in better extraction of relevant information. Performance on
BioNER benchmarks has progressively improved because of advances in
transformer-based models like BERT, XLNet, OpenAI GPT-2, etc. These models
give excellent results; however, they are computationally expensive and we can
achieve better scores for domain-specific tasks using other contextual
string-based models and an LSTM-CRF-based sequence tagger. Results: We introduce
BioNerFlair, a method to train models for biomedical named entity recognition
using Flair plus GloVe embeddings and Bidirectional LSTM-CRF based sequence
tagger. With almost the same generic architecture widely used for named entity
recognition, BioNerFlair outperforms previous state-of-the-art models. I
performed experiments on 8 benchmark datasets for biomedical named entity
recognition. Compared to current state-of-the-art models, BioNerFlair achieves
the best F1-score of 90.17 (up from 84.72) on the BioCreative II gene mention
(BC2GM) corpus, the best F1-score of 94.03 (up from 92.36) on the BioCreative IV
chemical and drug (BC4CHEMD) corpus, the best F1-score of 88.73 (up from 78.58)
on the JNLPBA corpus, the best F1-score of 91.1 (up from 89.71) on the NCBI
disease corpus, and the best F1-score of 85.48 (up from 78.98) on the Species-800
corpus, while near-best results were observed on the BC5CDR-chem, BC5CDR-disease,
and LINNAEUS corpora.
| 2,020 |
Computation and Language
|
CharBERT: Character-aware Pre-trained Language Model
|
Most pre-trained language models (PLMs) construct word representations at
subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV
(out-of-vocab) words are largely avoided. However, those methods split a word
into subword units and make the representation incomplete and fragile. In this
paper, we propose a character-aware pre-trained language model named CharBERT
improving on the previous methods (such as BERT, RoBERTa) to tackle these
problems. We first construct the contextual word embedding for each token from
the sequential character representations, then fuse the representations of
characters and the subword representations by a novel heterogeneous interaction
module. We also propose a new pre-training task named NLM (Noisy LM) for
unsupervised character representation learning. We evaluate our method on
question answering, sequence labeling, and text classification tasks, both on
the original datasets and adversarial misspelling test sets. The experimental
results show that our method can significantly improve the performance and
robustness of PLMs simultaneously. Pretrained models, evaluation sets, and code
are available at https://github.com/wtma/CharBERT
| 2,021 |
Computation and Language
|
TransQuest: Translation Quality Estimation with Cross-lingual
Transformers
|
Recent years have seen big advances in the field of sentence-level quality
estimation (QE), largely as a result of using neural-based architectures.
However, the majority of these methods work only on the language pair they are
trained on and need retraining for new language pairs. This process can prove
difficult from a technical point of view and is usually computationally
expensive. In this paper we propose a simple QE framework based on
cross-lingual transformers, and we use it to implement and evaluate two
different neural architectures. Our evaluation shows that the proposed methods
achieve state-of-the-art results outperforming current open-source quality
estimation frameworks when trained on datasets from WMT. In addition, the
framework proves very useful in transfer learning settings, especially when
dealing with low-resourced languages, allowing us to obtain very competitive
results.
| 2,020 |
Computation and Language
|
DAGA: Data Augmentation with a Generation Approach for Low-resource
Tagging Tasks
|
Data augmentation techniques have been widely used to improve machine
learning performance as they enhance the generalization capability of models.
In this work, to generate high quality synthetic data for low-resource tagging
tasks, we propose a novel augmentation method with language models trained on
the linearized labeled sentences. Our method is applicable to both supervised
and semi-supervised settings. For the supervised settings, we conduct extensive
experiments on named entity recognition (NER), part of speech (POS) tagging and
end-to-end target based sentiment analysis (E2E-TBSA) tasks. For the
semi-supervised settings, we evaluate our method on the NER task under the
conditions of given unlabeled data only and unlabeled data plus a knowledge
base. The results show that our method can consistently outperform the
baselines, particularly when the amount of given gold training data is small.
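
An illustrative sketch of the linearisation idea with an assumed BIO-style scheme: each label token is emitted right before the word it marks, so a plain language model can be trained on (and later generate) labelled sentences. Details of the authors' scheme may differ.

```python
# Illustrative sketch: interleave tags with words so a plain LM can model them.
def linearize(tokens, tags, outside_tag="O"):
    out = []
    for token, tag in zip(tokens, tags):
        if tag != outside_tag:
            out.append(tag)
        out.append(token)
    return " ".join(out)

def delinearize(linearized, tagset):
    tokens, tags, pending = [], [], "O"
    for piece in linearized.split():
        if piece in tagset:
            pending = piece
        else:
            tokens.append(piece)
            tags.append(pending)
            pending = "O"
    return tokens, tags

sentence = ["John", "lives", "in", "Paris"]
labels = ["B-PER", "O", "O", "B-LOC"]
line = linearize(sentence, labels)
print(line)                                        # B-PER John lives in B-LOC Paris
print(delinearize(line, {"B-PER", "B-LOC"}))
```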
| 2,020 |
Computation and Language
|
AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings
|
Recent work has shown that distributional word vector spaces often encode
human biases like sexism or racism. In this work, we conduct an extensive
analysis of biases in Arabic word embeddings by applying a range of recently
introduced bias tests on a variety of embedding spaces induced from corpora in
Arabic. We measure the presence of biases across several dimensions, namely:
embedding models (Skip-Gram, CBOW, and FastText) and vector sizes, types of
text (encyclopedic text, and news vs. user-generated content), dialects
(Egyptian Arabic vs. Modern Standard Arabic), and time (diachronic analyses
over corpora from different time periods). Our analysis yields several
interesting findings, e.g., that implicit gender bias in embeddings trained on
Arabic news corpora steadily increases over time (between 2007 and 2017). We
make the Arabic bias specifications (AraWEAT) publicly available.
| 2,020 |
Computation and Language
|
Creating a Domain-diverse Corpus for Theory-based Argument Quality
Assessment
|
Computational models of argument quality (AQ) have focused primarily on
assessing the overall quality or just one specific characteristic of an
argument, such as its convincingness or its clarity. Previous work has
claimed that assessment based on theoretical dimensions of argumentation could
benefit writers, but developing such models has been limited by the lack of
annotated data. In this work, we describe GAQCorpus, the first large,
domain-diverse annotated corpus of theory-based AQ. We discuss how we designed
the annotation task to reliably collect a large number of judgments with
crowdsourcing, formulating theory-based guidelines that helped make subjective
judgments of AQ more objective. We demonstrate how to identify arguments and
adapt the annotation task for three diverse domains. Our work will inform
research on theory-based argumentation annotation and enable the creation of
more diverse corpora to support computational AQ assessment.
| 2,020 |
Computation and Language
|
Experiencers, Stimuli, or Targets: Which Semantic Roles Enable Machine
Learning to Infer the Emotions?
|
Emotion recognition is predominantly formulated as text classification in
which textual units are assigned to an emotion from a predefined inventory
(e.g., fear, joy, anger, disgust, sadness, surprise, trust, anticipation). More
recently, semantic role labeling approaches have been developed to extract
structures from the text to answer questions like: "who is described to feel
the emotion?" (experiencer), "what causes this emotion?" (stimulus), and at
which entity is it directed?" (target). Though it has been shown that jointly
modeling stimulus and emotion category prediction is beneficial for both
subtasks, it remains unclear which of these semantic roles enables a classifier
to infer the emotion. Is it the experiencer, because the identity of a person
is biased towards a particular emotion (X is always happy)? Is it a particular
target (everybody loves X) or a stimulus (doing X makes everybody sad)? We
answer these questions by training emotion classification models on five
available datasets annotated with at least one semantic role, masking the
fillers of these roles in the text in a controlled manner. We find that across
multiple corpora, stimuli and targets carry emotion information, while the
experiencer might be considered a confounder. Further, we analyze if informing
the model about the position of the role improves the classification decision.
Particularly on literature corpora we find that the role information improves
the emotion classification.
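
A toy illustration of the controlled masking set-up; the span format and the placeholder token below are assumptions, not the paper's exact pre-processing.

```python
# Toy illustration: hide the filler of one semantic role before classification.
def mask_role(tokens, role_spans, role_to_mask, placeholder="[UNK]"):
    """role_spans maps a role name to (start, end) token indices, end exclusive."""
    if role_to_mask not in role_spans:
        return list(tokens)
    start, end = role_spans[role_to_mask]
    return tokens[:start] + [placeholder] * (end - start) + tokens[end:]

tokens = "My neighbour is scared of the barking dog".split()
spans = {"experiencer": (0, 2), "stimulus": (5, 8)}
print(mask_role(tokens, spans, "experiencer"))   # hides who feels the emotion
print(mask_role(tokens, spans, "stimulus"))      # hides what causes it
```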
| 2,020 |
Computation and Language
|
XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection
|
We introduce XED, a multilingual fine-grained emotion dataset. The dataset
consists of human-annotated Finnish (25k) and English sentences (30k), as well
as projected annotations for 30 additional languages, providing new resources
for many low-resource languages. We use Plutchik's core emotions to annotate
the dataset with the addition of neutral to create a multilabel multiclass
dataset. The dataset is carefully evaluated using language-specific BERT models
and SVMs to show that XED performs on par with other similar datasets and is
therefore a useful tool for sentiment analysis and emotion detection.
| 2,020 |
Computation and Language
|
A Benchmark of Rule-Based and Neural Coreference Resolution in Dutch
Novels and News
|
We evaluate a rule-based (Lee et al., 2013) and neural (Lee et al., 2018)
coreference system on Dutch datasets of two domains: literary novels and
news/Wikipedia text. The results provide insight into the relative strengths of
data-driven and knowledge-driven systems, as well as the influence of domain,
document length, and annotation schemes. The neural system performs best on
news/Wikipedia text, while the rule-based system performs best on literature.
The neural system shows weaknesses with limited training data and long
documents, while the rule-based system is affected by annotation differences.
The code and models used in this paper are available at
https://github.com/andreasvc/crac2020
| 2,020 |
Computation and Language
|
Results of a Single Blind Literary Taste Test with Short Anonymized
Novel Fragments
|
It is an open question to what extent perceptions of literary quality are
derived from text-intrinsic versus social factors. While supervised models can
predict literary quality ratings from textual factors quite successfully, as
shown in the Riddle of Literary Quality project (Koolen et al., 2020), this
does not prove that social factors are not important, nor can we assume that
readers make judgments on literary quality in the same way and based on the
same information as machine learning models. We report the results of a pilot
study to gauge the effect of textual features on literary ratings of
Dutch-language novels by participants in a controlled experiment with 48
participants. In an exploratory analysis, we compare the ratings to those from
the large reader survey of the Riddle in which social factors were not
excluded, and to machine learning predictions of those literary ratings. We
find moderate to strong correlations of questionnaire ratings with the survey
ratings, but the predictions are closer to the survey ratings. Code and data:
https://github.com/andreasvc/litquest
| 2,020 |
Computation and Language
|
Joint Entity and Relation Extraction with Set Prediction Networks
|
The joint entity and relation extraction task aims to extract all relational
triples from a sentence. In essence, the relational triples contained in a
sentence are unordered. However, previous seq2seq-based models require
converting the set of triples into a sequence in the training phase. To break this
bottleneck, we treat joint entity and relation extraction as a direct set
prediction problem, so that the extraction model can get rid of the burden of
predicting the order of multiple triples. To solve this set prediction problem,
we propose networks featured by transformers with non-autoregressive parallel
decoding. Unlike autoregressive approaches that generate triples one by one in
a certain order, the proposed networks directly output the final set of triples
in one shot. Furthermore, we also design a set-based loss that forces unique
predictions via bipartite matching. Compared with cross-entropy loss that
highly penalizes small shifts in triple order, the proposed bipartite matching
loss is invariant to any permutation of predictions; thus, it can provide the
proposed networks with a more accurate training signal by ignoring triple order
and focusing on relation types and entities. Experiments on two benchmark
datasets show that our proposed model significantly outperforms current
state-of-the-art methods. Training code and trained models will be available at
http://github.com/DianboWork/SPN4RE.
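
A minimal sketch of the permutation-invariant matching step, using the Hungarian algorithm from SciPy on an assumed prediction-to-gold cost matrix; this is not the authors' full loss.

```python
# Minimal sketch: Hungarian matching makes the loss invariant to triple order.
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_matching_loss(cost):
    """cost[i, j]: cost of assigning predicted triple i to gold triple j,
    e.g. a negative log-likelihood; lower is better."""
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean(), list(zip(rows.tolist(), cols.tolist()))

cost = np.array([[0.2, 1.5, 0.9],
                 [1.1, 0.3, 1.4],
                 [0.8, 1.2, 0.1]])
loss, assignment = bipartite_matching_loss(cost)
print(loss, assignment)   # the loss is identical under any reordering of the gold triples
```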
| 2,020 |
Computation and Language
|
Cross-lingual Word Embeddings beyond Zero-shot Machine Translation
|
We explore the transferability of a multilingual neural machine translation
model to unseen languages when the transfer is grounded solely on the
cross-lingual word embeddings. Our experimental results show that the
translation knowledge can transfer weakly to other languages and that the
degree of transferability depends on the languages' relatedness. We also
discuss the limiting aspects of the multilingual architectures that cause weak
translation transfer and suggest how to mitigate the limitations.
| 2,020 |
Computation and Language
|
Data-to-Text Generation with Iterative Text Editing
|
We present a novel approach to data-to-text generation based on iterative
text editing. Our approach maximizes the completeness and semantic accuracy of
the output text while leveraging the abilities of recent pre-trained models for
text editing (LaserTagger) and language modeling (GPT-2) to improve the text
fluency. To this end, we first transform data items to text using trivial
templates, and then we iteratively improve the resulting text by a neural model
trained for the sentence fusion task. The output of the model is filtered by a
simple heuristic and reranked with an off-the-shelf pre-trained language model.
We evaluate our approach on two major data-to-text datasets (WebNLG, Cleaned
E2E) and analyze its caveats and benefits. Furthermore, we show that our
formulation of data-to-text generation opens up the possibility for zero-shot
domain adaptation using a general-domain dataset for sentence fusion.
| 2,021 |
Computation and Language
|
Towards Automated Anamnesis Summarization: BERT-based Models for Symptom
Extraction
|
Professionals in modern healthcare systems are increasingly burdened by
documentation workloads. Documentation of the initial patient anamnesis is
particularly relevant, forming the basis of successful further diagnostic
measures. However, manually prepared notes are inherently unstructured and
often incomplete. In this paper, we investigate the potential of modern NLP
techniques to support doctors in this matter. We present a dataset of German
patient monologues, and formulate a well-defined information extraction task
under the constraints of real-world utility and practicality. In addition, we
propose BERT-based models in order to solve said task. We can demonstrate
promising performance of the models in both symptom identification and symptom
attribute extraction, significantly outperforming simpler baselines.
| 2,020 |
Computation and Language
|
Subword Segmentation and a Single Bridge Language Affect Zero-Shot
Neural Machine Translation
|
Zero-shot neural machine translation is an attractive goal because of the
high cost of obtaining data and building translation systems for new
translation directions. However, previous papers have reported mixed success in
zero-shot translation. It is hard to predict in which settings it will be
effective, and what limits performance compared to a fully supervised system.
In this paper, we investigate zero-shot performance of a multilingual
EN$\leftrightarrow${FR,CS,DE,FI} system trained on WMT data. We find that
zero-shot performance is highly unstable and can vary by more than 6 BLEU
between training runs, making it difficult to reliably track improvements. We
observe a bias towards copying the source in zero-shot translation, and
investigate how the choice of subword segmentation affects this bias. We find
that language-specific subword segmentation results in less subword copying at
training time, and leads to better zero-shot performance compared to jointly
trained segmentation. A recent trend in multilingual models is to not train on
parallel data between all language pairs, but have a single bridge language,
e.g. English. We find that this negatively affects zero-shot translation and
leads to a failure mode where the model ignores the language tag and instead
produces English output in zero-shot directions. We show that this bias towards
English can be effectively reduced with even a small amount of parallel data in
some of the non-English pairs.
| 2,020 |
Computation and Language
|
Modeling Event Salience in Narratives via Barthes' Cardinal Functions
|
Events in a narrative differ in salience: some are more important to the
story than others. Estimating event salience is useful for tasks such as story
generation, and as a tool for text analysis in narratology and folkloristics.
To compute event salience without any annotations, we adopt Barthes' definition
of event salience and propose several unsupervised methods that require only a
pre-trained language model. Evaluating the proposed methods on folktales with
event salience annotation, we show that the proposed methods outperform
baseline methods and find that fine-tuning a language model on narrative texts is a
key factor in improving the proposed methods.
| 2,020 |
Computation and Language
|
Semi-Supervised Cleansing of Web Argument Corpora
|
Debate portals and similar web platforms constitute one of the main text
sources in computational argumentation research and its applications. While the
corpora built upon these sources are rich in argumentatively relevant content
and structure, they also include text that is irrelevant, or even detrimental,
to their purpose. In this paper, we present a precision-oriented approach to
detecting such irrelevant text in a semi-supervised way. Given a few seed
examples, the approach automatically learns basic lexical patterns of relevance
and irrelevance and then incrementally bootstraps new patterns from sentences
matching the patterns. In the existing args.me corpus with 400k argumentative
texts, our approach detects almost 87k irrelevant sentences, at a precision of
0.97 according to manual evaluation. With low effort, the approach can be
adapted to other web argument corpora, providing a generic way to improve
corpus quality.
| 2,020 |
Computation and Language
|
The Gap on GAP: Tackling the Problem of Differing Data Distributions in
Bias-Measuring Datasets
|
Diagnostic datasets that can detect biased models are an important
prerequisite for bias reduction within natural language processing. However,
undesired patterns in the collected data can make such tests incorrect. For
example, if the feminine subset of a gender-bias-measuring coreference
resolution dataset contains sentences with a longer average distance between
the pronoun and the correct candidate, an RNN-based model may perform worse on
this subset due to long-term dependencies. In this work, we introduce a
theoretically grounded method for weighting test samples to cope with such
patterns in the test data. We demonstrate the method on the GAP dataset for
coreference resolution. We annotate GAP with spans of all personal names and
show that examples in the female subset contain more personal names and a
longer distance between pronouns and their referents, potentially affecting the
bias score in an undesired way. Using our weighting method, we find the set of
weights on the test instances that should be used for coping with these
correlations, and we re-evaluate 16 recently released coreference models.
| 2,021 |
Computation and Language
|
Detecting Word Sense Disambiguation Biases in Machine Translation for
Model-Agnostic Adversarial Attacks
|
Word sense disambiguation is a well-known source of translation errors in
NMT. We posit that some of the incorrect disambiguation choices are due to
models' over-reliance on dataset artifacts found in training data, specifically
superficial word co-occurrences, rather than a deeper understanding of the
source text. We introduce a method for the prediction of disambiguation errors
based on statistical data properties, demonstrating its effectiveness across
several domains and model types. Moreover, we develop a simple adversarial
attack strategy that minimally perturbs sentences in order to elicit
disambiguation errors to further probe the robustness of translation models.
Our findings indicate that disambiguation robustness varies substantially
between domains and that different models trained on the same data are
vulnerable to different attacks.
| 2,020 |
Computation and Language
|
Finding Friends and Flipping Frenemies: Automatic Paraphrase Dataset
Augmentation Using Graph Theory
|
Most NLP datasets are manually labeled and so suffer from inconsistent labeling
or limited size. We propose methods for automatically improving datasets by
viewing them as graphs with expected semantic properties. We construct a
paraphrase graph from the provided sentence pair labels, and create an
augmented dataset by directly inferring labels from the original sentence pairs
using a transitivity property. We use structural balance theory to identify
likely mislabelings in the graph, and flip their labels. We evaluate our
methods on paraphrase models trained using these datasets starting from a
pretrained BERT model, and find that the automatically-enhanced training sets
result in more accurate models.
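
A hedged sketch of the transitivity-based augmentation step only (the structural-balance relabelling is omitted), assuming sentence pairs with binary paraphrase labels.

```python
# Hedged sketch: if (a, b) and (b, c) are paraphrases, infer the new pair (a, c).
from itertools import combinations

def augment_by_transitivity(pairs):
    """pairs: iterable of (sent_a, sent_b, label) with label 1 meaning paraphrase."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b, label in pairs:
        if label == 1:
            parent[find(a)] = find(b)       # union the two paraphrase clusters

    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), []).append(node)

    existing = {frozenset((a, b)) for a, b, _ in pairs}
    return [(a, b, 1)
            for members in groups.values()
            for a, b in combinations(members, 2)
            if frozenset((a, b)) not in existing]

data = [("s1", "s2", 1), ("s2", "s3", 1), ("s3", "s4", 0)]
print(augment_by_transitivity(data))   # infers ("s1", "s3", 1)
```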
| 2,020 |
Computation and Language
|
Decoupling entrainment from consistency using deep neural networks
|
Human interlocutors tend to engage in adaptive behavior known as entrainment
to become more similar to each other. Isolating the effect of consistency,
i.e., speakers adhering to their individual styles, is a critical part of the
analysis of entrainment. We propose to treat speakers' initial vocal features
as confounds for the prediction of subsequent outputs. Using two existing
neural approaches to deconfounding, we define new measures of entrainment that
control for consistency. These successfully discriminate real interactions from
fake ones. Interestingly, our stricter methods correlate with social variables
in the opposite direction from previous measures that do not account for
consistency. These results demonstrate the advantages of using neural networks
to model entrainment, and raise questions regarding how to interpret prior
associations of conversation quality with entrainment measures that do not
account for consistency.
| 2,020 |
Computation and Language
|
DeL-haTE: A Deep Learning Tunable Ensemble for Hate Speech Detection
|
Online hate speech on social media has become a fast-growing problem in
recent times. Nefarious groups have developed large content delivery networks
across several main-stream (Twitter and Facebook) and fringe (Gab, 4chan,
8chan, etc.) outlets to deliver cascades of hate messages directed both at
individuals and communities. Thus, addressing these issues has become a top
priority for large-scale social media outlets. Three key challenges in
automated detection and classification of hateful content are the lack of
clearly labeled data, evolving vocabulary and lexicon - hashtags, emojis, etc.
- and the lack of baseline models for fringe outlets such as Gab. In this work,
we propose a novel framework with three major contributions. (a) We engineer an
ensemble of deep learning models that combines the strengths of
state-of-the-art approaches, (b) we incorporate a tuning factor into this
framework that leverages transfer learning to conduct automated hate speech
classification on unlabeled datasets, like Gab, and (c) we develop a weakly
supervised learning methodology that allows our framework to train on unlabeled
data. Our ensemble models achieve an 83% hate recall on the HON dataset,
surpassing the performance of the state-of-the-art deep models. We demonstrate
that weakly supervised training in combination with classifier tuning
significantly increases model performance on unlabeled data from Gab, achieving
a hate recall of 67%.
| 2,020 |
Computation and Language
|
Warped Language Models for Noise Robust Language Understanding
|
Masked Language Models (MLM) are self-supervised neural networks trained to
fill in the blanks in a given sentence with masked tokens. Despite the
tremendous success of MLMs for various text based tasks, they are not robust
for spoken language understanding, especially in the presence of noise from
spontaneous conversational speech recognition. In this work we introduce Warped Language Models
(WLM) in which input sentences at training time go through the same
modifications as in MLM, plus two additional modifications, namely inserting
and dropping random tokens. These two modifications extend and contract the
sentence in addition to the modifications in MLMs, hence the word "warped" in
the name. The insertion and drop modification of the input text during training
of WLM resemble the types of noise due to Automatic Speech Recognition (ASR)
errors, and as a result WLMs are likely to be more robust to ASR noise. Through
computational results we show that natural language understanding systems built
on top of WLMs perform better compared to those built based on MLMs, especially
in the presence of ASR errors.
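
A toy sketch of the warping idea: extend MLM-style masking with random insertion and deletion of tokens. The probabilities and vocabulary below are arbitrary assumptions for illustration.

```python
# Toy sketch: mask, drop, and insert tokens to mimic ASR-style noise.
import random

def warp(tokens, vocab, p_mask=0.1, p_drop=0.05, p_insert=0.05, mask_token="[MASK]"):
    rng = random.Random(0)                  # fixed seed to keep the example deterministic
    warped = []
    for token in tokens:
        r = rng.random()
        if r < p_drop:
            continue                        # contract the sentence: drop the token
        if r < p_drop + p_mask:
            warped.append(mask_token)       # standard masking
        else:
            warped.append(token)
        if rng.random() < p_insert:
            warped.append(rng.choice(vocab))  # extend the sentence: insert a random token
    return warped

vocabulary = ["the", "a", "to", "noise", "model"]
print(warp("please book a flight to boston tomorrow".split(), vocabulary))
```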
| 2,020 |
Computation and Language
|
Towards Code-switched Classification Exploiting Constituent Language
Resources
|
Code-switching is a commonly observed communicative phenomenon denoting a
shift from one language to another within the same speech exchange. The
analysis of code-switched data often becomes an assiduous task, owing to the
limited availability of data. We propose converting code-switched data into its
constituent high resource languages for exploiting both monolingual and
cross-lingual settings in this work. This conversion allows us to utilize the
higher resource availability for its constituent languages for multiple
downstream tasks.
We perform experiments for two downstream tasks, sarcasm detection and hate
speech detection, in the English-Hindi code-switched setting. These experiments
show an increase of 22% and 42.5% in F1-score for sarcasm detection and hate
speech detection, respectively, compared to the state-of-the-art.
| 2,020 |
Computation and Language
|