Titles | Abstracts | Years | Categories
---|---|---|---|
PVG at WASSA 2021: A Multi-Input, Multi-Task, Transformer-Based
Architecture for Empathy and Distress Prediction
|
Active research pertaining to the affective phenomenon of empathy and
distress is invaluable for improving human-machine interaction. Predicting
intensities of such complex emotions from textual data is difficult, as these
constructs are deeply rooted in psychological theory. Consequently, for
better prediction, it becomes imperative to take into account ancillary factors
such as psychological test scores, demographic features, and underlying latent
primitive emotions, along with the text's undertone and its psychological
complexity. This paper proffers team PVG's solution to the WASSA 2021 Shared
Task on Predicting Empathy and Emotion in Reaction to News Stories. Leveraging
the textual data, demographic features, psychological test score, and the
intrinsic interdependencies of primitive emotions and empathy, we propose a
multi-input, multi-task framework for the task of empathy score prediction.
Here, the empathy score prediction is considered the primary task, while
emotion and empathy classification are considered secondary auxiliary tasks.
For the distress score prediction task, the system is further boosted by the
addition of lexical features. Our submission ranked 1$^{st}$ based on the
average correlation (0.545) as well as the distress correlation (0.574), and
2$^{nd}$ for the empathy Pearson correlation (0.517).
| 2021 |
Computation and Language
|
Neural model robustness for skill routing in large-scale conversational
AI systems: A design choice exploration
|
Current state-of-the-art large-scale conversational AI or intelligent digital
assistant systems in industry comprise a set of components such as Automatic
Speech Recognition (ASR) and Natural Language Understanding (NLU). For some of
these systems that leverage a shared NLU ontology (e.g., a centralized
intent/slot schema), there exists a separate skill routing component to
correctly route a request to an appropriate skill, which is either a
first-party or third-party application that actually executes the user
request. The skill routing component is needed as there are thousands of skills
that can either subscribe to the same intent and/or subscribe to an intent
under specific contextual conditions (e.g., device has a screen). Ensuring
model robustness or resilience in the skill routing component is an important
problem since skills may dynamically change their subscription in the ontology
after the skill routing model has been deployed to production. We show how
different modeling design choices impact the model robustness in the context of
skill routing on a state-of-the-art commercial conversational AI system,
specifically on the choices around data augmentation, model architecture, and
optimization method. We show that applying data augmentation can be a very
effective and practical way to drastically improve model robustness.
| 2021 |
Computation and Language
|
Enhanced Aspect-Based Sentiment Analysis Models with Progressive
Self-supervised Attention Learning
|
In aspect-based sentiment analysis (ABSA), many neural models are equipped
with an attention mechanism to quantify the contribution of each context word
to sentiment prediction. However, such a mechanism suffers from one drawback:
only a few frequent words with sentiment polarities tend to be taken into
consideration for the final sentiment decision, while abundant infrequent sentiment
words are ignored by models. To deal with this issue, we propose a progressive
self-supervised attention learning approach for attentional ABSA models. In
this approach, we iteratively perform sentiment prediction on all training
instances, and continually learn useful attention supervision information in
the meantime. During training, at each iteration, context words with the
highest impact on sentiment prediction, identified based on their attention
weights or gradients, are extracted as words with active/misleading influence
on the correct/incorrect prediction for each instance. Words extracted in this
way are masked for subsequent iterations. To exploit these extracted words for
refining ABSA models, we augment the conventional training objective with a
regularization term that encourages ABSA models to not only take full advantage
of the extracted active context words but also decrease the weights of those
misleading words. We integrate the proposed approach into three
state-of-the-art neural ABSA models. Experiment results and in-depth analyses
show that our approach yields better attention results and significantly
enhances the performance of all three models. We release the source code and
trained models at https://github.com/DeepLearnXMU/PSSAttention.
| 2021 |
Computation and Language
|
Syntactic and Semantic-driven Learning for Open Information Extraction
|
One of the biggest bottlenecks in building accurate, high coverage neural
open IE systems is the need for large labelled corpora. The diversity of open
domain corpora and the variety of natural language expressions further
exacerbate this problem. In this paper, we propose a syntactic and
semantic-driven learning approach, which can learn neural open IE models
without any human-labelled data by leveraging syntactic and semantic knowledge
as noisier, higher-level supervisions. Specifically, we first employ syntactic
patterns as data labelling functions and pretrain a base model using the
generated labels. Then we propose a syntactic and semantic-driven reinforcement
learning algorithm, which can effectively generalize the base model to open
situations with high accuracy. Experimental results show that our approach
significantly outperforms the supervised counterparts, and can even achieve
performance competitive with the supervised state-of-the-art (SoA) model.
| 2020 |
Computation and Language
|
IOT: Instance-wise Layer Reordering for Transformer Structures
|
With sequentially stacked self-attention, (optional) encoder-decoder
attention, and feed-forward layers, the Transformer has achieved great success in natural
language processing (NLP), and many variants have been proposed. Currently,
almost all these models assume that the layer order is fixed and kept the same
across data samples. We observe that different data samples actually favor
different orders of the layers. Based on this observation, in this work, we
break the assumption of the fixed layer order in the Transformer and introduce
instance-wise layer reordering into the model structure. Our Instance-wise
Ordered Transformer (IOT) can model different functions through reordered layers,
enabling each sample to select the more suitable order and thereby improve model
performance under the constraint of almost the same number of parameters. To
achieve this, we introduce a light predictor with negligible parameter and
inference cost to decide the most capable and favorable layer order for any
input sequence. Experiments on 3 tasks (neural machine translation, abstractive
summarization, and code generation) and 9 datasets demonstrate consistent
improvements of our method. We further show that our method can also be applied
to other architectures beyond Transformer. Our code is released at Github.
| 2021 |
Computation and Language
|
Dual Pointer Network for Fast Extraction of Multiple Relations in a
Sentence
|
Relation extraction is a type of information extraction task that recognizes
semantic relationships between entities in a sentence. Many previous studies
have focused on extracting only one semantic relation between two entities in a
single sentence. However, multiple entities in a sentence are associated
through various relations. To address this issue, we propose a relation
extraction model based on a dual pointer network with a multi-head attention
mechanism. The proposed model finds n-to-1 subject-object relations using a
forward object decoder. Then, it finds 1-to-n subject-object relations using a
backward subject decoder. Our experiments confirmed that the proposed model
outperformed previous models, with an F1-score of 80.8% for the ACE-2005 corpus
and an F1-score of 78.3% for the NYT corpus.
| 2020 |
Computation and Language
|
Multilingual Byte2Speech Models for Scalable Low-resource Speech
Synthesis
|
To scale neural speech synthesis to various real-world languages, we present
a multilingual end-to-end framework that maps byte inputs to spectrograms, thus
allowing arbitrary input scripts. Besides strong results on 40+ languages, the
framework demonstrates capabilities to adapt to new languages under extreme
low-resource and even few-shot scenarios with merely 40 seconds of transcribed
recordings, without the need for per-language resources like a lexicon, extra
corpora, auxiliary models, or linguistic expertise, thus ensuring scalability,
while retaining satisfactory intelligibility and naturalness matching rich-resource
models. Exhaustive comparative and ablation studies are performed to reveal the
potential of the framework for low-resource languages. Furthermore, we propose
a novel method to extract language-specific sub-networks in a multilingual
model for a better understanding of its mechanism.
| 2021 |
Computation and Language
|
Transfer Learning based Speech Affect Recognition in Urdu
|
It has been established that Speech Affect Recognition for low resource
languages is a difficult task. Here we present a transfer learning based Speech
Affect Recognition approach in which we pre-train a model on a high-resource
language affect recognition task and fine-tune its parameters for a low-resource
language using a Deep Residual Network. We use four standard data sets to
demonstrate that transfer learning can solve the problem of data scarcity for
the Affect Recognition task. We demonstrate the effectiveness of our approach by
achieving 74.7 percent UAR with RAVDESS as the source and the Urdu data set as
the target. Through an ablation study, we have identified that the pre-trained
model contributes most of the feature information, improves results, and
mitigates the limited-data issue. Using this knowledge, we have also
experimented on the SAVEE and EMO-DB data sets, setting Urdu as the target
language, for which only 400 utterances are available. This approach achieves
a high Unweighted Average Recall (UAR) when compared with existing algorithms.
| 2021 |
Computation and Language
|
Graph-Based Tri-Attention Network for Answer Ranking in CQA
|
In community-based question answering (CQA) platforms, automatic answer
ranking for a given question is critical for finding potentially popular
answers at an early stage. The mainstream approaches learn to generate answer
ranking scores based on the matching degree between question and answer
representations as well as the influence of respondents. However, they
encounter two main limitations: (1) Correlations between answers in the same
question are often overlooked. (2) Question and respondent representations are
built independently of specific answers before affecting answer
representations. To address the limitations, we devise a novel graph-based
tri-attention network, namely GTAN, which has two innovations. First, GTAN
proposes to construct a graph for each question and learn answer correlations
from each graph through graph neural networks (GNNs). Second, based on the
representations learned from GNNs, an alternating tri-attention method is
developed to alternately build target-aware respondent representations,
answer-specific question representations, and context-aware answer
representations by attention computation. GTAN finally integrates the above
representations to generate answer ranking scores. Experiments on three
real-world CQA datasets demonstrate GTAN significantly outperforms
state-of-the-art answer ranking methods, validating the rationality of the
network architecture.
| 2021 |
Computation and Language
|
Hierarchical Transformer for Multilingual Machine Translation
|
The choice of parameter sharing strategy in multilingual machine translation
models determines how optimally parameter space is used and hence, directly
influences ultimate translation quality. Inspired by linguistic trees that show
the degree of relatedness between different languages, a new general approach
to parameter sharing in multilingual machine translation was recently
suggested. The main idea is to use these expert language hierarchies as a basis
for multilingual architecture: the closer two languages are, the more
parameters they share. In this work, we test this idea using the Transformer
architecture and show that, despite the success reported in previous work, there
are problems inherent to training such hierarchical models. We demonstrate that
with a carefully chosen training strategy the hierarchical architecture can
outperform bilingual models and multilingual models with full parameter
sharing.
| 2021 |
Computation and Language
|
WordBias: An Interactive Visual Tool for Discovering Intersectional
Biases Encoded in Word Embeddings
|
Intersectional bias is a bias caused by an overlap of multiple social factors
like gender, sexuality, race, disability, religion, etc. A recent study has
shown that word embedding models can be laden with biases against
intersectional groups like African American females, etc. The first step
towards tackling such intersectional biases is to identify them. However,
discovering biases against different intersectional groups remains a
challenging task. In this work, we present WordBias, an interactive visual tool
designed to explore biases against intersectional groups encoded in static word
embeddings. Given a pretrained static word embedding, WordBias computes the
association of each word along different groups based on race, age, etc. and
then visualizes them using a novel interactive interface. Using a case study,
we demonstrate how WordBias can help uncover biases against intersectional
groups like Black Muslim Males, Poor Females, etc. encoded in word embeddings.
In addition, we also evaluate our tool using qualitative feedback from expert
interviews. The source code for this tool can be publicly accessed for
reproducibility at github.com/bhavyaghai/WordBias.
| 2021 |
Computation and Language
|
Parsing Indonesian Sentence into Abstract Meaning Representation using
Machine Learning Approach
|
Abstract Meaning Representation (AMR) provides rich information about a sentence,
such as semantic relations, coreferences, and named entity relations, in one
representation. However, research on AMR parsing for Indonesian sentences is
fairly limited. In this paper, we develop a system that parses an
Indonesian sentence using a machine learning approach. Based on the work of
Zhang et al., our system consists of three steps: pair prediction, label
prediction, and graph construction. Pair prediction uses a dependency parsing
component to get the edges between the words for the AMR. The result of pair
prediction is passed to the label prediction process, which uses a supervised
learning algorithm to predict the labels of the AMR edges. We use a
simple-sentence dataset gathered from articles and news article sentences. Our
model achieved a SMATCH score of 0.820 on the simple-sentence test data.
| 2021 |
Computation and Language
|
Fine-tuning Pretrained Multilingual BERT Model for Indonesian
Aspect-based Sentiment Analysis
|
Although previous research on Aspect-based Sentiment Analysis (ABSA) for
Indonesian reviews in the hotel domain has been conducted using CNN and XGBoost,
the model did not generalize well on test data, and a high number of OOV words
contributed to misclassification cases. Nowadays, most state-of-the-art results
for a wide array of NLP tasks are achieved by utilizing pretrained language
representations. In this paper, we incorporate one of the foremost
language representation models, BERT, to perform ABSA on an Indonesian reviews
dataset. By combining multilingual BERT (m-BERT) with a task transformation
method, we achieve a significant improvement of 8% in F1-score
compared to the result from our previous study.
| 2021 |
Computation and Language
|
Multi-document Summarization using Semantic Role Labeling and Semantic
Graph for Indonesian News Article
|
In this paper, we propose a multi-document summarization system using
semantic role labeling (SRL) and a semantic graph for Indonesian news articles.
To improve on an existing summarizer, our system modifies a summarizer that
employed subject, predicate, object, and adverbial (SVOA) extraction for
predicate argument structure (PAS) extraction. The SVOA extraction is replaced
with an SRL model for Indonesian. We also replace the genetic algorithm used to
identify important PAS with a decision tree classifier, since the summarizer
without the genetic algorithm gave better performance. The decision tree model
with 10 features achieved better performance than the decision tree with 4
sentence features. Experiments and evaluations are conducted to generate
100-word and 200-word summaries. The evaluation shows the proposed model
achieves an average ROUGE-2 recall of 0.313 for 100-word summaries and 0.394
for 200-word summaries.
| 2021 |
Computation and Language
|
Leveraging Recursive Processing for Neural-Symbolic Affect-Target
Associations
|
Explaining the outcome of deep learning decisions based on affect is
challenging but necessary if we expect social companion robots to interact with
users on an emotional level. In this paper, we present a commonsense approach
that utilizes an interpretable hybrid neural-symbolic system to associate
extracted targets, noun chunks determined to be associated with the expressed
emotion, with affective labels from a natural language expression. We leverage
a pre-trained neural network that is well adapted to tree and sub-tree
processing, the Dependency Tree-LSTM, to learn the affect labels of dynamic
targets, determined through symbolic rules, in natural language. We find that
making use of the unique properties of the recursive network provides higher
accuracy and interpretability when compared to other unstructured and
sequential methods for determining target-affect associations in an
aspect-based sentiment analysis task.
| 2021 |
Computation and Language
|
There Once Was a Really Bad Poet, It Was Automated but You Didn't Know
It
|
Limerick generation exemplifies some of the most difficult challenges faced
in poetry generation, as the poems must tell a story in only five lines, with
constraints on rhyme, stress, and meter. To address these challenges, we
introduce LimGen, a novel and fully automated system for limerick generation
that outperforms state-of-the-art neural network-based poetry models, as well
as prior rule-based poetry models. LimGen consists of three important pieces:
the Adaptive Multi-Templated Constraint algorithm that constrains our search to
the space of realistic poems, the Multi-Templated Beam Search algorithm which
searches efficiently through the space, and the probabilistic Storyline
algorithm that provides coherent storylines related to a user-provided prompt
word. The resulting limericks satisfy poetic constraints and have thematically
coherent storylines, which are sometimes even funny (when we are lucky).
| 2021 |
Computation and Language
|
AnswerQuest: A System for Generating Question-Answer Items from
Multi-Paragraph Documents
|
One strategy for facilitating reading comprehension is to present information
in a question-and-answer format. We demo a system that integrates the tasks of
question answering (QA) and question generation (QG) in order to produce Q&A
items that convey the content of multi-paragraph documents. We report some
experiments for QA and QG that yield improvements on both tasks, and assess how
they interact to produce a list of Q&A items for a text. The demo is accessible
at qna.sdl.com.
| 2021 |
Computation and Language
|
Overcoming Poor Word Embeddings with Word Definitions
|
Modern natural language understanding models depend on pretrained subword
embeddings, but applications may need to reason about words that were never or
rarely seen during pretraining. We show that examples that depend critically on
a rarer word are more challenging for natural language inference models. Then
we explore how a model could learn to use definitions, provided in natural
text, to overcome this handicap. Our model's understanding of a definition is
usually weaker than a well-modeled word embedding, but it recovers most of the
performance gap from using a completely untrained word.
| 2021 |
Computation and Language
|
Putting Humans in the Natural Language Processing Loop: A Survey
|
How can we design Natural Language Processing (NLP) systems that learn from
human feedback? There is a growing body of research on Human-in-the-loop (HITL)
NLP frameworks that continuously integrate human feedback to improve the model
itself. HITL NLP research is nascent but multifarious -- solving various NLP
problems, collecting diverse feedback from different people, and applying
different methods to learn from collected feedback. We present a survey of HITL
NLP work from both Machine Learning (ML) and Human-Computer Interaction (HCI)
communities that highlights its short yet inspiring history, and we thoroughly
summarize recent frameworks, focusing on their tasks, goals, human interactions,
and feedback learning methods. Finally, we discuss future directions for
integrating human feedback in the NLP development loop.
| 2021 |
Computation and Language
|
Improving Zero-Shot Entity Retrieval through Effective Dense
Representations
|
Entity Linking (EL) seeks to align entity mentions in text to entries in a
knowledge base and usually comprises two phases: candidate generation and
candidate ranking. While most methods focus on the latter, it is the candidate
generation phase that sets an upper bound to both time and accuracy performance
of the overall EL system. This work's contribution is a significant improvement
in candidate generation, which thus raises the performance threshold for EL, by
generating candidates that include the gold entity in the smallest candidate set
(top-K). We propose a simple approach that efficiently embeds mention-entity
pairs in dense space through a BERT-based bi-encoder. Specifically, we extend
(Wu et al., 2020) by introducing a new pooling function and incorporating
entity type side-information. We achieve a new state-of-the-art 84.28% accuracy
on top-50 candidates on the Zeshel dataset, compared to the previous 82.06% on
the top-64 of (Wu et al., 2020). We report the results from extensive
experimentation using our proposed model on both seen and unseen entity
datasets. Our results suggest that our method could be a useful complement to
existing EL approaches.
| 2021 |
Computation and Language
|
Changing the Narrative Perspective: From Deictic to Anaphoric Point of
View
|
We introduce the task of changing the narrative point of view, where
characters are assigned a narrative perspective that is different from the one
originally used by the writer. The resulting shift in the narrative point of
view alters the reading experience and can be used as a tool in fiction writing
or to generate types of text ranging from educational to self-help and
self-diagnosis. We introduce a benchmark dataset containing a wide range of
types of narratives annotated with changes in point of view from deictic (first
or second person) to anaphoric (third person) and describe a pipeline for
processing raw text that relies on a neural architecture for mention selection.
Evaluations on the new benchmark dataset show that the proposed architecture
substantially outperforms the baselines by generating mentions that are less
ambiguous and more natural.
| 2021 |
Computation and Language
|
Neural networks can understand compositional functions that humans do
not, in the context of emergent communication
|
We show that it is possible to craft transformations that, applied to
compositional grammars, result in grammars that neural networks can learn
easily, but humans do not. This could explain the disconnect between current
metrics of compositionality, which are arguably human-centric, and the ability
of neural networks to generalize to unseen examples. We propose to use the
transformations as a benchmark, ICY, which could be used to measure aspects of
the compositional inductive bias of networks, and to search for networks with
similar compositional inductive biases to humans. As an example of this
approach, we propose a hierarchical model, HU-RNN, which shows an inductive
bias towards position-independent, word-like groups of tokens.
| 2021 |
Computation and Language
|
TypeShift: A User Interface for Visualizing the Typing Production
Process
|
TypeShift is a tool for visualizing linguistic patterns in the timing of
typing production. Language production is a complex process which draws on
linguistic, cognitive and motor skills. By visualizing holistic trends in the
typing process, TypeShift aims to elucidate the often noisy information signals
that are used to represent typing patterns, both at the word-level and
character-level. It accomplishes this by enabling a researcher to compare and
contrast specific linguistic phenomena, and compare an individual typing
session to multiple group averages. Finally, although TypeShift was originally
designed for typing data, it can easily be adapted to accommodate speech data, as
well. A web demo is available at https://angoodkind.shinyapps.io/TypeShift/.
The source code can be accessed at https://github.com/angoodkind/TypeShift.
| 2021 |
Computation and Language
|
Translating the Unseen? Yoruba-English MT in Low-Resource,
Morphologically-Unmarked Settings
|
Translating between languages where certain features are marked
morphologically in one but absent or marked contextually in the other is an
important test case for machine translation. When translating into English
which marks (in)definiteness morphologically, from Yor\`ub\'a which uses bare
nouns but marks these features contextually, ambiguities arise. In this work,
we perform fine-grained analysis on how an SMT system compares with two NMT
systems (BiLSTM and Transformer) when translating bare nouns in Yor\`ub\'a into
English. We investigate to what extent the systems identify bare nouns (BNs) and
correctly translate them, and compare their output with human translation patterns. We also
analyze the type of errors each model makes and provide a linguistic
description of these errors. We glean insights for evaluating model performance
in low-resource settings. In translating bare nouns, our results show the
transformer model outperforms the SMT and BiLSTM models for 4 categories, the
BiLSTM outperforms the SMT model for 3 categories while the SMT outperforms the
NMT models for 1 category.
| 2021 |
Computation and Language
|
MTLHealth: A Deep Learning System for Detecting Disturbing Content in
Student Essays
|
Essay submissions to standardized tests like the ACT occasionally include
references to bullying, self-harm, violence, and other forms of disturbing
content. Graders must take great care to identify cases like these and decide
whether to alert authorities on behalf of students who may be in danger. There
is a growing need for robust computer systems to support human decision-makers
by automatically flagging potential instances of disturbing content. This paper
describes MTLHealth, a disturbing content detection pipeline built around
recent advances from computational linguistics, particularly pre-trained
Transformer language models.
| 2021 |
Computation and Language
|
Orthogonal Attention: A Cloze-Style Approach to Negation Scope
Resolution
|
Negation Scope Resolution is an extensively researched problem that aims
to locate the words affected by a negation cue in a sentence. Recent works have
shown that simply finetuning transformer-based architectures yields
state-of-the-art results on this task. In this work, we look at Negation Scope
Resolution as a Cloze-Style task, with the sentence as the Context and the cue
words as the Query. We also introduce a novel Cloze-Style Attention mechanism
called Orthogonal Attention, which is inspired by Self Attention. First, we
propose a framework for developing Orthogonal Attention variants, and then
propose 4 Orthogonal Attention variants: OA-C, OA-CA, OA-EM, and OA-EMB. Using
these Orthogonal Attention layers on top of an XLNet backbone, we outperform
the finetuned XLNet state-of-the-art for Negation Scope Resolution, achieving
the best results to date on all 4 datasets we experiment with: BioScope
Abstracts, BioScope Full Papers, SFU Review Corpus and the *sem 2012 Dataset
(Sherlock).
| 2021 |
Computation and Language
|
Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees
|
Pre-trained language models like BERT achieve superior performance in
various NLP tasks without explicit consideration of syntactic information.
Meanwhile, syntactic information has been shown to be crucial for the success
of NLP applications. However, how to incorporate the syntax trees effectively
and efficiently into pre-trained Transformers is still unsettled. In this
paper, we address this problem by proposing a novel framework named
Syntax-BERT. This framework works in a plug-and-play mode and is applicable to
an arbitrary pre-trained checkpoint based on Transformer architecture.
Experiments on various datasets of natural language understanding verify the
effectiveness of syntax trees and achieve consistent improvement over multiple
pre-trained models, including BERT, RoBERTa, and T5.
| 2021 |
Computation and Language
|
Empathetic BERT2BERT Conversational Model: Learning Arabic Language
Generation with Little Data
|
Enabling empathetic behavior in Arabic dialogue agents is an important aspect
of building human-like conversational models. While Arabic Natural Language
Processing has seen significant advances in Natural Language Understanding
(NLU) with language models such as AraBERT, Natural Language Generation (NLG)
remains a challenge. The shortcomings of NLG encoder-decoder models are
primarily due to the lack of Arabic datasets suitable to train NLG models such
as conversational agents. To overcome this issue, we propose a
transformer-based encoder-decoder initialized with AraBERT parameters. By
initializing the weights of the encoder and decoder with AraBERT pre-trained
weights, our model was able to leverage knowledge transfer and boost
performance in response generation. To enable empathy in our conversational
model, we train it using the ArabicEmpatheticDialogues dataset and achieve high
performance in empathetic response generation. Specifically, our model achieved
a low perplexity value of 17.0 and an increase of 5 BLEU points compared to the
previous state-of-the-art model. Also, our proposed model was rated highly by
85 human evaluators, validating its high capability in exhibiting empathy while
generating relevant and fluent responses in open-domain settings.
| 2021 |
Computation and Language
|
Automatic Difficulty Classification of Arabic Sentences
|
In this paper, we present a Modern Standard Arabic (MSA) sentence difficulty
classifier, which predicts the difficulty of sentences for language learners
using either the CEFR proficiency levels or a binary classification as simple
or complex. We compare the use of sentence embeddings of different kinds
(fastText, mBERT, XLM-R and Arabic-BERT), as well as traditional language
features such as POS tags, dependency trees, readability scores and frequency
lists for language learners. Our best results have been achieved using
fine-tuned Arabic-BERT. Our 3-way CEFR classification achieves F-1 scores
of 0.80 and 0.75 for Arabic-BERT and XLM-R respectively, and a Spearman
correlation of 0.71 for regression. Our binary difficulty classifier reaches an
F-1 of 0.94, and our sentence-pair semantic similarity classifier reaches an F-1 of 0.98.
| 2021 |
Computation and Language
|
Improving Text-to-SQL with Schema Dependency Learning
|
Text-to-SQL aims to map natural language questions to SQL queries. The
sketch-based method combined with execution-guided (EG) decoding strategy has
shown a strong performance on the WikiSQL benchmark. However, execution-guided
decoding relies on database execution, which significantly slows down the
inference process and is hence unsatisfactory for many real-world applications.
In this paper, we present the Schema Dependency guided multi-task Text-to-SQL
model (SDSQL) to guide the network to effectively capture the interactions
between questions and schemas. The proposed model outperforms all existing
methods in both settings, with and without EG. We show that schema dependency
learning partially covers the benefit of EG and alleviates the need for it.
SDSQL without EG significantly reduces time consumption during inference,
sacrificing only a small amount of performance, and provides more flexibility
for downstream applications.
| 2021 |
Computation and Language
|
Local word statistics affect reading times independently of surprisal
|
Surprisal theory has provided a unifying framework for understanding many
phenomena in sentence processing (Hale, 2001; Levy, 2008a), positing that a
word's conditional probability given all prior context fully determines
processing difficulty. Problematically for this claim, one local statistic,
word frequency, has also been shown to affect processing, even when conditional
probability given context is held constant. Here, we ask whether other local
statistics have a role in processing, or whether word frequency is a special
case. We present the first clear evidence that more complex local statistics,
word bigram and trigram probability, also affect processing independently of
surprisal. These findings suggest a significant and independent role of local
statistics in processing. Further, they motivate research into new
generalizations of surprisal that can also explain why local statistical
information should have an outsized effect.
| 2021 |
Computation and Language
|
"Sharks are not the threat humans are": Argument Component Segmentation
in School Student Essays
|
Argument mining is often addressed by a pipeline method where segmentation of
text into argumentative units is conducted first, followed by an argument
component identification task. In this research, we apply a token-level
classification to identify claim and premise tokens from a new corpus of
argumentative essays written by middle school students. To this end, we compare
a variety of state-of-the-art models such as discrete features and deep
learning architectures (e.g., BiLSTM networks and BERT-based architectures) to
identify the argument components. We demonstrate that a BERT-based multi-task
learning architecture (i.e., token and sentence level classification)
adaptively pretrained on a relevant unlabeled dataset obtains the best results.
| 2021 |
Computation and Language
|
MCR-Net: A Multi-Step Co-Interactive Relation Network for Unanswerable
Questions on Machine Reading Comprehension
|
Question answering systems usually use keyword searches to retrieve potential
passages related to a question, and then extract the answer from passages with
the machine reading comprehension methods. However, many questions tend to be
unanswerable in the real world. In this case, it is both important and challenging
for the model to determine when no answer is supported by the passage and to
abstain from answering. Most of the existing systems design a simple
classifier to determine answerability implicitly, without explicitly modeling the
mutual interaction and relation between the question and passage, leading to
poor performance in determining unanswerable questions. To tackle this
problem, we propose a Multi-Step Co-Interactive Relation Network (MCR-Net) to
explicitly model the mutual interaction and locate key clues from coarse to
fine by introducing a co-interactive relation module. The co-interactive
relation module contains a stack of interaction and fusion blocks to
continuously integrate and fuse history-guided and current-query-guided clues
in an explicit way. Experiments on the SQuAD 2.0 and DuReader datasets show
that our model achieves a remarkable improvement, outperforming the BERT-style
baselines in the literature. Visualization analysis also verifies the importance of
the mutual interaction between the question and passage.
| 2021 |
Computation and Language
|
Semiotically-grounded distant viewing of diagrams: insights from two
multimodal corpora
|
In this article, we bring together theories of multimodal communication and
computational methods to study how primary school science diagrams combine
multiple expressive resources. We position our work within the field of digital
humanities, and show how annotations informed by multimodality research, which
target expressive resources and discourse structure, allow imposing structure
on the output of computational methods. We illustrate our approach by analysing
two multimodal diagram corpora: the first corpus is intended to support
research on automatic diagram processing, whereas the second is oriented
towards studying diagrams as a mode of communication. Our results show that
multimodally-informed annotations can bring out structural patterns in the
diagrams, which also extend across diagrams that deal with different topics.
| 2021 |
Computation and Language
|
InFillmore: Frame-Guided Language Generation with Bidirectional Context
|
We propose a structured extension to bidirectional-context conditional
language generation, or "infilling," inspired by Frame Semantic theory
(Fillmore, 1976). Guidance is provided through two approaches: (1) model
fine-tuning, conditioning directly on observed symbolic frames, and (2) a novel
extension to disjunctive lexically constrained decoding that leverages frame
semantic lexical units. Automatic and human evaluations confirm that
frame-guided generation allows for explicit manipulation of intended infill
semantics, with minimal loss in distinguishability from human-generated text.
Our methods flexibly apply to a variety of use scenarios, and we provide a
codebase and interactive demo available from
https://nlp.jhu.edu/demos/infillmore.
| 2022 |
Computation and Language
|
Fast and Effective Biomedical Entity Linking Using a Dual Encoder
|
Biomedical entity linking is the task of identifying mentions of biomedical
concepts in text documents and mapping them to canonical entities in a target
thesaurus. Recent advancements in entity linking using BERT-based models follow
a retrieve and rerank paradigm, where the candidate entities are first selected
using a retriever model, and then the retrieved candidates are ranked by a
reranker model. While this paradigm produces state-of-the-art results, such models are
slow at both training and test time, as they can process only one mention at a
time. To mitigate these issues, we propose a BERT-based dual encoder model that
resolves multiple mentions in a document in one shot. We show that our proposed
model is multiple times faster than existing BERT-based models while being
competitive in accuracy for biomedical entity linking. Additionally, we modify
our dual encoder model for end-to-end biomedical entity linking that performs
both mention span detection and entity disambiguation and outperforms two
recently proposed models.
| 2021 |
Computation and Language
|
Domain Controlled Title Generation with Human Evaluation
|
We study automatic title generation and present a method for generating
domain-controlled titles for scientific articles. A good title allows you to
get the attention that your research deserves. A title can be interpreted as a
high-compression description of a document containing information on the
implemented process. For domain-controlled titles, we used the pre-trained
text-to-text transformer model and the additional token technique. Title tokens
are sampled from a local distribution over the domain-specific vocabulary (a
subset of the global vocabulary) rather than from the global vocabulary, thereby generating
a catchy title and closely linking it to its corresponding abstract. Generated
titles looked realistic, convincing, and very close to the ground truth. We
have performed automated evaluation using the ROUGE metric and human evaluation
using five parameters to compare human and machine-generated
titles. The titles produced were considered acceptable, receiving higher metric
ratings than the original titles. Thus we conclude that our research
proposes a promising method for domain-controlled title generation.
| 2021 |
Computation and Language
|
Text Simplification by Tagging
|
Edit-based approaches have recently shown promising results on multiple
monolingual sequence transduction tasks. In contrast to conventional
sequence-to-sequence (Seq2Seq) models, which learn to generate text from
scratch as they are trained on parallel corpora, these methods have proven to
be much more effective since they are able to learn to make fast and accurate
transformations while leveraging powerful pre-trained language models. Inspired
by these ideas, we present TST, a simple and efficient Text Simplification
system based on sequence Tagging, leveraging pre-trained Transformer-based
encoders. Our system makes simplistic data augmentations and tweaks in training
and inference on a pre-existing system, which makes it less reliant on large
amounts of parallel training data, provides more control over the outputs and
enables faster inference speeds. Our best model achieves near state-of-the-art
performance on benchmark test datasets for the task. Since it is fully
non-autoregressive, it achieves inference speeds over 11 times faster than
the current state-of-the-art text simplification system.
| 2022 |
Computation and Language
|
Few-Shot Learning of an Interleaved Text Summarization Model by
Pretraining with Synthetic Data
|
Interleaved texts, where posts belonging to different threads occur in a
sequence, commonly occur in online chats, making it time-consuming
to obtain an overview of the discussions. Existing systems first
disentangle the posts by threads and then extract summaries from those threads.
A major issue with such systems is error propagation from the disentanglement
component. While an end-to-end trainable summarization system could obviate
explicit disentanglement, such systems require a large amount of labeled data.
To address this, we propose to pretrain an end-to-end trainable hierarchical
encoder-decoder system using synthetic interleaved texts. We show that by
fine-tuning on a real-world meeting dataset (AMI), such a system outperforms a
traditional two-step system by 22%. We also compare against transformer models
and observe that pretraining both the encoder and decoder with synthetic data
outperforms the BertSumExtAbs transformer model, which pretrains only the
encoder on a large dataset.
| 2021 |
Computation and Language
|
AfriVEC: Word Embedding Models for African Languages. Case Study of Fon
and Nobiin
|
From Word2Vec to GloVe, word embedding models have played key roles in the
current state-of-the-art results achieved in Natural Language Processing.
Designed to give significant and unique vectorized representations of words and
entities, those models have proven to efficiently extract similarities and
establish relationships reflecting semantic and contextual meaning among words
and entities. African Languages, representing more than 31% of the worldwide
spoken languages, have recently been the subject of a lot of research. However, to
the best of our knowledge, there are currently very few to no word embedding
models for the words and entities of those languages, and none for the languages
under study in this paper. After describing GloVe, Word2Vec, and Poincar\'e
embeddings functionalities, we build Word2Vec and Poincar\'e word embedding
models for Fon and Nobiin, which show promising results. We test the
applicability of transfer learning between these models as a landmark for
African Languages to become jointly involved in mitigating the scarcity of their
resources, and attempt to provide linguistic and social interpretations of our
results. Our main contribution is to arouse more interest in creating word
embedding models proper to African Languages, ready for use, and that can
significantly improve the performances of Natural Language Processing
downstream tasks on them. The official repository and implementation are available at
https://github.com/bonaventuredossou/afrivec
| 2021 |
Computation and Language
|
A Topological Approach to Compare Document Semantics Based on a New
Variant of Syntactic N-grams
|
This paper delivers a new perspective on thinking about and utilizing syntactic
n-grams (sn-grams). Sn-grams are a type of non-linear n-grams that have been
playing a critical role in many NLP tasks. Introducing sn-grams for comparing
document semantics is thus an appealing application, yet few studies have
reported progress on it. However, when pursuing this application, we
found three major issues with sn-grams: lack of significance, sensitivity to
word order, and failure to capture indirect syntactic relations. To address
these issues, we propose a new variant of sn-grams named generalized phrases
(GPs). Then based on GPs we propose a topological approach, named DSCoH, to
compute document semantic similarities. DSCoH has been extensively tested on
the document semantics comparison and the document clustering tasks. The
experimental results show that DSCoH can outperform state-of-the-art
embedding-based methods.
| 2021 |
Computation and Language
|
Contrastive Semi-supervised Learning for ASR
|
Pseudo-labeling is the most widely adopted method for pre-training automatic speech
recognition (ASR) models. However, its performance suffers from the supervised
teacher model's degrading quality in low-resource setups and under domain
transfer. Inspired by the successes of contrastive representation learning for
computer vision and speech applications, and more recently for supervised
learning of visual objects, we propose Contrastive Semi-supervised Learning
(CSL). CSL eschews directly predicting teacher-generated pseudo-labels in favor
of utilizing them to select positive and negative examples. In the challenging
task of transcribing public social media videos, using CSL reduces the WER by
8% compared to the standard Cross-Entropy pseudo-labeling (CE-PL) when 10hr of
supervised data is used to annotate 75,000hr of videos. The WER reduction jumps
to 19% under the ultra low-resource condition of using 1hr labels for teacher
supervision. CSL generalizes much better in out-of-domain conditions, showing
up to 17% WER reduction compared to the best CE-PL pre-trained model.
| 2021 |
Computation and Language
|
Improving Document-Level Sentiment Classification Using Importance of
Sentences
|
Previous researchers have considered sentiment analysis as a document
classification task, in which input documents are classified into predefined
sentiment classes. Although a document contains sentences that provide
important evidence for sentiment analysis and sentences that do not, they have
treated the document as a bag of sentences. In other words, they have not
considered the importance of each sentence in the document. To effectively
determine the polarity of a document, each sentence in the document should be treated
with a different degree of importance. To address this problem, we propose a
document-level sentence classification model based on deep neural networks, in
which the importance degrees of sentences in documents are automatically
determined through gate mechanisms. To verify our new sentiment analysis model,
we conducted experiments using sentiment datasets from four different
domains: movie reviews, hotel reviews, restaurant reviews, and music
reviews. In the experiments, the proposed model outperformed previous
state-of-the-art models that do not consider importance differences of
sentences in a document. The experimental results show that the importance of
sentences should be considered in a document-level sentiment classification
task.
| 2020 |
Computation and Language
|
Self-supervised Regularization for Text Classification
|
Text classification is a widely studied problem and has broad applications.
In many real-world problems, the number of texts for training classification
models is limited, which renders these models prone to overfitting. To address
this problem, we propose SSL-Reg, a data-dependent regularization approach
based on self-supervised learning (SSL). SSL is an unsupervised learning
approach which defines auxiliary tasks on input data without using any
human-provided labels and learns data representations by solving these
auxiliary tasks. In SSL-Reg, a supervised classification task and an
unsupervised SSL task are performed simultaneously. The SSL task is
unsupervised and is defined purely on input texts without using any
human-provided labels. Training a model using an SSL task can prevent the model
from being overfitted to a limited number of class labels in the classification
task. Experiments on 17 text classification datasets demonstrate the
effectiveness of our proposed method.
| 2021 |
Computation and Language
|
BERTese: Learning to Speak to BERT
|
Large pre-trained language models have been shown to encode large amounts of
world and commonsense knowledge in their parameters, leading to substantial
interest in methods for extracting that knowledge. In past work, knowledge was
extracted by taking manually-authored queries and gathering paraphrases for
them using a separate pipeline. In this work, we propose a method for
automatically rewriting queries into "BERTese", a paraphrase query that is
directly optimized towards better knowledge extraction. To encourage meaningful
rewrites, we add auxiliary loss functions that encourage the query to
correspond to actual language tokens. We empirically show our approach
outperforms competing baselines, obviating the need for complex pipelines.
Moreover, BERTese provides some insight into the type of language that helps
language models perform knowledge extraction.
| 2021 |
Computation and Language
|
Detecting Inappropriate Messages on Sensitive Topics that Could Harm a
Company's Reputation
|
Not all topics are equally "flammable" in terms of toxicity: a calm
discussion of turtles or fishing less often fuels inappropriate toxic dialogues
than a discussion of politics or sexual minorities. We define a set of
sensitive topics that can yield inappropriate and toxic messages and describe
the methodology of collecting and labeling a dataset for appropriateness. While
toxicity in user-generated data is well-studied, we aim at defining a more
fine-grained notion of inappropriateness. The core of inappropriateness is that
it can harm the reputation of a speaker. This is different from toxicity in two
respects: (i) inappropriateness is topic-related, and (ii) an inappropriate
message is not toxic but is still unacceptable. We collect and release two
datasets for Russian: a topic-labeled dataset and an appropriateness-labeled
dataset. We also release pre-trained classification models trained on this
data.
| 2021 |
Computation and Language
|
Comparing Approaches to Dravidian Language Identification
|
This paper describes the submissions by team HWR to the Dravidian Language
Identification (DLI) shared task organized at VarDial 2021 workshop. The DLI
training set includes 16,674 YouTube comments written in Roman script
containing code-mixed text with English and one of the three South Dravidian
languages: Kannada, Malayalam, and Tamil. We submitted results generated using
two models, a Naive Bayes classifier with adaptive language models, which has
been shown to obtain competitive performance in many language and dialect
identification tasks, and a transformer-based model which is widely regarded as
the state-of-the-art in a number of NLP tasks. Our first submission was sent in
the closed submission track using only the training set provided by the shared
task organisers, whereas the second submission is considered to be open as it
used a pretrained model trained with external data. Our team attained shared
second position in the shared task with the submission based on Naive Bayes.
Our results reinforce the idea that deep learning methods are not as
competitive in language identification related tasks as they are in many other
text classification tasks.
| 2021 |
Computation and Language
|
An Amharic News Text classification Dataset
|
In NLP, text classification is one of the primary problems we try to solve,
and its uses in language analyses are indisputable. The lack of labeled
training data makes these tasks harder in low-resource languages like
Amharic. Collecting, labeling, annotating, and making this kind of data
valuable will encourage junior researchers, schools, and machine learning
practitioners to apply existing classification models to their language. In
this short paper, we introduce an Amharic text classification dataset
that consists of more than 50k news articles categorized into 6
classes. This dataset is made available with simple baseline results to
encourage further studies and better-performing experiments.
| 2021 |
Computation and Language
|
Combining Context-Free and Contextualized Representations for Arabic
Sarcasm Detection and Sentiment Identification
|
Since their inception, transformer-based language models have led to
impressive performance gains across multiple natural language processing tasks.
For Arabic, the current state-of-the-art results on most datasets are achieved
by the AraBERT language model. Notwithstanding these recent advancements,
sarcasm and sentiment detection remain challenging tasks in Arabic,
given the language's rich morphology, linguistic disparity and dialectal
variations. This paper proffers team SPPU-AASM's submission for the WANLP
ArSarcasm shared-task 2021, which centers around the sarcasm and sentiment
polarity detection of Arabic tweets. The study proposes a hybrid model,
combining sentence representations from AraBERT with static word vectors
trained on Arabic social media corpora. The proposed system achieves an
F1-sarcastic score of 0.62 and an F-PN score of 0.715 for the sarcasm and
sentiment detection tasks, respectively. Simulation results show that the
proposed system outperforms multiple existing approaches for both the tasks,
suggesting that the amalgamation of context-free and context-dependent text
representations can help capture complementary facets of word meaning in
Arabic. The system ranked second and tenth in the respective sub-tasks of
sarcasm detection and sentiment identification.
| 2021 |
Computation and Language
|
Tell Me Why You Feel That Way: Processing Compositional Dependency for
Tree-LSTM Aspect Sentiment Triplet Extraction (TASTE)
|
Sentiment analysis has transitioned from classifying the sentiment of an
entire sentence to providing the contextual information of what targets exist
in a sentence, what sentiment the individual targets have, and what the causal
words responsible for that sentiment are. However, this has led to elaborate
requirements being placed on the datasets needed to train neural networks on
the joint triplet task of determining an entity, its sentiment, and the causal
words for that sentiment. Requiring this kind of data for training systems is
problematic, as they suffer from stacking subjective annotations and domain
over-fitting leading to poor model generalisation when applied in new contexts.
These problems are also likely to be compounded as we attempt to jointly
determine additional contextual elements in the future. To mitigate these
problems, we present a hybrid neural-symbolic method utilising a Dependency
Tree-LSTM's compositional sentiment parse structure and complementary symbolic
rules to correctly extract target-sentiment-cause triplets from sentences
without the need for triplet training data. We show that this method has the
potential to perform in line with state-of-the-art approaches while also
simplifying the data required and providing a degree of interpretability
through the Tree-LSTM.
| 2021 |
Computation and Language
|
ELLA: Exploration through Learned Language Abstraction
|
Building agents capable of understanding language instructions is critical to
effective and robust human-AI collaboration. Recent work focuses on training
these agents via reinforcement learning in environments with synthetic
language; however, instructions often define long-horizon, sparse-reward tasks,
and learning policies requires many episodes of experience. We introduce ELLA:
Exploration through Learned Language Abstraction, a reward shaping approach
geared towards boosting sample efficiency in sparse reward environments by
correlating high-level instructions with simpler low-level constituents. ELLA
has two key elements: 1) A termination classifier that identifies when agents
complete low-level instructions, and 2) A relevance classifier that correlates
low-level instructions with success on high-level tasks. We learn the
termination classifier offline from pairs of instructions and terminal states.
Notably, in departure from prior work in language and abstraction, we learn the
relevance classifier online, without relying on an explicit decomposition of
high-level instructions to low-level instructions. On a suite of complex BabyAI
environments with varying instruction complexities and reward sparsity, ELLA
shows gains in sample efficiency relative to language-based shaping and
traditional RL methods.
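A hedged sketch of the reward-shaping idea described above (not the official ELLA implementation): a bonus is paid when the termination classifier reports that a low-level instruction judged relevant by the relevance classifier has just been completed. The classifier interfaces and the lambda_bonus value are illustrative assumptions.

def shaped_reward(env_reward, state, high_level_instr, low_level_instrs,
                  termination_clf, relevance_clf, lambda_bonus=0.1,
                  already_rewarded=None):
    already_rewarded = already_rewarded if already_rewarded is not None else set()
    bonus = 0.0
    for low in low_level_instrs:
        if low in already_rewarded:
            continue  # reward each low-level completion at most once per episode
        completed = termination_clf(state, low)          # bool: just finished?
        relevant = relevance_clf(high_level_instr, low)  # bool: helps the high-level task?
        if completed and relevant:
            bonus += lambda_bonus
            already_rewarded.add(low)
    return env_reward + bonus, already_rewarded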
| 2,021 |
Computation and Language
|
Interpretable bias mitigation for textual data: Reducing gender bias in
patient notes while maintaining classification performance
|
Medical systems in general, and patient treatment decisions and outcomes in
particular, are affected by bias based on gender and other demographic
elements. As language models are increasingly applied to medicine, there is a
growing interest in building algorithmic fairness into processes impacting
patient care. Much of the work addressing this question has focused on biases
encoded in language models -- statistical estimates of the relationships
between concepts derived from distant reading of corpora. Building on this
work, we investigate how word choices made by healthcare practitioners and
language models interact with regards to bias. We identify and remove gendered
language from two clinical-note datasets and describe a new debiasing procedure
using BERT-based gender classifiers. We show minimal degradation in health
condition classification tasks for low- to medium-levels of bias removal via
data augmentation. Finally, we compare the bias semantically encoded in the
language models with the bias empirically observed in health records. This work
outlines an interpretable approach for using data augmentation to identify and
reduce the potential for bias in natural language processing pipelines.
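The following is a minimal sketch of the data-augmentation step, assuming a small hand-picked list of gendered terms; the paper additionally uses BERT-based gender classifiers to control how much gendered language is removed.

import re

GENDERED_TERMS = {
    "he", "she", "him", "her", "his", "hers", "man", "woman",
    "male", "female", "mr", "mrs", "ms", "husband", "wife",
}

def scrub_gendered_language(note: str, placeholder: str = "") -> str:
    """Remove (or replace) gendered tokens from a clinical note."""
    tokens = re.findall(r"\w+|\W+", note)  # keep punctuation and spacing
    cleaned = []
    for tok in tokens:
        if tok.strip().lower() in GENDERED_TERMS:
            if placeholder:
                cleaned.append(placeholder)
        else:
            cleaned.append(tok)
    return "".join(cleaned)

# Augmentation: train on both the original and the scrubbed note.
dataset = [("He reports chest pain and says his sleep is poor.", 1)]
augmented = dataset + [(scrub_gendered_language(note), label) for note, label in dataset]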
| 2,021 |
Computation and Language
|
DeepCPCFG: Deep Learning and Context Free Grammars for End-to-End
Information Extraction
|
We address the challenge of extracting structured information from business
documents without detailed annotations. We propose Deep Conditional
Probabilistic Context Free Grammars (DeepCPCFG) to parse two-dimensional
complex documents and use Recursive Neural Networks to create an end-to-end
system for finding the most probable parse that represents the structured
information to be extracted. This system is trained end-to-end with scanned
documents as input and only relational-records as labels. The
relational-records are extracted from existing databases avoiding the cost of
annotating documents by hand. We apply this approach to extract information
from scanned invoices achieving state-of-the-art results despite using no
hand-annotations.
| 2,021 |
Computation and Language
|
How does Truth Evolve into Fake News? An Empirical Study of Fake News
Evolution
|
Automatically identifying fake news from the Internet is a challenging
problem in deception detection tasks. Online news is modified constantly during
its propagation, e.g., malicious users distort the original truth and make up
fake news. However, this continuous evolution process generates previously
unseen fake news that can deceive models trained on the original data. We
present the Fake News Evolution (FNE) dataset, a new dataset tracking the fake
news evolution process. It comprises 950 paired samples, each consisting of
articles representing the three significant phases of the evolution process:
the truth, the fake news, and the evolved fake news. We analyse features across
the evolution process, including disinformation techniques, text similarity,
top-10 keywords, classification accuracy, parts of speech, and sentiment
properties.
| 2,021 |
Computation and Language
|
Self-Learning for Zero Shot Neural Machine Translation
|
Neural Machine Translation (NMT) approaches employing monolingual data are
showing steady improvements in resource rich conditions. However, evaluations
using real-world low-resource languages still result in unsatisfactory
performance. This work proposes a novel zero-shot NMT modeling approach that
learns without the now-standard assumption of a pivot language sharing parallel
data with the zero-shot source and target languages. Our approach is based on
three stages: initialization from any pre-trained NMT model observing at least
the target language, augmentation of source sides leveraging target monolingual
data, and learning to optimize the initial model to the zero-shot pair, where
the latter two constitute a self-learning cycle. Empirical findings involving
four diverse (in terms of language family, script and relatedness) zero-shot
pairs show the effectiveness of our approach with up to +5.93 BLEU improvement
against a supervised bilingual baseline. Compared to unsupervised NMT,
consistent improvements are observed even in a domain-mismatch setting,
attesting to the usability of our method.
| 2,021 |
Computation and Language
|
A Result based Portable Framework for Spoken Language Understanding
|
Spoken language understanding (SLU), which is a core component of the
task-oriented dialogue system, has made substantial progress in the research of
single-turn dialogue. However, the performance in multi-turn dialogue is still
not satisfactory in the sense that the existing multi-turn SLU methods have low
portability and compatibility for other single-turn SLU models. Further,
existing multi-turn SLU methods do not exploit the historical predicted results
when predicting the current utterance, which wastes helpful information. To address
those shortcomings, in this paper, we propose a novel Result-based Portable
Framework for SLU (RPFSLU). RPFSLU allows most existing single-turn SLU models
to obtain the contextual information from multi-turn dialogues and takes full
advantage of predicted results in the dialogue history during the current
prediction. Experimental results on the public KVRET dataset show that
all baseline SLU models are improved by RPFSLU on multi-turn SLU
tasks.
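A hedged sketch of the result-based idea: previously predicted intents and slots are serialised into the current turn's input so that any single-turn SLU model can consume them. The token format and tuple layout below are illustrative assumptions, not RPFSLU's exact interface.

def build_turn_input(current_utterance, history):
    """history: list of (utterance, predicted_intent, predicted_slots) tuples."""
    context_parts = []
    for utt, intent, slots in history:
        slot_str = " ".join(f"{name}={value}" for name, value in slots.items())
        context_parts.append(f"[TURN] {utt} [INTENT] {intent} [SLOTS] {slot_str}")
    context = " ".join(context_parts)
    return f"{context} [CURRENT] {current_utterance}".strip()

# Example: the single-turn model now sees its own earlier predictions.
history = [("find me a gas station", "navigate", {"poi_type": "gas station"})]
print(build_turn_input("how far is it", history))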
| 2,021 |
Computation and Language
|
Team Phoenix at WASSA 2021: Emotion Analysis on News Stories with
Pre-Trained Language Models
|
Emotion is fundamental to humanity. The ability to perceive, understand and
respond to social interactions in a human-like manner is one of the most
desired capabilities in artificial agents, particularly in social-media bots.
Over the past few years, computational understanding and detection of emotional
aspects in language have been vital in advancing human-computer interaction.
The WASSA Shared Task 2021 released a dataset of news-stories across two
tracks, Track-1 for Empathy and Distress Prediction and Track-2 for
Multi-Dimension Emotion prediction at the essay-level. We describe our system
entry for the WASSA 2021 Shared Task (for both Track-1 and Track-2), where we
leveraged the information from Pre-trained language models for Track-specific
Tasks. Our proposed models achieved an Average Pearson Score of 0.417 and a
Macro-F1 Score of 0.502 in Track 1 and Track 2, respectively. In the Shared
Task leaderboard, we secured 4th rank in Track 1 and 2nd rank in Track 2.
| 2,021 |
Computation and Language
|
Knowledge-based Extraction of Cause-Effect Relations from Biomedical
Text
|
We propose a knowledge-based approach for extraction of Cause-Effect (CE)
relations from biomedical text. Our approach is a combination of an
unsupervised machine learning technique to discover causal triggers and a set
of high-precision linguistic rules to identify cause/effect arguments of these
causal triggers. We evaluate our approach using a corpus of 58,761
Leukaemia-related PubMed abstracts consisting of 568,528 sentences. We could
extract 152,655 CE triplets from this corpus where each triplet consists of a
cause phrase, an effect phrase and a causal trigger. Compared to the existing
knowledge base SemMedDB (Kilicoglu et al., 2012), the number of extractions
is almost twice as large. Moreover, the proposed approach outperformed the
existing technique SemRep (Rindflesch and Fiszman, 2003) on a dataset of 500
sentences.
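A much-simplified illustration of trigger-anchored cause/effect rules using spaCy dependency parses; the paper discovers causal triggers in an unsupervised way and applies a richer set of high-precision rules, so the trigger list and single rule below are assumptions.

import spacy

nlp = spacy.load("en_core_web_sm")
CAUSAL_TRIGGERS = {"cause", "induce", "trigger"}

def extract_ce_triplets(text):
    triplets = []
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB" and token.lemma_ in CAUSAL_TRIGGERS:
            causes = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            effects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for cause in causes:
                for effect in effects:
                    cause_phrase = " ".join(t.text for t in cause.subtree)
                    effect_phrase = " ".join(t.text for t in effect.subtree)
                    triplets.append((cause_phrase, token.text, effect_phrase))
    return triplets

print(extract_ce_triplets("The BCR-ABL fusion gene causes chronic myeloid leukaemia."))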
| 2,021 |
Computation and Language
|
Techniques for Jointly Extracting Entities and Relations: A Survey
|
Relation Extraction is an important task in Information Extraction which
deals with identifying semantic relations between entity mentions.
Traditionally, relation extraction is carried out after entity extraction in a
"pipeline" fashion, so that relation extraction only focuses on determining
whether any semantic relation exists between a pair of extracted entity
mentions. This leads to propagation of errors from entity extraction stage to
relation extraction stage. Also, entity extraction is carried out without any
knowledge about the relations. Hence, it was observed that jointly performing
entity and relation extraction is beneficial for both the tasks. In this paper,
we survey various techniques for jointly extracting entities and relations. We
categorize techniques based on the approach they adopt for joint extraction,
i.e. whether they employ joint inference or joint modelling or both. We further
describe some representative techniques for joint inference and joint
modelling. We also describe two standard datasets, evaluation techniques and
performance of the joint extraction approaches on these datasets. We present a
brief analysis of application of a general domain joint extraction approach to
a Biomedical dataset. This survey is useful for researchers as well as
practitioners in the field of Information Extraction, by covering a broad
landscape of joint extraction techniques.
| 2,021 |
Computation and Language
|
Relational Weight Priors in Neural Networks for Abstract Pattern
Learning and Language Modelling
|
Deep neural networks have become the dominant approach in natural language
processing (NLP). However, in recent years, it has become apparent that there
are shortcomings in systematicity that limit the performance and data
efficiency of deep learning in NLP. These shortcomings can be clearly shown in
lower-level artificial tasks, mostly on synthetic data. Abstract patterns are
the best known examples of a hard problem for neural networks in terms of
generalisation to unseen data. They are defined by relations between items,
such as equality, rather than their values. It has been argued that these
low-level problems demonstrate the inability of neural networks to learn
systematically. In this study, we propose Embedded Relation Based Patterns
(ERBP) as a novel way to create a relational inductive bias that encourages
learning equality and distance-based relations for abstract patterns. ERBP is
based on Relation Based Patterns (RBP), but modelled as a Bayesian prior on
network weights and implemented as a regularisation term in otherwise standard
network learning. ERBP is easy to integrate into standard neural networks
and does not affect their learning capacity. In our experiments, ERBP priors
lead to almost perfect generalisation when learning abstract patterns from
synthetic noise-free sequences. ERBP also improves natural language models on
the word and character level and pitch prediction in melodies with RNN, GRU and
LSTM networks. We also find improvements in the more complex tasks of
learning graph edit distance and compositional sentence entailment. ERBP
consistently improves over RBP and over standard networks, showing that it
enables abstract pattern learning which contributes to performance in natural
language tasks.
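To make the mechanism concrete, here is a hedged sketch of a relational weight prior: a Gaussian prior over a first layer's weights, centred on a template that computes differences between the two halves of the input, implemented as a squared-deviation regularisation term. This illustrates the general idea rather than the exact ERBP formulation.

import torch
import torch.nn as nn

def relational_prior_template(hidden_dim, item_dim):
    # Each hidden unit compares one dimension of item A with the same
    # dimension of item B: weight pattern [..., +1, ..., -1, ...]
    prior = torch.zeros(hidden_dim, 2 * item_dim)
    for h in range(hidden_dim):
        d = h % item_dim
        prior[h, d] = 1.0
        prior[h, item_dim + d] = -1.0
    return prior

def erbp_style_loss(task_loss, first_layer: nn.Linear, prior, lam=1e-2):
    # MAP view: the negative log of a Gaussian prior on the weights reduces
    # to a squared deviation from the relational template, scaled by lam.
    reg = ((first_layer.weight - prior) ** 2).sum()
    return task_loss + lam * reg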
| 2,021 |
Computation and Language
|
CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review
|
Many specialized domains remain untouched by deep learning, as large labeled
datasets require expensive expert annotators. We address this bottleneck within
the legal domain by introducing the Contract Understanding Atticus Dataset
(CUAD), a new dataset for legal contract review. CUAD was created with dozens
of legal experts from The Atticus Project and consists of over 13,000
annotations. The task is to highlight salient portions of a contract that are
important for a human to review. We find that Transformer models have nascent
performance, but that this performance is strongly influenced by model design
and training dataset size. Despite these promising results, there is still
substantial room for improvement. As one of the only large, specialized NLP
benchmarks annotated by experts, CUAD can serve as a challenging research
benchmark for the broader NLP community.
| 2,021 |
Computation and Language
|
Hurdles to Progress in Long-form Question Answering
|
The task of long-form question answering (LFQA) involves retrieving documents
relevant to a given question and using them to generate a paragraph-length
answer. While many models have recently been proposed for LFQA, we show in this
paper that the task formulation raises fundamental challenges regarding
evaluation and dataset creation that currently preclude meaningful modeling
progress. To demonstrate these challenges, we first design a new system that
relies on sparse attention and contrastive retriever learning to achieve
state-of-the-art performance on the ELI5 LFQA dataset. While our system tops
the public leaderboard, a detailed analysis reveals several troubling trends:
(1) our system's generated answers are not actually grounded in the documents
that it retrieves; (2) ELI5 contains significant train / validation overlap, as
at least 81% of ELI5 validation questions occur in paraphrased form in the
training set; (3) ROUGE-L is not an informative metric of generated answer
quality and can be easily gamed; and (4) human evaluations used for other text
generation tasks are unreliable for LFQA. We offer suggestions to mitigate each
of these issues, which we hope will lead to more rigorous LFQA research and
meaningful progress in the future.
| 2,021 |
Computation and Language
|
Unified Pre-training for Program Understanding and Generation
|
Code summarization and generation empower conversion between programming
language (PL) and natural language (NL), while code translation avails the
migration of legacy code from one PL to another. This paper introduces PLBART,
a sequence-to-sequence model capable of performing a broad spectrum of program
and language understanding and generation tasks. PLBART is pre-trained on an
extensive collection of Java and Python functions and associated NL text via
denoising autoencoding. Experiments on code summarization in the English
language, code generation, and code translation in seven programming languages
show that PLBART outperforms or rivals state-of-the-art models. Moreover,
experiments on discriminative tasks, e.g., program repair, clone detection, and
vulnerable code detection, demonstrate PLBART's effectiveness in program
understanding. Furthermore, analysis reveals that PLBART learns program syntax,
style (e.g., identifier naming convention), logical flow (e.g., if block inside
an else block is equivalent to else if block) that are crucial to program
semantics and thus excels even with limited annotations.
| 2,021 |
Computation and Language
|
Identifying ARDS using the Hierarchical Attention Network with Sentence
Objectives Framework
|
Acute respiratory distress syndrome (ARDS) is a life-threatening condition
that is often undiagnosed or diagnosed late. ARDS is especially prominent in
those infected with COVID-19. We explore the automatic identification of ARDS
indicators and confounding factors in free-text chest radiograph reports. We
present a new annotated corpus of chest radiograph reports and introduce the
Hierarchical Attention Network with Sentence Objectives (HANSO) text
classification framework. HANSO utilizes fine-grained annotations to improve
document classification performance. HANSO can extract ARDS-related information
with high performance by leveraging relation annotations, even if the annotated
spans are noisy. Using annotated chest radiograph images as a gold standard,
HANSO identifies bilateral infiltrates, an indicator of ARDS, in chest
radiograph reports with performance (0.87 F1) comparable to human annotations
(0.84 F1). This algorithm could facilitate more efficient and expeditious
identification of ARDS by clinicians and researchers and contribute to the
development of new therapies to improve patient care.
| 2,021 |
Computation and Language
|
ReportAGE: Automatically extracting the exact age of Twitter users based
on self-reports in tweets
|
Advancing the utility of social media data for research applications requires
methods for automatically detecting demographic information about social media
study populations, including users' age. The objective of this study was to
develop and evaluate a method that automatically identifies the exact age of
users based on self-reports in their tweets. Our end-to-end automatic natural
language processing (NLP) pipeline, ReportAGE, includes query patterns to
retrieve tweets that potentially mention an age, a classifier to distinguish
retrieved tweets that self-report the user's exact age ("age" tweets) and those
that do not ("no age" tweets), and rule-based extraction to identify the age.
To develop and evaluate ReportAGE, we manually annotated 11,000 tweets that
matched the query patterns. Based on 1000 tweets that were annotated by all
five annotators, inter-annotator agreement (Fleiss' kappa) was 0.80 for
distinguishing "age" and "no age" tweets, and 0.95 for identifying the exact
age among the "age" tweets on which the annotators agreed. A deep neural
network classifier, based on a RoBERTa-Large pretrained model, achieved the
highest F1-score of 0.914 (precision = 0.905, recall = 0.942) for the "age"
class. When the age extraction was evaluated using the classifier's
predictions, it achieved an F1-score of 0.855 (precision = 0.805, recall =
0.914) for the "age" class. When it was evaluated directly on the held-out test
set, it achieved an F1-score of 0.931 (precision = 0.873, recall = 0.998) for
the "age" class. We deployed ReportAGE on more than 1.2 billion tweets posted
by 245,927 users, and predicted ages for 132,637 (54%) of them. Scaling the
detection of exact age to this large number of users can advance the utility of
social media data for research applications that do not align with the
predefined age groupings of extant binary or multi-class classification
approaches.
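An illustrative version of the rule-based extraction step, applied to a tweet already classified as an "age" tweet; the patterns below are simplified assumptions, and ReportAGE's query patterns and rules are more extensive.

import re

AGE_PATTERNS = [
    re.compile(r"\bI(?:'m| am)\s+(\d{1,3})\b", re.IGNORECASE),
    re.compile(r"\bI turn(?:ed)?\s+(\d{1,3})\b", re.IGNORECASE),
    re.compile(r"\bmy (\d{1,3})(?:st|nd|rd|th) birthday\b", re.IGNORECASE),
]

def extract_age(tweet: str):
    for pattern in AGE_PATTERNS:
        match = pattern.search(tweet)
        if match:
            age = int(match.group(1))
            if 1 <= age <= 120:  # sanity bound on plausible ages
                return age
    return None

print(extract_age("Can't believe I turned 27 today"))   # 27
print(extract_age("I'm 30 and still love cartoons"))    # 30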
| 2,022 |
Computation and Language
|
Majority Voting with Bidirectional Pre-translation For Bitext Retrieval
|
Obtaining high-quality parallel corpora is of paramount importance for
training NMT systems. However, as many language pairs lack adequate
gold-standard training data, a popular approach has been to mine so-called
"pseudo-parallel" sentences from paired documents in two languages. In this
paper, we outline some problems with current methods, propose computationally
economical solutions to those problems, and demonstrate success with novel
methods on the Tatoeba similarity search benchmark and on a downstream task,
namely NMT. We uncover the effect of resource-related factors (i.e. how much
monolingual/bilingual data is available for a given language) on the optimal
choice of bitext mining approach, and echo problems with the oft-used BUCC
dataset that have been observed by others. We make the code and data used for
our experiments publicly available.
| 2,021 |
Computation and Language
|
Causal-aware Safe Policy Improvement for Task-oriented dialogue
|
The recent success of reinforcement learning (RL) in solving complex tasks
is most often attributed to its capacity to explore and exploit an environment
where it has been trained. Sample efficiency is usually not an issue since
cheap simulators are available to sample data on-policy. On the other hand,
task-oriented dialogue policies are usually learnt from offline data collected
using human demonstrations. Collecting diverse demonstrations and annotating
them is expensive. Unfortunately, RL methods trained on off-policy data are
prone to issues of bias and generalization, which are further exacerbated by
stochasticity in human response and non-markovian belief state of a dialogue
management system. To this end, we propose a batch RL framework for task
oriented dialogue policy learning: causal aware safe policy improvement
(CASPI). This method gives guarantees on the dialogue policy's performance and
also learns to shape rewards according to the intentions behind human
responses, rather than just mimicking demonstration data; this, coupled with
batch RL, improves the overall sample efficiency of the framework. We
demonstrate the effectiveness of this framework on the dialogue-context-to-text
generation and end-to-end dialogue tasks of the MultiWOZ 2.0 dataset. The
proposed method outperforms the current state of the art in both cases. In the
end-to-end case, our method trained on only 10\% of the data was able to
outperform the current state of the art on three out of four evaluation metrics.
| 2,023 |
Computation and Language
|
Self-supervised Text-to-SQL Learning with Header Alignment Training
|
Since we can leverage a large amount of unlabeled data without any human
supervision to train a model and transfer the knowledge to target tasks,
self-supervised learning is a de-facto component for the recent success of deep
learning in various fields. However, in many cases, there is a discrepancy
between a self-supervised learning objective and a task-specific objective. In
order to tackle such discrepancy in Text-to-SQL task, we propose a novel
self-supervised learning framework. We utilize the task-specific properties of the
Text-to-SQL task and the underlying structures of table contents to train the
models to learn useful knowledge of the \textit{header-column} alignment task
from unlabeled table data. We are able to transfer the knowledge to the
supervised Text-to-SQL training with annotated samples, so that the model can
leverage the knowledge to better perform the \textit{header-span} alignment
task to predict SQL statements. Experimental results show that our
self-supervised learning framework significantly improves the performance of
the existing strong BERT based models without using large external corpora. In
particular, our method is effective for training the model with scarce labeled
data. The source code of this work is available in GitHub.
| 2,021 |
Computation and Language
|
MediaSum: A Large-scale Media Interview Dataset for Dialogue
Summarization
|
We present MediaSum, a large-scale media interview dataset consisting of 463.6K
transcripts with abstractive summaries. To create this dataset, we collect
interview transcripts from NPR and CNN and employ the overview and topic
descriptions as summaries. Compared with existing public corpora for dialogue
summarization, our dataset is an order of magnitude larger and contains complex
multi-party conversations from multiple domains. We conduct statistical
analysis to demonstrate the unique positional bias exhibited in the transcripts
of television and radio interviews. We also show that MediaSum can be used in
transfer learning to improve a model's performance on other dialogue
summarization tasks.
| 2,021 |
Computation and Language
|
FairFil: Contrastive Neural Debiasing Method for Pretrained Text
Encoders
|
Pretrained text encoders, such as BERT, have been applied increasingly in
various natural language processing (NLP) tasks, and have recently demonstrated
significant performance gains. However, recent studies have demonstrated the
existence of social bias in these pretrained NLP models. Although prior works
have made progress on word-level debiasing, improved sentence-level fairness of
pretrained encoders remains underexplored. In this paper, we propose the
first neural debiasing method for a pretrained sentence encoder, which
transforms the pretrained encoder outputs into debiased representations via a
fair filter (FairFil) network. To learn the FairFil, we introduce a contrastive
learning framework that not only minimizes the correlation between filtered
embeddings and bias words but also preserves rich semantic information of the
original sentences. On real-world datasets, our FairFil effectively reduces the
bias degree of pretrained text encoders, while continuously showing desirable
performance on downstream tasks. Moreover, our post-hoc method does not require
any retraining of the text encoders, further enlarging FairFil's application
space.
| 2,021 |
Computation and Language
|
LightMBERT: A Simple Yet Effective Method for Multilingual BERT
Distillation
|
The multilingual pre-trained language models (e.g, mBERT, XLM and XLM-R) have
shown impressive performance on cross-lingual natural language understanding
tasks. However, these models are computationally intensive and difficult to be
deployed on resource-restricted devices. In this paper, we propose a simple yet
effective distillation method (LightMBERT) for transferring the cross-lingual
generalization ability of the multilingual BERT to a small student model. The
experimental results empirically demonstrate the efficiency and effectiveness of
LightMBERT, which is significantly better than the baselines and performs
comparably to the teacher mBERT.
| 2,021 |
Computation and Language
|
Topical Language Generation using Transformers
|
Large-scale transformer-based language models (LMs) demonstrate impressive
capabilities in open text generation. However, controlling the generated text's
properties such as the topic, style, and sentiment is challenging and often
requires significant changes to the model architecture or retraining and
fine-tuning the model on new supervised data. This paper presents a novel
approach for Topical Language Generation (TLG) by combining a pre-trained LM
with topic modeling information. We cast the problem using Bayesian probability
formulation with topic probabilities as a prior, LM probabilities as the
likelihood, and topical language generation probability as the posterior. In
learning the model, we derive the topic probability distribution from the
user-provided document's natural structure. Furthermore, we extend our model by
introducing new parameters and functions to influence the quantity of the
topical features presented in the generated text. This feature would allow us
to easily control the topical properties of the generated text. Our
experimental results demonstrate that our model outperforms the
state-of-the-art results on coherency, diversity, and fluency while being
faster in decoding.
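The Bayesian combination can be sketched directly on next-token logits: the topic model supplies the prior over words, the LM supplies the likelihood, and decoding samples from the re-normalised posterior. The gamma weighting below is an assumption about how strongly to push generation towards the topic.

import torch
import torch.nn.functional as F

def topical_next_token_logits(lm_logits, topic_word_probs, gamma=5.0, eps=1e-10):
    # lm_logits: (vocab,) raw logits from the language model (likelihood)
    # topic_word_probs: (vocab,) P(w | topic) from a topic model, e.g. LDA (prior)
    log_prior = torch.log(topic_word_probs + eps)
    return lm_logits + gamma * log_prior  # unnormalised log-posterior

# Usage inside a decoding loop (model/tokenizer calls omitted):
# logits = model(input_ids).logits[0, -1]
# probs = F.softmax(topical_next_token_logits(logits, topic_word_probs), dim=-1)
# next_id = torch.multinomial(probs, num_samples=1)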
| 2,021 |
Computation and Language
|
Towards Multi-Sense Cross-Lingual Alignment of Contextual Embeddings
|
Cross-lingual word embeddings (CLWE) have been proven useful in many
cross-lingual tasks. However, most existing approaches to learn CLWE including
the ones with contextual embeddings are sense agnostic. In this work, we
propose a novel framework to align contextual embeddings at the sense level by
leveraging cross-lingual signal from bilingual dictionaries only. We
operationalize our framework by first proposing a novel sense-aware cross
entropy loss to model word senses explicitly. The monolingual ELMo and BERT
models pretrained with our sense-aware cross entropy loss demonstrate
significant performance improvement for word sense disambiguation tasks. We
then propose a sense alignment objective on top of the sense-aware cross
entropy loss for cross-lingual model pretraining, and pretrain cross-lingual
models for several language pairs (English to German/Spanish/Japanese/Chinese).
Compared with the best baseline results, our cross-lingual models achieve
0.52%, 2.09% and 1.29% average performance improvements on zero-shot
cross-lingual NER, sentiment classification and XNLI tasks, respectively.
| 2,022 |
Computation and Language
|
Active$^2$ Learning: Actively reducing redundancies in Active Learning
methods for Sequence Tagging and Machine Translation
|
While deep learning is a powerful tool for natural language processing (NLP)
problems, successful solutions to these problems rely heavily on large amounts
of annotated samples. However, manually annotating data is expensive and
time-consuming. Active Learning (AL) strategies reduce the need for huge
volumes of labeled data by iteratively selecting a small number of examples for
manual annotation based on their estimated utility in training the given model.
In this paper, we argue that since AL strategies choose examples independently,
they may potentially select similar examples, all of which may not contribute
significantly to the learning process. Our proposed approach,
Active$\mathbf{^2}$ Learning (A$\mathbf{^2}$L), actively adapts to the deep
learning model being trained to eliminate further such redundant examples
chosen by an AL strategy. We show that A$\mathbf{^2}$L is widely applicable by
using it in conjunction with several different AL strategies and NLP tasks. We
empirically demonstrate that the proposed approach is further able to reduce
the data requirements of state-of-the-art AL strategies by an absolute
percentage reduction of $\approx\mathbf{3-25\%}$ on multiple NLP tasks while
achieving the same performance with no additional computation overhead.
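A hedged sketch of the redundancy-elimination step: after an AL strategy ranks a candidate batch, candidates whose model representations are too similar to already-kept examples are dropped greedily. Plain cosine similarity and the fixed threshold are illustrative assumptions.

import torch
import torch.nn.functional as F

def deduplicate_candidates(reps: torch.Tensor, budget: int, threshold: float = 0.9):
    """reps: (n_candidates, dim) representations from the model being trained,
    ordered by the AL strategy's utility score (best first)."""
    reps = F.normalize(reps, dim=-1)
    kept = []
    for i in range(reps.size(0)):
        if len(kept) == budget:
            break
        if kept:
            sims = reps[i] @ reps[torch.tensor(kept)].T
            if sims.max() > threshold:
                continue  # too close to an already-selected example
        kept.append(i)
    return kept  # indices to send for manual annotation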
| 2,021 |
Computation and Language
|
Conversational Answer Generation and Factuality for Reading
Comprehension Question-Answering
|
Question answering (QA) is an important use case on voice assistants. A
popular approach to QA is extractive reading comprehension (RC) which finds an
answer span in a text passage. However, extractive answers are often unnatural
in a conversational context, which results in a suboptimal user experience. In
this work, we investigate conversational answer generation for QA. We propose
AnswerBART, an end-to-end generative RC model which combines answer generation
from multiple passages with passage ranking and answerability. Moreover, a
hurdle in applying generative RC are hallucinations where the answer is
factually inconsistent with the passage text. We leverage recent work from
summarization to evaluate factuality. Experiments show that AnswerBART
significantly improves over previous best published results on MS MARCO 2.1
NLGEN by 2.5 ROUGE-L and NarrativeQA by 9.4 ROUGE-L.
| 2,021 |
Computation and Language
|
Does the Magic of BERT Apply to Medical Code Assignment? A Quantitative
Study
|
Unsupervised pretraining is an integral part of many natural language
processing systems, and transfer learning with language models has achieved
remarkable results in many downstream tasks. In the clinical application of
medical code assignment, diagnosis and procedure codes are inferred from
lengthy clinical notes such as hospital discharge summaries. However, it is not
clear if pretrained models are useful for medical code prediction without
further architecture engineering. This paper conducts a comprehensive
quantitative analysis of various contextualized language models' performance,
pretrained in different domains, for medical code assignment from clinical
notes. We propose a hierarchical fine-tuning architecture to capture
interactions between distant words and adopt label-wise attention to exploit
label information. Contrary to current trends, we demonstrate that a carefully
trained classical CNN outperforms attention-based models on a MIMIC-III subset
with frequent codes. Our empirical findings suggest directions for improving
the medical code assignment application.
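Label-wise attention can be sketched as follows: each label attends over the token representations to build its own document vector, which is then scored by a label-specific output weight. Layer sizes are assumptions; this mirrors the general mechanism rather than the paper's exact architecture.

import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    def __init__(self, hidden_dim, num_labels):
        super().__init__()
        self.label_queries = nn.Linear(hidden_dim, num_labels, bias=False)
        self.label_outputs = nn.Parameter(torch.randn(num_labels, hidden_dim) * 0.02)
        self.label_bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, token_states, mask):
        # token_states: (batch, seq_len, hidden_dim); mask: (batch, seq_len)
        scores = self.label_queries(token_states)                      # (B, T, L)
        scores = scores.masked_fill(mask.unsqueeze(-1) == 0, -1e9)
        attn = torch.softmax(scores, dim=1)                            # attend over tokens
        label_docs = torch.einsum("btl,bth->blh", attn, token_states)  # (B, L, H)
        logits = (label_docs * self.label_outputs).sum(-1) + self.label_bias
        return logits                                                  # (B, L)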
| 2,021 |
Computation and Language
|
DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding
Spaces
|
Recent research efforts in NLP have demonstrated that distributional word
vector spaces often encode stereotypical human biases, such as racism and
sexism. With word representations ubiquitously used in NLP models and
pipelines, this raises ethical issues and jeopardizes the fairness of language
technologies. While there exists a large body of work on bias measures and
debiasing methods, to date, there is no platform that would unify these
research efforts and make bias measuring and debiasing of representation spaces
widely accessible. In this work, we present DebIE, the first integrated
platform for (1) measuring and (2) mitigating bias in word embeddings. Given an
(i) embedding space (users can choose between the predefined spaces or upload
their own) and (ii) a bias specification (users can choose between existing
bias specifications or create their own), DebIE can (1) compute several
measures of implicit and explicit bias and (2) modify the embedding space by
executing two (mutually composable) debiasing models. DebIE's functionality can
be accessed through four different interfaces: (a) a web application, (b) a
desktop application, (c) a REST-ful API, and (d) as a command-line application.
DebIE is available at: debie.informatik.uni-mannheim.de.
| 2,021 |
Computation and Language
|
ASAP: A Chinese Review Dataset Towards Aspect Category Sentiment
Analysis and Rating Prediction
|
Sentiment analysis has attracted increasing attention in e-commerce. The
sentiment polarities underlying user reviews are of great value for business
intelligence. Aspect category sentiment analysis (ACSA) and review rating
prediction (RP) are two essential tasks for detecting fine-to-coarse sentiment
polarities. ACSA and RP are highly correlated and usually employed jointly in
real-world e-commerce scenarios. However, most public datasets are constructed
for ACSA and RP separately, which may limit the further exploitation of both
tasks. To address this problem and advance related research, we present a large-scale
Chinese restaurant review dataset \textbf{ASAP} including $46,730$ genuine
reviews from a leading online-to-offline (O2O) e-commerce platform in China.
Besides a $5$-star scale rating, each review is manually annotated according to
its sentiment polarities towards $18$ pre-defined aspect categories. We hope
the release of the dataset could shed some light on the fields of sentiment
analysis. Moreover, we propose an intuitive yet effective joint model for ACSA
and RP. Experimental results demonstrate that the joint model outperforms
state-of-the-art baselines on both tasks.
| 2,021 |
Computation and Language
|
Evaluation of Morphological Embeddings for the Russian Language
|
A number of morphology-based word embedding models were introduced in recent
years. However, their evaluation was mostly limited to English, which is known
to be a morphologically simple language. In this paper, we explore whether and
to what extent incorporating morphology into word embeddings improves
performance on downstream NLP tasks, in the case of morphologically rich
Russian language. NLP tasks of our choice are POS tagging, Chunking, and NER --
for Russian language, all can be mostly solved using only morphology without
understanding the semantics of words. Our experiments show that
morphology-based embeddings trained with the Skipgram objective do not outperform
the existing embedding model FastText. Moreover, a more complex but
morphology-unaware model, BERT, achieves significantly greater performance on the
tasks that presumably require understanding of a word's morphology.
| 2,021 |
Computation and Language
|
Domain State Tracking for a Simplified Dialogue System
|
Task-oriented dialogue systems aim to help users achieve their goals in
specific domains. Recent neural dialogue systems use the entire dialogue
history for abundant contextual information accumulated over multiple
conversational turns. However, the dialogue history becomes increasingly longer
as the number of turns increases, thereby increasing memory usage and
computational costs. In this paper, we present DoTS (Domain State Tracking for
a Simplified Dialogue System), a task-oriented dialogue system that uses a
simplified input context instead of the entire dialogue history. However,
neglecting the dialogue history can result in a loss of contextual information
from previous conversational turns. To address this issue, DoTS tracks the
domain state in addition to the belief state and uses it for the input context.
Using this simplified input, DoTS improves the inform rate and success rate by
1.09 points and 1.24 points, respectively, compared to the previous
state-of-the-art model on MultiWOZ, which is a well-known benchmark.
| 2,021 |
Computation and Language
|
The Interplay of Variant, Size, and Task Type in Arabic Pre-trained
Language Models
|
In this paper, we explore the effects of language variants, data sizes, and
fine-tuning task types in Arabic pre-trained language models. To do so, we
build three pre-trained language models across three variants of Arabic: Modern
Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a
fourth language model which is pre-trained on a mix of the three. We also
examine the importance of pre-training data size by building additional models
that are pre-trained on a scaled-down set of the MSA variant. We compare our
different models to each other, as well as to eight publicly available models
by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest
that the variant proximity of pre-training data to fine-tuning data is more
important than the pre-training data size. We exploit this insight in defining
an optimized system selection model for the studied tasks.
| 2,021 |
Computation and Language
|
Unsupervised Transfer Learning in Multilingual Neural Machine
Translation with Cross-Lingual Word Embeddings
|
In this work we look into adding a new language to a multilingual NMT system
in an unsupervised fashion. Under the utilization of pre-trained cross-lingual
word embeddings we seek to exploit a language independent multilingual sentence
representation to easily generalize to a new language. While using
cross-lingual embeddings for word lookup we decode from a yet entirely unseen
source language in a process we call blind decoding. Blindly decoding from
Portuguese using a base system containing several Romance languages, we achieve
scores of 36.4 BLEU for Portuguese-English and 12.8 BLEU for Russian-English.
In an attempt to train the mapping from the encoder sentence representation to
a new target language we use our model as an autoencoder. Merely training to
translate from Portuguese to Portuguese while freezing the encoder, we achieve
26 BLEU on English-Portuguese, and up to 28 BLEU when adding artificial noise
to the input. Lastly, we explore a more practical adaptation approach through
non-iterative backtranslation, exploiting our model's ability to produce high
quality translations through blind decoding. This yields up to 34.6 BLEU on
English-Portuguese, attaining near parity with a model adapted on real
bilingual data.
| 2,021 |
Computation and Language
|
ENTRUST: Argument Reframing with Language Models and Entailment
|
Framing involves the positive or negative presentation of an argument or
issue depending on the audience and goal of the speaker (Entman 1983).
Differences in lexical framing, the focus of our work, can have large effects
on peoples' opinions and beliefs. To make progress towards reframing arguments
for positive effects, we create a dataset and method for this task. We use a
lexical resource for "connotations" to create a parallel corpus and propose a
method for argument reframing that combines controllable text generation
(positive connotation) with a post-decoding entailment component (same
denotation). Our results show that our method is effective compared to strong
baselines along the dimensions of fluency, meaning, and
trustworthiness/reduction of fear.
| 2,021 |
Computation and Language
|
MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding
|
Generating metaphors is a challenging task as it requires a proper
understanding of abstract concepts, making connections between unrelated
concepts, and deviating from the literal meaning. In this paper, we aim to
generate a metaphoric sentence given a literal expression by replacing relevant
verbs. Based on a theoretically-grounded connection between metaphors and
symbols, we propose a method to automatically construct a parallel corpus by
transforming a large number of metaphorical sentences from the Gutenberg Poetry
corpus (Jacobs, 2018) to their literal counterpart using recent advances in
masked language modeling coupled with commonsense inference. For the generation
task, we incorporate a metaphor discriminator to guide the decoding of a
sequence to sequence model fine-tuned on our parallel data to generate
high-quality metaphors. Human evaluation on an independent test set of literal
statements shows that our best model generates metaphors better than three
well-crafted baselines 66% of the time on average. A task-based evaluation
shows that human-written poems enhanced with metaphors proposed by our model
are preferred 68% of the time compared to poems without metaphors.
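A hedged simplification of discriminator-guided decoding, reduced to candidate re-ranking: sample several rewrites from the fine-tuned seq2seq model and keep the one the metaphor discriminator scores highest. The paper guides decoding more tightly; the generator/discriminator callables and the alpha mix below are hypothetical.

def pick_metaphoric_rewrite(literal_sentence, generator, discriminator,
                            num_candidates=10, alpha=1.0):
    # generator(sentence, n) -> list of (candidate_text, seq_log_prob)
    # discriminator(text) -> probability that the text is metaphoric
    candidates = generator(literal_sentence, num_candidates)

    def score(item):
        text, log_prob = item
        return log_prob + alpha * discriminator(text)

    best_text, _ = max(candidates, key=score)
    return best_text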
| 2,021 |
Computation and Language
|
Towards Continual Learning for Multilingual Machine Translation via
Vocabulary Substitution
|
We propose a straightforward vocabulary adaptation scheme to extend the
language capacity of multilingual machine translation models, paving the way
towards efficient continual learning for multilingual machine translation. Our
approach is suitable for large-scale datasets, applies to distant languages
with unseen scripts, incurs only minor degradation on the translation
performance for the original language pairs and provides competitive
performance even in the case where we only possess monolingual data for the new
languages.
| 2,021 |
Computation and Language
|
COVID-19 Smart Chatbot Prototype for Patient Monitoring
|
Many COVID-19 patients developed prolonged symptoms after the infection,
including fatigue, delirium, and headache. The long-term health impact of these
conditions is still not clear. It is necessary to develop a way to follow up
with these patients for monitoring their health status to support timely
intervention and treatment. Given the lack of sufficient human resources to follow
up with patients, we propose a novel smart chatbot solution backed with machine
learning to collect information (i.e., generating digital diary) in a
personalized manner. In this article, we describe the design framework and
components of our prototype.
| 2,021 |
Computation and Language
|
CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation
|
Pipelined NLP systems have largely been superseded by end-to-end neural
modeling, yet nearly all commonly-used models still require an explicit
tokenization step. While recent tokenization approaches based on data-derived
subword lexicons are less brittle than manually engineered tokenizers, these
techniques are not equally suited to all languages, and the use of any fixed
vocabulary may limit a model's ability to adapt. In this paper, we present
CANINE, a neural encoder that operates directly on character sequences, without
explicit tokenization or vocabulary, and a pre-training strategy that operates
either directly on characters or optionally uses subwords as a soft inductive
bias. To use its finer-grained input effectively and efficiently, CANINE
combines downsampling, which reduces the input sequence length, with a deep
transformer stack, which encodes context. CANINE outperforms a comparable mBERT
model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite
having 28% fewer model parameters.
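A minimal sketch of the downsample-then-encode idea: embed raw characters, shrink the sequence with a strided convolution, and run a small transformer over the shorter sequence. Sizes and the 4x rate are illustrative; this is not the CANINE architecture itself.

import torch
import torch.nn as nn

class CharDownsampleEncoder(nn.Module):
    def __init__(self, vocab_size=2048, dim=256, rate=4, num_layers=2):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, dim)
        self.downsample = nn.Conv1d(dim, dim, kernel_size=rate, stride=rate)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) character ids, seq_len divisible by rate
        x = self.char_embed(char_ids)            # (B, T, D)
        x = self.downsample(x.transpose(1, 2))   # (B, D, T // rate)
        x = x.transpose(1, 2)                    # (B, T // rate, D)
        return self.encoder(x)

enc = CharDownsampleEncoder()
out = enc(torch.randint(0, 2048, (2, 64)))       # -> shape (2, 16, 256)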
| 2,022 |
Computation and Language
|
Evaluation of Morphological Embeddings for English and Russian Languages
|
This paper evaluates morphology-based embeddings for English and Russian
languages. Despite the interest and introduction of several morphology-based
word embedding models in the past and acclaimed performance improvements on
word similarity and language modeling tasks, in our experiments, we did not
observe any stable preference over two of our baseline models - SkipGram and
FastText. The performance exhibited by morphological embeddings is the average
of the two baselines mentioned above.
| 2,021 |
Computation and Language
|
Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU
Models
|
Recent studies indicate that NLU models are prone to rely on shortcut
features for prediction, without achieving true language understanding. As a
result, these models fail to generalize to real-world out-of-distribution data.
In this work, we show that the words in the NLU training set can be modeled as
a long-tailed distribution. There are two findings: 1) NLU models have strong
preference for features located at the head of the long-tailed distribution,
and 2) Shortcut features are picked up during the very first few iterations of the
model training. These two observations are further employed to formulate a
measurement which can quantify the shortcut degree of each training sample.
Based on this shortcut measurement, we propose a shortcut mitigation framework,
LTGR, to discourage the model from making overconfident predictions for samples
with large shortcut degree. Experimental results on three NLU benchmarks
demonstrate that our long-tailed distribution explanation accurately reflects
the shortcut learning behavior of NLU models. Experimental analysis further
indicates that LTGR can improve the generalization accuracy on OOD data, while
preserving the accuracy on in-distribution data.
| 2,021 |
Computation and Language
|
Anaphoric Binding: an integrated overview
|
The interpretation of anaphors depends on their antecedents as the semantic
value that an anaphor eventually conveys is co-specified by the value of its
antecedent. Interestingly, when occurring in a given syntactic position,
different anaphors may have different sets of admissible antecedents. Such
differences are the basis for the categorization of anaphoric expressions
according to their anaphoric capacity, being important to determine what are
the sets of admissible antecedents and how to represent and process this
anaphoric capacity for each type of anaphor.
From an empirical perspective, these constraints stem from what appear to be
quite cogent generalisations and exhibit a universal character, given their
cross-linguistic validity. From a conceptual point of view, in turn, the
relations among binding constraints involve non-trivial cross symmetry, which
lends them a modular nature and provides further strength to the plausibility
of their universal character. This kind of anaphoric binding constraints
appears thus as a most significant subset of natural language knowledge,
usually referred to as binding theory.
This paper provides an integrated overview of these constraints holding on
the pairing of nominal anaphors with their admissible antecedents that are
based on grammatical relations and structure. Along with the increasing
interest in neuro-symbolic approaches to natural language, this paper seeks to
contribute to reviving interest in this most intriguing research topic.
| 2,021 |
Computation and Language
|
Preregistering NLP Research
|
Preregistration refers to the practice of specifying what you are going to
do, and what you expect to find in your study, before carrying out the study.
This practice is increasingly common in medicine and psychology, but is rarely
discussed in NLP. This paper discusses preregistration in more detail, explores
how NLP researchers could preregister their work, and presents several
preregistration questions for different kinds of studies. Finally, we argue in
favour of registered reports, which could provide firmer grounds for slow
science in NLP research. The goal of this paper is to elicit a discussion in
the NLP community, which we hope to synthesise into a general NLP
preregistration form in future research.
| 2,021 |
Computation and Language
|
Characterizing Partisan Political Narrative Frameworks about COVID-19 on
Twitter
|
The COVID-19 pandemic is a global crisis that has been testing every society
and exposing the critical role of local politics in crisis response. In the
United States, there has been a strong partisan divide between the Democratic
and Republican party's narratives about the pandemic which resulted in
polarization of individual behaviors and divergent policy adoption across
regions. As shown in this case, as well as in most major social issues,
strongly polarized narrative frameworks facilitate such narratives. To
understand polarization and other social chasms, it is critical to dissect
these diverging narratives. Here, taking the Democratic and Republican
political social media posts about the pandemic as a case study, we demonstrate
that a combination of computational methods can provide useful insights into
the different contexts, framing, and characters and relationships that
construct their narrative frameworks which individual posts source from.
Leveraging a dataset of tweets from elite politicians in the U.S., we found
that the Democrats' narrative tends to be more concerned with the pandemic as
well as financial and social support, while the Republicans discuss more about
other political entities such as China. We then perform an automatic framing
analysis to characterize the ways in which they frame their narratives, where
we found that the Democrats emphasize the government's role in responding to
the pandemic, and the Republicans emphasize the roles of individuals and
support for small businesses. Finally, we present a semantic role analysis that
uncovers the important characters and relationships in their narratives as well
as how they facilitate a membership categorization process. Our findings
concretely expose the gaps in the "elusive consensus" between the two parties.
Our methodologies may be applied to computationally study narratives in various
domains.
| 2,021 |
Computation and Language
|
Learning Policies for Multilingual Training of Neural Machine
Translation Systems
|
Low-resource Multilingual Neural Machine Translation (MNMT) is typically
tasked with improving the translation performance on one or more language pairs
with the aid of high-resource language pairs. In this paper, we propose two
simple search based curricula -- orderings of the multilingual training data --
which help improve translation performance in conjunction with existing
techniques such as fine-tuning. Additionally, we attempt to learn a curriculum
for MNMT from scratch jointly with the training of the translation system with
the aid of contextual multi-arm bandits. We show on the FLORES low-resource
translation dataset that these learned curricula can provide better starting
points for fine tuning and improve overall performance of the translation
system.
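As a concrete (and simplified) illustration of a bandit-learned curriculum, the sketch below uses EXP3 to decide which language pair the next training batch should come from, with clipped dev-score improvement as the reward; the paper uses contextual bandits, so both the algorithm choice and the reward definition here are assumptions.

import math
import random

class Exp3Curriculum:
    def __init__(self, language_pairs, gamma=0.1):
        self.pairs = language_pairs
        self.gamma = gamma
        self.weights = [1.0] * len(language_pairs)

    def _probs(self):
        total = sum(self.weights)
        k = len(self.pairs)
        return [(1 - self.gamma) * w / total + self.gamma / k for w in self.weights]

    def choose_pair(self):
        probs = self._probs()
        self.last_arm = random.choices(range(len(self.pairs)), weights=probs)[0]
        return self.pairs[self.last_arm]

    def update(self, reward):
        # reward in [0, 1], e.g. clipped dev-BLEU improvement after training
        # on a batch drawn from the chosen language pair.
        probs = self._probs()
        estimate = reward / probs[self.last_arm]
        self.weights[self.last_arm] *= math.exp(self.gamma * estimate / len(self.pairs))

# curriculum = Exp3Curriculum(["si-en", "ne-en", "hi-en"])
# pair = curriculum.choose_pair(); ...train on a batch...; curriculum.update(reward)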
| 2,021 |
Computation and Language
|
Learning Feature Weights using Reward Modeling for Denoising Parallel
Corpora
|
Large web-crawled corpora represent an excellent resource for improving the
performance of Neural Machine Translation (NMT) systems across several language
pairs. However, since these corpora are typically extremely noisy, their use is
fairly limited. Current approaches to dealing with this problem mainly focus on
filtering using heuristics or single features such as language model scores or
bi-lingual similarity. This work presents an alternative approach which learns
weights for multiple sentence-level features. These feature weights, which are
optimized directly for the task of improving translation performance, are used
to score and filter sentences in the noisy corpora more effectively. We provide
results of applying this technique to building NMT systems using the Paracrawl
corpus for Estonian-English and show that it beats strong single feature
baselines and hand designed combinations. Additionally, we analyze the
sensitivity of this method to different types of noise and explore if the
learned weights generalize to other language pairs using the Maltese-English
Paracrawl corpus.
| 2,021 |
Computation and Language
|
Towards Socially Intelligent Agents with Mental State Transition and
Human Utility
|
Building a socially intelligent agent involves many challenges, one of which
is to track the agent's mental state transitions and teach the agent to make
decisions guided by its values, like a human. Towards this end, we propose to
incorporate mental state simulation and value modeling into dialogue agents.
First, we build a hybrid mental state parser that extracts information from
both the dialogue and event observations and maintains a graphical
representation of the agent's mind; Meanwhile, the transformer-based value
model learns human preferences from the human value dataset, ValueNet.
Empirical results show that the proposed model attains state-of-the-art
performance on the dialogue/action/emotion prediction task in the fantasy
text-adventure game dataset, LIGHT. We also show example cases to demonstrate:
(i) how the proposed mental state parser can assist the agent's decision by
grounding on the context like locations and objects, and (ii) how the value
model can help the agent make decisions based on its personal priorities.
| 2,022 |
Computation and Language
|
Bilingual Dictionary-based Language Model Pretraining for Neural Machine
Translation
|
Recent studies have demonstrated a perceivable improvement on the performance
of neural machine translation by applying cross-lingual language model
pretraining (Lample and Conneau, 2019), especially the Translation Language
Modeling (TLM). To alleviate the need for expensive parallel corpora by TLM, in
this work, we incorporate the translation information from dictionaries into
the pretraining process and propose a novel Bilingual Dictionary-based Language
Model (BDLM). We evaluate our BDLM in Chinese, English, and Romanian. For
Chinese-English, we obtained a 55.0 BLEU on WMT-News19 (Tiedemann, 2012) and a
24.3 BLEU on WMT20 news-commentary, outperforming the Vanilla Transformer
(Vaswani et al., 2017) by more than 8.4 BLEU and 2.3 BLEU, respectively.
According to our results, the BDLM also has advantages in convergence speed and
predicting rare words. The increase in BLEU for WMT16 Romanian-English also
shows its effectiveness in low-resources language translation.
| 2,021 |
Computation and Language
|
Improving Authorship Verification using Linguistic Divergence
|
We propose an unsupervised solution to the Authorship Verification task that
utilizes pre-trained deep language models to compute a new metric called
DV-Distance. The proposed metric is a measure of the difference between the two
authors comparing against pre-trained language models. Our design addresses the
problem of non-comparability in authorship verification, frequently encountered
in small or cross-domain corpora. To the best of our knowledge, this paper is
the first one to introduce a method designed with non-comparability in mind
from the ground up, rather than indirectly. It is also one of the first to use
Deep Language Models in this setting. The approach is intuitive, and it is easy
to understand and interpret through visualization. Experiments on four datasets
show our methods matching or surpassing current state-of-the-art and strong
baselines in most tasks.
| 2,021 |
Computation and Language
|
A Weakly Supervised Approach for Classifying Stance in Twitter Replies
|
Conversations on social media (SM) are increasingly being used to investigate
social issues on the web, such as online harassment and rumor spread. For such
issues, a common thread of research uses adversarial reactions, e.g., replies
pointing out factual inaccuracies in rumors. Though adversarial reactions are
prevalent in online conversations, inferring those adverse views (or stance)
from the text in replies is difficult and requires complex natural language
processing (NLP) models. Moreover, conventional NLP models for stance mining
need labeled data for supervised learning. Getting labeled conversations can
itself be challenging as conversations can be on any topic, and topics change
over time. These challenges make learning the stance a difficult NLP problem.
In this research, we first create a new stance dataset comprised of three
different topics by labeling both users' opinions on the topics (as in pro/con)
and users' stance while replying to others' posts (as in favor/oppose). As we
find limitations with supervised approaches, we propose a weakly-supervised
approach to predict the stance in Twitter replies. Our novel method allows
using a smaller number of hashtags to generate weak labels for Twitter replies.
Compared to supervised learning, our method improves the mean F1-macro by 8\%
on the hand-labeled dataset without using any hand-labeled examples in the
training set. We further show the applicability of our proposed method on COVID
19 related conversations on Twitter.
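An illustrative version of the weak-labelling step: a handful of stance-bearing hashtags produce noisy labels for replies, which then train the classifier. The hashtag lists are assumptions, and the paper's labelling procedure is more involved.

FAVOR_TAGS = {"#maskswork", "#wearamask"}
OPPOSE_TAGS = {"#nomasks", "#maskoff"}

def weak_label(reply_text: str):
    tags = {tok.lower() for tok in reply_text.split() if tok.startswith("#")}
    favor = bool(tags & FAVOR_TAGS)
    oppose = bool(tags & OPPOSE_TAGS)
    if favor and not oppose:
        return "favor"
    if oppose and not favor:
        return "oppose"
    return None  # abstain: no (or conflicting) weak signal

tweets = ["Please #WearAMask when indoors", "Done with this, #NoMasks"]
labeled = [(t, weak_label(t)) for t in tweets if weak_label(t) is not None]
print(labeled)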
| 2,021 |
Computation and Language
|
Inductive Relation Prediction by BERT
|
Relation prediction in knowledge graphs is dominated by embedding based
methods which mainly focus on the transductive setting. Unfortunately, they are
not able to handle inductive learning where unseen entities and relations are
present and cannot take advantage of prior knowledge. Furthermore, their
inference process is not easily explainable. In this work, we propose an
all-in-one solution, called BERTRL (BERT-based Relational Learning), which
leverages pre-trained language model and fine-tunes it by taking relation
instances and their possible reasoning paths as training samples. BERTRL
outperforms the SOTAs in 15 out of 18 cases in both inductive and transductive
settings. Meanwhile, it demonstrates strong generalization capability in
few-shot learning and is explainable.
| 2,021 |
Computation and Language
|
Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of
Pre-trained Models' Transferability
|
This paper investigates whether the power of the models pre-trained on text
data, such as BERT, can be transferred to general token sequence classification
applications. To verify pre-trained models' transferability, we test the
pre-trained models on text classification tasks with meanings of tokens
mismatches, and real-world non-text token sequence classification data,
including amino acid, DNA, and music. We find that even on non-text data, the
models pre-trained on text converge faster, perform better than the randomly
initialized models, and only slightly worse than the models using task-specific
knowledge. We also find that the representations of the text and non-text
pre-trained models share non-trivial similarities.
| 2,022 |
Computation and Language
|