Titles | Abstracts | Years | Categories |
---|---|---|---|
Fighting the COVID-19 Infodemic with a Holistic BERT Ensemble
|
This paper describes the TOKOFOU system, an ensemble model for misinformation
detection tasks based on six different transformer-based pre-trained encoders,
implemented in the context of the COVID-19 Infodemic Shared Task for English.
We fine-tune each model on each of the task's questions and aggregate their
prediction scores using a majority voting approach. TOKOFOU obtains an overall
F1 score of 89.7%, ranking first.
| 2021 |
Computation and Language
|
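The TOKOFOU abstract above aggregates six encoders' per-question scores by majority voting; a minimal sketch of that aggregation step, with invented 0/1 predictions rather than the system's actual outputs:

```python
import numpy as np

def majority_vote(binary_preds: np.ndarray) -> np.ndarray:
    """Aggregate per-model binary predictions (n_models x n_examples)
    into one prediction per example by strict majority voting."""
    votes = binary_preds.sum(axis=0)                          # yes-votes per example
    return (votes * 2 > binary_preds.shape[0]).astype(int)    # strict majority

# Six hypothetical fine-tuned encoders, one row of 0/1 predictions each.
preds = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])
print(majority_vote(preds))  # -> [1 0 1 1]
```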
Speak or Chat with Me: End-to-End Spoken Language Understanding System
with Flexible Inputs
|
A major focus of recent research in spoken language understanding (SLU) has
been on the end-to-end approach where a single model can predict intents
directly from speech inputs without intermediate transcripts. However, this
approach presents some challenges. First, since speech can be considered as
personally identifiable information, in some cases only automatic speech
recognition (ASR) transcripts are accessible. Second, intent-labeled speech
data is scarce. To address the first challenge, we propose a novel system that
can predict intents from flexible types of inputs: speech, ASR transcripts, or
both. We demonstrate strong performance for either modality separately, and
when both speech and ASR transcripts are available, through system combination,
we achieve better results than using a single input modality. To address the
second challenge, we leverage a semantically robust pre-trained BERT model and
adopt a cross-modal system that co-trains text embeddings and acoustic
embeddings in a shared latent space. We further enhance this system by
utilizing an acoustic module pre-trained on LibriSpeech and domain-adapting the
text module on our target datasets. Our experiments show significant advantages
for these pre-training and fine-tuning strategies, resulting in a system that
achieves competitive intent-classification performance on Snips SLU and Fluent
Speech Commands datasets.
| 2021 |
Computation and Language
|
Towards a parallel corpus of Portuguese and the Bantu language Emakhuwa
of Mozambique
|
Major advances in the performance of machine translation models have been
made possible in part by the availability of large-scale parallel corpora. But
for most of the world's languages, such corpora are rare. Emakhuwa, a language
spoken in Mozambique, is, like most African languages, low-resource in NLP
terms. It lacks both computational and linguistic resources and, to the best
of our knowledge, few parallel corpora that include Emakhuwa exist to date. In
this paper we describe the creation of the
Emakhuwa-Portuguese parallel corpus, which is a collection of texts from the
Jehovah's Witness website and a variety of other sources including the African
Story Book website, the Universal Declaration of Human Rights and Mozambican
legal documents. The dataset contains 47,415 sentence pairs, amounting to
699,976 word tokens in Emakhuwa and 877,595 word tokens in Portuguese. Once
the remaining normalization steps are completed, the corpus will be made
freely available for research use.
| 2021 |
Computation and Language
|
Few-shot Intent Classification and Slot Filling with Retrieved Examples
|
Few-shot learning arises in important practical scenarios, such as when a
natural language understanding system needs to learn new semantic labels for an
emerging, resource-scarce domain. In this paper, we explore retrieval-based
methods for intent classification and slot filling tasks in few-shot settings.
Retrieval-based methods make predictions based on labeled examples in the
retrieval index that are similar to the input, and thus can adapt to new
domains simply by changing the index without having to retrain the model.
However, it is non-trivial to apply such methods on tasks with a complex label
space like slot filling. To this end, we propose a span-level retrieval method
that learns similar contextualized representations for spans with the same
label via a novel batch-softmax objective. At inference time, we use the labels
of the retrieved spans to construct the final structure with the highest
aggregated score. Our method outperforms previous systems in various few-shot
settings on the CLINC and SNIPS benchmarks.
| 2021 |
Computation and Language
|
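The span-level retriever above is trained with a batch-softmax objective; the paper's exact formulation is not reproduced here, but an in-batch softmax over anchor/positive span embeddings is one plausible reading:

```python
import torch
import torch.nn.functional as F

def batch_softmax_loss(anchors: torch.Tensor, positives: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """In-batch softmax: each anchor span embedding should score its
    same-label positive higher than every other span in the batch."""
    logits = anchors @ positives.T / temperature   # (B, B) similarity matrix
    targets = torch.arange(anchors.size(0))        # positive sits on the diagonal
    return F.cross_entropy(logits, targets)

# Random stand-ins for contextualized span representations.
anchors = F.normalize(torch.randn(8, 128), dim=-1)
positives = F.normalize(torch.randn(8, 128), dim=-1)
print(batch_softmax_loss(anchors, positives))
```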
Paragraph-level Simplification of Medical Texts
|
We consider the problem of learning to simplify medical texts. This is
important because most reliable, up-to-date information in biomedicine is dense
with jargon and thus practically inaccessible to the lay audience. Furthermore,
manual simplification does not scale to the rapidly growing body of biomedical
literature, motivating the need for automated approaches. Unfortunately, there
are no large-scale resources available for this task. In this work we introduce
a new corpus of parallel texts in English comprising technical and lay
summaries of all published evidence pertaining to different clinical topics. We
then propose a new metric based on likelihood scores from a masked language
model pretrained on scientific texts. We show that this automated measure
better differentiates between technical and lay summaries than existing
heuristics. We introduce and evaluate baseline encoder-decoder Transformer
models for simplification and propose a novel augmentation to these in which we
explicitly penalize the decoder for producing "jargon" terms; we find that this
yields improvements over baselines in terms of readability.
| 2021 |
Computation and Language
|
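The metric above builds on likelihood scores from a masked language model; a rough sketch of such scoring via pseudo-log-likelihood (the checkpoint below is a generic stand-in, not the scientific-text model the paper uses, and the scoring recipe is an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Any masked LM works for the sketch; a scientific-text model would be
# needed to actually separate technical from lay summaries.
name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()

def pseudo_log_likelihood(text: str) -> float:
    """Mask each token in turn and average the model's log-probability of
    the original token -- higher means the text looks more in-domain."""
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):               # skip [CLS] / [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / (len(ids) - 2)                  # length-normalized

print(pseudo_log_likelihood("myocardial infarction"))
print(pseudo_log_likelihood("heart attack"))
```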
Plot-guided Adversarial Example Construction for Evaluating Open-domain
Story Generation
|
With the recent advances of open-domain story generation, the lack of
reliable automatic evaluation metrics has become an increasingly pressing
issue that hinders the fast development of story generation. Recent work in
this area suggests that learnable evaluation metrics promise more accurate
assessments through higher correlations with human judgments. A
critical bottleneck of obtaining a reliable learnable evaluation metric is the
lack of high-quality training data for classifiers to efficiently distinguish
plausible and implausible machine-generated stories. Previous works relied on
\textit{heuristically manipulated} plausible examples to mimic possible system
drawbacks such as repetition, contradiction, or irrelevant content at the text
level, which can be \textit{unnatural} and \textit{oversimplify} the
characteristics of implausible machine-generated stories. We propose to tackle
these issues by generating a more comprehensive set of implausible stories
using {\em plots}, which are structured representations of controllable factors
used to generate stories. Since these plots are compact and structured, it is
easier to manipulate them to generate text with targeted undesirable
properties, while at the same time maintaining the grammatical correctness and
naturalness of the generated sentences. To improve the quality of generated
implausible stories, we further apply the adversarial filtering procedure
presented by \citet{zellers2018swag} to select a more nuanced set of
implausible texts. Experiments show that the evaluation metrics trained on our
generated data result in more reliable automatic assessments that correlate
remarkably better with human judgments compared to the baselines.
| 2021 |
Computation and Language
|
Learning from Executions for Semantic Parsing
|
Semantic parsing aims at translating natural language (NL) utterances into
machine-interpretable programs, which can be executed against a real-world
environment. The expensive annotation of utterance-program pairs has long been
acknowledged as a major bottleneck for the deployment of contemporary neural
models to real-life applications. In this work, we focus on the task of
semi-supervised learning where a limited amount of annotated data is available
together with many unlabeled NL utterances. Based on the observation that
programs which correspond to NL utterances must always be executable, we
propose to encourage a parser to generate executable programs for unlabeled
utterances. Due to the large search space of executable programs, conventional
methods that use beam-search-based approximations, such as self-training and
top-k marginal likelihood training, do not perform as well. Instead, we view
the problem of learning from executions from the perspective of posterior
regularization and propose a set of new training objectives. Experimental
results on Overnight and GeoQuery show that our new objectives outperform
conventional methods, bridging the gap between semi-supervised and supervised
learning.
| 2021 |
Computation and Language
|
Evaluating Saliency Methods for Neural Language Models
|
Saliency methods are widely used to interpret neural network predictions, but
different variants of saliency methods often disagree even on the
interpretations of the same prediction made by the same model. In these cases,
how do we identify when these interpretations are trustworthy enough to be used
in analyses? To address this question, we conduct a comprehensive and
quantitative evaluation of saliency methods on a fundamental category of NLP
models: neural language models. We evaluate the quality of prediction
interpretations from two perspectives, each of which represents a desirable property
of these interpretations: plausibility and faithfulness. Our evaluation is
conducted on four different datasets constructed from existing human
annotations of syntactic and semantic agreement, at both the sentence and
document level. Through our evaluation, we identify various ways in which
saliency methods can yield interpretations of low quality. We recommend that future
work deploying such methods to neural language models should carefully validate
their interpretations before drawing insights.
| 2021 |
Computation and Language
|
Learning to Synthesize Data for Semantic Parsing
|
Synthesizing data for semantic parsing has gained increasing attention
recently. However, most methods require handcrafted (high-precision) rules in
their generative process, hindering the exploration of diverse unseen data. In
this work, we propose a generative model which features a (non-neural) PCFG
that models the composition of programs (e.g., SQL), and a BART-based
translation model that maps a program to an utterance. Due to the simplicity of
the PCFG and the pre-trained BART, our generative model can be efficiently
learned from the data at hand. Moreover, explicitly modeling composition with
the PCFG leads to better exploration of unseen programs, thus generating more diverse
data. We evaluate our method in both in-domain and out-of-domain settings of
text-to-SQL parsing on the standard benchmarks of GeoQuery and Spider,
respectively. Our empirical results show that the synthesized data generated
from our model can substantially help a semantic parser achieve better
compositional and domain generalization.
| 2021 |
Computation and Language
|
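The data synthesizer above couples a non-neural PCFG over programs with a BART-based translation model; a toy sketch of the PCFG half, with an invented SQL-like grammar (a real grammar would be estimated from existing data):

```python
import random

# Toy weighted productions for SQL-like programs. Terminals are plain
# strings; non-terminals are UPPERCASE keys into the grammar.
PCFG = {
    "QUERY": [(["SELECT", "COL", "FROM", "TABLE", "COND"], 1.0)],
    "COND":  [([], 0.5), (["WHERE", "COL", "=", "VAL"], 0.5)],
    "COL":   [(["name"], 0.6), (["population"], 0.4)],
    "TABLE": [(["city"], 1.0)],
    "VAL":   [(["'Boston'"], 1.0)],
}

def sample(symbol: str = "QUERY") -> list[str]:
    """Recursively expand a non-terminal by sampling a weighted production."""
    if symbol not in PCFG:
        return [symbol]                       # terminal: emit as-is
    rules, weights = zip(*PCFG[symbol])
    rhs = random.choices(rules, weights)[0]
    return [tok for sym in rhs for tok in sample(sym)]

for _ in range(3):
    print(" ".join(sample()))  # e.g. SELECT name FROM city WHERE name = 'Boston'
```

Each sampled program would then be fed to the translation model to produce a paired utterance.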
SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning
|
This paper proposes a question-answering (QA) benchmark for spatial reasoning
on natural language text which contains more realistic spatial phenomena not
covered by prior work and is challenging for state-of-the-art language models
(LMs). We propose a distant supervision method to improve performance on this task.
Specifically, we design grammar and reasoning rules to automatically generate a
spatial description of visual scenes and corresponding QA pairs. Experiments
show that further pretraining LMs on these automatically generated data
significantly improves LMs' capability on spatial understanding, which in turn
helps to better solve two external datasets, bAbI and boolQ. We hope that this
work can foster investigations into more sophisticated models for spatial
reasoning over text.
| 2021 |
Computation and Language
|
Relational World Knowledge Representation in Contextual Language Models:
A Review
|
Relational knowledge bases (KBs) are commonly used to represent world
knowledge in machines. However, while advantageous for their high degree of
precision and interpretability, KBs are usually organized according to
manually-defined schemas, which limit their expressiveness and require
significant human effort to engineer and maintain. In this review, we take a
natural language processing perspective to these limitations, examining how
they may be addressed in part by training deep contextual language models (LMs)
to internalize and express relational knowledge in more flexible forms. We
propose to organize knowledge representation strategies in LMs by the level of
KB supervision provided, from no KB supervision at all to entity- and
relation-level supervision. Our contributions are threefold: (1) We provide a
high-level, extensible taxonomy for knowledge representation in LMs; (2) Within
our taxonomy, we highlight notable models, evaluation tasks, and findings, in
order to provide an up-to-date review of current knowledge representation
capabilities in LMs; and (3) We suggest future research directions that build
upon the complementary aspects of LMs and KBs as knowledge representations.
| 2021 |
Computation and Language
|
Targeted Adversarial Training for Natural Language Understanding
|
We present a simple yet effective Targeted Adversarial Training (TAT)
algorithm to improve adversarial training for natural language understanding.
The key idea is to introspect current mistakes and prioritize adversarial
training steps to where the model errs the most. Experiments show that TAT can
significantly improve accuracy over standard adversarial training on GLUE and
attain new state-of-the-art zero-shot results on XNLI. Our code will be
released at: https://github.com/namisan/mt-dnn.
| 2021 |
Computation and Language
|
Family of Origin and Family of Choice: Massively Parallel Lexiconized
Iterative Pretraining for Severely Low Resource Machine Translation
|
We translate a closed text that is known in advance into a severely low
resource language by leveraging massive source parallelism. In other words,
given a text in 124 source languages, we translate it into a severely low
resource language using only ~1,000 lines of low resource data without any
external help. Firstly, we propose a systematic method to rank and choose
source languages that are close to the low resource language. We call the
linguistic definition of language family Family of Origin (FAMO), and we call
the empirical definition of higher-ranked languages using our metrics Family of
Choice (FAMC). Secondly, we build an Iteratively Pretrained Multilingual
Order-preserving Lexiconized Transformer (IPML) to train on ~1,000 lines
(~3.5%) of low resource data. To translate named entities correctly, we build a
massive lexicon table for 2,939 Bible named entities in 124 source languages,
including many that occur only once and covering more than 66 severely low
resource languages. Moreover, we develop a novel method for combining translations
from different source languages into one. Using English as a hypothetical low
resource language, we get a +23.9 BLEU increase over a multilingual baseline,
and a +10.3 BLEU increase over our asymmetric baseline in the Bible dataset. We
get a 42.8 BLEU score for Portuguese-English translation on the medical EMEA
dataset. We also have good results for a real severely low resource Mayan
language, Eastern Pokomchi.
| 2021 |
Computation and Language
|
From partners to populations: A hierarchical Bayesian account of
coordination and convention
|
Languages are powerful solutions to coordination problems: they provide
stable, shared expectations about how the words we say correspond to the
beliefs and intentions in our heads. Yet language use in a variable and
non-stationary social environment requires linguistic representations to be
flexible: old words acquire new ad hoc or partner-specific meanings on the fly.
In this paper, we introduce CHAI (Continual Hierarchical Adaptation through
Inference), a hierarchical Bayesian theory of coordination and convention
formation that aims to reconcile the long-standing tension between these two
basic observations. We argue that the central computational problem of
communication is not simply transmission, as in classical formulations, but
continual learning and adaptation over multiple timescales. Partner-specific
common ground quickly emerges from social inferences within dyadic
interactions, while community-wide social conventions are stable priors that
have been abstracted away from interactions with multiple partners. We present
new empirical data alongside simulations showing how our model provides a
computational foundation for several phenomena that have posed a challenge for
previous accounts: (1) the convergence to more efficient referring expressions
across repeated interaction with the same partner, (2) the gradual transfer of
partner-specific common ground to strangers, and (3) the influence of
communicative context on which conventions eventually form.
| 2021 |
Computation and Language
|
Discourse Probing of Pretrained Language Models
|
Existing work on probing of pretrained language models (LMs) has
predominantly focused on sentence-level syntactic tasks. In this paper, we
introduce document-level discourse probing to evaluate the ability of
pretrained LMs to capture document-level relations. We experiment with 7
pretrained LMs, 4 languages, and 7 discourse probing tasks, and find BART to be
overall the best model at capturing discourse -- but only in its encoder, with
BERT performing surprisingly well as the baseline model. Across the different
models, there are substantial differences in which layers best capture
discourse information, and large disparities between models.
| 2021 |
Computation and Language
|
Multi-Step Reasoning Over Unstructured Text with Beam Dense Retrieval
|
Complex question answering often requires finding a reasoning chain that
consists of multiple evidence pieces. Current approaches incorporate the
strengths of structured knowledge and unstructured text, assuming text corpora
are semi-structured. Building on dense retrieval methods, we propose a new
multi-step retrieval approach (BeamDR) that iteratively forms an evidence chain
through beam search in dense representations. When evaluated on multi-hop
question answering, BeamDR is competitive to state-of-the-art systems, without
using any semi-structured information. Through query composition in dense
space, BeamDR captures the implicit relationships between evidence in the
reasoning chain. The code is available at
https://github.com/henryzhao5852/BeamDR.
| 2021 |
Computation and Language
|
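BeamDR above forms evidence chains by beam search in a dense representation space; a numpy sketch of that loop under strong simplifications (random vectors stand in for trained encoders, and query composition is reduced to vector addition, which is not necessarily the paper's composition function):

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64))      # stand-in passage embeddings
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def beam_dense_retrieval(query: np.ndarray, hops: int = 2, beam: int = 3):
    """Keep the `beam` best evidence chains, expanding each by one passage
    per hop; the query is recomposed with the retrieved evidence."""
    chains = [(query, [], 0.0)]            # (current query, path, score)
    for _ in range(hops):
        candidates = []
        for q, path, score in chains:
            sims = corpus @ q
            for idx in np.argsort(-sims)[:beam]:
                new_q = q + corpus[idx]                    # naive composition
                new_q /= np.linalg.norm(new_q)
                candidates.append((new_q, path + [int(idx)],
                                   score + float(sims[idx])))
        candidates.sort(key=lambda c: -c[2])
        chains = candidates[:beam]
    return [(path, score) for _, path, score in chains]

q0 = rng.normal(size=64)
q0 /= np.linalg.norm(q0)
print(beam_dense_retrieval(q0))            # top evidence chains with scores
```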
DirectProbe: Studying Representations without Classifiers
|
Understanding how linguistic structures are encoded in contextualized
embeddings could help explain their impressive performance across NLP tasks. Existing
approaches for probing them usually call for training classifiers and use the
accuracy, mutual information, or complexity as a proxy for the representation's
goodness. In this work, we argue that doing so can be unreliable because
different representations may need different classifiers. We develop a
heuristic, DirectProbe, that directly studies the geometry of a representation
by building upon the notion of a version space for a task. Experiments with
several linguistic tasks and contextualized embeddings show that, even without
training classifiers, DirectProbe can shed light on how an embedding space
represents labels, and also anticipate classifier performance for the
representation.
| 2021 |
Computation and Language
|
Document-Level Event Argument Extraction by Conditional Generation
|
Event extraction has long been treated as a sentence-level task in the IE
community. We argue that this setting does not match human information-seeking
behavior and leads to incomplete and uninformative extraction results. We
propose a document-level neural event argument extraction model by formulating
the task as conditional generation following event templates. We also compile a
new document-level event extraction benchmark dataset WikiEvents which includes
complete event and coreference annotation. On the task of argument extraction,
we achieve an absolute gain of 7.6% F1 and 5.7% F1 over the next best model on
the RAMS and WikiEvents datasets respectively. On the more challenging task of
informative argument extraction, which requires implicit coreference reasoning,
we achieve a 9.3% F1 gain over the best baseline. To demonstrate the
portability of our model, we also create the first end-to-end zero-shot event
extraction framework and achieve 97% of the fully supervised model's trigger
extraction performance and 82% of the argument extraction performance given
only access to 10 out of the 33 types on ACE.
| 2021 |
Computation and Language
|
Semantic maps and metrics for science using deep transformer encoders
|
The growing deluge of scientific publications demands text analysis tools
that can help scientists and policy-makers navigate, forecast and beneficially
guide scientific research. Recent advances in natural language understanding
driven by deep transformer networks offer new possibilities for mapping
science. Because the same surface text can take on multiple and sometimes
contradictory specialized senses across distinct research communities,
sensitivity to context is critical for infometric applications. Transformer
embedding models such as BERT capture shades of association and connotation
that vary across the different linguistic contexts of any particular word or
span of text. Here we report a procedure for encoding scientific documents with
these tools, measuring their improvement over static word embeddings in a
nearest-neighbor retrieval task. We find that the discriminability of contextual
representations is strongly influenced by the choice of pooling strategy for
summarizing the high-dimensional network activations. Importantly, we note that
fundamentals such as domain-matched training data are more important than
state-of-the-art NLP tools. Yet state-of-the-art models did offer significant
gains. The best approach we investigated combined domain-matched pretraining,
sound pooling, and state-of-the-art deep transformer network encoders. Finally,
with the goal of leveraging contextual representations from deep encoders, we
present a range of measurements for understanding and forecasting research
communities in science.
| 2021 |
Computation and Language
|
QMSum: A New Benchmark for Query-based Multi-domain Meeting
Summarization
|
Meetings are a key component of human collaboration. As increasing numbers of
meetings are recorded and transcribed, meeting summaries have become essential
to remind those who may or may not have attended the meetings about the key
decisions made and the tasks to be completed. However, it is hard to create a
single short summary that covers all the content of a long meeting involving
multiple people and topics. In order to satisfy the needs of different types of
users, we define a new query-based multi-domain meeting summarization task,
where models have to select and summarize relevant spans of meetings in
response to a query, and we introduce QMSum, a new benchmark for this task.
QMSum consists of 1,808 query-summary pairs over 232 meetings in multiple
domains. In addition, we investigate a locate-then-summarize method and evaluate a
set of strong summarization baselines on the task. Experimental results and
manual analysis reveal that QMSum presents significant challenges in long
meeting summarization for future research. The dataset is available at
\url{https://github.com/Yale-LILY/QMSum}.
| 2021 |
Computation and Language
|
Restoring and Mining the Records of the Joseon Dynasty via Neural
Language Modeling and Machine Translation
|
Understanding voluminous historical records provides clues on the past in
various aspects, such as social and political issues and even natural science
facts. However, it is generally difficult to fully utilize the historical
records, since most of the documents are not written in a modern language and
part of the contents are damaged over time. As a result, restoring the damaged
or unrecognizable parts as well as translating the records into modern
languages are crucial tasks. In response, we present a multi-task learning
approach to restore and translate historical documents based on a
self-attention mechanism, specifically utilizing two Korean historical records,
which are among the most voluminous historical records in the world.
Experimental results show that our approach significantly improves translation
accuracy over baselines without multi-task learning. In addition, we
present an in-depth exploratory analysis on our translated results via topic
modeling, uncovering several significant historical events.
| 2021 |
Computation and Language
|
Experiments of ASR-based mispronunciation detection for children and
adult English learners
|
Pronunciation is one of the fundamentals of language learning, and it is
considered a primary factor of spoken language when it comes to understanding
and being understood by others. The persistent presence of high
error rates in speech recognition domains resulting from mispronunciations
motivates us to find alternative techniques for handling mispronunciations. In
this study, we develop a mispronunciation assessment system that checks the
pronunciation of non-native English speakers, identifies the commonly
mispronounced phonemes of Italian learners of English, and presents an
evaluation of the non-native pronunciation observed in phonetically annotated
speech corpora. In this work, to detect mispronunciations, we used a
phone-based ASR implemented using Kaldi. We used two labeled non-native English
corpora: (i) a corpus of Italian adults containing 5,867 utterances from 46
speakers, and (ii) a corpus of Italian children consisting of 5,268 utterances
from 78 children. Our results show that the selected error model can
discriminate correct sounds from incorrect sounds in both native and non-native
speech, and can therefore be used to detect pronunciation errors in non-native
speech. Phone error rates improve when the error language model is used, and
the ASR system achieves better accuracy after applying the error model to our
selected corpora.
| 2021 |
Computation and Language
|
Gender Bias in Machine Translation
|
Machine translation (MT) technology has facilitated our daily tasks by
providing accessible shortcuts for gathering, elaborating and communicating
information. However, it can suffer from biases that harm users and society at
large. As a relatively new field of inquiry, gender bias in MT still lacks
internal cohesion, which calls for a unified framework to ease future
research. To this end, we: i) critically review current conceptualizations of
bias in light of theoretical insights from related disciplines, ii) summarize
previous analyses aimed at assessing gender bias in MT, iii) discuss the
mitigating strategies proposed so far, and iv) point toward potential
directions for future work.
| 2021 |
Computation and Language
|
Lessons on Parameter Sharing across Layers in Transformers
|
We propose a parameter sharing method for Transformers (Vaswani et al.,
2017). The proposed approach relaxes a widely used technique that shares the
parameters of one layer across all layers, as in Universal Transformers
(Dehghani et al., 2019), to improve computational efficiency.
We propose three strategies: Sequence, Cycle, and Cycle (rev) to assign
parameters to each layer. Experimental results show that the proposed
strategies are efficient in terms of parameter size and computational time.
Moreover, we show that the proposed strategies are also effective in
configurations with large amounts of training data, such as the recent WMT
competition.
| 2023 |
Computation and Language
|
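A sketch of one plausible reading of the three layer-assignment strategies named above (Sequence, Cycle, and Cycle (rev)); the exact assignments are assumptions for illustration, not taken from the paper:

```python
def assign_layers(n_layers: int, n_params: int, strategy: str) -> list[int]:
    """Map each of n_layers positions to one of n_params shared parameter
    sets, under one plausible reading of the three strategies."""
    if strategy == "sequence":         # consecutive blocks:  0 0 1 1 2 2
        block = n_layers // n_params
        return [min(i // block, n_params - 1) for i in range(n_layers)]
    if strategy == "cycle":            # repeat in order:     0 1 2 0 1 2
        return [i % n_params for i in range(n_layers)]
    if strategy == "cycle_rev":        # last cycle reversed: 0 1 2 2 1 0
        base = [i % n_params for i in range(n_layers - n_params)]
        return base + list(reversed(range(n_params)))
    raise ValueError(strategy)

for s in ("sequence", "cycle", "cycle_rev"):
    print(s, assign_layers(6, 3, s))
```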
MultiModalQA: Complex Question Answering over Text, Tables and Images
|
When answering complex questions, people can seamlessly combine information
from visual, textual and tabular sources. While interest in models that reason
over multiple pieces of evidence has surged in recent years, there has been
relatively little work on question answering models that reason across multiple
modalities. In this paper, we present MultiModalQA(MMQA): a challenging
question answering dataset that requires joint reasoning over text, tables and
images. We create MMQA using a new framework for generating complex multi-modal
questions at scale, harvesting tables from Wikipedia, and attaching images and
text paragraphs using entities that appear in each table. We then define a
formal language that allows us to take questions that can be answered from a
single modality, and combine them to generate cross-modal questions. Last,
crowdsourcing workers take these automatically-generated questions and rephrase
them into more fluent language. We create 29,918 questions through this
procedure, and empirically demonstrate the necessity of a multi-modal multi-hop
approach to solve our task: our multi-hop model, ImplicitDecomp, achieves an
average F1 of 51.7 over cross-modal questions, substantially outperforming a
strong baseline that achieves 38.2 F1, but still lags significantly behind
human performance, which stands at 90.1 F1.
| 2021 |
Computation and Language
|
Structural analysis of an all-purpose question answering model
|
Attention is a key component of the now ubiquitous pre-trained language
models. By learning to focus on relevant pieces of information, these
Transformer-based architectures have proven capable of tackling several tasks
at once and sometimes even surpass their single-task counterparts. To better
understand this phenomenon, we conduct a structural analysis of a new
all-purpose question answering model that we introduce. Surprisingly, this
model retains single-task performance even in the absence of a strong transfer
effect between tasks. Through attention head importance scoring, we observe
that attention heads specialize in a particular task and that some heads are
more conducive to learning than others in both the multi-task and single-task
settings.
| 2021 |
Computation and Language
|
Transformer-based Methods for Recognizing Ultra Fine-grained Entities
(RUFES)
|
This paper summarizes the participation of the Laboratoire Informatique,
Image et Interaction (L3i laboratory) of the University of La Rochelle in the
Recognizing Ultra Fine-grained Entities (RUFES) track within the Text Analysis
Conference (TAC) series of evaluation workshops. Our participation relies on
two neural-based models, one based on a pre-trained and fine-tuned language
model with a stack of Transformer layers for fine-grained entity extraction and
one out-of-the-box model for within-document entity coreference. We observe
that our approach has great potential in increasing the performance of
fine-grained entity recognition. Future work will therefore focus on enhancing
the models through additional experiments and a deeper analysis of the
results.
| 2020 |
Computation and Language
|
UPB at SemEval-2021 Task 7: Adversarial Multi-Task Learning for
Detecting and Rating Humor and Offense
|
Detecting humor is a challenging task since words might share multiple
valences and, depending on the context, the same words can even be used in
offensive expressions. Neural network architectures based on Transformers obtain
state-of-the-art results on several Natural Language Processing tasks,
especially text classification. Adversarial learning, combined with other
techniques such as multi-task learning, helps neural models learn the intrinsic
properties of the data. In this work, we describe our adversarial multi-task
network, AMTL-Humor, used to detect and rate humor and offensive texts from
Task 7 at SemEval-2021. Each branch from the model is focused on solving a
related task, and consists of a BiLSTM layer followed by Capsule layers, on top
of BERTweet used for generating contextualized embeddings. Our best model
consists of an ensemble of all tested configurations, and achieves a 95.66%
F1-score and 94.70% accuracy for Task 1a, while obtaining RMSE scores of 0.6200
and 0.5318 for Tasks 1b and 2, respectively.
| 2021 |
Computation and Language
|
Equivalence of Segmental and Neural Transducer Modeling: A Proof of
Concept
|
With the advent of direct models in automatic speech recognition (ASR), the
formerly prevalent frame-wise acoustic modeling based on hidden Markov models
(HMM) diversified into a number of modeling architectures like encoder-decoder
attention models, transducer models and segmental models (direct HMM). While
transducer models stay with a frame-level model definition, segmental models
are defined on the level of label segments directly. While
(soft-)attention-based models avoid explicit alignment, the transducer and
segmental approaches do model alignment internally, either via segment
hypotheses or, more implicitly, by emitting so-called blank symbols. In this work, we
prove that the widely used class of RNN-Transducer models and segmental models
(direct HMM) are equivalent and therefore show equal modeling power. It is
shown that blank probabilities translate into segment length probabilities and
vice versa. In addition, we provide initial experiments investigating decoding
and beam-pruning, comparing time-synchronous and label-/segment-synchronous
search strategies and their properties using the same underlying model.
| 2023 |
Computation and Language
|
What's in your Head? Emergent Behaviour in Multi-Task Transformer Models
|
The primary paradigm for multi-task training in natural language processing
is to represent the input with a shared pre-trained language model, and add a
small, thin network (head) per task. Given an input, a target head is the head
that is selected for outputting the final prediction. In this work, we examine
the behaviour of non-target heads, that is, the output of heads when given
input that belongs to a different task than the one they were trained for. We
find that non-target heads exhibit emergent behaviour, which may either explain
the target task, or generalize beyond their original task. For example, in a
numerical reasoning task, a span extraction head extracts from the input the
arguments to a computation that results in a number generated by a target
generative head. In addition, a summarization head that is trained with a
target question answering head, outputs query-based summaries when given a
question and a context from which the answer is to be extracted. This emergent
behaviour suggests that multi-task training leads to non-trivial extrapolation
of skills, which can be harnessed for interpretability and generalization.
| 2021 |
Computation and Language
|
Understanding Transformers for Bot Detection in Twitter
|
In this paper we shed light on the impact of fine-tuning over social media
data in the internal representations of neural language models. We focus on bot
detection in Twitter, a key task to mitigate and counteract the automatic
spreading of disinformation and bias in social media. We investigate the use of
pre-trained language models to tackle the detection of tweets generated by a
bot or a human account based exclusively on their content. Unlike the general
trend in benchmarks like GLUE, where BERT generally outperforms generative
transformers like GPT and GPT-2 for most classification tasks on regular text,
we observe that fine-tuning generative transformers on a bot detection task
produces higher accuracies. We analyze the architectural components of each
transformer and study the effect of fine-tuning on their hidden states and
output representations. Among our findings, we show that part of the
syntactic information and distributional properties captured by BERT during
pre-training is lost upon fine-tuning, while the generative pre-training
approach manages to preserve these properties.
| 2021 |
Computation and Language
|
On the Impact of Knowledge-based Linguistic Annotations in the Quality
of Scientific Embeddings
|
In essence, embedding algorithms work by optimizing the distance between a
word and its usual context in order to generate an embedding space that encodes
the distributional representation of words. In addition to single words or word
pieces, other features which result from the linguistic analysis of text,
including lexical, grammatical and semantic information, can be used to improve
the quality of embedding spaces. However, until now we have not had a precise
understanding of the impact that such individual annotations and their possible
combinations may have on the quality of the embeddings. In this paper, we
conduct a comprehensive study on the use of explicit linguistic annotations to
generate embeddings from a scientific corpus and quantify their impact on the
resulting representations. Our results show that the effect of such annotations
on the embeddings varies depending on the evaluation task. In general, we
observe that learning embeddings with linguistic annotations helps achieve
better evaluation results.
| 2021 |
Computation and Language
|
GLaRA: Graph-based Labeling Rule Augmentation for Weakly Supervised
Named Entity Recognition
|
Instead of using expensive manual annotations, researchers have proposed to
train named entity recognition (NER) systems using heuristic labeling rules.
However, devising labeling rules is challenging because it often requires a
considerable amount of manual effort and domain expertise. To alleviate this
problem, we propose \textsc{GLaRA}, a graph-based labeling rule augmentation
framework, to learn new labeling rules from unlabeled data. We first create a
graph with nodes representing candidate rules extracted from unlabeled data.
Then, we design a new graph neural network to augment labeling rules by
exploring the semantic relations between rules. We finally apply the augmented
rules on unlabeled data to generate weak labels and train a NER model using the
weakly labeled data. We evaluate our method on three NER datasets and find that
we can achieve an average improvement of +20\% F1 score over the best baseline
when given a small set of seed rules.
| 2021 |
Computation and Language
|
Reducing Discontinuous to Continuous Parsing with Pointer Network
Reordering
|
Discontinuous constituent parsers have always lagged behind continuous
approaches in terms of accuracy and speed, as the presence of constituents with
discontinuous yield introduces extra complexity to the task. However, a
discontinuous tree can be converted into a continuous variant by reordering
tokens. Based on that, we propose to reduce discontinuous parsing to a
continuous problem, which can then be directly solved by any off-the-shelf
continuous parser. To that end, we develop a Pointer Network capable of
accurately generating the continuous token arrangement for a given input
sentence and define a bijective function to recover the original order.
Experiments on the main benchmarks with two continuous parsers prove that our
approach is on par in accuracy with purely discontinuous state-of-the-art
algorithms, but considerably faster.
| 2021 |
Computation and Language
|
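The reduction above reorders tokens so any continuous parser can handle discontinuous constituents, then inverts the reordering with a bijective function; the bookkeeping itself is tiny (the permutation below is hand-written for illustration, where the paper predicts it with a Pointer Network):

```python
def reorder(tokens: list[str], perm: list[int]) -> list[str]:
    """Apply a predicted permutation so constituents become continuous."""
    return [tokens[i] for i in perm]

def restore(reordered: list[str], perm: list[int]) -> list[str]:
    """Invert the permutation, mapping spans found by the continuous
    parser back onto the original token order."""
    original = [None] * len(perm)
    for new_pos, old_pos in enumerate(perm):
        original[old_pos] = reordered[new_pos]
    return original

tokens = ["A", "hearing", "is", "scheduled", "on", "the", "issue", "today"]
perm = [0, 1, 4, 5, 6, 2, 3, 7]     # move the extraposed PP next to its head
continuous = reorder(tokens, perm)
assert restore(continuous, perm) == tokens
print(continuous)  # ['A', 'hearing', 'on', 'the', 'issue', 'is', 'scheduled', 'today']
```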
Understanding Hard Negatives in Noise Contrastive Estimation
|
The choice of negative examples is important in noise contrastive estimation.
Recent works find that hard negatives -- highest-scoring incorrect examples
under the model -- are effective in practice, but they are used without a
formal justification. We develop analytical tools to understand the role of
hard negatives. Specifically, we view the contrastive loss as a biased
estimator of the gradient of the cross-entropy loss, and show both
theoretically and empirically that setting the negative distribution to be the
model distribution results in bias reduction. We also derive a general form of
the score function that unifies various architectures used in text retrieval.
By combining hard negatives with appropriate score functions, we obtain strong
results on the challenging task of zero-shot entity linking.
| 2021 |
Computation and Language
|
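The analysis above concerns hard negatives, the highest-scoring incorrect examples under the model; a sketch of one common way to build a contrastive loss over them (the candidate setup is illustrative, not the paper's exact training recipe):

```python
import torch
import torch.nn.functional as F

def nce_loss_with_hard_negatives(scores: torch.Tensor, gold: torch.Tensor,
                                 k: int = 4) -> torch.Tensor:
    """scores: (B, C) model scores over all candidates; gold: (B,) indices.
    Build a softmax over {gold} U {k hardest incorrect candidates}."""
    masked = scores.clone()
    masked.scatter_(1, gold.unsqueeze(1), float("-inf"))   # hide the gold
    hard = masked.topk(k, dim=1).indices                   # hardest negatives
    cand = torch.cat([gold.unsqueeze(1), hard], dim=1)     # (B, 1+k)
    logits = scores.gather(1, cand)                        # gold at column 0
    return F.cross_entropy(logits, torch.zeros(len(gold), dtype=torch.long))

scores = torch.randn(2, 100)          # e.g. scores over 100 candidate entities
gold = torch.tensor([3, 42])
print(nce_loss_with_hard_negatives(scores, gold))
```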
Multilingual Transfer Learning for Code-Switched Language and Speech
Neural Modeling
|
In this thesis, we address the data scarcity and limitations of linguistic
theory by proposing language-agnostic multi-task training methods. First, we
introduce a meta-learning-based approach, meta-transfer learning, in which
information is judiciously extracted from high-resource monolingual speech data
to the code-switching domain. The meta-transfer learning quickly adapts the
model to the code-switching task from a number of monolingual tasks by learning
to learn in a multi-task learning fashion. Second, we propose a novel
multilingual meta-embeddings approach to effectively represent code-switching
data by acquiring useful knowledge learned in other languages, learning the
commonalities of closely related languages and leveraging lexical composition.
The method is far more efficient compared to contextualized pre-trained
multilingual models. Third, we introduce multi-task learning to integrate
syntactic information as a transfer learning strategy to a language model and
learn where to code-switch. To further alleviate the aforementioned issues, we
propose a data augmentation method using Pointer-Gen, a neural network using a
copy mechanism to teach the model the code-switch points from monolingual
parallel sentences. We disentangle the need for linguistic theory, and the
model captures code-switching points by attending to input words and aligning
the parallel words, without requiring any word alignments or constituency
parsers. More importantly, the model can be effectively used for languages that
are syntactically different, and it outperforms the linguistic theory-based
models.
| 2021 |
Computation and Language
|
Modeling the dynamics of language change: logistic regression,
Piotrowski's law, and a handful of examples in Polish
|
The study discusses modeling diachronic processes by logistic regression. The
phenomenon of nonlinear changes in language was first observed by Raimund
Piotrowski (hence labelled as Piotrowski's law), even if actual linguistic
evidence usually speaks against using the notion of a "law" in this context. In
our study, we apply logistic regression models to 9 changes which occurred
between the 15th and 18th centuries in the Polish language. The attested course
of the majority of these changes closely follows the expected values, suggesting
that language change might indeed resemble a nonlinear phase change scenario. We
also extend Piotrowski's original approach by proposing polynomial logistic
regression for those cases that can hardly be described by the standard
version. Further, we propose to consider individual language change cases
jointly, in order to inspect their possible collinearity or, more likely, their
different dynamics as a function of time. Last but not least, we
evaluate our results by testing the influence of the subcorpus size on the
model's goodness-of-fit.
| 2023 |
Computation and Language
|
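A minimal sketch of fitting a Piotrowski-style logistic curve to a diachronic frequency series with scipy; the yearly proportions below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0):
    """Share of the incoming variant at time t: 1 / (1 + e^{-k (t - t0)})."""
    return 1.0 / (1.0 + np.exp(-k * (t - t0)))

# Invented yearly shares of a new form gradually replacing an old one.
years = np.array([1500, 1550, 1600, 1650, 1700, 1750, 1800], dtype=float)
share = np.array([0.02, 0.08, 0.22, 0.55, 0.80, 0.93, 0.98])

(k, t0), _ = curve_fit(logistic, years, share, p0=[0.01, 1650])
print(f"growth rate k={k:.4f}, midpoint year t0={t0:.1f}")
print("fitted:", np.round(logistic(years, k, t0), 2))
```

A polynomial variant, as proposed above, would replace the linear exponent k(t - t0) with a higher-degree polynomial in t before fitting.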
Finding Concept-specific Biases in Form--Meaning Associations
|
This work presents an information-theoretic operationalisation of
cross-linguistic non-arbitrariness. It is not a new idea that there are small,
cross-linguistic associations between the forms and meanings of words. For
instance, it has been claimed (Blasi et al., 2016) that the word for "tongue"
is more likely than chance to contain the phone [l]. By controlling for the
influence of language family and geographic proximity within a very large
concept-aligned, cross-lingual lexicon, we extend methods previously used to
detect within-language non-arbitrariness (Pimentel et al., 2019) to measure
cross-linguistic associations. We find that there is a significant effect of
non-arbitrariness, but it is unsurprisingly small (less than 0.5% on average
according to our information-theoretic estimate). We also provide a
concept-level analysis which shows that a quarter of the concepts considered in
our work exhibit a significant level of cross-linguistic non-arbitrariness. In
sum, the paper provides new methods to detect cross-linguistic associations at
scale, and confirms their effects are minor.
| 2021 |
Computation and Language
|
On the Use of Linguistic Features for the Evaluation of Generative
Dialogue Systems
|
Automatically evaluating text-based, non-task-oriented dialogue systems
(i.e., `chatbots') remains an open problem. Previous approaches have suffered
challenges ranging from poor correlation with human judgment to poor
generalization and have often required a gold standard reference for comparison
or human-annotated data. Extending existing evaluation methods, we propose that
a metric based on linguistic features may be able to maintain good correlation
with human judgment and be interpretable, without requiring a gold-standard
reference or human-annotated data. To support this proposition, we measure and
analyze various linguistic features on dialogues produced by multiple dialogue
models. We find that the features' behaviour is consistent with the known
properties of the models tested, and is similar across domains. We also
demonstrate that this approach exhibits promising properties such as zero-shot
generalization to new domains on the related task of evaluating response
relevance.
| 2021 |
Computation and Language
|
On the Impact of Random Seeds on the Fairness of Clinical Classifiers
|
Recent work has shown that fine-tuning large networks is surprisingly
sensitive to changes in random seed(s). We explore the implications of this
phenomenon for model fairness across demographic groups in clinical prediction
tasks over electronic health records (EHR) in MIMIC-III -- the standard dataset
in clinical NLP research. Apparent subgroup performance varies substantially
for seeds that yield similar overall performance, although there is no evidence
of a trade-off between overall and subgroup performance. However, we also find
that the small sample sizes inherent to looking at intersections of minority
groups and somewhat rare conditions limit our ability to accurately estimate
disparities. Further, we find that jointly optimizing for high overall
performance and low disparities does not yield statistically significant
improvements. Our results suggest that fairness work using MIMIC-III should
carefully account for variations in apparent differences that may arise from
stochasticity and small sample sizes.
| 2021 |
Computation and Language
|
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question
Answering
|
The problem of answering questions using knowledge from pre-trained language
models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA
context (question and answer choice), methods need to (i) identify relevant
knowledge from large KGs, and (ii) perform joint reasoning over the QA context
and KG. In this work, we propose a new model, QA-GNN, which addresses the above
challenges through two key innovations: (i) relevance scoring, where we use LMs
to estimate the importance of KG nodes relative to the given QA context, and
(ii) joint reasoning, where we connect the QA context and KG to form a joint
graph, and mutually update their representations through graph neural networks.
We evaluate our model on QA benchmarks in the commonsense (CommonsenseQA,
OpenBookQA) and biomedical (MedQA-USMLE) domains. QA-GNN outperforms existing
LM and LM+KG models, and exhibits capabilities to perform interpretable and
structured reasoning, e.g., correctly handling negation in questions.
| 2022 |
Computation and Language
|
ExplainaBoard: An Explainable Leaderboard for NLP
|
With the rapid development of NLP research, leaderboards have emerged as one
tool to track the performance of various systems on various NLP tasks. They are
effective in this goal to some extent, but generally present a rather
simplistic one-dimensional view of the submitted systems, communicated only
through holistic accuracy numbers. In this paper, we present a new
conceptualization and implementation of NLP evaluation: the ExplainaBoard,
which in addition to inheriting the functionality of the standard leaderboard,
also allows researchers to (i) diagnose strengths and weaknesses of a single
system (e.g.~what is the best-performing system bad at?), (ii) interpret
relationships between multiple systems (e.g.~where does system A outperform
system B? What if we combine systems A, B, and C?), and (iii) examine prediction
results closely (e.g.~what are common errors made by multiple systems, or in
what contexts do particular errors occur?). So far, ExplainaBoard covers more
than 400 systems, 50 datasets, 40 languages, and 12 tasks. ExplainaBoard is
kept up to date and was recently upgraded to support (1) multilingual
multi-task benchmarks, (2) meta-evaluation, and (3) a more complicated task:
machine translation. We not only released an online
platform on the website \url{http://explainaboard.nlpedia.ai/} but also released
our evaluation tool as an API, under the MIT License, on GitHub
(\url{https://github.com/neulab/explainaBoard}) and PyPI
(\url{https://pypi.org/project/interpret-eval/}), which allows users to
conveniently assess their models offline. We additionally release all output
files from systems that we have run or collected to motivate "output-driven"
research in the future.
| 2021 |
Computation and Language
|
Detoxifying Language Models Risks Marginalizing Minority Voices
|
Language models (LMs) must be both safe and equitable to be responsibly
deployed in practice. With safety in mind, numerous detoxification techniques
(e.g., Dathathri et al. 2020; Krause et al. 2020) have been proposed to
mitigate toxic LM generations. In this work, we show that current
detoxification techniques hurt equity: they decrease the utility of LMs on
language used by marginalized groups (e.g., African-American English and
minority identity mentions). In particular, we perform automatic and human
evaluations of text generation quality when LMs are conditioned on inputs with
different dialects and group identifiers. We find that detoxification makes LMs
more brittle to distribution shift, especially on language used by marginalized
groups. We identify that these failures stem from detoxification methods
exploiting spurious correlations in toxicity datasets. Overall, our results
highlight the tension between the controllability and distributional robustness
of LMs.
| 2021 |
Computation and Language
|
Bridging the Gap Between Clean Data Training and Real-World Inference
for Spoken Language Understanding
|
A spoken language understanding (SLU) system usually consists of various
pipeline components, where each component heavily relies on the results of its
upstream ones. For example, intent detection (ID) and slot filling (SF)
require their upstream automatic speech recognition (ASR) component to
transform the voice into text. In this case, upstream perturbations, e.g. ASR
errors, environmental noise, and careless user speech, will propagate to the ID and SF
models, thus deteriorating the system performance. Therefore, the
well-performing SF and ID models are expected to be noise resistant to some
extent. However, existing models are trained on clean data, which causes a
\textit{gap between clean data training and real-world inference.} To bridge
the gap, we propose a method from the perspective of domain adaptation, by
which both high- and low-quality samples are embedded into a similar vector
space. Meanwhile, we design a denoising generation model to reduce the impact
of the low-quality samples. Experiments on the widely-used Snips dataset and a
large-scale in-house dataset (10 million training examples) demonstrate that
this method not only outperforms the baseline models on a real-world (noisy)
corpus but also enhances robustness, that is, it produces high-quality results
in a noisy environment. The source code will be released.
| 2021 |
Computation and Language
|
Mediators in Determining what Processing BERT Performs First
|
Probing neural models for the ability to perform downstream tasks using their
activation patterns is often used to localize what parts of the network
specialize in performing what tasks. However, little work has addressed potential
mediating factors in such comparisons. As a test-case mediating factor, we
consider the prediction's context length, namely the length of the span whose
processing is minimally required to perform the prediction. We show that not
controlling for context length may lead to contradictory conclusions as to the
localization patterns of the network, depending on the distribution of the
probing dataset. Indeed, when probing BERT with seven tasks, we find that it is
possible to get 196 different rankings between them when manipulating the
distribution of context lengths in the probing dataset. We conclude by
presenting best practices for conducting such comparisons in the future.
| 2022 |
Computation and Language
|
Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT
Fine-tuning
|
This paper presents our contribution to SemEval-2021 Task 2: Multilingual and
Cross-lingual Word-in-Context Disambiguation (MCL-WiC). Our experiments cover
English (EN-EN) sub-track from the multilingual setting of the task. We
experiment with several pre-trained language models and investigate the impact
of different top layers on fine-tuning. We find that the combination of cosine
similarity and ReLU activation leads to the most effective fine-tuning
procedure. Our best model achieves an accuracy of 92.7%, the fourth-best score
in the EN-EN sub-track.
| 2021 |
Computation and Language
|
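The winning configuration above applies ReLU over cosine similarity on top of a pre-trained encoder; one way such a head could look, assuming the two target-word vectors have already been pooled from BERT (the head layout is a guess at the idea, not the team's exact architecture):

```python
import torch
import torch.nn as nn

class CosineReluHead(nn.Module):
    """Score whether a word keeps the same sense in two contexts:
    cosine similarity of the two target-token vectors, ReLU, then linear."""
    def __init__(self):
        super().__init__()
        self.out = nn.Linear(1, 1)

    def forward(self, v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
        sim = nn.functional.cosine_similarity(v1, v2, dim=-1)
        return self.out(torch.relu(sim).unsqueeze(-1)).squeeze(-1)  # logit

head = CosineReluHead()
v1, v2 = torch.randn(4, 768), torch.randn(4, 768)  # e.g. BERT target vectors
print(head(v1, v2))   # one same-sense logit per sentence pair
```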
Modeling Framing in Immigration Discourse on Social Media
|
The framing of political issues can influence policy and public opinion. Even
though the public plays a key role in creating and spreading frames, little is
known about how ordinary people on social media frame political issues. By
creating a new dataset of immigration-related tweets labeled for multiple
framing typologies from political communication theory, we develop supervised
models to detect frames. We demonstrate how users' ideology and region impact
framing choices, and how a message's framing influences audience responses. We
find that the more commonly-used issue-generic frames obscure important
ideological and regional patterns that are only revealed by
immigration-specific frames. Furthermore, frames oriented towards human
interests, culture, and politics are associated with higher user engagement.
This large-scale analysis of a complex social and linguistic phenomenon
contributes to both NLP and social science research.
| 2021 |
Computation and Language
|
Source and Target Bidirectional Knowledge Distillation for End-to-end
Speech Translation
|
A conventional approach to improving the performance of end-to-end speech
translation (E2E-ST) models is to leverage the source transcription via
pre-training and joint training with automatic speech recognition (ASR) and
neural machine translation (NMT) tasks. However, since the input modalities are
different, it is difficult to leverage source language text successfully. In
this work, we focus on sequence-level knowledge distillation (SeqKD) from
external text-based NMT models. To leverage the full potential of the source
language information, we propose backward SeqKD, SeqKD from a target-to-source
backward NMT model. To this end, we train a bilingual E2E-ST model to predict
paraphrased transcriptions as an auxiliary task with a single decoder. The
paraphrases are generated from the translations in bitext via back-translation.
We further propose bidirectional SeqKD in which SeqKD from both forward and
backward NMT models is combined. Experimental evaluations on both
autoregressive and non-autoregressive models show that SeqKD in each direction
consistently improves the translation performance, and the effectiveness is
complementary regardless of the model capacity.
| 2021 |
Computation and Language
|
On the Interpretability and Significance of Bias Metrics in Texts: a
PMI-based Approach
|
In recent years, word embeddings have been widely used to measure biases in
texts. Even if they have proven to be effective in detecting a wide variety of
biases, metrics based on word embeddings lack transparency and
interpretability. We analyze an alternative PMI-based metric to quantify biases
in texts. It can be expressed as a function of conditional probabilities, which
provides a simple interpretation in terms of word co-occurrences. We also prove
that it can be approximated by an odds ratio, which allows estimating
confidence intervals and statistical significance of textual biases. This
approach produces similar results to metrics based on word embeddings when
capturing gender gaps of the real world embedded in large corpora.
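To make the co-occurrence interpretation concrete, here is a minimal sketch of
a PMI-style bias score together with a textbook Wald confidence interval for
the log odds ratio; the paper's exact estimator and interval construction may
differ:

```python
import math

def pmi_bias(count_w_A, count_w_B, count_A, count_B):
    """Hypothetical PMI-style bias of word w towards context A vs. B:
    log P(w|A) - log P(w|B), estimated from co-occurrence counts."""
    return math.log((count_w_A / count_A) / (count_w_B / count_B))

def log_odds_ratio_ci(a, b, c, d, z=1.96):
    """Approximate 95% CI for the log odds ratio of the 2x2 table
    [[a, b], [c, d]] = [[w with A, w with B], [not-w with A, not-w with B]].
    This is the standard Wald interval, shown for illustration."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or - z * se, log_or + z * se
```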
| 2,023 |
Computation and Language
|
Can a Transformer Pass the Wug Test? Tuning Copying Bias in Neural
Morphological Inflection Models
|
Deep learning sequence models have been successfully applied to the task of
morphological inflection. The results of the SIGMORPHON shared tasks in the
past several years indicate that such models can perform well, but only if the
training data cover a good amount of different lemmata, or if the lemmata that
are inflected at test time have also been seen in training, as has indeed been
largely the case in these tasks. Surprisingly, standard models such as the
Transformer almost completely fail at generalizing inflection patterns when
asked to inflect previously unseen lemmata -- i.e. under "wug test"-like
circumstances. While established data augmentation techniques can be employed
to alleviate this shortcoming by introducing a copying bias through
hallucinating synthetic new word forms using the alphabet in the language at
hand, we show that, to be more effective, the hallucination process needs to
pay attention to substrings of syllable-like length rather than individual
characters or stems. We report a significant performance improvement with our
substring-based hallucination model over previous data hallucination methods
when training and test data do not overlap in their lemmata.
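A toy sketch of substring-level hallucination under these assumptions; the
paper's alignment and span-selection procedure is more involved, so names and
details here are illustrative only:

```python
import random

def hallucinate(lemma: str, inflected: str, alphabet: str,
                span_len: int = 3):
    """Replace a shared, syllable-like substring of the lemma and its
    inflected form with a random string over the language's alphabet,
    yielding a synthetic (lemma, form) training pair."""
    for i in range(len(lemma) - span_len + 1):
        chunk = lemma[i:i + span_len]
        if chunk in inflected:
            new = "".join(random.choice(alphabet) for _ in range(span_len))
            return lemma.replace(chunk, new, 1), inflected.replace(chunk, new, 1)
    return lemma, inflected  # no shared span found; pair left unchanged

# e.g. hallucinate("walk", "walked", "abcdefghijklmnopqrstuvwxyz")
# might return ("qzvk", "qzvked") -- a nonce lemma with a real pattern.
```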
| 2,021 |
Computation and Language
|
MS2: Multi-Document Summarization of Medical Studies
|
To assess the effectiveness of any medical intervention, researchers must
conduct a time-intensive and highly manual literature review. NLP systems can
help to automate or assist in parts of this expensive process. In support of
this goal, we release MS^2 (Multi-Document Summarization of Medical Studies), a
dataset of over 470k documents and 20k summaries derived from the scientific
literature. This dataset facilitates the development of systems that can assess
and aggregate contradictory evidence across multiple studies, and is the first
large-scale, publicly available multi-document summarization dataset in the
biomedical domain. We experiment with a summarization system based on BART,
with promising early results. We formulate our summarization inputs and targets
in both free text and structured forms and modify a recently proposed metric to
assess the quality of our system's generated summaries. Data and models are
available at https://github.com/allenai/ms2
| 2,021 |
Computation and Language
|
"I'm Not Mad": Commonsense Implications of Negation and Contradiction
|
Natural language inference requires reasoning about contradictions,
negations, and their commonsense implications. Given a simple premise (e.g.,
"I'm mad at you"), humans can reason about the varying shades of contradictory
statements ranging from straightforward negations ("I'm not mad at you") to
commonsense contradictions ("I'm happy"). Moreover, these negated or
contradictory statements shift the commonsense implications of the original
premise in nontrivial ways. For example, while "I'm mad" implies "I'm unhappy
about something," negating the premise (i.e., "I'm not mad") does not
necessarily negate the corresponding commonsense implications.
In this paper, we present the first comprehensive study focusing on
commonsense implications of negated statements and contradictions. We introduce
ANION, a new commonsense knowledge graph with 624K if-then rules focusing on
negated and contradictory events. We then present joint generative and
discriminative inference models for this new resource, providing novel
empirical insights on how logical negations and commonsense contradictions
reshape the commonsense implications of their original premises.
| 2,021 |
Computation and Language
|
From Solving a Problem Boldly to Cutting the Gordian Knot: Idiomatic
Text Generation
|
We study a new application for text generation -- idiomatic sentence
generation -- which aims to transfer literal phrases in sentences into their
idiomatic counterparts. Inspired by psycholinguistic theories of idiom use in
one's native language, we propose a novel approach for this task, which
retrieves the appropriate idiom for a given literal sentence, extracts the span
of the sentence to be replaced by the idiom, and generates the idiomatic
sentence by using a neural model to combine the retrieved idiom and the
remainder of the sentence. Experiments on a novel dataset created for this task
show that our model is able to effectively transfer literal sentences into
idiomatic ones. Furthermore, automatic and human evaluations show that for this
task, the proposed model outperforms a series of competitive baseline models
for text generation.
| 2,021 |
Computation and Language
|
Large-Scale Contextualised Language Modelling for Norwegian
|
We present the ongoing NorLM initiative to support the creation and use of
very large contextualised language models for Norwegian (and in principle other
Nordic languages), including a ready-to-use software environment, as well as an
experience report for data preparation and training. This paper introduces the
first large-scale monolingual language models for Norwegian, based on both the
ELMo and BERT frameworks. In addition to detailing the training process, we
present contrastive benchmark results on a suite of NLP tasks for Norwegian.
For additional background and access to the data, models, and software, please
see http://norlm.nlpl.eu
| 2,021 |
Computation and Language
|
Developing a Conversational Recommendation System for Navigating Limited
Options
|
We have developed a conversational recommendation system designed to help
users navigate through a set of limited options to find the best choice. Unlike
many internet scale systems that use a singular set of search terms and return
a ranked list of options from amongst thousands, our system uses multi-turn
user dialog to deeply understand the user's preferences. The system responds in
context to the user's specific and immediate feedback to make sequential
recommendations. We envision our system would be highly useful in situations
with intrinsic constraints, such as finding the right restaurant within walking
distance or the right retail item within a limited inventory. Our research
prototype instantiates the former use case, leveraging real data from Google
Places, Yelp, and Zomato. We evaluated our system against a similar system that
did not incorporate user feedback in a 16-person remote study, generating 64
scenario-based search journeys. When our recommendation system was successfully
triggered, we saw both an increase in efficiency and a higher confidence rating
with respect to final user choice. We also found that users preferred our
system (75%) compared with the baseline.
| 2,021 |
Computation and Language
|
Should Semantic Vector Composition be Explicit? Can it be Linear?
|
Vector representations have become a central element in semantic language
modelling, leading to mathematical overlaps with many fields including quantum
theory. Compositionality is a core goal for such representations: given
representations for 'wet' and 'fish', how should the concept 'wet fish' be
represented?
This position paper surveys this question from two points of view. The first
considers the question of whether an explicit mathematical representation can
be successful using only tools from within linear algebra, or whether other
mathematical tools are needed. The second considers whether semantic vector
composition should be explicitly described mathematically, or whether it can be
a model-internal side-effect of training a neural network.
A third and newer question is whether a compositional model can be
implemented on a quantum computer. Given the fundamentally linear nature of
quantum mechanics, we propose that these questions are related, and that this
survey may help to highlight candidate operations for future quantum
implementation.
| 2,021 |
Computation and Language
|
Zero-Resource Multi-Dialectal Arabic Natural Language Understanding
|
A reasonable amount of annotated data is required for fine-tuning pre-trained
language models (PLM) on downstream tasks. However, obtaining labeled examples
for different language varieties can be costly. In this paper, we investigate
the zero-shot performance on Dialectal Arabic (DA) when fine-tuning a PLM on
modern standard Arabic (MSA) data only -- identifying a significant performance
drop when evaluating such models on DA. To remedy such performance drop, we
propose self-training with unlabeled DA data and apply it in the context of
named entity recognition (NER), part-of-speech (POS) tagging, and sarcasm
detection (SRD) on several DA varieties. Our results demonstrate the
effectiveness of self-training with unlabeled DA data: improving zero-shot
MSA-to-DA transfer by as much as $\sim$10\% F$_1$ (NER), 2\% accuracy (POS
tagging), and 4.5\% F$_1$ (SRD). We conduct an ablation experiment and show
that the performance boost observed directly results from the unlabeled DA
examples used for self-training. Our work opens up opportunities for leveraging
the relatively abundant labeled MSA datasets to develop DA models for zero and
low-resource dialects. We also report new state-of-the-art performance on all
three tasks and open-source our fine-tuned models for the research community.
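A minimal sketch of one self-training round as described here, assuming a
hypothetical `model.predict` API that returns a label and a confidence score
for a sentence (the actual training loop, thresholds, and tasks differ):

```python
def self_train_round(model, unlabeled_da_texts, fine_tune_fn,
                     confidence: float = 0.9):
    """One round of self-training: a PLM fine-tuned on labeled MSA data
    pseudo-labels unlabeled dialectal Arabic text; confident predictions
    are added to the training set and the model is fine-tuned again."""
    pseudo_labeled = []
    for text in unlabeled_da_texts:
        label, score = model.predict(text)   # assumed API
        if score >= confidence:              # keep confident labels only
            pseudo_labeled.append((text, label))
    return fine_tune_fn(model, pseudo_labeled)
```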
| 2,022 |
Computation and Language
|
AR-LSAT: Investigating Analytical Reasoning of Text
|
Analytical reasoning is an essential and challenging task that requires a
system to analyze a scenario involving a set of particular circumstances and
perform reasoning over it to make conclusions. In this paper, we study the
challenge of analytical reasoning of text and introduce a new dataset
consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to
do well on this task. Furthermore, to address this reasoning challenge, we
design two different baselines: (1) a Transformer-based method which leverages
the state-of-the-art pre-trained language models and (2) Analytical Reasoning
Machine (ARM), a logical-level reasoning framework extracting symbolic
knowledge (e.g., participants, facts, logical functions) to deduce legitimate
solutions. In our experiments, we find that the Transformer-based models
struggle to solve this task, as their performance is close to random guessing, and
ARM achieves better performance by leveraging symbolic knowledge and
interpretable reasoning steps. Results show that both methods still lag far
behind human performance, which leaves considerable room for future research.
| 2,021 |
Computation and Language
|
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
|
Natural-language prompts have recently been used to coax pretrained language
models into performing other AI tasks, using a fill-in-the-blank paradigm
(Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al.,
2020). For example, language models retain factual knowledge from their
training corpora that can be extracted by asking them to "fill in the blank" in
a sentential prompt. However, where does this prompt come from? We explore the
idea of learning prompts by gradient descent -- either fine-tuning prompts
taken from previous work, or starting from random initialization. Our prompts
consist of "soft words," i.e., continuous vectors that are not necessarily word
type embeddings from the language model. Furthermore, for each task, we
optimize a mixture of prompts, learning which prompts are most effective and
how to ensemble them. Across multiple English LMs and tasks, our approach
hugely outperforms previous methods, showing that the implicit factual
knowledge in language models was previously underestimated. Moreover, this
knowledge is cheap to elicit: random initialization is nearly as good as
informed initialization.
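A minimal PyTorch sketch of one such soft prompt, assuming access to the LM's
input embeddings; the mixture-of-prompts and ensembling machinery described
above is omitted:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """n_tokens free continuous vectors prepended to the input token
    embeddings; only these vectors are trained by gradient descent,
    while the language model itself can stay frozen."""
    def __init__(self, n_tokens: int, dim: int, init: torch.Tensor = None):
        super().__init__()
        if init is None:                       # random initialization
            init = torch.randn(n_tokens, dim) * 0.02
        self.prompt = nn.Parameter(init)       # the "soft words"

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq, dim) token embeddings from the LM
        batch = input_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, input_embeds], dim=1)
```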
| 2,021 |
Computation and Language
|
Masked Language Modeling and the Distributional Hypothesis: Order Word
Matters Pre-training for Little
|
A possible explanation for the impressive performance of masked language
model (MLM) pre-training is that such models have learned to represent the
syntactic structures prevalent in classical NLP pipelines. In this paper, we
propose a different explanation: MLMs succeed on downstream tasks almost
entirely due to their ability to model higher-order word co-occurrence
statistics. To demonstrate this, we pre-train MLMs on sentences with randomly
shuffled word order, and show that these models still achieve high accuracy
after fine-tuning on many downstream tasks -- including on tasks specifically
designed to be challenging for models that ignore word order. Our models
perform surprisingly well according to some parametric syntactic probes,
indicating possible deficiencies in how we test representations for syntactic
information. Overall, our results show that purely distributional information
largely explains the success of pre-training, and underscore the importance of
curating challenging evaluation datasets that require deeper linguistic
knowledge.
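The corpus perturbation at the heart of this study is simple to state; a
sketch of one way to implement it (tokenization details are an assumption):

```python
import random

def shuffle_words(sentence: str, rng: random.Random) -> str:
    """Randomly permute the words of a training sentence before MLM
    pre-training: word order is destroyed, but the sentence-level bag
    of words -- and hence higher-order co-occurrence statistics over
    the corpus -- is preserved."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

# shuffle_words("the cat sat on the mat", random.Random())
# one possible output: "on the cat mat the sat"
```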
| 2,021 |
Computation and Language
|
Jointly Learning Truth-Conditional Denotations and Groundings using
Parallel Attention
|
We present a model that jointly learns the denotations of words together with
their groundings using a truth-conditional semantics. Our model builds on the
neurosymbolic approach of Mao et al. (2019), learning to ground objects in the
CLEVR dataset (Johnson et al., 2017) using a novel parallel attention
mechanism. The model achieves state-of-the-art performance on visual question
answering, learning to detect and ground objects with question performance as
the only training signal. We also show that the model is able to learn flexible
non-canonical groundings just by adjusting answers to questions in the training
set.
| 2,021 |
Computation and Language
|
NAREOR: The Narrative Reordering Problem
|
Text carries many implicit inferences that depend on how it is structured and
that can critically impact its interpretation and meaning. One such
structural aspect present in text with chronology is the order of its
presentation. For narratives or stories, this is known as the narrative order.
Reordering a narrative can impact the temporal, causal, event-based, and other
inferences readers draw from it, which in turn can have strong effects both on
its interpretation and interestingness. In this paper, we propose and
investigate the task of Narrative Reordering (NAREOR) which involves rewriting
a given story in a different narrative order while preserving its plot. We
present a dataset, NAREORC, with human rewritings of stories within ROCStories
in non-linear orders, and conduct a detailed analysis of it. Further, we
propose novel task-specific training methods with suitable evaluation metrics.
We perform experiments on NAREORC using state-of-the-art models such as BART
and T5 and conduct extensive automatic and human evaluations. We demonstrate
that although our models can perform decently, NAREOR is a challenging task
with potential for further exploration. We also investigate two applications of
NAREOR: generation of more interesting variations of stories and serving as
adversarial sets for temporal/event-related tasks, besides discussing other
prospective ones, such as for pedagogical setups related to language skills
like essay writing and applications to medicine involving clinical narratives.
| 2,022 |
Computation and Language
|
Large-Scale Self- and Semi-Supervised Learning for Speech Translation
|
In this paper, we improve speech translation (ST) through effectively
leveraging large quantities of unlabeled speech and text data in different and
complementary ways. We explore both pretraining and self-training by using the
large Libri-Light speech audio corpus and language modeling with CommonCrawl.
Our experiments improve over the previous state of the art by 2.6 BLEU on
average on all four considered CoVoST 2 language pairs via a simple recipe of
combining wav2vec 2.0 pretraining, a single iteration of self-training and
decoding with a language model. Unlike existing work, our approach does
not leverage any other supervision than ST data. Code and models will be
publicly released.
| 2,021 |
Computation and Language
|
The Curious Case of Hallucinations in Neural Machine Translation
|
In this work, we study hallucinations in Neural Machine Translation (NMT),
which lie at an extreme end on the spectrum of NMT pathologies. Firstly, we
connect the phenomenon of hallucinations under source perturbation to the
Long-Tail theory of Feldman (2020), and present an empirically validated
hypothesis that explains hallucinations under source perturbation. Secondly, we
consider hallucinations under corpus-level noise (without any source
perturbation) and demonstrate that two prominent types of natural
hallucinations (detached and oscillatory outputs) could be generated and
explained through specific corpus-level noise patterns. Finally, we elucidate
the phenomenon of hallucination amplification in popular data-generation
processes such as Backtranslation and sequence-level Knowledge Distillation.
| 2,021 |
Computation and Language
|
Towards BERT-based Automatic ICD Coding: Limitations and Opportunities
|
Automatic ICD coding is the task of assigning codes from the International
Classification of Diseases (ICD) to medical notes. These codes describe the
state of the patient and have multiple applications, e.g., computer-assisted
diagnosis or epidemiological studies. ICD coding is a challenging task due to
the complexity and length of medical notes. Unlike the general trend in
language processing, no transformer model has been reported to reach high
performance on this task. Here, we investigate in detail ICD coding using
PubMedBERT, a state-of-the-art transformer model for biomedical language
understanding. We find that the difficulty of fine-tuning the model on long
pieces of text is the main limitation for BERT-based models on ICD coding. We
run extensive experiments and show that despite the gap with current
state-of-the-art, pretrained transformers can reach competitive performance
using relatively small portions of text. We point to better methods for
aggregating information from long texts as the main need for improving BERT-based
ICD coding.
| 2,021 |
Computation and Language
|
Sentence Embeddings by Ensemble Distillation
|
This paper contributes a new State Of The Art (SOTA) for Semantic Textual
Similarity (STS). We compare and combine a number of recently proposed sentence
embedding methods for STS, and propose a novel and simple ensemble knowledge
distillation scheme that improves on previous approaches. Our experiments
demonstrate that a model trained to learn the average embedding space from
multiple ensemble students outperforms all the other individual models with
high robustness. Utilizing our distillation method in combination with previous
methods, we significantly improve on the SOTA unsupervised STS, and by proper
hyperparameter tuning of previous methods we improve the supervised SOTA
scores.
| 2,021 |
Computation and Language
|
WARM: A Weakly (+Semi) Supervised Model for Solving Math word Problems
|
Solving math word problems (MWPs) is an important and challenging problem in
natural language processing. Existing approaches to solve MWPs require full
supervision in the form of intermediate equations. However, labeling every MWP
with its corresponding equations is a time-consuming and expensive task. In
order to address this challenge of equation annotation, we propose a weakly
supervised model for solving MWPs by requiring only the final answer as
supervision. We approach this problem by first learning to generate the
equation using the problem description and the final answer, which we
subsequently use to train a supervised MWP solver. We propose and compare
various weakly supervised techniques to learn to generate equations directly
from the problem description and answer. Through extensive experiments, we
demonstrate that without using equations for supervision, our approach achieves
accuracy gains of 4.5% and 32% over the state-of-the-art weakly supervised
approach, on the standard Math23K and AllArith datasets respectively.
Additionally, we curate and release new datasets of roughly 10k MWPs each in
English and in Hindi (a low-resource language). These datasets are suitable
for training weakly supervised models. We also present an extension of WARM to
semi-supervised learning, with further improvements in results, along with
insights.
| 2,023 |
Computation and Language
|
Natural-Language Multi-Agent Simulations of Argumentative Opinion
Dynamics
|
This paper develops a natural-language agent-based model of argumentation
(ABMA). Its artificial deliberative agents (ADAs) are constructed with the help
of so-called neural language models recently developed in AI and computational
linguistics. ADAs are equipped with a minimalist belief system and may generate
and submit novel contributions to a conversation. The natural-language ABMA
allows us to simulate collective deliberation in English, i.e. with arguments,
reasons, and claims themselves -- rather than with their mathematical
representations (as in formal models). This paper uses the natural-language
ABMA to test the robustness of formal reason-balancing models of argumentation
[Maes & Flache 2013, Singer et al. 2019]: First of all, as long as ADAs remain
passive, confirmation bias and homophily updating trigger polarization, which
is consistent with results from formal models. However, once ADAs start to
actively generate new contributions, the evolution of a conversation is
dominated by properties of the agents *as authors*. This suggests that the
creation of new arguments, reasons, and claims critically affects a
conversation and is of pivotal importance for understanding the dynamics of
collective deliberation. The paper closes by pointing out further fruitful
applications of the model and challenges for future research.
| 2,022 |
Computation and Language
|
Ask what's missing and what's useful: Improving Clarification Question
Generation using Global Knowledge
|
The ability to generate clarification questions, i.e., questions that identify
useful missing information in a given context, is important in reducing
ambiguity. Humans use previous experience with similar contexts to form a
global view and compare it to the given context to ascertain what is missing
and what is useful in the context. Inspired by this, we propose a model for
clarification question generation where we first identify what is missing by
taking a difference between the global and the local view and then train a
model to identify what is useful and generate a question about it. Our model
outperforms several baselines as judged by both automatic metrics and humans.
| 2,021 |
Computation and Language
|
Enhancing Word-Level Semantic Representation via Dependency Structure
for Expressive Text-to-Speech Synthesis
|
Exploiting rich linguistic information in raw text is crucial for expressive
text-to-speech (TTS). As large-scale pre-trained text representations develop,
bidirectional encoder representations from Transformers (BERT) has been proven
to embody semantic information and has recently been employed in TTS. However, original
or simply fine-tuned BERT embeddings still cannot provide sufficient semantic
knowledge that expressive TTS models should take into account. In this paper,
we propose a word-level semantic representation enhancing method based on
dependency structure and pre-trained BERT embedding. The BERT embedding of each
word is reprocessed considering its specific dependencies and related words in
the sentence, to generate more effective semantic representation for TTS. To
better utilize the dependency structure, relational gated graph network (RGGN)
is introduced to make semantic information flow and aggregate through the
dependency structure. The experimental results show that the proposed method
can further improve the naturalness and expressiveness of synthesized speech
on both Mandarin and English datasets.
| 2,022 |
Computation and Language
|
Knowledge-driven Answer Generation for Conversational Search
|
The conversational search paradigm introduces a step change over the
traditional search paradigm by allowing users to interact with search agents in
a multi-turn and natural fashion. The conversation flows naturally and is
usually centered around a target field of knowledge. In this work, we propose a
knowledge-driven answer generation approach for open-domain conversational
search, where a conversation-wide entity knowledge graph is used to bias
search-answer generation. First, a conversation-specific knowledge graph is
extracted from the top passages retrieved with a Transformer-based re-ranker.
The entity knowledge graph is then used to bias a search-answer generator
Transformer towards information-rich and concise answers. This
conversation-specific bias is computed by identifying the most relevant passages according
to the most salient entities of that particular conversation. Experiments show
that the proposed approach successfully exploits entity knowledge throughout the
conversation, and outperforms a set of baselines on the search-answer
generation task.
| 2,021 |
Computation and Language
|
I Wish I Would Have Loved This One, But I Didn't -- A Multilingual
Dataset for Counterfactual Detection in Product Reviews
|
Counterfactual statements describe events that did not or cannot take place.
We consider the problem of counterfactual detection (CFD) in product reviews.
For this purpose, we annotate a multilingual CFD dataset from Amazon product
reviews covering counterfactual statements written in English, German, and
Japanese. The dataset is unique as it contains counterfactuals in
multiple languages, covers a new application area of e-commerce reviews, and
provides high quality professional annotations. We train CFD models using
different text representation methods and classifiers. We find that these
models are robust against the selectional biases introduced due to cue
phrase-based sentence selection. Moreover, our CFD dataset is compatible with
prior datasets and can be merged to learn accurate CFD models. Applying machine
translation on English counterfactual examples to create multilingual data
performs poorly, demonstrating the language-specificity of this problem, which
has been ignored so far.
| 2,021 |
Computation and Language
|
Enhancing Interpretable Clauses Semantically using Pretrained Word
Representation
|
Tsetlin Machine (TM) is an interpretable pattern recognition algorithm based
on propositional logic, which has demonstrated competitive performance in many
Natural Language Processing (NLP) tasks, including sentiment analysis, text
classification, and Word Sense Disambiguation. To obtain human-level
interpretability, legacy TM employs Boolean input features such as bag-of-words
(BOW). However, the BOW representation makes it difficult to use any
pre-trained information, for instance, word2vec and GloVe word representations.
This restriction has constrained the performance of TM compared to deep neural
networks (DNNs) in NLP. To reduce the performance gap, in this paper, we
propose a novel way of using pre-trained word representations for TM. The
approach significantly enhances the performance and interpretability of TM. We
achieve this by extracting semantically related words from pre-trained word
representations as input features to the TM. Our experiments show that the
accuracy of the proposed approach is significantly higher than the previous
BOW-based TM, reaching the level of DNN-based models.
| 2,021 |
Computation and Language
|
Evaluation of Unsupervised Entity and Event Salience Estimation
|
Salience Estimation aims to predict term importance in documents. Due to few
existing human-annotated datasets and the subjective notion of salience,
previous studies typically generate pseudo-ground truth for evaluation.
However, our investigation reveals that the evaluation protocol proposed by
prior work is difficult to replicate, and as a result few follow-up studies
exist. Moreover, the evaluation process is problematic: the entity linking
tool used for entity matching is very noisy, while ignoring event arguments in
event evaluation leads to inflated performance. In this work, we propose a
lightweight yet practical entity and event salience estimation evaluation
protocol, which incorporates a more reliable syntactic dependency parser.
Furthermore, we conduct a comprehensive analysis among popular entity and event
definition standards, and present our own definition for the Salience
Estimation task to reduce noise during the pseudo-ground truth generation
process. Finally, we construct dependency-based heterogeneous graphs to
capture the interactions of entities and events. The empirical results show
that both baseline methods and the novel GNN method utilizing the heterogeneous
graph consistently outperform the previous SOTA model in all proposed metrics.
| 2,021 |
Computation and Language
|
Domain Adaptation and Multi-Domain Adaptation for Neural Machine
Translation: A Survey
|
The development of deep learning techniques has allowed Neural Machine
Translation (NMT) models to become extremely powerful, given sufficient
training data and training time. However, systems struggle when translating
text from a new domain with a distinct style or vocabulary. Fine-tuning on
in-domain data allows good domain adaptation, but requires sufficient relevant
bilingual data. Even if this is available, simple fine-tuning can cause
overfitting to new data and `catastrophic forgetting' of previously learned
behaviour.
We concentrate on robust approaches to domain adaptation for NMT,
particularly where a system may need to translate across multiple domains. We
divide techniques into those revolving around data selection or generation,
model architecture, parameter adaptation procedure, and inference procedure. We
finally highlight the benefits of domain adaptation and multi-domain adaptation
techniques to other lines of NMT research.
| 2,022 |
Computation and Language
|
The Surprising Performance of Simple Baselines for Misinformation
Detection
|
As social media becomes increasingly prominent in our day-to-day lives, it is
increasingly important to detect informative content and prevent the spread of
disinformation and unverified rumours. While many sophisticated and successful
models have been proposed in the literature, they are often compared with older
NLP baselines such as SVMs, CNNs, and LSTMs. In this paper, we examine the
performance of a broad set of modern transformer-based language models and show
that with basic fine-tuning, these models are competitive with and can even
significantly outperform recently proposed state-of-the-art methods. We present
our framework as a baseline for creating and evaluating new methods for
misinformation detection. We further study a comprehensive set of benchmark
datasets, and discuss potential data leakage and the need for careful design of
the experiments and understanding of datasets to account for confounding
variables. As an extreme case example, we show that classifying only based on
the first three digits of tweet ids, which contain information on the date,
gives state-of-the-art performance on a commonly used benchmark dataset for
fake news detection -- Twitter16. We provide a simple tool to detect this
problem and suggest steps to mitigate it in future datasets.
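A sketch of what such a leakage probe might look like with scikit-learn; the
authors' released tool may differ in detail:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def id_leakage_score(tweet_ids, labels) -> float:
    """Probe for date-label confounding: predict the label from nothing
    but the first three digits of each tweet id (a rough proxy for
    posting date, since ids grow over time). A high cross-validated
    accuracy signals that the benchmark leaks temporal information."""
    X = [[int(str(t)[:3])] for t in tweet_ids]   # first three digits only
    return cross_val_score(DecisionTreeClassifier(), X, labels, cv=5).mean()
```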
| 2,021 |
Computation and Language
|
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural
Language Understanding and Generation in E-Commerce
|
Existing pre-trained language models (PLMs) have demonstrated the
effectiveness of self-supervised learning for a broad range of natural language
processing (NLP) tasks. However, most of them are not explicitly aware of
domain-specific knowledge, which is essential for downstream tasks in many
domains, such as tasks in e-commerce scenarios. In this paper, we propose
K-PLUG, a knowledge-injected pre-trained language model based on the
encoder-decoder transformer that can be transferred to both natural language
understanding and generation tasks. We verify our method in a diverse range of
e-commerce scenarios that require domain-specific knowledge. Specifically, we
propose five knowledge-aware self-supervised pre-training objectives to
formulate the learning of domain-specific knowledge, including e-commerce
domain-specific knowledge-bases, aspects of product entities, categories of
product entities, and unique selling propositions of product entities. K-PLUG
achieves new state-of-the-art results on a suite of domain-specific NLP tasks,
including product knowledge base completion, abstractive product summarization,
and multi-turn dialogue, significantly outperforming baselines across the board,
which demonstrates that the proposed method effectively learns a diverse set of
domain-specific knowledge for both language understanding and generation tasks.
| 2,021 |
Computation and Language
|
Event Detection as Question Answering with Entity Information
|
In this paper, we explore a recent and under-researched paradigm for the task
of event detection (ED) by casting it as a question-answering (QA) problem with
the possibility of multiple answers and the support of entities. The extraction
of event triggers is, thus, transformed into the task of identifying answer
spans from a context, while also focusing on the surrounding entities. The
architecture is based on a pre-trained and fine-tuned language model, where the
input context is augmented with entities marked at different levels, their
positions, their types, and, finally, the argument roles. Experiments on the
ACE~2005 corpus demonstrate that the proposed paradigm is a viable solution for
the ED task and it significantly outperforms the state-of-the-art models.
Moreover, we prove that our methods are also able to extract unseen event
types.
| 2,021 |
Computation and Language
|
[RE] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias
Mitigation
|
Despite widespread use in natural language processing (NLP) tasks, word
embeddings have been criticized for inheriting unintended gender bias from
training corpora; for example, "programmer" is more closely associated with
"man" and "homemaker" with "woman". Such gender bias has also been shown to
propagate in downstream tasks.
| 2,021 |
Computation and Language
|
TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for
Unsupervised Sentence Embedding Learning
|
Learning sentence embeddings often requires a large amount of labeled data.
However, for most tasks and domains, labeled data is seldom available and
creating it is expensive. In this work, we present a new state-of-the-art
unsupervised method based on pre-trained Transformers and Sequential Denoising
Auto-Encoder (TSDAE) which outperforms previous approaches by up to 6.4 points.
It can achieve up to 93.1% of the performance of in-domain supervised
approaches. Further, we show that TSDAE is a strong domain adaptation and
pre-training method for sentence embeddings, significantly outperforming other
approaches like Masked Language Model.
A crucial shortcoming of previous studies is the narrow evaluation: Most work
mainly evaluates on the single task of Semantic Textual Similarity (STS), which
does not require any domain knowledge. It is unclear if these proposed methods
generalize to other domains and tasks. We fill this gap and evaluate TSDAE and
other recent approaches on four different datasets from heterogeneous domains.
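The core of such a denoising objective is the input corruption; below is a
sketch of deletion noise (the deletion ratio shown is an assumption), after
which the corrupted sentence is encoded, pooled into a single fixed-size
vector, and decoded to reconstruct the original sentence from that vector
alone:

```python
import random

def delete_noise(tokens, p: float = 0.6):
    """Independently delete each token of a (non-empty) sentence with
    probability p; the encoder sees this corrupted input, while the
    decoder must reconstruct the uncorrupted sentence."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]  # keep at least one
```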
| 2,021 |
Computation and Language
|
UPB at SemEval-2021 Task 1: Combining Deep Learning and Hand-Crafted
Features for Lexical Complexity Prediction
|
Reading is a complex process which requires proper understanding of texts in
order to create coherent mental representations. However, comprehension
problems may arise due to hard-to-understand sections, which can prove
troublesome for readers, while accounting for their specific language skills.
As such, steps towards simplifying these sections can be performed, by
accurately identifying and evaluating difficult structures. In this paper, we
describe our approach for the SemEval-2021 Task 1: Lexical Complexity
Prediction competition that consists of a mixture of advanced NLP techniques,
namely Transformer-based language models, pre-trained word embeddings, Graph
Convolutional Networks, Capsule Networks, as well as a series of hand-crafted
textual complexity features. Our models are applicable to both subtasks and
achieve good performance, with an MAE below 0.07 and a Pearson correlation of
.73 for single-word identification, as well as an MAE below 0.08 and a Pearson
correlation of .79 for multi-word targets. Our results are just
5.46% and 6.5% lower than the top scores obtained in the competition on the
first and the second subtasks, respectively.
| 2,021 |
Computation and Language
|
An Update to the Minho Quotation Resource
|
The Minho Quotation Resource was originally released in 2012. It provided
approximately 500,000 quotes from business leaders, analysts and politicians
that spanned the period from 2008 to 2012. The original resource had several
failings which include a large number of missing job titles and affiliations as
well as unnormalised job titles which produced a large variation in spellings
and formats of the same employment position. Also, there were numerous
duplicate posts. This update standardises the job title text and imputes
missing job titles and affiliations. Duplicate quotes have been deleted. The
update also provides metaphor and simile extraction as well as an emotion
distribution of the quotes, and replaces an antiquated Lucene index with a
JSONL format and a rudimentary interface for querying the data supplied with
the resource. It is hoped that this update will encourage the study of
business communication in a time of financial crisis.
| 2,021 |
Computation and Language
|
Detecting Cross-Geographic Biases in Toxicity Modeling on Social Media
|
Online social media platforms increasingly rely on Natural Language
Processing (NLP) techniques to detect abusive content at scale in order to
mitigate the harms it causes to their users. However, these techniques suffer
from various sampling and association biases present in training data, often
resulting in sub-par performance on content relevant to marginalized groups,
potentially furthering disproportionate harms towards them. Studies on such
biases so far have focused on only a handful of axes of disparities and
subgroups that have annotations/lexicons available. Consequently, biases
concerning non-Western contexts are largely ignored in the literature. In this
paper, we introduce a weakly supervised method to robustly detect lexical
biases in broader geocultural contexts. Through a case study on a publicly
available toxicity detection model, we demonstrate that our method identifies
salient groups of cross-geographic errors, and, in a follow up, demonstrate
that these groupings reflect human judgments of offensive and inoffensive
language in those geographic contexts. We also conduct analysis of a model
trained on a dataset with ground truth labels to better understand these
biases, and present preliminary mitigation experiments.
| 2,021 |
Computation and Language
|
IGA : An Intent-Guided Authoring Assistant
|
While large-scale pretrained language models have significantly improved
writing assistance functionalities such as autocomplete, more complex and
controllable writing assistants have yet to be explored. We leverage advances
in language modeling to build an interactive writing assistant that generates
and rephrases text according to fine-grained author specifications. Users
provide input to our Intent-Guided Assistant (IGA) in the form of text
interspersed with tags that correspond to specific rhetorical directives (e.g.,
adding description or contrast, or rephrasing a particular sentence). We
fine-tune a language model on a dataset heuristically-labeled with author
intent, which allows IGA to fill in these tags with generated text that users
can subsequently edit to their liking. A series of automatic and crowdsourced
evaluations confirm the quality of IGA's generated outputs, while a small-scale
user study demonstrates author preference for IGA over baseline methods in a
creative writing task. We release our dataset, code, and demo to spur further
research into AI-assisted writing.
| 2,021 |
Computation and Language
|
Translating synthetic natural language to database queries: a polyglot
deep learning framework
|
The number of databases as well as their size and complexity is increasing.
This creates a barrier to use especially for non-experts, who have to come to
grips with the nature of the data, the way it has been represented in the
database, and the specific query languages or user interfaces by which data are
accessed. These difficulties worsen in research settings, where it is common to
work with many different databases. One approach to improving this situation is
to allow users to pose their queries in natural language.
In this work we describe a machine learning framework, Polyglotter, that in a
general way supports the mapping of natural language searches to database
queries. Importantly, it does not require the creation of manually annotated
data for training and therefore can be applied easily to multiple domains. The
framework is polyglot in the sense that it supports multiple different database
engines that are accessed with a variety of query languages, including SQL and
Cypher. Furthermore, Polyglotter supports multi-class queries.
Our results indicate that our framework performs well on both synthetic and
real databases, and may provide opportunities for database maintainers to
improve accessibility to their resources.
| 2,021 |
Computation and Language
|
Sparse Attention with Linear Units
|
Recently, it has been argued that encoder-decoder models can be made more
interpretable by replacing the softmax function in the attention with its
sparse variants. In this work, we introduce a novel, simple method for
achieving sparsity in attention: we replace the softmax activation with a ReLU,
and show that sparsity naturally emerges from such a formulation. Training
stability is achieved with layer normalization with either a specialized
initialization or an additional gating function. Our model, which we call
Rectified Linear Attention (ReLA), is easy to implement and more efficient than
previously proposed sparse attention mechanisms. We apply ReLA to the
Transformer and conduct experiments on five machine translation tasks. ReLA
achieves translation performance comparable to several strong baselines, with
training and decoding speed similar to that of the vanilla attention. Our
analysis shows that ReLA delivers a high sparsity rate and head diversity, and
the induced cross attention achieves better accuracy with respect to
source-target word alignment than recent sparsified softmax-based models.
Intriguingly, ReLA heads also learn to attend to nothing (i.e. 'switch off')
for some queries, which is not possible with sparsified softmax alternatives.
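A minimal single-head PyTorch sketch of the idea, using the gated variant of
the stabilization mentioned above; dimensions and details are simplified
relative to the paper:

```python
import torch
import torch.nn as nn

class RectifiedLinearAttention(nn.Module):
    """Single-head ReLA sketch: the softmax over attention scores is
    replaced by ReLU, so weights are sparse and unnormalized; layer
    normalization plus a learned sigmoid gate keeps training stable."""
    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, mem: torch.Tensor) -> torch.Tensor:
        # x: (batch, tgt_len, dim) queries; mem: (batch, src_len, dim)
        scores = self.q(x) @ self.k(mem).transpose(-2, -1) * self.scale
        weights = torch.relu(scores)   # sparsity emerges naturally;
                                       # a query can attend to nothing
        out = self.norm(weights @ self.v(mem))
        return out * torch.sigmoid(self.gate(x))
```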
| 2,021 |
Computation and Language
|
Predicting Discourse Trees from Transformer-based Neural Summarizers
|
Previous work indicates that discourse information benefits summarization. In
this paper, we explore whether this synergy between discourse and summarization
is bidirectional, by inferring document-level discourse trees from pre-trained
neural summarizers. In particular, we generate unlabeled RST-style discourse
trees from the self-attention matrices of the transformer model. Experiments
across models and datasets reveal that the summarizer learns both dependency-
and constituency-style discourse information, which is typically encoded in a
single head, covering long- and short-distance discourse dependencies.
Overall, the experimental results suggest that the learned discourse
information is general and transferable across domains.
| 2,021 |
Computation and Language
|
Is Everything in Order? A Simple Way to Order Sentences
|
The task of organizing a shuffled set of sentences into a coherent text has
been used to evaluate a machine's understanding of causal and temporal
relations. We formulate the sentence ordering task as a conditional
text-to-marker generation problem. We present Reorder-BART (Re-BART) that
leverages a pre-trained Transformer-based model to identify a coherent order
for a given set of shuffled sentences. The model takes a set of shuffled
sentences with sentence-specific markers as input and generates a sequence of
position markers of the sentences in the ordered text. Re-BART achieves
state-of-the-art performance across 7 datasets in Perfect Match Ratio (PMR) and
Kendall's tau ($\tau$). We perform evaluations in a zero-shot setting,
showcasing that our model is able to generalize well across other datasets. We
additionally perform several experiments to understand the functioning and
limitations of our framework.
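A sketch of the text-to-marker formulation; the exact marker tokens are an
assumption, but the shape of the input and target is as described above:

```python
def to_marker_input(shuffled_sentences) -> str:
    """Prefix each shuffled sentence with a positional marker; the
    model is trained to emit the markers in the coherent order."""
    return " ".join(f"<S{i}> {s}" for i, s in enumerate(shuffled_sentences))

# input : "<S0> He ate breakfast. <S1> He woke up hungry."
# target: "<S1> <S0>"   (the coherent order, as a sequence of markers)
```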
| 2,021 |
Computation and Language
|
UDALM: Unsupervised Domain Adaptation through Language Modeling
|
In this work we explore Unsupervised Domain Adaptation (UDA) of pretrained
language models for downstream tasks. We introduce UDALM, a fine-tuning
procedure, using a mixed classification and Masked Language Model loss, that
can adapt to the target domain distribution in a robust and sample efficient
manner. Our experiments show that performance of models trained with the mixed
loss scales with the amount of available target data and the mixed loss can be
effectively used as a stopping criterion during UDA training. Furthermore, we
discuss the relationship between A-distance and the target error and explore
some limitations of the Domain Adversarial Training approach. Our method is
evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset,
yielding $91.74\%$ accuracy, which is a $1.11\%$ absolute improvement over the
state-of-the-art.
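A sketch of the mixed objective, assuming a hypothetical model API that
exposes the two per-task losses (the paper's weighting and batching details
differ):

```python
def udalm_step(model, labeled_src_batch, unlabeled_tgt_batch,
               alpha: float = 0.5):
    """One training step of the mixed loss: supervised classification
    on labeled source-domain data plus masked-language-model loss on
    unlabeled target-domain data, optimized jointly."""
    clf_loss = model.classification_loss(labeled_src_batch)  # assumed API
    mlm_loss = model.mlm_loss(unlabeled_tgt_batch)           # assumed API
    return alpha * clf_loss + (1 - alpha) * mlm_loss
```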
| 2,021 |
Computation and Language
|
Modeling Human Mental States with an Entity-based Narrative Graph
|
Understanding narrative text requires capturing characters' motivations,
goals, and mental states. This paper proposes an Entity-based Narrative Graph
(ENG) to model the internal-states of characters in a story. We explicitly
model entities, their interactions and the context in which they appear, and
learn rich representations for them. We experiment with different task-adaptive
pre-training objectives, in-domain training, and symbolic inference to capture
dependencies between different decisions in the output space. We evaluate our
model on two narrative understanding tasks: predicting character mental states,
and desire fulfillment, and conduct a qualitative analysis.
| 2,021 |
Computation and Language
|
TWEAC: Transformer with Extendable QA Agent Classifiers
|
Question answering systems should help users to access knowledge on a broad
range of topics and to answer a wide array of different questions. Most systems
fall short of this expectation as they are only specialized in one particular
setting, e.g., answering factual questions with Wikipedia data. To overcome
this limitation, we propose composing multiple QA agents within a meta-QA
system. We argue that there exist a wide range of specialized QA agents in
literature. Thus, we address the central research question of how to
effectively and efficiently identify suitable QA agents for any given question.
We study both supervised and unsupervised approaches to address this challenge,
showing that TWEAC -- Transformer with Extendable Agent Classifiers -- achieves
the best performance overall with 94% accuracy. We provide extensive insights
on the scalability of TWEAC, demonstrating that it scales robustly to over 100
QA agents with each providing just 1000 examples of questions they can answer.
Our code and data are available:
https://github.com/UKPLab/TWEAC-qa-agent-selection
| 2,021 |
Computation and Language
|
SummScreen: A Dataset for Abstractive Screenplay Summarization
|
We introduce SummScreen, a summarization dataset comprised of pairs of TV
series transcripts and human written recaps. The dataset provides a challenging
testbed for abstractive summarization for several reasons. Plot details are
often expressed indirectly in character dialogues and may be scattered across
the entirety of the transcript. These details must be found and integrated to
form the succinct plot descriptions in the recaps. Also, TV scripts contain
content that does not directly pertain to the central plot but rather serves to
develop characters or provide comic relief. This information is rarely
contained in recaps. Since characters are fundamental to TV series, we also
propose two entity-centric evaluation metrics. Empirically, we characterize the
dataset by evaluating several methods, including neural models and those based
on nearest neighbors. An oracle extractive approach outperforms all benchmarked
models according to automatic metrics, showing that the neural models are
unable to fully exploit the input transcripts. Human evaluation and qualitative
analysis reveal that our non-oracle models are competitive with their oracle
counterparts in terms of generating faithful plot events and can benefit from
better content selectors. Both oracle and non-oracle models generate unfaithful
facts, suggesting future research directions.
| 2,022 |
Computation and Language
|
Static Embeddings as Efficient Knowledge Bases?
|
Recent research investigates factual knowledge stored in large pretrained
language models (PLMs). Instead of structural knowledge base (KB) queries,
masked sentences such as "Paris is the capital of [MASK]" are used as probes.
The good performance on this analysis task has been interpreted as PLMs
becoming potential repositories of factual knowledge. In experiments across ten
linguistically diverse languages, we study knowledge contained in static
embeddings. We show that, when restricting the output space to a candidate set,
simple nearest neighbor matching using static embeddings performs better than
PLMs. For example, static embeddings perform 1.6 percentage points better
than BERT while using just 0.3% of the energy for training. One important
factor in their good
comparative performance is that static embeddings are standardly learned for a
large vocabulary. In contrast, BERT exploits its more sophisticated, but
expensive ability to compose meaningful representations from a much smaller
subword vocabulary.
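A sketch of candidate-restricted nearest-neighbour matching with static
embeddings; how the query vector is built from the probe (e.g., from the
subject and relation words) is a modelling choice left open here:

```python
import numpy as np

def nearest_candidate(query_vec: np.ndarray, candidates: dict) -> str:
    """Return the candidate answer whose static embedding has the
    highest cosine similarity to the query vector; restricting the
    output space to a candidate set is what makes this competitive."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda w: cos(query_vec, candidates[w]))
```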
| 2,021 |
Computation and Language
|
What Makes a Scientific Paper be Accepted for Publication?
|
Despite peer-reviewing being an essential component of academia since the
1600s, it has repeatedly received criticisms for lack of transparency and
consistency. We posit that recent work in machine learning and explainable AI
provide tools that enable insights into the decisions from a given peer review
process. We start by extracting global explanations in the form of linguistic
features that affect the acceptance of a scientific paper for publication on an
open peer-review dataset. Second, since such global explanations do not justify
causal interpretations, we provide a methodology for detecting confounding
effects in natural language in order to generate causal explanations, under
assumptions, in the form of lexicons. Our proposed linguistic explanation
methodology indicates the following on a case dataset of ICLR submissions: a)
the organising committee follows, for the most part, the recommendations of
reviewers, and, b) the paper's main characteristics that led to reviewers
recommending acceptance for publication are originality, clarity and substance.
| 2,021 |
Computation and Language
|
The MuSe 2021 Multimodal Sentiment Analysis Challenge: Sentiment,
Emotion, Physiological-Emotion, and Stress
|
Multimodal Sentiment Analysis (MuSe) 2021 is a challenge focusing on the
tasks of sentiment and emotion, as well as physiological-emotion and
emotion-based stress recognition through more comprehensively integrating the
audio-visual, language, and biological signal modalities. The purpose of MuSe
2021 is to bring together communities from different disciplines; mainly, the
audio-visual emotion recognition community (signal-based), the sentiment
analysis community (symbol-based), and the health informatics community. We
present four distinct sub-challenges: MuSe-Wilder and MuSe-Stress which focus
on continuous emotion (valence and arousal) prediction; MuSe-Sent, in which
participants recognise five classes each for valence and arousal; and
MuSe-Physio, in which the novel aspect of `physiological-emotion' is to be
predicted. For this year's challenge, we utilise the MuSe-CaR dataset focusing
on user-generated reviews and introduce the Ulm-TSST dataset, which displays
people in stressful dispositions. This paper also provides detail on the
state-of-the-art feature sets extracted from these datasets for utilisation by
our baseline model, a Long Short-Term Memory-Recurrent Neural Network. For each
sub-challenge, a competitive baseline for participants is set; namely, on test,
we report a Concordance Correlation Coefficient (CCC) of .4616 for
MuSe-Wilder, .4717 for MuSe-Stress, and .4606 for MuSe-Physio. For
MuSe-Sent an F1 score of 32.82 % is obtained.
| 2,021 |
Computation and Language
|
An Interpretability Illusion for BERT
|
We describe an "interpretability illusion" that arises when analyzing the
BERT model. Activations of individual neurons in the network may spuriously
appear to encode a single, simple concept, when in fact they are encoding
something far more complex. The same effect holds for linear combinations of
activations. We trace the source of this illusion to geometric properties of
BERT's embedding space as well as the fact that common text corpora represent
only narrow slices of possible English sentences. We provide a taxonomy of
model-learned concepts and discuss methodological implications for
interpretability research, especially the importance of testing hypotheses on
multiple data sets.
| 2,021 |
Computation and Language
|
On the Robustness of Intent Classification and Slot Labeling in
Goal-oriented Dialog Systems to Real-world Noise
|
Intent Classification (IC) and Slot Labeling (SL) models, which form the
basis of dialogue systems, often encounter noisy data in real-world
environments. In this work, we investigate how robust IC/SL models are to noisy
data. We collect and publicly release a test-suite for seven common noise types
found in production human-to-bot conversations (abbreviations, casing,
misspellings, morphological variants, paraphrases, punctuation and synonyms).
On this test-suite, we show that common noise types substantially degrade the
IC accuracy and SL F1 performance of state-of-the-art BERT-based IC/SL models.
By leveraging cross-noise robustness transfer -- training on one noise type to
improve robustness on another noise type -- we design aggregate
data-augmentation approaches that increase the model performance across all
seven noise types by +10.8% for IC accuracy and +15 points for SL F1 on
average. To the best of our knowledge, this is the first work to present a
single IC/SL model that is robust to a wide range of noise phenomena.
| 2,021 |
Computation and Language
|
Disentangling Representations of Text by Masking Transformers
|
Representations from large pretrained models such as BERT encode a range of
features into monolithic vectors, affording strong predictive accuracy across a
multitude of downstream tasks. In this paper we explore whether it is possible
to learn disentangled representations by identifying existing subnetworks
within pretrained models that encode distinct, complementary aspect
representations. Concretely, we learn binary masks over transformer weights or
hidden units to uncover subsets of features that correlate with a specific
factor of variation; this eliminates the need to train a disentangled model
from scratch for a particular task. We evaluate this method with respect to its
ability to disentangle representations of sentiment from genre in movie
reviews, "toxicity" from dialect in Tweets, and syntax from semantics.
By combining masking with magnitude pruning we find that we can identify
sparse subnetworks within BERT that strongly encode particular aspects (e.g.,
toxicity) while only weakly encoding others (e.g., race). Moreover, despite
only learning masks, we find that disentanglement-via-masking performs as well
as -- and often better than -- previously proposed methods based on variational
autoencoders and adversarial training.
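
A minimal PyTorch sketch of the masking idea, under the assumption of a
straight-through estimator over a single frozen linear layer; the paper's full
method masks transformer weights or hidden units, and this class name and
setup are illustrative only.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer with a learned binary mask over frozen weights.

    Only the mask logits are trained; the pretrained weights stay
    fixed, so no disentangled model is trained from scratch.
    """
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.detach(), requires_grad=False)
        self.mask_logits = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        probs = torch.sigmoid(self.mask_logits)
        hard = (probs > 0.5).float()
        # Straight-through: hard 0/1 mask forward, sigmoid gradient backward.
        mask = hard + probs - probs.detach()
        return nn.functional.linear(x, self.weight * mask, self.bias)

layer = MaskedLinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```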
| 2,021 |
Computation and Language
|
Annealing Knowledge Distillation
|
Significant memory and computational requirements of large deep neural
networks restrict their application on edge devices. Knowledge distillation
(KD) is a prominent model compression technique for deep neural networks in
which the knowledge of a trained large teacher model is transferred to a
smaller student model. The success of knowledge distillation is mainly
attributed to its training objective function, which exploits the soft-target
information (also known as "dark knowledge") besides the given regular hard
labels in a training set. However, it has been shown in the literature that the
larger the gap between the teacher and the student networks, the more difficult
their training via knowledge distillation becomes. To address this shortcoming, we
propose an improved knowledge distillation method (called Annealing-KD) by
feeding the rich information provided by the teacher's soft-targets
incrementally and more efficiently. Our Annealing-KD technique is based on a
gradual transition over annealed soft-targets generated by the teacher at
different temperatures in an iterative process, and therefore, the student is
trained to follow the annealed teacher output in a step-by-step manner. This
paper includes theoretical and empirical evidence as well as practical
experiments to support the effectiveness of our Annealing-KD method. We
conducted a comprehensive set of experiments on different tasks, such as image
classification (CIFAR-10 and CIFAR-100) and NLP language inference with
BERT-based models on the GLUE benchmark, and consistently obtained superior results.
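
For intuition, a generic sketch of an annealed soft-target loss: the teacher
signal is softened early in training and gradually scaled up so the student
follows it step by step. This is a simplified stand-in, not the exact
Annealing-KD recipe or its two-phase schedule.

```python
import torch
import torch.nn.functional as F

def annealed_kd_loss(student_logits, teacher_logits, epoch, max_epochs):
    """MSE between student logits and gradually de-annealed teacher logits."""
    anneal = (epoch + 1) / max_epochs  # grows from near 0 toward 1
    return F.mse_loss(student_logits, anneal * teacher_logits)

# Toy usage with random logits standing in for real model outputs.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
for epoch in range(3):
    loss = annealed_kd_loss(student, teacher, epoch, max_epochs=10)
    loss.backward()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```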
| 2,021 |
Computation and Language
|
Does Putting a Linguist in the Loop Improve NLU Data Collection?
|
Many crowdsourced NLP datasets contain systematic gaps and biases that are
identified only after data collection is complete. Identifying these issues
from early data samples during crowdsourcing should make mitigation more
efficient, especially when done iteratively. We take natural language inference
as a test case and ask whether it is beneficial to put a linguist `in the loop'
during data collection to dynamically identify and address gaps in the data by
introducing novel constraints on the task. We directly compare three data
collection protocols: (i) a baseline protocol, (ii) a linguist-in-the-loop
intervention with iteratively-updated constraints on the task, and (iii) an
extension of linguist-in-the-loop that provides direct interaction between
linguists and crowdworkers via a chatroom. The datasets collected with linguist
involvement are more reliably challenging than baseline, without loss of
quality. But we see no evidence that using this data in training leads to
better out-of-domain model performance, and the addition of a chat platform has
no measurable effect on the resulting dataset. We suggest integrating expert
analysis during data collection so that the expert can dynamically
address gaps and biases in the dataset.
| 2,021 |
Computation and Language
|