Titles | Abstracts | Years | Categories
---|---|---|---
Towards Robustness of Text-to-SQL Models against Synonym Substitution
|
Recently, there has been significant progress in studying neural networks to
translate text descriptions into SQL queries. Despite achieving good
performance on some public benchmarks, existing text-to-SQL models typically
rely on the lexical matching between words in natural language (NL) questions
and tokens in table schemas, which may render the models vulnerable to attacks
that break the schema linking mechanism. In this work, we investigate the
robustness of text-to-SQL models to synonym substitution. In particular, we
introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for
text-to-SQL translation. NL questions in Spider-Syn are modified from Spider,
by replacing their schema-related words with manually selected synonyms that
reflect real-world question paraphrases. We observe that accuracy drops
dramatically when such explicit correspondence between NL questions and table
schemas is eliminated, even though the synonyms are not selected to mount
worst-case adversarial attacks. Finally, we present two categories of
approaches to improve model robustness. The first utilizes additional synonym
annotations for table schemas by modifying the model input, while the second is
based on adversarial training. We demonstrate that both categories of
approaches significantly outperform their counterparts without the defense, and
that the first category is more effective.
| 2021 |
Computation and Language
|
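As a rough illustration of the perturbation described above, the sketch below swaps schema-related words in an NL question for curated synonyms. The synonym table and the question are hypothetical examples, not entries from Spider-Syn.

```python
# Hypothetical Spider-Syn-style perturbation: replace schema-related words
# in a natural-language question with manually curated synonyms.
SCHEMA_SYNONYMS = {
    "singer": "vocalist",   # illustrative entries, not the dataset's annotations
    "concert": "show",
    "stadium": "arena",
}

def synonym_substitute(question: str, synonyms: dict) -> str:
    """Replace each schema-related token with its curated synonym."""
    return " ".join(synonyms.get(tok, tok) for tok in question.lower().split())

print(synonym_substitute("Which singer performed at which concert ?", SCHEMA_SYNONYMS))
# -> "which vocalist performed at which show ?"
```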
Topic-Driven and Knowledge-Aware Transformer for Dialogue Emotion
Detection
|
Emotion detection in dialogues is challenging as it often requires the
identification of thematic topics underlying a conversation, the relevant
commonsense knowledge, and the intricate transition patterns between the
affective states. In this paper, we propose a Topic-Driven Knowledge-Aware
Transformer to handle these challenges. We first design a topic-augmented
language model (LM) with an additional layer specialized for topic detection.
The topic-augmented LM is then combined with commonsense statements derived
from a knowledge base based on the dialogue contextual information. Finally, a
transformer-based encoder-decoder architecture fuses the topical and
commonsense information, and performs the emotion label sequence prediction.
We evaluate the model on four dialogue emotion detection datasets, where it
empirically outperforms existing state-of-the-art approaches. Quantitative and
qualitative results show that the model can
discover topics which help in distinguishing emotion categories.
| 2021 |
Computation and Language
|
Evidence-based Factual Error Correction
|
This paper introduces the task of factual error correction: performing edits
to a claim so that the generated rewrite is better supported by evidence. This
extends the well-studied task of fact verification by providing a mechanism to
correct written texts that are refuted or only partially supported by evidence.
We demonstrate that it is feasible to train factual error correction systems
from existing fact checking datasets which only contain labeled claims
accompanied by evidence, but not the correction. We achieve this by employing a
two-stage distant supervision approach that incorporates evidence into masked
claims when generating corrections. Our approach, based on the T5 transformer
and using retrieved evidence, achieves better results than existing work that
used a pointer copy network and gold evidence, producing accurate factual error
corrections for 5x more instances in human evaluation and yielding a 0.125
increase in SARI score. The evaluation is conducted on a dataset of 65,000
instances based
on a recent fact verification shared task and we release it to enable further
work on the task.
| 2021 |
Computation and Language
|
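The masked-claim generation step above lends itself to a short sketch. The following illustrates conditioning a T5 model on evidence to fill a masked claim span; the model name, input format, and example are assumptions, and an off-the-shelf t5-base is not trained for this task, so the output is only illustrative.

```python
# Hedged sketch of evidence-conditioned mask infilling with T5.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

masked_claim = "The Eiffel Tower is located in <extra_id_0>."  # masked span
evidence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(f"claim: {masked_claim} evidence: {evidence}",
                   return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```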
Database Reasoning Over Text
|
Neural models have shown impressive performance gains in answering queries
from natural language text. However, existing works are unable to support
database queries, such as "List/Count all female athletes who were born in 20th
century", which require reasoning over sets of relevant facts with operations
such as join, filtering and aggregation. We show that while state-of-the-art
transformer models perform very well for small databases, they exhibit
limitations in processing noisy data, numerical operations, and queries that
aggregate facts. We propose a modular architecture that answers such
database-style queries over multiple spans from text and aggregates the results
at scale. We evaluate the architecture using WikiNLDB, a novel dataset for
exploring such queries. Our architecture scales to databases containing
thousands of facts whereas contemporary models are limited by how many facts
can be encoded. In direct comparison on small databases, our approach increases
overall answer accuracy from 85% to 90%. On larger databases, our approach
retains its accuracy whereas transformer baselines could not encode the
context.
| 2021 |
Computation and Language
|
SyGNS: A Systematic Generalization Testbed Based on Natural Language
Semantics
|
Recently, deep neural networks (DNNs) have achieved great success in
semantically challenging NLP tasks, yet it remains unclear whether DNN models
can capture compositional meanings, those aspects of meaning that have long
been studied in formal semantics. To investigate this issue, we propose a
Systematic Generalization testbed based on Natural language Semantics (SyGNS),
whose challenge is to map natural language sentences to multiple forms of
scoped meaning representations, designed to account for various semantic
phenomena. Using SyGNS, we test whether neural networks can systematically
parse sentences involving novel combinations of logical expressions such as
quantifiers and negation. Experiments show that Transformer and GRU models can
generalize to unseen combinations of quantifiers, negations, and modifiers that
are similar to given training instances in form, but not to the others. We also
find that the generalization performance to unseen combinations is better when
the form of meaning representations is simpler. The data and code for SyGNS are
publicly available at https://github.com/verypluming/SyGNS.
| 2021 |
Computation and Language
|
Is Sparse Attention more Interpretable?
|
Sparse attention has been claimed to increase model interpretability under
the assumption that it highlights influential inputs. Yet the attention
distribution is typically over representations internal to the model rather
than the inputs themselves, suggesting this assumption may not have merit. We
build on recent work exploring the interpretability of attention; we design
a set of experiments to help us understand how sparsity affects our ability to
use attention as an explainability tool. On three text classification tasks, we
verify that only a weak relationship between inputs and co-indexed intermediate
representations exists -- under sparse attention and otherwise. Further, we do
not find any plausible mappings from sparse attention distributions to a sparse
set of influential inputs through other avenues. Rather, we observe in this
setting that inducing sparsity may make it less plausible that attention can be
used as a tool for understanding model behavior.
| 2021 |
Computation and Language
|
belabBERT: a Dutch RoBERTa-based language model applied to psychiatric
classification
|
Natural language processing (NLP) is becoming an important means for
automatic recognition of human traits and states, such as intoxication,
presence of psychiatric disorders, presence of airway disorders and states of
stress. Such applications have the potential to be an important pillar for
online help lines, and may gradually be introduced into eHealth modules.
However, NLP is language-specific, and for languages such as Dutch, NLP models
are scarce. As a result, recent Dutch NLP models poorly capture long-range
semantic dependencies across sentences. To overcome this, we present
belabBERT, a new Dutch language model extending the RoBERTa architecture.
belabBERT is trained on a large Dutch corpus (+32 GB) of web crawled texts. We
applied belabBERT to the classification of psychiatric illnesses. First, we
evaluated the strength of text-based classification using belabBERT, and
compared the results to the existing RobBERT model. Then, we compared the
performance of belabBERT to audio classification for psychiatric disorders.
Finally, a brief exploration was performed, extending the framework to a hybrid
text- and audio-based classification. Our results show that belabBERT
outperformed the current best text classification network for Dutch, RobBERT.
belabBERT also outperformed classification based on audio alone.
| 2021 |
Computation and Language
|
LGESQL: Line Graph Enhanced Text-to-SQL Model with Mixed Local and
Non-Local Relations
|
This work aims to tackle the challenging heterogeneous graph encoding problem
in the text-to-SQL task. Previous methods are typically node-centric and merely
utilize different weight matrices to parameterize edge types, which 1) ignore
the rich semantics embedded in the topological structure of edges, and 2) fail
to distinguish local and non-local relations for each node. To this end, we
propose a Line Graph Enhanced Text-to-SQL (LGESQL) model to mine the underlying
relational features without constructing meta-paths. By virtue of the line
graph, messages propagate more efficiently through not only connections between
nodes, but also the topology of directed edges. Furthermore, both local and
non-local relations are integrated distinctively during the graph iteration. We
also design an auxiliary task called graph pruning to improve the
discriminative capability of the encoder. Our framework achieves
state-of-the-art results (62.8% with Glove, 72.0% with Electra) on the
cross-domain text-to-SQL benchmark Spider at the time of writing.
| 2021 |
Computation and Language
|
T-BERT -- Model for Sentiment Analysis of Micro-blogs Integrating Topic
Model and BERT
|
Sentiment analysis (SA) has become an extensive research area in recent years
impacting diverse fields including ecommerce, consumer business, and politics,
driven by increasing adoption and usage of social media platforms. It is
challenging to extract topics and sentiments from unsupervised short texts
emerging in such contexts, as they may contain figurative words, strident
data, and the co-existence of many possible meanings for a single word or
phrase, all of which contribute to incorrect topics. Most prior research is
based on a
specific theme/rhetoric/focused-content on a clean dataset. In the work
reported here, the effectiveness of BERT(Bidirectional Encoder Representations
from Transformers) in sentiment classification tasks from a raw live dataset
taken from a popular microblogging platform is demonstrated. A novel T-BERT
framework is proposed to show the enhanced performance obtainable by combining
latent topics with contextual BERT embeddings. Numerical experiments were
conducted on a collection of about 42,000 examples using the NimbleBox.ai
platform, with a hardware configuration consisting of an Nvidia Tesla K80
(CUDA), a 4-core CPU, and 15 GB RAM on an isolated Google Cloud Platform
instance. The empirical results show that performance improves when latent
topics are added to BERT, with the proposed approach reaching an accuracy of
90.81% on sentiment classification.
| 2021 |
Computation and Language
|
Use of Formal Ethical Reviews in NLP Literature: Historical Trends and
Current Practices
|
Ethical aspects of research in language technologies have received much
attention recently. It is a standard practice to get a study involving human
subjects reviewed and approved by a professional ethics committee/board of the
institution. How commonly do we see mention of ethical approvals in NLP
research? What types of research or aspects of studies are usually subject to
such reviews? With the rising concerns and discourse around the ethics of NLP,
do we also observe a rise in formal ethical reviews of NLP studies? And, if so,
would this imply that there is a heightened awareness of ethical issues that
was previously lacking? We aim to address these questions by conducting a
detailed quantitative and qualitative analysis of the ACL Anthology, as well as
comparing the trends in our field to those of other related disciplines, such
as cognitive science, machine learning, data mining, and systems.
| 2021 |
Computation and Language
|
DynaEval: Unifying Turn and Dialogue Level Evaluation
|
A dialogue is essentially a multi-turn interaction among interlocutors.
Effective evaluation metrics should reflect the dynamics of such interaction.
Existing automatic metrics focus heavily on turn-level quality while ignoring
such dynamics. To this end, we propose DynaEval, a unified
automatic evaluation framework which is not only capable of performing
turn-level evaluation, but also holistically considers the quality of the
entire dialogue. In DynaEval, the graph convolutional network (GCN) is adopted
to model a dialogue in totality, where the graph nodes denote each individual
utterance and the edges represent the dependency between pairs of utterances. A
contrastive loss is then applied to distinguish well-formed dialogues from
carefully constructed negative samples. Experiments show that DynaEval
significantly outperforms the state-of-the-art dialogue coherence model, and
correlates strongly with human judgements across multiple dialogue evaluation
aspects at both turn and dialogue level.
| 2021 |
Computation and Language
|
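The contrastive objective described above can be sketched in a few lines: assuming the GCN already yields a scalar score per dialogue, a margin-based hinge loss pushes well-formed dialogues above their perturbed negatives. The function names and margin value are illustrative, not DynaEval's exact settings.

```python
# Minimal sketch of a margin-based contrastive loss over dialogue scores.
import torch
import torch.nn.functional as F

def contrastive_loss(pos_score: torch.Tensor,
                     neg_score: torch.Tensor,
                     margin: float = 0.1) -> torch.Tensor:
    """Hinge loss pushing positive scores above negatives by `margin`."""
    return F.relu(margin - pos_score + neg_score).mean()

pos = torch.tensor([0.8, 0.6])  # scores for well-formed dialogues
neg = torch.tensor([0.5, 0.7])  # scores for perturbed (e.g., shuffled) dialogues
print(contrastive_loss(pos, neg))  # tensor(0.1000)
```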
Towards Emotional Support Dialog Systems
|
Emotional support is a crucial ability for many conversation scenarios,
including social interactions, mental health support, and customer service
chats. Following reasonable procedures and using various support skills can
help to effectively provide support. However, due to the lack of a
well-designed task and corpora of effective emotional support conversations,
research on building emotional support into dialog systems remains untouched.
In this paper, we define the Emotional Support Conversation (ESC) task and
propose an ESC Framework, which is grounded on the Helping Skills Theory. We
construct an Emotion Support Conversation dataset (ESConv) with rich annotation
(especially support strategy) in a help-seeker and supporter mode. To ensure a
corpus of high-quality conversations that provide examples of effective
emotional support, we make extensive efforts to design training tutorials for
supporters and several mechanisms for quality control during data collection.
Finally, we evaluate state-of-the-art dialog models with respect to the ability
to provide emotional support. Our results show the importance of support
strategies in providing effective emotional support and the utility of ESConv
in training more emotional support systems.
| 2021 |
Computation and Language
|
End-to-End NLP Knowledge Graph Construction
|
This paper studies the end-to-end construction of an NLP Knowledge Graph (KG)
from scientific papers. We focus on extracting four types of relations:
evaluatedOn between tasks and datasets, evaluatedBy between tasks and
evaluation metrics, as well as coreferent and related relations between the
same type of entities. For instance, F1-score is coreferent with F-measure. We
introduce novel methods for each of these relation types and apply our final
framework (SciNLP-KG) to 30,000 NLP papers from ACL Anthology to build a
large-scale KG, which can facilitate automatically constructing scientific
leaderboards for the NLP community. The results of our experiments indicate
that the resulting KG contains high-quality information.
| 2021 |
Computation and Language
|
Detecting Bot-Generated Text by Characterizing Linguistic Accommodation
in Human-Bot Interactions
|
Language generation models' democratization benefits many domains, from
answering health-related questions to enhancing education by providing
AI-driven tutoring services. However, language generation models'
democratization also makes it easier to generate human-like text at scale for
nefarious activities, from spreading misinformation to targeting specific
groups with hate speech. Thus, it is essential to understand how people
interact with bots and develop methods to detect bot-generated text. This paper
shows that bot-generated text detection methods are more robust across datasets
and models if we use information about how people respond to it rather than
using the bot's text directly. We also analyze linguistic alignment, providing
insight into differences between human-human and human-bot conversations.
| 2021 |
Computation and Language
|
A Cluster-based Approach for Improving Isotropy in Contextual Embedding
Space
|
The representation degeneration problem in Contextual Word Representations
(CWRs) hurts the expressiveness of the embedding space by forming an
anisotropic cone where even unrelated words have excessively positive
correlations. Existing techniques for tackling this issue require a learning
process to re-train models with additional objectives and mostly employ a
global assessment to study isotropy. Our quantitative analysis over isotropy
shows that a local assessment could be more accurate due to the clustered
structure of CWRs. Based on this observation, we propose a local cluster-based
method to address the degeneration issue in contextual embedding spaces. We
show that in clusters containing punctuation and stop words, local dominant
directions encode structural information, and that removing them can improve
the performance of CWRs on semantic tasks. Moreover, we find that tense
information in verb
representations dominates sense semantics. We show that removing dominant
directions of verb representations can transform the space to better suit
semantic applications. Our experiments demonstrate that the proposed
cluster-based method can mitigate the degeneration problem on multiple tasks.
| 2021 |
Computation and Language
|
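A hedged sketch of the local, cluster-based idea: cluster the contextual embeddings, then project out each cluster's dominant principal directions. The cluster count and number of removed directions below are illustrative choices, not the paper's settings.

```python
# Cluster embeddings, then remove cluster-local dominant directions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def isotropize(embeddings: np.ndarray, n_clusters: int = 10, n_dirs: int = 3) -> np.ndarray:
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    out = embeddings.copy()
    for c in range(n_clusters):
        idx = labels == c
        centered = out[idx] - out[idx].mean(axis=0)  # zero the cluster mean
        pca = PCA(n_components=n_dirs).fit(centered)
        # subtract the projection onto the cluster's dominant directions
        out[idx] = centered - centered @ pca.components_.T @ pca.components_
    return out

vecs = np.random.randn(1000, 768)
print(isotropize(vecs).shape)  # (1000, 768)
```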
Self-Supervised Document Similarity Ranking via Contextualized Language
Models and Hierarchical Inference
|
We present a novel model for the problem of ranking a collection of documents
according to their semantic similarity to a source (query) document. While the
problem of document-to-document similarity ranking has been studied, most
modern methods are limited to relatively short documents or rely on the
existence of "ground-truth" similarity labels. Yet, in most common real-world
cases, similarity ranking is an unsupervised problem as similarity labels are
unavailable. Moreover, an ideal model should not be restricted by documents'
length. Hence, we introduce SDR, a self-supervised method for document
similarity that can be applied to documents of arbitrary length. Importantly,
SDR can be effectively applied to extremely long documents, exceeding the
4,096-token maximum of Longformer. Extensive evaluations on large document
datasets show that SDR significantly outperforms its alternatives across all
metrics. To accelerate future research on unlabeled long-document similarity
ranking, and as an additional contribution to the community, we publish two
human-annotated test sets for long-document similarity evaluation. The SDR
code and datasets are publicly available.
| 2021 |
Computation and Language
|
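The paragraph-level matching behind such hierarchical inference can be sketched as follows, with a generic sentence encoder standing in for SDR's self-supervised model; the encoder choice and the mean-of-best-matches aggregation are assumptions for illustration.

```python
# Sketch: encode paragraphs, compare pairwise, aggregate to a document score.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

def doc_similarity(doc_a: list, doc_b: list) -> float:
    a = encoder.encode(doc_a, normalize_embeddings=True)  # (n_paras_a, dim)
    b = encoder.encode(doc_b, normalize_embeddings=True)  # (n_paras_b, dim)
    sim = a @ b.T                                         # cosine similarities
    return float(sim.max(axis=1).mean())                  # best match per paragraph

print(doc_similarity(["Cats are small domesticated felines."],
                     ["The domestic cat is a small feline."]))
```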
Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact
Verification
|
Fact verification is a challenging task that requires simultaneously
reasoning and aggregating over multiple retrieved pieces of evidence to
evaluate the truthfulness of a claim. Existing approaches typically (i) explore
the semantic interaction between the claim and evidence at different
granularity levels but fail to capture their topical consistency during the
reasoning process, which we believe is crucial for verification; (ii) aggregate
multiple pieces of evidence equally without considering their implicit stances
to the claim, thereby introducing spurious information. To alleviate the above
issues, we propose a novel topic-aware evidence reasoning and stance-aware
aggregation model for more accurate fact verification, with the following four
key properties: 1) checking topical consistency between the claim and evidence;
2) maintaining topical coherence among multiple pieces of evidence; 3) ensuring
semantic similarity between the global topic information and the semantic
representation of evidence; 4) aggregating evidence based on their implicit
stances to the claim. Extensive experiments conducted on the two benchmark
datasets demonstrate the superiority of the proposed model over several
state-of-the-art approaches for fact verification. The source code can be
obtained from https://github.com/jasenchn/TARSA.
| 2021 |
Computation and Language
|
Figurative Language in Recognizing Textual Entailment
|
We introduce a collection of recognizing textual entailment (RTE) datasets
focused on figurative language. We leverage five existing datasets annotated
for a variety of figurative language -- simile, metaphor, and irony -- and
frame them into over 12,500 RTE examples. We evaluate how well state-of-the-art
models trained on popular RTE datasets capture different aspects of figurative
language. Our results and analyses indicate that these models might not
sufficiently capture figurative language, struggling to perform pragmatic
inference and reasoning about world knowledge. Ultimately, our datasets provide
a challenging testbed for evaluating RTE models.
| 2021 |
Computation and Language
|
IrEne: Interpretable Energy Prediction for Transformers
|
Existing software-based energy measurements of NLP models are not accurate
because they do not consider the complex interactions between energy
consumption and model execution. We present IrEne, an interpretable and
extensible energy prediction system that accurately predicts the inference
energy consumption of a wide range of Transformer-based NLP models. IrEne
constructs a model tree graph that breaks down the NLP model into modules that
are further broken down into low-level machine learning (ML) primitives. IrEne
predicts the inference energy consumption of the ML primitives as a function of
generalizable features and fine-grained runtime resource usage. IrEne then
aggregates these low-level predictions recursively to predict the energy of
each module and finally of the entire model. Experiments across multiple
Transformer models show IrEne predicts inference energy consumption of
transformer models with an error of under 7% compared to the ground truth. In
contrast, existing energy models see an error of over 50%. We also show how
IrEne can be used to conduct energy bottleneck analysis and to easily evaluate
the energy impact of different architectural choices. We release the code and
data at https://github.com/StonyBrookNLP/irene.
| 2021 |
Computation and Language
|
Uncovering Constraint-Based Behavior in Neural Models via Targeted
Fine-Tuning
|
A growing body of literature has focused on detailing the linguistic
knowledge embedded in large, pretrained language models. Existing work has
shown that non-linguistic biases in models can drive model behavior away from
linguistic generalizations. We hypothesized that competing linguistic processes
within a language, rather than just non-linguistic model biases, could obscure
underlying linguistic knowledge. We tested this claim by exploring a single
phenomenon in four languages: English, Chinese, Spanish, and Italian. While
human behavior has been found to be similar across languages, we find
cross-linguistic variation in model behavior. We show that competing processes
in a language act as constraints on model behavior and demonstrate that
targeted fine-tuning can re-weight the learned constraints, uncovering
otherwise dormant linguistic knowledge in models. Our results suggest that
models need to learn both the linguistic constraints in a language and their
relative ranking, with mismatches in either producing non-human-like behavior.
| 2021 |
Computation and Language
|
Cross-document Coreference Resolution over Predicted Mentions
|
Coreference resolution has been mostly investigated within a single document
scope, showing impressive progress in recent years based on end-to-end models.
However, the more challenging task of cross-document (CD) coreference
resolution has remained relatively under-explored, with the few recent models
applied only to gold mentions. Here, we introduce the first end-to-end model
for CD coreference resolution from raw text, which extends the prominent model
for within-document coreference to the CD setting. Our model achieves
competitive results for event and entity coreference resolution on gold
mentions. More importantly, we set first baseline results, on the standard ECB+
dataset, for CD coreference resolution over predicted mentions. Further, our
model is simpler and more efficient than recent CD coreference resolution
systems, while not using any external resources.
| 2021 |
Computation and Language
|
Differential Privacy for Text Analytics via Natural Text Sanitization
|
Texts convey sophisticated knowledge. However, texts also convey sensitive
information. Despite the success of general-purpose language models and
domain-specific mechanisms with differential privacy (DP), existing text
sanitization mechanisms still provide low utility, hampered by the curse of
high-dimensional text representations. The companion issue of utilizing
sanitized texts for downstream analytics is also under-explored. This paper
takes a direct approach to text sanitization. Our insight is to consider both
sensitivity and similarity via our new local DP notion. The sanitized texts
also contribute to our sanitization-aware pretraining and fine-tuning, enabling
privacy-preserving natural language processing over the BERT language model
with promising utility. Surprisingly, the high utility does not boost the
success rate of inference attacks.
| 2021 |
Computation and Language
|
A Unified Generative Framework for Various NER Subtasks
|
Named Entity Recognition (NER) is the task of identifying spans that
represent entities in sentences. Depending on whether the entity spans are
flat, nested, or discontinuous, the NER task can be categorized into the flat
NER, nested NER, and discontinuous NER subtasks. These subtasks have mainly
been solved by token-level sequence labelling or span-level classification.
However, such solutions can hardly tackle all three kinds of NER subtasks
concurrently. To
that end, we propose to formulate the NER subtasks as an entity span sequence
generation task, which can be solved by a unified sequence-to-sequence
(Seq2Seq) framework. Based on our unified framework, we can leverage the
pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the
special design of the tagging schema or ways to enumerate spans. We exploit
three types of entity representations to linearize entities into a sequence.
Our proposed framework is easy-to-implement and achieves state-of-the-art
(SoTA) or near SoTA performance on eight English NER datasets, including two
flat NER datasets, three nested NER datasets, and three discontinuous NER
datasets.
| 2021 |
Computation and Language
|
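As a hedged sketch of what "linearizing entities into a sequence" can look like, the function below emits span positions followed by a type tag, a format that covers flat, nested, and discontinuous spans alike; the paper's exact target representation may differ.

```python
# Hypothetical entity linearization for Seq2Seq NER targets.
def linearize(entities):
    """entities: list of (token_indices, entity_type) pairs."""
    target = []
    for indices, etype in entities:
        target.extend(indices)  # span positions; gaps encode discontinuity
        target.append(etype)    # type tag closes the entity
    return target

# tokens 2, 3, and 5 form one discontinuous entity; token 7 is a flat one
print(linearize([([2, 3, 5], "<symptom>"), ([7], "<drug>")]))
# -> [2, 3, 5, '<symptom>', 7, '<drug>']
```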
Improving low-resource ASR performance with untranscribed out-of-domain
data
|
Semi-supervised training (SST) is a common approach to leverage
untranscribed/unlabeled speech data to improve automatic speech recognition
performance in low-resource languages. However, if the available unlabeled
speech is mismatched to the target domain, SST is not as effective, and in many
cases performs worse than the original system. In this paper, we address the
issue of low-resource ASR when only untranscribed out-of-domain speech data is
readily available in the target language. Specifically, we look to improve
performance on conversational/telephony speech (target domain) using web
resources, in particular YouTube data, which more closely resembles
news/topical broadcast data. Leveraging SST, we show that while in some cases
simply pooling the out-of-domain data with the training data lowers word error
rate (WER), in all cases, we see improvements if we train first with the
out-of-domain data and then fine-tune the resulting model with the original
training data. Using 2000 hours of speed-perturbed YouTube audio in each target
language, with semi-supervised transcripts, we show improvements on multiple
languages/data sets, of up to 16.3% relative improvement in WER over the
baseline systems and up to 7.4% relative improvement in WER over a system that
simply pools the out-of-domain data with the training data.
| 2021 |
Computation and Language
|
Metaphor Generation with Conceptual Mappings
|
Generating metaphors is a difficult task as it requires understanding nuanced
relationships between abstract concepts. In this paper, we aim to generate a
metaphoric sentence given a literal expression by replacing relevant verbs.
Guided by conceptual metaphor theory, we propose to control the generation
process by encoding conceptual mappings between cognitive domains to generate
meaningful metaphoric expressions. To achieve this, we develop two methods: 1)
using FrameNet-based embeddings to learn mappings between domains and applying
them at the lexical level (CM-Lex), and 2) deriving source/target pairs to
train a controlled seq-to-seq generation model (CM-BART). We assess our methods
through automatic and human evaluation for basic metaphoricity and conceptual
metaphor presence. We show that the unsupervised CM-Lex model is competitive
with recent deep learning metaphor generation systems, and CM-BART outperforms
all other models both in automatic and human evaluations.
| 2021 |
Computation and Language
|
Lower Perplexity is Not Always Human-Like
|
In computational psycholinguistics, various language models have been
evaluated against human reading behavior (e.g., eye movement) to build
human-like computational models. However, most previous efforts have focused
almost exclusively on English, despite the community's recent trend towards
linguistic universals. To fill this gap, this paper investigates whether
established results in computational psycholinguistics generalize across
languages. Specifically, we re-examine an established generalization -- the
lower perplexity a language model has, the more human-like the language model
is -- in Japanese, a language typologically different from English. Our
experiments demonstrate that this established
generalization exhibits a surprising lack of universality; namely, lower
perplexity is not always human-like. Moreover, this discrepancy between English
and Japanese is further explored from the perspective of (non-)uniform
information density. Overall, our results suggest that a cross-lingual
evaluation will be necessary to construct human-like computational models.
| 2022 |
Computation and Language
|
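For reference, the perplexity compared against reading behavior in studies like this is the exponentiated mean token-level cross-entropy of an autoregressive LM. The model and sentence below are illustrative stand-ins (the paper evaluates Japanese as well as English models).

```python
# Compute an autoregressive LM's perplexity for one sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return torch.exp(loss).item()

print(perplexity("The cat sat on the mat."))
```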
Multilingual Medical Question Answering and Information Retrieval for
Rural Health Intelligence Access
|
In rural regions of several developing countries, access to quality
healthcare, medical infrastructure, and professional diagnosis is largely
unavailable. Many of these regions are gradually gaining access to internet
infrastructure, although not with a strong enough connection to allow for
sustained communication with a medical practitioner. Many deaths resulting
from this lack of medical access, the absence of patients' previous health
records, and the unavailability of information in indigenous languages could
easily be prevented. In this paper, we describe an approach that leverages the
phenomenal progress in Machine Learning and NLP (Natural Language Processing)
techniques to design a low-resource, multilingual model that serves as a
preliminary first-point-of-contact medical assistant. Our contribution includes defining
the NLP pipeline required for named-entity-recognition, language-agnostic
sentence embedding, natural language translation, information retrieval,
question answering, and generative pre-training for final query processing. We
obtain promising results for this pipeline and preliminary results for EHR
(Electronic Health Record) analysis with text summarization for medical
practitioners to peruse for their diagnosis. Through this NLP pipeline, we aim
to provide preliminary medical information to the user and do not claim to
supplant diagnosis from qualified medical practitioners. Using the input from
subject matter experts, we have compiled a large corpus to pre-train and
fine-tune our BioBERT based NLP model for the specific tasks. We expect recent
advances in NLP architectures, several of which are efficient and
privacy-preserving models, to further the impact of our solution and improve on
individual task performance.
| 2021 |
Computation and Language
|
Uni-Encoder: A Fast and Accurate Response Selection Paradigm for
Generation-Based Dialogue Systems
|
Sample-and-rank is a key decoding strategy for modern generation-based
dialogue systems. It helps achieve diverse and high-quality responses by
selecting an answer from a small pool of generated candidates. The current
state-of-the-art ranking methods mainly use an encoding paradigm called
Cross-Encoder, which separately encodes each context-candidate pair and ranks
the candidates according to their fitness scores. However, Cross-Encoder
repeatedly encodes the same lengthy context for each candidate, resulting in
high computational costs. Poly-Encoder addresses this problem by reducing the
interaction between context and candidates, but at the price of a performance
drop. In this work, we develop a new paradigm called Uni-Encoder, which keeps
the full attention over each pair as in Cross-Encoder while encoding the
context only once, as in Poly-Encoder. Uni-Encoder encodes all the candidates with
the context in one forward pass. We use the same positional embedding for all
candidates to ensure they are treated equally and design a new attention
mechanism to avoid confusion. Our Uni-Encoder can simulate other ranking
paradigms using different attention and response concatenation methods.
Extensive experiments show that our proposed paradigm achieves new
state-of-the-art results on four benchmark datasets with high computational
efficiency. For instance, it improves R10@1 by 2.9% with an approximately 4X
faster inference speed on the Ubuntu V2 dataset.
| 2023 |
Computation and Language
|
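A small sketch of the two mechanisms the abstract describes: candidates reuse the same positional ids, and an attention mask lets every token attend to the context while candidates cannot attend to each other. Sizes are invented; the real model wires these into a Transformer encoder.

```python
# Shared candidate positions and a context-plus-self attention mask.
import torch

ctx_len, cand_len, n_cands = 6, 4, 3
total = ctx_len + n_cands * cand_len

# candidates all reuse the positions right after the context
position_ids = list(range(ctx_len)) + list(range(ctx_len, ctx_len + cand_len)) * n_cands

mask = torch.zeros(total, total, dtype=torch.bool)
mask[:, :ctx_len] = True  # every token sees the context
for i in range(n_cands):
    s = ctx_len + i * cand_len
    mask[s:s + cand_len, s:s + cand_len] = True  # each candidate sees only itself

print(len(position_ids), mask.shape)  # 18 torch.Size([18, 18])
```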
More Identifiable yet Equally Performant Transformers for Text
Classification
|
Interpretability is an important aspect of the trustworthiness of a model's
predictions. Transformer's predictions are widely explained by the attention
weights, i.e., a probability distribution generated at its self-attention unit
(head). Recent empirical studies provide evidence that attention weights are
not explanations by showing that they are not unique. A recent study offered a
theoretical justification for this observation by proving the
non-identifiability of attention weights. For a given input to a head and its
output, we call the generated attention weights identifiable if they are
unique. In this work, we provide a deeper theoretical analysis and empirical
observations on the identifiability of attention weights. By uncovering the
hidden role of the key vector, which previous works ignored, we find that
attention weights are more identifiable than currently perceived. However, the
weights are still prone to being non-unique, which makes them unfit for
interpretation. To tackle this issue, we provide a variant of the encoder
layer that decouples the relationship between key and value vector and provides
identifiable weights up to the desired length of the input. We prove the
applicability of such variations by providing empirical justifications on
varied text classification tasks. The implementations are available at
https://github.com/declare-lab/identifiable-transformers.
| 2021 |
Computation and Language
|
Enriching Transformers with Structured Tensor-Product Representations
for Abstractive Summarization
|
Abstractive summarization, the task of generating a concise summary of input
documents, requires: (1) reasoning over the source document to determine the
salient pieces of information scattered across the long document, and (2)
composing a cohesive text by reconstructing these salient facts into a shorter
summary that faithfully reflects the complex relations connecting these facts.
In this paper, we adapt TP-TRANSFORMER (Schlag et al., 2019), an architecture
that enriches the original Transformer (Vaswani et al., 2017) with the
explicitly compositional Tensor Product Representation (TPR), for the task of
abstractive summarization. The key feature of our model is a structural bias
that we introduce by encoding two separate representations for each token to
represent the syntactic structure (with role vectors) and semantic content
(with filler vectors) separately. The model then binds the role and filler
vectors into the TPR as the layer output. We argue that the structured
intermediate representations enable the model to take better control of the
contents (salient facts) and structures (the syntax that connects the facts)
when generating the summary. Empirically, we show that our TP-TRANSFORMER
outperforms the Transformer and the original TP-TRANSFORMER significantly on
several abstractive summarization datasets based on both automatic and human
evaluations. On several syntactic and semantic probing tasks, we demonstrate
the emergent structural information in the role vectors and improved syntactic
interpretability in the TPR layer outputs. Code and models are available at
https://github.com/jiangycTarheel/TPT-Summ.
| 2021 |
Computation and Language
|
On the Distribution, Sparsity, and Inference-time Quantization of
Attention Values in Transformers
|
How much information do NLP tasks really need from a transformer's attention
mechanism at application-time (inference)? From recent work, we know that there
is sparsity in transformers and that the floating-point values within their
computation can be discretized to fewer values with minimal loss in task
accuracy.
However, this requires retraining or even creating entirely new models, both of
which can be expensive and carbon-emitting. Focused on optimizations that do
not require training, we systematically study the full range of typical
attention values necessary. This informs the design of an inference-time
quantization technique using both pruning and log-scaled mapping which produces
only a few (e.g. $2^3$) unique values. Over the tasks of question answering and
sentiment analysis, we find nearly 80% of attention values can be pruned to
zeros with minimal ($< 1.0\%$) relative loss in accuracy. We use this pruning
technique in conjunction with quantizing the attention values to only a 3-bit
format, without retraining, resulting in only a 0.8% accuracy reduction on
question answering with fine-tuned RoBERTa.
| 2021 |
Computation and Language
|
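A hedged reconstruction of the scheme described above: prune the smallest attention values to zero, then map the survivors onto a few log-spaced values ($2^3$ here). The threshold selection and bin placement are illustrative assumptions, not the paper's exact procedure.

```python
# Prune small attention values, then log-scale quantize the survivors.
import numpy as np

def prune_and_quantize(attn: np.ndarray, keep_frac: float = 0.2, bits: int = 3) -> np.ndarray:
    flat = np.sort(attn.ravel())
    threshold = flat[int((1 - keep_frac) * flat.size)]  # drop the ~80% smallest
    pruned = np.where(attn >= threshold, attn, 0.0)

    kept = pruned[pruned > 0]
    centers = np.geomspace(kept.min(), kept.max(), num=2 ** bits)  # log-spaced bins
    idx = np.abs(np.log(kept[:, None]) - np.log(centers[None, :])).argmin(axis=1)
    pruned[pruned > 0] = centers[idx]  # snap each survivor to its nearest bin
    return pruned

attn = np.random.dirichlet(np.ones(64), size=8)  # 8 rows of softmax weights
q = prune_and_quantize(attn)
print(np.unique(q).size)  # at most 2**3 + 1 distinct values, including zero
```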
multiPRover: Generating Multiple Proofs for Improved Interpretability in
Rule Reasoning
|
We focus on a type of linguistic formal reasoning where the goal is to reason
over explicit knowledge in the form of natural language facts and rules (Clark
et al., 2020). A recent work, named PRover (Saha et al., 2020), performs such
reasoning by answering a question and also generating a proof graph that
explains the answer. However, compositional reasoning is not always unique and
there may be multiple ways of reaching the correct answer. Thus, in our work,
we address a new and challenging problem of generating multiple proof graphs
for reasoning over natural language rule-bases. Each proof provides a different
rationale for the answer, thereby improving the interpretability of such
reasoning systems. In order to jointly learn from all proof graphs and exploit
the correlations between multiple proofs for a question, we pose this task as a
set generation problem over structured output spaces where each proof is
represented as a directed graph. We propose two variants of a proof-set
generation model, multiPRover. Our first model, Multilabel-multiPRover,
generates a set of proofs via multi-label classification and implicit
conditioning between the proofs; while the second model, Iterative-multiPRover,
generates proofs iteratively by explicitly conditioning on the previously
generated proofs. Experiments on multiple synthetic, zero-shot, and
human-paraphrased datasets reveal that both multiPRover models significantly
outperform PRover on datasets containing multiple gold proofs.
Iterative-multiPRover obtains state-of-the-art proof F1 in zero-shot scenarios
where all examples have single correct proofs. It also generalizes better to
questions requiring higher depths of reasoning where multiple proofs are more
frequent. Our code and models are publicly available at
https://github.com/swarnaHub/multiPRover
| 2021 |
Computation and Language
|
SMURF: SeMantic and linguistic UndeRstanding Fusion for Caption
Evaluation via Typicality Analysis
|
The open-ended nature of visual captioning makes it a challenging area for
evaluation. The majority of proposed models rely on specialized training to
improve human-correlation, resulting in limited adoption, generalizability, and
explainability. We introduce "typicality", a new formulation of evaluation
rooted in information theory, which is uniquely suited for problems lacking a
definite ground truth. Typicality serves as our framework to develop a novel
semantic comparison, SPARCS, as well as referenceless fluency evaluation
metrics. Over the course of our analysis, two separate dimensions of fluency
naturally emerge: style, captured by metric SPURTS, and grammar, captured in
the form of grammatical outlier penalties. Through extensive experiments and
ablation studies on benchmark datasets, we show how these decomposed dimensions
of semantics and fluency provide greater system-level insight into captioner
differences. Our proposed metrics along with their combination, SMURF, achieve
state-of-the-art correlation with human judgment when compared with other
rule-based evaluation metrics.
| 2022 |
Computation and Language
|
Attention-based Contextual Language Model Adaptation for Speech
Recognition
|
Language modeling (LM) for automatic speech recognition (ASR) does not
usually incorporate utterance level contextual information. For some domains
like voice assistants, however, additional context, such as the time at which
an utterance was spoken, provides a rich input signal. We introduce an
attention mechanism for training neural speech recognition language models on
both text and non-linguistic contextual data. When applied to a large
de-identified dataset of utterances collected by a popular voice assistant
platform, our method reduces perplexity by 7.0% relative over a standard LM
that does not incorporate contextual information. When evaluated on utterances
extracted from the long tail of the dataset, our method improves perplexity by
9.0% relative over a standard LM and by over 2.8% relative when compared to a
state-of-the-art model for contextual LM.
| 2021 |
Computation and Language
|
BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively
Inspired Orthographic Adversarial Attacks
|
Adversarial attacks expose important blind spots of deep learning systems.
While word- and sentence-level attack scenarios mostly deal with finding
semantic paraphrases of the input that fool NLP models, character-level attacks
typically insert typos into the input stream. It is commonly thought that these
are easier to defend via spelling correction modules. In this work, we show
that both a standard spellchecker and the approach of Pruthi et al. (2019),
which trains to defend against insertions, deletions and swaps, perform poorly
on the character-level benchmark recently proposed in Eger and Benz (2020)
which includes more challenging attacks such as visual and phonetic
perturbations and missing word segmentations. In contrast, we show that an
untrained iterative approach which combines context-independent character-level
information with context-dependent information from BERT's masked language
modeling can perform on par with human crowd-workers from Amazon Mechanical
Turk (AMT) supervised via 3-shot learning.
| 2021 |
Computation and Language
|
Lightweight Adapter Tuning for Multilingual Speech Translation
|
Adapter modules were recently introduced as an efficient alternative to
fine-tuning in NLP. Adapter tuning consists of freezing the pretrained parameters
of a model and injecting lightweight modules between layers, resulting in the
addition of only a small number of task-specific trainable parameters. While
adapter tuning was investigated for multilingual neural machine translation,
this paper proposes a comprehensive analysis of adapters for multilingual
speech translation (ST). Starting from different pre-trained models (a
multilingual ST trained on parallel data or a multilingual BART (mBART) trained
on non-parallel multilingual data), we show that adapters can be used to: (a)
efficiently specialize ST to specific language pairs with a low extra cost in
terms of parameters, and (b) transfer from an automatic speech recognition
(ASR) task and an mBART pre-trained model to a multilingual ST task.
Experiments show that adapter tuning offers results competitive with full
fine-tuning while being much more parameter-efficient.
| 2021 |
Computation and Language
|
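A minimal PyTorch sketch of the adapter pattern described above: a trainable bottleneck with a residual connection, inserted between frozen layers. Dimensions are illustrative.

```python
# Bottleneck adapter block: few trainable parameters, residual pass-through.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model: int = 512, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # residual keeps the frozen model's representation intact
        return x + self.up(self.act(self.down(x)))

adapter = Adapter()
h = torch.randn(2, 10, 512)  # (batch, time, d_model) hidden states
print(adapter(h).shape)      # torch.Size([2, 10, 512])
print(sum(p.numel() for p in adapter.parameters()))  # ~66k trainable parameters
```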
Ethical-Advice Taker: Do Language Models Understand Natural Language
Interventions?
|
Is it possible to use natural language to intervene in a model's behavior and
alter its prediction in a desired way? We investigate the effectiveness of
natural language interventions for reading-comprehension systems, studying this
in the context of social stereotypes. Specifically, we propose a new language
understanding task, Linguistic Ethical Interventions (LEI), where the goal is
to amend a question-answering (QA) model's unethical behavior by communicating
context-specific principles of ethics and equity to it. To this end, we build
upon recent methods for quantifying a system's social stereotypes, augmenting
them with different kinds of ethical interventions and the desired model
behavior under such interventions. Our zero-shot evaluation finds that even
today's powerful neural language models are extremely poor ethical-advice
takers, that is, they respond surprisingly little to ethical interventions even
though these interventions are stated as simple sentences. Few-shot learning
improves model behavior but remains far from the desired outcome, especially
when evaluated for various types of generalization. Our new task thus poses a
novel language understanding challenge for the community.
| 2021 |
Computation and Language
|
Evaluating the Efficacy of Summarization Evaluation across Languages
|
While automatic summarization evaluation methods developed for English are
routinely applied to other languages, this is the first attempt to
systematically quantify their panlinguistic efficacy. We take a summarization
corpus for eight different languages, and manually annotate generated summaries
for focus (precision) and coverage (recall). Based on this, we evaluate 19
summarization evaluation metrics, and find that using multilingual BERT within
BERTScore performs well across all languages, at a level above that for
English.
| 2021 |
Computation and Language
|
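The best-performing configuration reported above, multilingual BERT inside BERTScore, can be reproduced in spirit with the reference bert-score package; the candidate/reference pair below is an invented example.

```python
# BERTScore backed by multilingual BERT via the bert-score package.
from bert_score import score

cands = ["Die Katze sitzt auf der Matte."]
refs = ["Eine Katze liegt auf der Matte."]

P, R, F1 = score(cands, refs, model_type="bert-base-multilingual-cased")
print(f"P={P.item():.3f} R={R.item():.3f} F1={F1.item():.3f}")
```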
MedNLI Is Not Immune: Natural Language Inference Artifacts in the
Clinical Domain
|
Crowdworker-constructed natural language inference (NLI) datasets have been
found to contain statistical artifacts associated with the annotation process
that allow hypothesis-only classifiers to achieve better-than-random
performance (Poliak et al., 2018; Gururangan et al., 2018; Tsuchiya, 2018).
We investigate whether MedNLI, a physician-annotated dataset with premises
extracted from clinical notes, contains such artifacts (Romanov and Shivade,
2018). We find that entailed hypotheses contain generic versions of specific
concepts in the premise, as well as modifiers related to responsiveness,
duration, and probability. Neutral hypotheses feature conditions and behaviors
that co-occur with, or cause, the condition(s) in the premise. Contradiction
hypotheses feature explicit negation of the premise and implicit negation via
assertion of good health. Adversarial filtering demonstrates that performance
degrades when evaluated on the difficult subset. We provide partition
information and recommendations for alternative dataset construction strategies
for knowledge-intensive domains.
| 2021 |
Computation and Language
|
Knowing More About Questions Can Help: Improving Calibration in Question
Answering
|
We study calibration in question answering, estimating whether a model
correctly predicts the answer for each question. Unlike prior work, which
mainly relies on the model's confidence score, our calibrator incorporates
information about the input example (e.g., the question and the evidence
context). Together
with data augmentation via back translation, our simple approach achieves 5-10%
gains in calibration accuracy on reading comprehension benchmarks. Furthermore,
we present the first calibration study in the open retrieval setting, comparing
the calibration accuracy of retrieval-based span prediction models and answer
generation models. Here again, our approach shows consistent gains over
calibrators relying on the model confidence. Our simple and efficient
calibrator can be easily adapted to many tasks and model architectures, showing
robust gains in all settings.
| 2021 |
Computation and Language
|
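A hedged sketch of a feature-based calibrator in this spirit: rather than thresholding the QA model's confidence alone, fit a small classifier on input-derived features. The features, toy data, and classifier choice are illustrative stand-ins, not the paper's calibrator.

```python
# Calibrator over input features, not just model confidence.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# toy features per question: [model confidence, question length, evidence length]
X = np.array([[0.91, 8, 120], [0.55, 14, 40], [0.87, 6, 15], [0.40, 20, 200]])
y = np.array([1, 0, 0, 0])  # 1 = the QA model answered correctly

calibrator = GradientBoostingClassifier().fit(X, y)
print(calibrator.predict_proba([[0.90, 7, 110]])[0, 1])  # P(answer is correct)
```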
Dissecting Generation Modes for Abstractive Summarization Models via
Ablation and Attribution
|
Despite the prominence of neural abstractive summarization models, we know
little about how they actually form summaries and how to understand where their
decisions come from. We propose a two-step method to interpret summarization
model decisions. We first analyze the model's behavior by ablating the full
model to categorize each decoder decision into one of several generation modes:
roughly, is the model behaving like a language model, is it relying heavily on
the input, or is it somewhere in between? After isolating decisions that do
depend on the input, we explore interpreting these decisions using several
different attribution methods. We compare these techniques based on their
ability to select content and reconstruct the model's predicted token from
perturbations of the input, thus revealing whether highlighted attributions are
truly important for the generation of the next token. While this machinery can
be broadly useful even beyond summarization, we specifically demonstrate its
capability to identify phrases the summarization model has memorized and
determine where in the training pipeline this memorization happened, as well as
study complex generation phenomena like sentence fusion on a per-instance
basis.
| 2021 |
Computation and Language
|
"You made me feel this way": Investigating Partners' Influence in
Predicting Emotions in Couples' Conflict Interactions using Speech Data
|
How romantic partners interact with each other during a conflict influences
how they feel at the end of the interaction and is predictive of whether the
partners stay together in the long term. Hence understanding the emotions of
each partner is important. Yet current approaches rely on self-reports, which
are burdensome and hence limit the frequency of data collection. Automatic
emotion prediction could address this challenge. Insights
from psychology research indicate that partners' behaviors influence each
other's emotions in conflict interaction and hence, the behavior of both
partners could be considered to better predict each partner's emotion. However,
it is yet to be investigated how doing so compares to only using each partner's
own behavior in terms of emotion prediction performance. In this work, we used
BERT to extract linguistic features (i.e., what partners said) and openSMILE to
extract paralinguistic features (i.e., how they said it) from a data set of 368
German-speaking Swiss couples (N = 736 individuals) who were videotaped during
an 8-minute conflict interaction in the laboratory. Based on those features,
we trained machine learning models to predict if partners feel positive or
negative after the conflict interaction. Our results show that including the
behavior of the other partner improves the prediction performance. Furthermore,
for men, considering how their female partners spoke is most important and for
women considering what their male partner said is most important in getting
better prediction performance. This work is a step towards automatically
recognizing each partner's emotion based on the behavior of both, which would
enable a better understanding of couples in research, therapy, and the real
world.
| 2021 |
Computation and Language
|
BERT meets LIWC: Exploring State-of-the-Art Language Models for
Predicting Communication Behavior in Couples' Conflict Interactions
|
Many processes in psychology are complex, such as dyadic interactions between
two interacting partners (e.g. patient-therapist, intimate relationship
partners). Nevertheless, many basic questions about interactions are difficult
to investigate because dyadic processes can occur within a person and between
partners, are based on multimodal aspects of behavior, and unfold rapidly.
Current analyses are mainly based on the behavioral coding method, whereby
human coders annotate behavior based on a coding schema. But coding is
labor-intensive, expensive, slow, and focuses on only a few modalities. Current approaches
in psychology use LIWC for analyzing couples' interactions. However, advances
in natural language processing such as BERT could enable the development of
systems to potentially automate behavioral coding, which in turn could
substantially improve psychological research. In this work, we train machine
learning models to automatically predict positive and negative communication
behavioral codes of 368 German-speaking Swiss couples during an 8-minute
conflict interaction on a fine-grained scale (10-seconds sequences) using
linguistic features and paralinguistic features derived with openSMILE. Our
results show that both simpler TF-IDF features as well as more complex BERT
features performed better than LIWC, and that adding paralinguistic features
did not improve the performance. These results suggest it might be time to
consider modern alternatives to LIWC, the de facto linguistic features in
psychology, for prediction tasks in couples research. This work is a further
step towards the automated coding of couples' behavior which could enhance
couple research and therapy, and be utilized for other dyadic interactions as
well.
| 2021 |
Computation and Language
|
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation
Understanding
|
Recently, various neural models for multi-party conversation (MPC) have
achieved impressive improvements on a variety of tasks such as addressee
recognition, speaker identification and response prediction. However, these
existing methods on MPC usually represent interlocutors and utterances
individually and ignore the inherently complicated structure of MPC, which may
provide crucial interlocutor and utterance semantics and would enhance the
conversation understanding process. To this end, we present MPC-BERT, a
pre-trained model for MPC understanding that considers learning who says what
to whom in a unified model with several elaborated self-supervised tasks.
Particularly, these tasks can be generally categorized into (1) interlocutor
structure modeling including reply-to utterance recognition, identical speaker
searching and pointer consistency distinction, and (2) utterance semantics
modeling including masked shared utterance restoration and shared node
detection. We evaluate MPC-BERT on three downstream tasks including addressee
recognition, speaker identification and response selection. Experimental
results show that MPC-BERT outperforms previous methods by large margins and
achieves new state-of-the-art performance on all three downstream tasks at two
benchmarks.
| 2021 |
Computation and Language
|
Comparing Acoustic-based Approaches for Alzheimer's Disease Detection
|
Robust strategies for Alzheimer's disease (AD) detection are important, given
the high prevalence of AD. In this paper, we study the performance and
generalizability of three approaches for AD detection from speech on the recent
ADReSSo challenge dataset: 1) using conventional acoustic features, 2) using
novel pre-trained acoustic embeddings, and 3) combining acoustic features and
embeddings. We find that while feature-based approaches have higher precision,
classification approaches relying on pre-trained embeddings achieve higher and
more balanced cross-validated performance across multiple metrics. Further,
embedding-only approaches are more
generalizable. Our best model outperforms the acoustic baseline in the
challenge by 2.8%.
| 2,022 |
Computation and Language
|
Adjacency List Oriented Relational Fact Extraction via Adaptive
Multi-task Learning
|
Relational fact extraction aims to extract semantic triplets from
unstructured text. In this work, we show that all of the relational fact
extraction models can be organized according to a graph-oriented analytical
perspective. An efficient model, aDjacency lIst oRiented rElational faCT
(DIRECT), is proposed based on this analytical framework. To alleviate
challenges of error propagation and sub-task loss equilibrium, DIRECT employs a
novel adaptive multi-task learning strategy with dynamic sub-task loss
balancing. Extensive experiments are conducted on two benchmark datasets, and
results prove that the proposed model outperforms a series of state-of-the-art
(SoTA) models for relational triplet extraction.
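To make the adjacency-list framing concrete, here is a minimal sketch (not the DIRECT model itself) of storing extracted relational triplets as an adjacency list; the triplets are invented for illustration:

```python
# Adjacency-list view of relational facts: each head entity maps to a list
# of (relation, tail) pairs. The triplets below are illustrative only.
triplets = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
]

adjacency = {}
for head, relation, tail in triplets:
    adjacency.setdefault(head, []).append((relation, tail))

# {'Marie Curie': [('born_in', 'Warsaw'), ('field', 'Physics')],
#  'Warsaw': [('capital_of', 'Poland')]}
print(adjacency)
```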
| 2,021 |
Computation and Language
|
Can Generative Pre-trained Language Models Serve as Knowledge Bases for
Closed-book QA?
|
Recent work has investigated the interesting question of using pre-trained
language models (PLMs) as knowledge bases for answering open questions.
However, existing work is limited in using small benchmarks with high
test-train overlaps. We construct a new dataset of closed-book QA using SQuAD,
and investigate the performance of BART. Experiments show that it is
challenging for BART to remember training facts with high precision, and also
challenging to answer closed-book questions even if relevant knowledge is
retained. Some promising directions are found, including decoupling the
knowledge memorization process from the QA fine-tuning process, and forcing the
model to recall relevant knowledge when answering questions.
| 2,021 |
Computation and Language
|
Discriminative Reasoning for Document-level Relation Extraction
|
Document-level relation extraction (DocRE) models generally use graph
networks to implicitly model the reasoning skill (i.e., pattern recognition,
logical reasoning, coreference reasoning, etc.) related to the relation between
one entity pair in a document. In this paper, we propose a novel discriminative
reasoning framework to explicitly model the paths of these reasoning skills
between each entity pair in this document. Thus, a discriminative reasoning
network is designed to estimate the relation probability distribution of
different reasoning paths based on the constructed graph and vectorized
document contexts for each entity pair, thereby recognizing their relation.
Experimental results show that our method outperforms the previous
state-of-the-art performance on the large-scale DocRE dataset. The code is
publicly available at https://github.com/xwjim/DRN.
| 2,021 |
Computation and Language
|
The Limitations of Limited Context for Constituency Parsing
|
Incorporating syntax into neural approaches in NLP has a multitude of
practical and scientific benefits. For instance, a language model that is
syntax-aware is likely to be able to produce better samples; even a
discriminative model like BERT with a syntax module could be used for core NLP
tasks like unsupervised syntactic parsing. Rapid progress in recent years was
arguably spurred on by the empirical success of the Parsing-Reading-Predict
architecture of (Shen et al., 2018a), later simplified by the Order Neuron LSTM
of (Shen et al., 2019). Most notably, this is the first time neural approaches
were able to successfully perform unsupervised syntactic parsing (evaluated by
various metrics like F1 score).
However, even heuristic (much less fully mathematical) understanding of why
and when these architectures work is lagging severely behind. In this work, we
answer representational questions raised by the architectures in (Shen et al.,
2018a, 2019), as well as some transition-based syntax-aware language models
(Dyer et al., 2016): what kind of syntactic structure can current neural
approaches to syntax represent? Concretely, we ground this question in the
sandbox of probabilistic context-free-grammars (PCFGs), and identify a key
aspect of the representational power of these approaches: the amount and
directionality of context that the predictor has access to when forced to make
a parsing decision. We show that with limited context (either bounded or
unidirectional), there are PCFGs for which these approaches cannot represent
the max-likelihood parse; conversely, if the context is unlimited, they can
represent the max-likelihood parse of any PCFG.
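As a concrete reference point for the max-likelihood parse objective discussed above, the following is a minimal Viterbi-CKY sketch over a toy PCFG in Chomsky normal form (grammar and probabilities are invented); the paper's point is that limited-context predictors cannot always reproduce the parse this unbounded-context computation finds:

```python
import math
from collections import defaultdict

# Toy PCFG in Chomsky normal form (rules and probabilities are illustrative).
lexical = {("N", "dogs"): 0.5, ("N", "bones"): 0.5, ("V", "chase"): 1.0}
binary = {("S", "N", "VP"): 1.0, ("VP", "V", "N"): 1.0}

def viterbi_cky(words):
    """Max-likelihood (Viterbi) parse score with unbounded bidirectional
    context -- the quantity limited-context predictors may fail to match."""
    n = len(words)
    best = defaultdict(lambda: float("-inf"))  # (i, j, symbol) -> log prob
    for i, w in enumerate(words):
        for (sym, word), p in lexical.items():
            if word == w:
                best[(i, i + 1, sym)] = math.log(p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (parent, left, right), p in binary.items():
                    score = math.log(p) + best[(i, k, left)] + best[(k, j, right)]
                    best[(i, j, parent)] = max(best[(i, j, parent)], score)
    return best[(0, n, "S")]

print(viterbi_cky(["dogs", "chase", "bones"]))  # log(0.25)
```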
| 2,021 |
Computation and Language
|
To Point or Not to Point: Understanding How Abstractive Summarizers
Paraphrase Text
|
Abstractive neural summarization models have seen great improvements in
recent years, as shown by ROUGE scores of the generated summaries. But despite
these improved metrics, there is limited understanding of the strategies
different models employ, and how those strategies relate to their understanding of
language. To understand this better, we run several experiments to characterize
how one popular abstractive model, the pointer-generator model of See et al.
(2017), uses its explicit copy/generation switch to control its level of
abstraction (generation) vs extraction (copying). On an extractive-biased
dataset, the model utilizes syntactic boundaries to truncate sentences that are
otherwise often copied verbatim. When we modify the copy/generation switch and
force the model to generate, only simple paraphrasing abilities are revealed
alongside factual inaccuracies and hallucinations. On an abstractive-biased
dataset, the model copies infrequently but shows similarly limited abstractive
abilities. In line with previous research, these results suggest that
abstractive summarization models lack the semantic understanding necessary to
generate paraphrases that are both abstractive and faithful to the source
document.
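For reference, the copy/generation switch probed in these experiments mixes two distributions, following See et al. (2017); the arrays below are toy values, not model outputs:

```python
import numpy as np

# Final distribution of a pointer-generator step (See et al., 2017):
# P(w) = p_gen * P_vocab(w) + (1 - p_gen) * attention mass on source copies of w.
vocab = ["the", "cat", "sat", "<unk>"]
p_vocab = np.array([0.5, 0.2, 0.2, 0.1])   # generator distribution (toy)
source = ["the", "cat"]                     # source tokens
attention = np.array([0.3, 0.7])            # attention over source (toy)
p_gen = 0.6                                 # generation switch

p_final = p_gen * p_vocab
for tok, a in zip(source, attention):
    p_final[vocab.index(tok)] += (1 - p_gen) * a

print(dict(zip(vocab, p_final.round(3))))
# Forcing p_gen -> 1.0 disables copying, as in the paper's intervention.
```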
| 2,021 |
Computation and Language
|
A Systematic Investigation of KB-Text Embedding Alignment at Scale
|
Knowledge bases (KBs) and text often contain complementary knowledge: KBs
store structured knowledge that can support long range reasoning, while text
stores more comprehensive and timely knowledge in an unstructured way.
Separately embedding the individual knowledge sources into vector spaces has
demonstrated tremendous successes in encoding the respective knowledge, but how
to jointly embed and reason with both knowledge sources to fully leverage the
complementary information is still largely an open problem. We conduct a
large-scale, systematic investigation of aligning KB and text embeddings for
joint reasoning. We set up a novel evaluation framework with two evaluation
tasks, few-shot link prediction and analogical reasoning, and evaluate an array
of KB-text embedding alignment methods. We also demonstrate how such alignment
can infuse textual information into KB embeddings for more accurate link
prediction on emerging entities and events, using COVID-19 as a case study.
| 2,021 |
Computation and Language
|
ZmBART: An Unsupervised Cross-lingual Transfer Framework for Language
Generation
|
Despite the recent advancement in NLP research, cross-lingual transfer for
natural language generation is relatively understudied. In this work, we
transfer supervision from high resource language (HRL) to multiple low-resource
languages (LRLs) for natural language generation (NLG). We consider four NLG
tasks (text summarization, question generation, news headline generation, and
distractor generation) and three syntactically diverse languages, i.e.,
English, Hindi, and Japanese. We propose an unsupervised cross-lingual language
generation framework (called ZmBART) that does not use any parallel or
pseudo-parallel/back-translated data. In this framework, we further pre-train
the mBART sequence-to-sequence denoising autoencoder with an auxiliary task
using monolingual data of the three languages. The objective function of the
auxiliary task is close to that of the target tasks, which enriches the
multilingual latent representation of mBART and provides a good initialization
for the target tasks. Then, this model is fine-tuned with task-specific
supervised English data and directly evaluated on the low-resource languages in
the zero-shot setting. To overcome catastrophic forgetting and spurious
correlation issues, we applied model-component freezing and data augmentation,
respectively. This simple modeling approach gave us promising results. We
experimented with few-shot training (with 1000 supervised data points), which
boosted the model performance further. We performed several ablations and
cross-lingual transferability analyses to demonstrate the robustness of ZmBART.
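A hedged sketch of the component-freezing idea using the Hugging Face mBART implementation (which components ZmBART freezes exactly may differ from this illustration):

```python
from transformers import MBartForConditionalGeneration

# Sketch of freezing against catastrophic forgetting: keep the multilingual
# encoder and shared embeddings fixed while fine-tuning on English task data.
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

for param in model.model.encoder.parameters():
    param.requires_grad = False          # freeze encoder
for param in model.model.shared.parameters():
    param.requires_grad = False          # freeze shared token embeddings

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```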
| 2,021 |
Computation and Language
|
Automatically Detecting Cyberbullying Comments on Online Game Forums
|
Online game forums are popular with game players, who use them to communicate,
discuss game strategy, or even make friends. However, game forums also contain
abusive and harassing speech that disturbs and threatens players. Therefore, it
is necessary to automatically detect and remove cyberbullying comments to keep
game forums clean and friendly. We use the Cyberbullying dataset collected from
the World of Warcraft (WoW) and League of Legends (LoL) forums and train
classification models to automatically detect whether a player's comment is
abusive or not. The Toxic-BERT model achieves a macro F1-score of 82.69% for
the LoL forum and 83.86% for the WoW forum on the Cyberbullying dataset.
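A minimal sketch of scoring a comment with an off-the-shelf toxicity classifier via Hugging Face; the checkpoint id "unitary/toxic-bert" is assumed to correspond to the Toxic-BERT referenced above, and the example comment is invented:

```python
from transformers import pipeline

# Off-the-shelf toxicity scoring for a forum comment (sketch). The paper
# additionally trains/evaluates on the WoW and LoL Cyberbullying data.
detector = pipeline("text-classification", model="unitary/toxic-bert")
print(detector("you are all trash, uninstall the game"))
# e.g. [{'label': 'toxic', 'score': 0.98}] -- exact output format may vary
```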
| 2,021 |
Computation and Language
|
Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia
|
Human activities can be seen as sequences of events, which are crucial to
understanding societies. Disproportional event distribution for different
demographic groups can manifest and amplify social stereotypes, and potentially
jeopardize the ability of members in some groups to pursue certain goals. In
this paper, we present the first event-centric study of gender biases in a
Wikipedia corpus. To facilitate the study, we curate a corpus of career and
personal life descriptions with demographic information consisting of 7,854
fragments from 10,412 celebrities. Then we detect events with a
state-of-the-art event detection model, calibrate the results using
strategically generated templates, and extract events that have asymmetric
associations with genders. Our study discovers that the Wikipedia pages tend to
intermingle personal life events with professional events for females but not
for males, which calls for the awareness of the Wikipedia community to
formalize guidelines and train the editors to mind the implicit biases that
contributors carry. Our work also lays the foundation for future works on
quantifying and discovering event biases at the corpus level.
| 2,022 |
Computation and Language
|
Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese
Grammatical Error Correction
|
We investigate the problem of Chinese Grammatical Error Correction (CGEC) and
present a new framework named Tail-to-Tail (\textbf{TtT}) non-autoregressive
sequence prediction to address the deep issues hidden in CGEC. Considering that
most tokens are correct and can be conveyed directly from source to target, and
the error positions can be estimated and corrected based on the bidirectional
context information, we employ a BERT-initialized Transformer Encoder as
the backbone model to conduct information modeling and conveying. Considering
that relying on same-position substitution alone cannot handle variable-length
correction cases, various operations such as substitution, deletion, insertion,
and local paraphrasing are required jointly. Therefore, a Conditional Random
Fields (CRF) layer is stacked on top to conduct non-autoregressive sequence
prediction by modeling the token dependencies. Since most tokens are correct
and easy to predict or convey to the target, the models may suffer from a
severe class imbalance issue. To alleviate this problem, focal loss penalty
strategies are integrated into the loss functions. Moreover, besides the
typical fixed-length error correction
datasets, we also construct a variable-length corpus to conduct experiments.
Experimental results on standard datasets, especially on the variable-length
datasets, demonstrate the effectiveness of TtT in terms of sentence-level
Accuracy, Precision, Recall, and F1-Measure on tasks of error Detection and
Correction.
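The focal loss penalty mentioned above follows the standard formulation of Lin et al. (2017); a minimal PyTorch sketch (the gamma value and tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss (Lin et al., 2017): down-weights easy, well-classified
    tokens so the many 'copy' positions do not dominate training."""
    log_probs = F.log_softmax(logits, dim=-1)
    target_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = target_log_probs.exp()              # probability of the true token
    return -((1.0 - pt) ** gamma * target_log_probs).mean()

logits = torch.randn(8, 21128)   # (tokens, vocab); 21128 mimics a Chinese BERT vocab
targets = torch.randint(0, 21128, (8,))
print(focal_loss(logits, targets))
```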
| 2,021 |
Computation and Language
|
Few-shot Knowledge Graph-to-Text Generation with Pretrained Language
Models
|
This paper studies how to automatically generate a natural language text that
describes the facts in knowledge graph (KG). Considering the few-shot setting,
we leverage the excellent capacities of pretrained language models (PLMs) in
language understanding and generation. We make three major technical
contributions, namely representation alignment for bridging the semantic gap
between KG encodings and PLMs, relation-biased KG linearization for deriving
better input representations, and multi-task learning for learning the
correspondence between KG and text. Extensive experiments on three benchmark
datasets have demonstrated the effectiveness of our model on KG-to-text
generation task. In particular, our model outperforms all comparison methods on
both fully-supervised and few-shot settings. Our code and datasets are
available at https://github.com/RUCAIBox/Few-Shot-KG2Text.
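To illustrate KG linearization for PLM input, here is a minimal sketch; grouping triples into a tagged string is a simplified stand-in for the paper's relation-biased linearization, and the triples and special tokens are invented:

```python
# Sketch of linearizing a KG subgraph into PLM input text. The <H>/<R>/<T>
# markers and the triples are illustrative, not the paper's exact format.
triples = [
    ("Alan Turing", "occupation", "mathematician"),
    ("Alan Turing", "birth_place", "London"),
]

def linearize(triples):
    parts = [f"<H> {h} <R> {r.replace('_', ' ')} <T> {t}" for h, r, t in triples]
    return " ".join(parts)

print(linearize(triples))
# <H> Alan Turing <R> occupation <T> mathematician
# <H> Alan Turing <R> birth place <T> London
```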
| 2,021 |
Computation and Language
|
Generate, Prune, Select: A Pipeline for Counterspeech Generation against
Online Hate Speech
|
Countermeasures that effectively fight the ever-increasing hate speech online
without blocking freedom of speech are of great social interest. Natural
Language Generation (NLG) is uniquely capable of developing scalable
solutions. However, off-the-shelf NLG methods are primarily
sequence-to-sequence neural models and they are limited in that they generate
commonplace, repetitive and safe responses regardless of the hate speech (e.g.,
"Please refrain from using such language.") or irrelevant responses, making
them ineffective for de-escalating hateful conversations. In this paper, we
design a three-module pipeline approach to effectively improve the diversity
and relevance. Our proposed pipeline first generates various counterspeech
candidates by a generative model to promote diversity, then filters the
ungrammatical ones using a BERT model, and finally selects the most relevant
counterspeech response using a novel retrieval-based method. Extensive
experiments on three representative datasets demonstrate the efficacy of our
approach in generating diverse and relevant counterspeech.
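A schematic sketch of the three-module pipeline; every component below is a hypothetical placeholder, not the paper's actual generator, filter, or selector:

```python
# Generate-Prune-Select sketch. Each stage is a placeholder for the paper's
# components (a generative LM, a BERT grammaticality filter, and a
# retrieval-based relevance selector).
def generate_candidates(hate_speech, n=5):
    return [f"candidate response {i} to: {hate_speech}" for i in range(n)]

def is_grammatical(text):
    return True  # placeholder for the BERT-based grammaticality filter

def relevance(hate_speech, response):
    # placeholder for the retrieval-based relevance scorer
    return len(set(hate_speech.split()) & set(response.split()))

def counterspeech(hate_speech):
    candidates = generate_candidates(hate_speech)              # 1) generate
    candidates = [c for c in candidates if is_grammatical(c)]  # 2) prune
    return max(candidates, key=lambda c: relevance(hate_speech, c))  # 3) select

print(counterspeech("example hateful post"))
```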
| 2,021 |
Computation and Language
|
Can vectors read minds better than experts? Comparing data augmentation
strategies for the automated scoring of children's mindreading ability
|
In this paper we implement and compare 7 different data augmentation
strategies for the task of automatic scoring of children's ability to
understand others' thoughts, feelings, and desires (or "mindreading").
We recruit in-domain experts to re-annotate augmented samples and determine
to what extent each strategy preserves the original rating. We also carry out
multiple experiments to measure how much each augmentation strategy improves
the performance of automatic scoring systems. To determine the capabilities of
automatic systems to generalize to unseen data, we create UK-MIND-20 - a new
corpus of children's performance on tests of mindreading, consisting of 10,320
question-answer pairs.
We obtain a new state-of-the-art performance on the MIND-CA corpus, improving
macro-F1-score by 6 points. Results indicate that both the number of training
examples and the quality of the augmentation strategies affect the performance
of the systems. The task-specific augmentations generally outperform
task-agnostic augmentations. Automatic augmentations based on vectors (GloVe,
FastText) perform the worst.
We find that systems trained on MIND-CA generalize well to UK-MIND-20. We
demonstrate that data augmentation strategies also improve the performance on
unseen data.
| 2,021 |
Computation and Language
|
Corporate core values and social responsibility: What really matters to
whom
|
This study uses an innovative measure, the Semantic Brand Score, to assess
the interest of stakeholders in different company core values. Among others, we
focus on corporate social responsibility (CSR) core value statements, and on
the attention they receive from five categories of stakeholders (customers,
company communication teams, employees, associations and media). Combining big
data methods and tools of Social Network Analysis and Text Mining, we analyzed
about 58,000 Italian tweets and found that different stakeholders have
different prevailing interests. CSR gets much less attention than expected.
Core values related to customers and employees are in the foreground.
| 2,021 |
Computation and Language
|
LearnDA: Learnable Knowledge-Guided Data Augmentation for Event
Causality Identification
|
Modern models for event causality identification (ECI) are mainly based on
supervised learning, which is prone to the problem of data scarcity.
Unfortunately, existing NLP-related augmentation methods cannot directly
produce the data required for this task. To solve the data scarcity
problem, we introduce a new approach to augment training data for event
causality identification, by iteratively generating new examples and
classifying event causality in a dual learning framework. On the one hand, our
approach is knowledge-guided, which can leverage existing knowledge bases to
generate well-formed new sentences. On the other hand, our approach employs a
dual mechanism, which is a learnable augmentation framework and can
interactively adjust the generation process to generate task-related sentences.
Experimental results on two benchmarks EventStoryLine and Causal-TimeBank show
that 1) our method can augment suitable task-related training data for ECI; 2)
our method outperforms previous methods on EventStoryLine and Causal-TimeBank
(+2.5 and +2.1 points on F1 value respectively).
| 2,021 |
Computation and Language
|
Improving Event Causality Identification via Self-Supervised
Representation Learning on External Causal Statement
|
Current models for event causality identification (ECI) mainly adopt a
supervised framework, which heavily rely on labeled data for training.
Unfortunately, the scale of current annotated datasets is relatively limited,
which cannot provide sufficient support for models to capture useful indicators
from causal statements, especially for handling new, unseen cases. To
alleviate this problem, we propose a novel approach, shortly named CauSeRL,
which leverages external causal statements for event causality identification.
First of all, we design a self-supervised framework to learn context-specific
causal patterns from external causal statements. Then, we adopt a contrastive
transfer strategy to incorporate the learned context-specific causal patterns
into the target ECI model. Experimental results show that our method
significantly outperforms previous methods on EventStoryLine and
Causal-TimeBank (+2.0 and +3.4 points on F1 value respectively).
| 2,021 |
Computation and Language
|
Dialoging Resonance: How Users Perceive, Reciprocate and React to
Chatbot's Self-Disclosure in Conversational Recommendations
|
Using chatbots to deliver recommendations is increasingly popular. The design
of recommendation chatbots has primarily been taking an information-centric
approach by focusing on the recommended content per se. Limited attention is on
how social connection and relational strategies, such as self-disclosure from a
chatbot, may influence users' perception and acceptance of the recommendation.
In this work, we designed, implemented, and evaluated a social chatbot capable
of performing three different levels of self-disclosure: factual information
(low), cognitive opinions (medium), and emotions (high). In the evaluation, we
recruited 372 participants to converse with the chatbot on two topics: movies
and COVID-19 experiences. In each topic, the chatbot performed small talks and
made recommendations relevant to the topic. Participants were randomly assigned
to four experimental conditions where the chatbot used factual, cognitive,
emotional, and adaptive strategies to perform self-disclosures. By training a
text classifier to identify users' level of self-disclosure in real-time, the
adaptive chatbot can dynamically match its self-disclosure to the level of
disclosure exhibited by the users. Our results show that users reciprocate with
higher-level self-disclosure when a recommendation chatbot consistently
displays emotions throughout the conversation. Chatbot's emotional disclosure
also led to increased interactional enjoyment and more positive interpersonal
perception towards the bot, fostering a stronger human-chatbot relationship and
thus leading to increased recommendation effectiveness, including a higher
tendency to accept the recommendation. We discuss the understandings obtained
and implications to future design.
| 2,022 |
Computation and Language
|
PsyQA: A Chinese Dataset for Generating Long Counseling Text for Mental
Health Support
|
Great research interest has been devoted to devising AI services that are
able to provide mental health support. However, the lack of corpora is a main
obstacle to this research, particularly in the Chinese language. In this paper,
we propose PsyQA, a Chinese dataset of psychological health support in the form
of question-and-answer pairs. PsyQA is crawled from a Chinese mental health service
platform, and contains 22K questions and 56K long and well-structured answers.
Based on the psychological counseling theories, we annotate a portion of answer
texts with typical strategies for providing support, and further present
in-depth analysis of both lexical features and strategy patterns in the
counseling answers. We also evaluate the performance of generating counseling
answers with the generative pretrained models. Results show that utilizing
strategies enhances the fluency and helpfulness of generated answers, but there
is still a large space for future research.
| 2,021 |
Computation and Language
|
Fingerprinting Fine-tuned Language Models in the Wild
|
There are concerns that the ability of language models (LMs) to generate high
quality synthetic text can be misused to launch spam, disinformation, or
propaganda. Therefore, the research community is actively working on developing
approaches to detect whether a given text is organic or synthetic. While this
is a useful first step, it is important to be able to further fingerprint the
author LM to attribute its origin. Prior work on fingerprinting LMs is limited
to attributing synthetic text generated by a handful (usually < 10) of
pre-trained LMs. However, LMs such as GPT2 are commonly fine-tuned in a myriad
of ways (e.g., on a domain-specific text corpus) before being used to generate
synthetic text. It is challenging to fingerprint fine-tuned LMs because the
universe of fine-tuned LMs is much larger in realistic scenarios. To address
this challenge, we study the problem of large-scale fingerprinting of
fine-tuned LMs in the wild. Using a real-world dataset of synthetic text
generated by 108 different fine-tuned LMs, we conduct comprehensive experiments
to demonstrate the limitations of existing fingerprinting approaches. Our
results show that fine-tuning itself is the most effective factor in
attributing the synthetic text generated by fine-tuned LMs.
| 2,021 |
Computation and Language
|
SIRE: Separate Intra- and Inter-sentential Reasoning for Document-level
Relation Extraction
|
Document-level relation extraction has attracted much attention in recent
years. It is usually formulated as a classification problem that predicts
relations for all entity pairs in the document. However, previous works
indiscriminately represent intra- and inter-sentential relations in the same
way, confounding the different patterns for predicting them. Besides, they
create a document graph and use paths between entities on the graph as clues
for logical reasoning. However, not all entity pairs can be connected with a
path and have the correct logical reasoning paths in their graph. Thus many
cases of logical reasoning cannot be covered. This paper proposes an effective
architecture, SIRE, to represent intra- and inter-sentential relations in
different ways. We design a new and straightforward form of logical reasoning
module that can cover more logical reasoning chains. Experiments on the public
datasets show SIRE outperforms the previous state-of-the-art methods. Further
analysis shows that our predictions are reliable and explainable. Our code is
available at https://github.com/DreamInvoker/SIRE.
| 2,021 |
Computation and Language
|
Bilingual Alignment Pre-Training for Zero-Shot Cross-Lingual Transfer
|
Multilingual pre-trained models have achieved remarkable performance on
cross-lingual transfer learning. Some multilingual models such as mBERT, have
been pre-trained on unlabeled corpora, therefore the embeddings of different
languages in the models may not be aligned very well. In this paper, we aim to
improve the zero-shot cross-lingual transfer performance by proposing a
pre-training task named Word-Exchange Aligning Model (WEAM), which uses the
statistical alignment information as the prior knowledge to guide cross-lingual
word prediction. We evaluate our model on multilingual machine reading
comprehension task MLQA and natural language inference task XNLI. The results
show that WEAM can significantly improve the zero-shot performance.
| 2,021 |
Computation and Language
|
Auto-tagging of Short Conversational Sentences using Transformer Methods
|
The problem of categorizing short speech sentences according to their
semantic features with high accuracy is a subject studied in natural language
processing. In this study, a dataset of samples classified into 46
different categories was used. The examples consist of sentences taken from chat
conversations between a company's customer representatives and the company's
website visitors. The primary purpose is to automatically tag questions and
requests from visitors in the most accurate way for 46 predetermined categories
for use in a chat application to generate meaningful answers to the questions
asked by the website visitors. For this, different BERT models and one GPT-2
model, pre-trained in Turkish, were preferred. The classification performances
of the relevant models were analyzed in detail and reported accordingly.
| 2,021 |
Computation and Language
|
Reordering Examples Helps during Priming-based Few-Shot Learning
|
The ability to learn from limited data, or few-shot learning, is a desirable
and often critical requirement for NLP systems. While many existing methods do
poorly at learning from a handful of examples, large pretrained language models
have recently been shown to be efficient few-shot learners. One approach to
few-shot learning, which does not require finetuning of model parameters, is to
augment the language model's input with priming text which is typically
constructed using task specific descriptions and examples. In this work, we
further explore priming-based few-shot learning, with focus on using examples
as prompts. We show that presenting examples in the right order is key for
generalization. We introduce PERO (Prompting with Examples in the Right Order),
where we formulate few-shot learning as search over the set of permutations of
the training examples. We show that PERO can learn to generalize efficiently
using as few as 10 examples, in contrast to existing approaches. While the
newline token is a natural choice for separating the examples in the prompt, we
show that learning a new separator token can potentially provide further gains
in performance. We demonstrate the effectiveness of the proposed method on the
tasks of sentiment classification, natural language inference and fact
retrieval. Finally, we analyze the learned prompts to reveal novel insights,
including the idea that two training examples in the right order alone can
provide competitive performance for sentiment classification and natural
language inference.
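An illustration of the PERO objective: pick the ordering of priming examples that maximizes held-out performance. The exhaustive loop below is only feasible for tiny example sets (PERO searches the permutation space more cleverly), and `score_prompt` is a hypothetical stand-in:

```python
from itertools import permutations

# Sketch of permutation search over priming examples. Examples are invented.
examples = [("great movie!", "positive"),
            ("what a waste of time", "negative"),
            ("loved every minute", "positive")]

def build_prompt(ordering, query, sep="\n"):
    shots = sep.join(f"Review: {x} Sentiment: {y}" for x, y in ordering)
    return f"{shots}{sep}Review: {query} Sentiment:"

def score_prompt(prompt):
    # Placeholder: in practice, query a frozen LM with `prompt` on held-out
    # examples and return accuracy. A length heuristic keeps the sketch runnable.
    return -len(prompt)

best_order = max(permutations(examples),
                 key=lambda o: score_prompt(build_prompt(o, "held-out review")))
print(build_prompt(best_order, "an instant classic"))
```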
| 2,021 |
Computation and Language
|
Template-Based Named Entity Recognition Using BART
|
There is recent interest in investigating few-shot NER, where the
low-resource target domain has different label sets compared with a
resource-rich source domain. Existing methods use a similarity-based metric.
However, they cannot make full use of knowledge transfer in NER model
parameters. To address the issue, we propose a template-based method for NER,
treating NER as a language model ranking problem in a sequence-to-sequence
framework, where original sentences and statement templates filled by candidate
named entity span are regarded as the source sequence and the target sequence,
respectively. For inference, the model is required to classify each candidate
span based on the corresponding template scores. Our experiments demonstrate
that the proposed method achieves a 92.55% F1 score on CoNLL03 (the
rich-resource task), and outperforms fine-tuned BERT by 10.88, 15.34, and
11.73 F1 points on MIT Movie, MIT Restaurant, and ATIS (the low-resource
tasks), respectively.
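As an illustration of the template-scoring idea, this hedged sketch ranks entity-type templates for one candidate span by a seq2seq model's loss; the template wording, span, and model choice are illustrative, not the paper's exact setup:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Fill a candidate span into statement templates and rank them by the
# model's score (lower loss = higher template probability).
tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()

sentence = "Barack Obama visited Paris last week ."
templates = {
    "person": "Barack Obama is a person entity .",
    "location": "Barack Obama is a location entity .",
    "none": "Barack Obama is not a named entity .",
}

scores = {}
with torch.no_grad():
    for label, target in templates.items():
        inputs = tok(sentence, return_tensors="pt")
        labels = tok(target, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss  # mean NLL of the template
        scores[label] = -loss.item()

print(max(scores, key=scores.get))  # highest-scoring template gives the label
```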
| 2,021 |
Computation and Language
|
Three Sentences Are All You Need: Local Path Enhanced Document Relation
Extraction
|
Document-level Relation Extraction (RE) is a more challenging task than
sentence RE as it often requires reasoning over multiple sentences. Yet, human
annotators usually use a small number of sentences to identify the relationship
between a given entity pair. In this paper, we present an embarrassingly simple
but effective method to heuristically select evidence sentences for
document-level RE, which can be easily combined with BiLSTM to achieve good
performance on benchmark datasets, even better than fancy graph neural network
based methods. We have released our code at
https://github.com/AndrewZhe/Three-Sentences-Are-All-You-Need.
| 2,021 |
Computation and Language
|
TVDIM: Enhancing Image Self-Supervised Pretraining via Noisy Text Data
|
Among the ubiquitous multimodal data in the real world, text is the modality
generated by humans, while images reflect the physical world honestly. In a
visual understanding application, machines are expected to understand images
like humans do. Inspired by this, we propose a novel self-supervised learning
method, named Text-enhanced Visual Deep InfoMax (TVDIM), to learn better visual
representations by fully utilizing the naturally-existing multimodal data. Our
core idea of self-supervised learning is to maximize the mutual information
between features extracted from multiple views of a shared context to a
rational degree. Different from previous methods which only consider multiple
views from a single modality, our work produces multiple views from different
modalities, and jointly optimizes the mutual information for feature pairs of
intra-modality and inter-modality. Considering the information gap between
inter-modality feature pairs caused by data noise, we adopt a \emph{ranking-based}
contrastive learning to optimize the mutual information. During evaluation, we
directly use the pre-trained visual representations to complete various image
classification tasks. Experimental results show that, TVDIM significantly
outperforms previous visual self-supervised methods when processing the same
set of images.
| 2,021 |
Computation and Language
|
Exploring Distantly-Labeled Rationales in Neural Network Models
|
Recent studies strive to incorporate various human rationales into neural
networks to improve model performance, but few pay attention to the quality of
the rationales. Most existing methods distribute their models' focus to
distantly-labeled rationale words entirely and equally, while ignoring the
potential important non-rationale words and not distinguishing the importance
of different rationale words. In this paper, we propose two novel auxiliary
loss functions to make better use of distantly-labeled rationales, which
encourage models to maintain their focus on important words beyond labeled
rationales (PINs) and alleviate redundant training on non-helpful rationales
(NoIRs). Experiments on two representative classification tasks show that our
proposed methods can push a classification model to effectively learn crucial
clues from non-perfect rationales while maintaining the ability to spread its
focus to other unlabeled important words, thus significantly outperforming
existing methods.
| 2,021 |
Computation and Language
|
Defending Against Backdoor Attacks in Natural Language Generation
|
The frustratingly fragile nature of neural network models makes current
natural language generation (NLG) systems prone to backdoor attacks that cause
them to generate malicious sequences that could be sexist or offensive.
Unfortunately, little effort has been invested in studying how backdoor attacks
can affect current NLG models and how to defend against them. In this work, by
giving a
formal definition of backdoor attack and defense, we investigate this problem
on two important NLG tasks, machine translation and dialog generation. Tailored
to the inherent nature of NLG models (e.g., producing a sequence of coherent
words given contexts), we design defending strategies against attacks. We find
that testing the backward probability of generating sources given targets
yields effective defense performance against all different types of attacks,
and is able to handle the {\it one-to-many} issue in many NLG tasks such as
dialog generation. We hope that this work can raise the awareness of backdoor
risks concealed in deep NLG systems and inspire more future work (both attack
and defense) towards this direction.
| 2,023 |
Computation and Language
|
SimCLS: A Simple Framework for Contrastive Learning of Abstractive
Summarization
|
In this paper, we present a conceptually simple while empirically powerful
framework for abstractive summarization, SimCLS, which can bridge the gap
between the learning objective and evaluation metrics resulting from the
currently dominant sequence-to-sequence learning framework by formulating text
generation as a reference-free evaluation problem (i.e., quality estimation)
assisted by contrastive learning. Experimental results show that, with minor
modification over existing top-scoring systems, SimCLS can improve the
performance of existing top-performing models by a large margin. Particularly,
it achieves a 2.51 absolute improvement over BART and 2.50 over PEGASUS w.r.t.
ROUGE-1 on the CNN/DailyMail dataset, driving the state-of-the-art performance
to a new
level. We have open-sourced our codes and results:
https://github.com/yixinL7/SimCLS. Results of our proposed models have been
deployed on the ExplainaBoard platform, which allows researchers to understand
our systems in a more fine-grained way.
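A toy sketch of the generate-then-evaluate idea; the `evaluator_score` stand-in below is a crude overlap heuristic, whereas SimCLS actually trains a RoBERTa-based scorer with a contrastive ranking loss:

```python
# Sketch of the two-stage idea: a generator proposes candidate summaries,
# then a reference-free evaluator selects the best one. Data is invented.
candidates = [
    "the team won the final",
    "a football match happened",
    "the team won the championship final on sunday",
]

def evaluator_score(document, candidate):
    # Placeholder quality estimator: Jaccard overlap with the source document.
    doc, cand = set(document.split()), set(candidate.split())
    return len(doc & cand) / len(doc | cand)

document = "the team won the championship final on sunday in front of the fans"
best = max(candidates, key=lambda c: evaluator_score(document, c))
print(best)
```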
| 2,021 |
Computation and Language
|
Representing Syntax and Composition with Geometric Transformations
|
The exploitation of syntactic graphs (SyGs) as a word's context has been
shown to be beneficial for distributional semantic models (DSMs), both at the
level of individual word representations and in deriving phrasal
representations via composition. However, notwithstanding the potential
performance benefit, the syntactically-aware DSMs proposed to date have huge
numbers of parameters (compared to conventional DSMs) and suffer from data
sparsity. Furthermore, the encoding of the SyG links (i.e., the syntactic
relations) has been largely limited to linear maps. The knowledge graphs'
literature, on the other hand, has proposed light-weight models employing
different geometric transformations (GTs) to encode edges in a knowledge graph
(KG). Our work explores the possibility of adopting this family of models to
encode SyGs. Furthermore, we investigate which GT better encodes syntactic
relations, so that these representations can be used to enhance phrase-level
composition via syntactic contextualisation.
| 2,021 |
Computation and Language
|
GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple
Intent Detection and Slot Filling
|
Multi-intent SLU can handle multiple intents in an utterance, which has
attracted increasing attention. However, the state-of-the-art joint models
heavily rely on autoregressive approaches, resulting in two issues: slow
inference speed and information leakage. In this paper, we explore a
non-autoregressive model for joint multiple intent detection and slot filling,
achieving both faster inference and higher accuracy. Specifically, we propose a
Global-Locally Graph Interaction Network (GL-GIN), where a local slot-aware
graph interaction layer is proposed to model slot dependency, alleviating the
uncoordinated slots problem, while a global intent-slot graph interaction layer
is introduced to
model the interaction between multiple intents and all slots in the utterance.
Experimental results on two public datasets show that our framework achieves
state-of-the-art performance while being 11.5 times faster.
| 2,021 |
Computation and Language
|
The Case for Translation-Invariant Self-Attention in Transformer-Based
Language Models
|
Mechanisms for encoding positional information are central for
transformer-based language models. In this paper, we analyze the position
embeddings of existing language models, finding strong evidence of translation
invariance, both for the embeddings themselves and for their effect on
self-attention. The degree of translation invariance increases during training
and correlates positively with model performance. Our findings lead us to
propose translation-invariant self-attention (TISA), which accounts for the
relative position between tokens in an interpretable fashion without needing
conventional position embeddings. Our proposal has several theoretical
advantages over existing position-representation approaches. Experiments show
that it improves on regular ALBERT on GLUE tasks, while adding orders of
magnitude fewer positional parameters.
| 2,021 |
Computation and Language
|
SOCCER: An Information-Sparse Discourse State Tracking Collection in the
Sports Commentary Domain
|
In the pursuit of natural language understanding, there has been a
long-standing interest in tracking state changes throughout narratives.
progress has been made in modeling the state of transaction-centric dialogues
and procedural texts. However, this problem has been less intensively studied
in the realm of general discourse where ground truth descriptions of states may
be loosely defined and state changes are less densely distributed over
utterances. This paper proposes to turn to simplified, fully observable systems
that show some of these properties: Sports events. We curated 2,263 soccer
matches including time-stamped natural language commentary accompanied by
discrete events such as a team scoring goals, switching players or being
penalized with cards. We propose a new task formulation where, given paragraphs
of commentary of a game at different timestamps, the system is asked to
recognize the occurrence of in-game events. This domain allows for rich
descriptions of state while avoiding the complexities of many other real-world
settings. As an initial point of performance measurement, we include two
baseline methods, from the perspectives of sentence classification with temporal
dependence and a current state-of-the-art generative model, respectively, and
demonstrate that even sophisticated existing methods struggle on the state
tracking task when the definition of state broadens or non-event chatter
becomes prevalent.
| 2,021 |
Computation and Language
|
DialogueCRN: Contextual Reasoning Networks for Emotion Recognition in
Conversations
|
Emotion Recognition in Conversations (ERC) has gained increasing attention
for developing empathetic machines. Recently, many approaches have been devoted
to perceiving conversational context by deep learning models. However, these
approaches are insufficient in understanding the context due to lacking the
ability to extract and integrate emotional clues. In this work, we propose
novel Contextual Reasoning Networks (DialogueCRN) to fully understand the
conversational context from a cognitive perspective. Inspired by the Cognitive
Theory of Emotion, we design multi-turn reasoning modules to extract and
integrate emotional clues. The reasoning module iteratively performs an
intuitive retrieving process and a conscious reasoning process, imitating
humans' unique cognitive thinking. Extensive experiments on three public
benchmark datasets demonstrate the effectiveness and superiority of the
proposed model.
| 2,021 |
Computation and Language
|
CCPM: A Chinese Classical Poetry Matching Dataset
|
Poetry is one of the most important art forms of human languages. Recently
many studies have focused on incorporating some linguistic features of poetry,
such as style and sentiment, into its understanding or generation system.
However, there is no focus on understanding or evaluating the semantics of
poetry. Therefore, we propose a novel task to assess a model's semantic
understanding of poetry by poem matching. Specifically, this task requires the
model to select one line of Chinese classical poetry among four candidates
according to the modern Chinese translation of a line of poetry. To construct
this dataset, we first obtain a set of parallel data of Chinese classical
poetry and modern Chinese translation. Then we retrieve similar lines of poetry
with the lines in a poetry corpus as negative choices. We name the dataset
Chinese Classical Poetry Matching Dataset (CCPM) and release it at
https://github.com/THUNLP-AIPoet/CCPM. We hope this dataset can further enhance
the study on incorporating deep semantics into the understanding and generation
system of Chinese classical poetry. We also run two variants of BERT on this
dataset as preliminary baselines.
| 2,021 |
Computation and Language
|
A Case Study of Spanish Text Transformations for Twitter Sentiment
Analysis
|
Sentiment analysis is a text mining task that determines the polarity of a
given text, i.e., its positiveness or negativeness. Recently, it has received a
lot of attention given the interest in opinion mining in micro-blogging
platforms. These new forms of textual expressions present new challenges to
analyze text given the use of slang, orthographic and grammatical errors, among
others. Along with these challenges, a practical sentiment classifier should be
able to handle efficiently large workloads.
The aim of this research is to identify which text transformations
(lemmatization, stemming, entity removal, among others), tokenizers (e.g.,
word $n$-grams), and token-weighting schemes most impact the accuracy of
a classifier (Support Vector Machine) trained on two Spanish corpora. The
methodology used is to exhaustively analyze all the combinations of the text
transformations and their respective parameters to find out which
characteristics the best performing classifiers have in common. Furthermore,
among the different text transformations studied, we introduce a novel approach
based on the combination of word-based $n$-grams and character-based $q$-grams.
The results show that this novel combination of words and characters produces a
classifier that outperforms the traditional word-based combination by $11.17\%$
and $5.62\%$ on the INEGI and TASS'15 dataset, respectively.
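A minimal sketch of combining word $n$-grams with character $q$-grams in one SVM; the parameter ranges and toy tweets are illustrative, not the best configuration found in the exhaustive search:

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Word n-grams plus character q-grams feeding a single SVM classifier.
features = FeatureUnion([
    ("words", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("chars", TfidfVectorizer(analyzer="char", ngram_range=(3, 5))),
])
clf = Pipeline([("features", features), ("svm", LinearSVC())])

texts = ["me encanta este lugar", "que mal servicio"]   # toy Spanish tweets
labels = ["positive", "negative"]
clf.fit(texts, labels)
print(clf.predict(["me encanta"]))
```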
| 2,021 |
Computation and Language
|
Provably Secure Generative Linguistic Steganography
|
Generative linguistic steganography mainly utilizes language models and
applies steganographic sampling (stegosampling) to generate high-security
steganographic text (stegotext). However, previous methods generally lead to
statistical differences between the conditional probability distributions of
stegotext and natural text, which brings about security risks. In this paper,
to further ensure security, we present a novel provably secure generative
linguistic steganographic method ADG, which recursively embeds secret
information by Adaptive Dynamic Grouping of tokens according to their
probability given by an off-the-shelf language model. We not only prove the
security of ADG mathematically, but also conduct extensive experiments on three
public corpora to further verify its imperceptibility. The experimental results
reveal that the proposed method is able to generate stegotext with nearly
perfect security.
| 2,021 |
Computation and Language
|
Semantic-WER: A Unified Metric for the Evaluation of ASR Transcript for
End Usability
|
Recent advances in supervised, semi-supervised and self-supervised deep
learning algorithms have shown significant improvement in the performance of
automatic speech recognition (ASR) systems. State-of-the-art systems have
achieved a word error rate (WER) of less than 5%. However, in the past,
researchers have argued that the WER metric is unsuitable for the
evaluation of ASR systems on downstream tasks such as spoken language
understanding (SLU) and information retrieval. The reason is that the WER works
at the surface level and does not incorporate any syntactic or semantic
knowledge. The current work proposes Semantic-WER (SWER), a metric to evaluate
ASR transcripts for downstream applications in general. The SWER can be
easily customized for any downstream task.
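For reference, the standard surface-level WER the paper argues against is a simple edit-distance computation; the two example pairs below (invented) get identical WER despite very different semantic impact:

```python
def wer(reference, hypothesis):
    """Surface-level word error rate via edit distance -- it treats every
    substitution equally, regardless of semantic impact."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / len(r)

# Same WER (0.2), very different downstream consequences:
print(wer("book a flight to delhi", "book a flight to mumbai"))
print(wer("do not book the flight", "do now book the flight"))
```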
| 2,021 |
Computation and Language
|
A Dataset and Baselines for Multilingual Reply Suggestion
|
Reply suggestion models help users process emails and chats faster. Previous
work only studies English reply suggestion. Instead, we present MRS, a
multilingual reply suggestion dataset with ten languages. MRS can be used to
compare two families of models: 1) retrieval models that select the reply from
a fixed set and 2) generation models that produce the reply from scratch.
Therefore, MRS complements existing cross-lingual generalization benchmarks
that focus on classification and sequence labeling tasks. We build a generation
model and a retrieval model as baselines for MRS. The two models have different
strengths in the monolingual setting, and they require different strategies to
generalize across languages. MRS is publicly available at
https://github.com/zhangmozhi/mrs.
| 2,021 |
Computation and Language
|
Language Embeddings for Typology and Cross-lingual Transfer Learning
|
Cross-lingual language tasks typically require a substantial amount of
annotated data or parallel translation data. We explore whether language
representations that capture relationships among languages can be learned and
subsequently leveraged in cross-lingual tasks without the use of parallel data.
We generate dense embeddings for 29 languages using a denoising autoencoder,
and evaluate the embeddings using the World Atlas of Language Structures (WALS)
and two extrinsic tasks in a zero-shot setting: cross-lingual dependency
parsing and cross-lingual natural language inference.
| 2,021 |
Computation and Language
|
A diachronic evaluation of gender asymmetry in euphemism
|
The use of euphemisms is a known driver of language change. It has been
proposed that women use euphemisms more than men. Although there have been
several studies investigating gender differences in language, the claim about
euphemism usage has not been tested comprehensively through time. If women do
use euphemisms more, this could mean that women also lead the formation of new
euphemisms and language change over time. Using four large diachronic text
corpora of English, we evaluate the claim that women use euphemisms more than
men through a quantitative analysis. We assembled a list of 106 euphemism-taboo
pairs to analyze their relative use through time by each gender in the corpora.
Contrary to the existing belief, our results show that women do not use
euphemisms with a higher proportion than men. We repeated the analysis using
different subsets of the euphemism-taboo pairs list and found that our result
was robust. Our study indicates that in a broad range of settings involving
both speech and writing, and with varying degrees of formality, women do not
use or form euphemisms more than men.
| 2,021 |
Computation and Language
|
How to Adapt Your Pretrained Multilingual Model to 1600 Languages
|
Pretrained multilingual models (PMMs) enable zero-shot learning via
cross-lingual transfer, performing best for languages seen during pretraining.
While methods exist to improve performance for unseen languages, they have
almost exclusively been evaluated using amounts of raw text only available for
a small fraction of the world's languages. In this paper, we evaluate the
performance of existing methods to adapt PMMs to new languages using a resource
available for over 1600 languages: the New Testament. This is challenging for
two reasons: (1) the small corpus size, and (2) the narrow domain. While
performance drops for all approaches, we surprisingly still see gains of up to
$17.69\%$ accuracy for part-of-speech tagging and $6.29$ F1 for NER on average
over all languages as compared to XLM-R. Another unexpected finding is that
continued pretraining, the simplest approach, performs best. Finally, we
perform a case study to disentangle the effects of domain and size and to shed
light on the influence of the finetuning source language.
| 2,021 |
Computation and Language
|
Syntax-augmented Multilingual BERT for Cross-lingual Transfer
|
In recent years, we have seen a colossal effort in pre-training multilingual
text encoders using large-scale corpora in many languages to facilitate
cross-lingual transfer learning. However, due to typological differences across
languages, the cross-lingual transfer is challenging. Nevertheless, language
syntax, e.g., syntactic dependencies, can bridge the typological gap. Previous
works have shown that pre-trained multilingual encoders, such as mBERT
\cite{devlin-etal-2019-bert}, capture language syntax, helping cross-lingual
transfer. This work shows that explicitly providing language syntax and
training mBERT using an auxiliary objective to encode the universal dependency
tree structure helps cross-lingual transfer. We perform rigorous experiments on
four NLP tasks, including text classification, question answering, named entity
recognition, and task-oriented semantic parsing. The experiment results show
that syntax-augmented mBERT improves cross-lingual transfer on popular
benchmarks, such as PAWS-X and MLQA, by 1.4 and 1.6 points on average across
all languages. In the \emph{generalized} transfer setting, the performance is
boosted significantly, by 3.9 and 3.1 points on average on PAWS-X and MLQA.
| 2,021 |
Computation and Language
|
nmT5 -- Is parallel data still relevant for pre-training massively
multilingual language models?
|
Recently, mT5 - a massively multilingual version of T5 - leveraged a unified
text-to-text format to attain state-of-the-art results on a wide variety of
multilingual NLP tasks. In this paper, we investigate the impact of
incorporating parallel data into mT5 pre-training. We find that multi-tasking
language modeling with objectives such as machine translation during
pre-training is a straightforward way to improve performance on downstream
multilingual and cross-lingual tasks. However, the gains start to diminish as
the model capacity increases, suggesting that parallel data might not be as
essential for larger models. At the same time, even at larger model sizes, we
find that pre-training with parallel data still provides benefits in the
limited labelled data regime.
| 2,021 |
Computation and Language
|
Self-supervised Dialogue Learning for Spoken Conversational Question
Answering
|
In spoken conversational question answering (SCQA), the answer to the
corresponding question is generated by retrieving and then analyzing a fixed
spoken document, including multi-part conversations. Most SCQA systems have
considered only retrieving information from ordered utterances. However, the
sequential order of dialogue is important for building a robust spoken
conversational question answering system, and changes in utterance order
may result in severely low-quality and incoherent corpora. To this end, we
introduce a self-supervised learning approach, including incoherence
discrimination, insertion detection, and question prediction, to explicitly
capture the coreference resolution and dialogue coherence among spoken
documents. Specifically, we design a joint learning framework where the
auxiliary self-supervised tasks can enable the pre-trained SCQA systems towards
more coherent and meaningful spoken dialogue learning. We also utilize the
proposed self-supervised learning tasks to capture intra-sentence coherence.
Experimental results demonstrate that our proposed method provides more
coherent, meaningful, and appropriate responses, yielding superior performance
gains compared to the original pre-trained language models. Our method achieves
state-of-the-art results on the Spoken-CoQA dataset.
| 2,021 |
Computation and Language
|
Towards Equal Gender Representation in the Annotations of Toxic Language
Detection
|
Classifiers tend to propagate biases present in the data on which they are
trained. Hence, it is important to understand how the demographic identities of
the annotators of comments affect the fairness of the resulting model. In this
paper, we focus on the differences in the ways men and women annotate comments
for toxicity, investigating how these differences result in models that amplify
the opinions of male annotators. We find that the BERT model associates toxic
comments containing offensive words with male annotators, causing the model to
predict 67.7% of toxic comments as having been annotated by men. We show that
this disparity between gender predictions can be mitigated by removing
offensive words and highly toxic comments from the training data. We then apply
the learned associations between gender and language to toxic language
classifiers, finding that models trained exclusively on female-annotated data
perform 1.8% better than those trained solely on male-annotated data and that
training models on data after removing all offensive words reduces bias in the
model by 55.5% while increasing the sensitivity by 0.4%.
| 2,021 |
Computation and Language
|
Grounding 'Grounding' in NLP
|
The NLP community has seen substantial recent interest in grounding to
facilitate interaction between language technologies and the world. However, as
a community, we use the term broadly to reference any linking of text to data
or non-textual modality. In contrast, Cognitive Science more formally defines
"grounding" as the process of establishing what mutual information is required
for successful communication between two interlocutors -- a definition which
might implicitly capture the NLP usage but differs in intent and scope. We
investigate the gap between these definitions and seek answers to the following
questions: (1) What aspects of grounding are missing from NLP tasks? Here we
present the dimensions of coordination, purviews and constraints. (2) How is
the term "grounding" used in the current research? We study the trends in
datasets, domains, and tasks introduced in recent NLP conferences. And finally,
(3) How to advance our current definition to bridge the gap with Cognitive
Science? We present ways to both create new tasks or repurpose existing ones to
make advancements towards achieving a more complete sense of grounding.
| 2,021 |
Computation and Language
|
BERTTune: Fine-Tuning Neural Machine Translation with BERTScore
|
Neural machine translation models are often biased toward the limited
translation references seen during training. To amend this form of overfitting,
in this paper we propose fine-tuning the models with a novel training objective
based on the recently-proposed BERTScore evaluation metric. BERTScore is a
scoring function based on contextual embeddings that overcomes the typical
limitations of n-gram-based metrics (e.g. synonyms, paraphrases), allowing
translations that are different from the references, yet close in the
contextual embedding space, to be treated as substantially correct. To be able
to use BERTScore as a training objective, we propose three approaches for
generating soft predictions, allowing the network to remain completely
differentiable end-to-end. Experiments carried out over four diverse language
pairs have achieved improvements of up to 0.58 pp (3.28%) in BLEU score and up
to 0.76 pp (0.98%) in BERTScore (F_BERT) when fine-tuning a strong baseline.
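One plausible reading of the soft-prediction idea is sketched below: the
decoder's output distribution yields an expected embedding per position,
keeping a BERTScore-style objective differentiable. The tensor names and
greedy-matching details are assumptions, not the paper's exact implementation.

```python
# Hedged sketch of "soft predictions" for a differentiable BERTScore-style
# training objective. Names and matching details are illustrative.
import torch.nn.functional as F

def soft_bertscore_loss(logits, ref_emb, emb_matrix, temperature=1.0):
    """logits: (T_hyp, vocab) decoder outputs; ref_emb: (T_ref, d) contextual
    embeddings of the reference; emb_matrix: (vocab, d) token embeddings."""
    # Soft prediction: expected embedding under the output distribution,
    # which keeps the whole computation differentiable end-to-end.
    probs = F.softmax(logits / temperature, dim=-1)            # (T_hyp, vocab)
    hyp_emb = probs @ emb_matrix                               # (T_hyp, d)
    # Pairwise cosine similarities, then greedy matching as in BERTScore.
    sim = F.normalize(hyp_emb, dim=-1) @ F.normalize(ref_emb, dim=-1).T
    precision = sim.max(dim=1).values.mean()
    recall = sim.max(dim=0).values.mean()
    f_bert = 2 * precision * recall / (precision + recall + 1e-8)
    return 1.0 - f_bert  # minimizing this maximizes the soft F_BERT
```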
| 2,021 |
Computation and Language
|
NAST: A Non-Autoregressive Generator with Word Alignment for
Unsupervised Text Style Transfer
|
Autoregressive models have been widely used in unsupervised text style
transfer. Despite their success, these models still suffer from a content
preservation problem: they often ignore part of the source sentence and
generate irrelevant words with strong styles. In this paper, we propose a
Non-Autoregressive generator for unsupervised text Style Transfer (NAST), which
alleviates the problem from two aspects. First, we observe that most words in
the transferred sentence can be aligned with related words in the source
sentence, so we explicitly model word alignments to suppress irrelevant words.
Second, existing models trained with the cycle loss align sentences in two
stylistic text spaces, which lacks fine-grained control at the word level. The
proposed non-autoregressive generator focuses on the connections between
aligned words, which learns the word-level transfer between styles. For
experiments, we integrate the proposed generator into two base models and
evaluate them on two style transfer tasks. The results show that NAST can
significantly improve the overall performance and provide explainable word
alignments. Moreover, the non-autoregressive generator achieves over 10x
speedups at inference. Our codes are available at
https://github.com/thu-coai/NAST.
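A toy sketch of the non-autoregressive, alignment-centred decoding idea
follows: every target position is predicted in parallel, tied to the source
word it aligns with (an identity alignment here for simplicity). This is a
hypothetical module, not the released code at the URL above.

```python
# Hedged sketch: parallel, word-aligned style transfer. All sizes and the
# identity alignment are illustrative assumptions.
import torch.nn as nn

class WordLevelTransfer(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.style = nn.Embedding(n_styles, d_model)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, style_id):
        # Non-autoregressive: all positions decoded at once, each output word
        # conditioned on its aligned source word plus a style embedding.
        h = self.embed(src_tokens) + self.style(style_id)[:, None, :]
        return self.proj(h)  # (batch, src_len, vocab) logits in parallel
```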
| 2,021 |
Computation and Language
|
Conversations Are Not Flat: Modeling the Dynamic Information Flow across
Dialogue Utterances
|
Nowadays, open-domain dialogue models can generate acceptable responses
according to the historical context based on the large-scale pre-trained
language models. However, they generally concatenate the dialogue history
directly as the model input to predict the response, a practice we term the
flat pattern, which ignores the dynamic information flow across dialogue
utterances. In
this work, we propose the DialoFlow model, in which we introduce a dynamic flow
mechanism to model the context flow, and design three training objectives to
capture the information dynamics across dialogue utterances by addressing the
semantic influence brought about by each utterance in large-scale pre-training.
Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset
demonstrate that our DialoFlow significantly outperforms the DialoGPT on the
dialogue generation task. Besides, we propose the Flow score, an effective
automatic metric for evaluating interactive human-bot conversation quality
based on the pre-trained DialoFlow, which presents high chatbot-level
correlation ($r=0.9$) with human ratings among 11 chatbots. Code and
pre-trained models will be public.
\footnote{\url{https://github.com/ictnlp/DialoFlow}}
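One way to picture the semantic-influence idea, as a hedged sketch: the
influence of each utterance is the shift it causes in a dense dialogue-level
context, and a Flow-style score measures how well predicted influences agree
with observed ones. The function names and the cosine-agreement reading below
are assumptions, not the released implementation.

```python
# Hedged sketch of utterance-level semantic influence and a Flow-style score.
import torch.nn.functional as F

def semantic_influence(context_states):
    """context_states: (K + 1, d) dialogue-level context after each of K
    utterances, with index 0 the empty history. Influence I_k = C_k - C_{k-1}."""
    return context_states[1:] - context_states[:-1]

def flow_score(predicted_influence, actual_influence):
    # One plausible reading: average cosine agreement between the influence
    # the model predicted and the influence actually observed.
    return F.cosine_similarity(predicted_influence, actual_influence,
                               dim=-1).mean()
```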
| 2,021 |
Computation and Language
|
Addressing Inquiries about History: An Efficient and Practical Framework
for Evaluating Open-domain Chatbot Consistency
|
A good open-domain chatbot should avoid presenting contradictory responses
about facts or opinions in a conversational session, known as its consistency
capacity. However, evaluating the consistency capacity of a chatbot is still
challenging. Employing human judges to interact with chatbots purposely to
check their capacities is costly, inefficient, and difficult to free from
subjective bias. In this paper, we propose the Addressing Inquiries about
History (AIH), an efficient and practical framework for the consistency
evaluation. At the conversation stage, AIH attempts to address appropriate
inquiries about the dialogue history to induce the chatbot to redeclare the
historical facts or opinions. We carry out the conversation between chatbots,
which is more efficient than the human-bot interaction and can also alleviate
the subjective bias. In this way, we manage to rapidly obtain a dialogue session
that contains responses with high contradiction possibilities. At the
contradiction recognition stage, we can either employ human judges or a natural
language inference (NLI) model to recognize whether the answers to the
inquiries are contradictory with history. Finally, we are able to rank chatbots
according to the contradiction statistics. Experiments on open-domain chatbots
show that our approach can efficiently and reliably assess the consistency
capacity of chatbots and achieve a high ranking correlation with the human
evaluation. We release the framework and hope to help improve the consistency
capacity of chatbots. \footnote{\url{https://github.com/ictnlp/AIH}}
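The contradiction-recognition stage can be instantiated with an off-the-shelf
NLI model, as in the sketch below; the specific checkpoint is an assumption,
and the actual framework is at the URL above.

```python
# Hedged sketch: flag a chatbot answer that contradicts its own history.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint choice; any NLI model with a CONTRADICTION label works.
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def contradicts_history(past_statement, answer_to_inquiry):
    # Premise: what the chatbot said earlier; hypothesis: its answer to the
    # inquiry about that history. Flag the pair if NLI says CONTRADICTION.
    inputs = tok(past_statement, answer_to_inquiry,
                 return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    return nli.config.id2label[logits.argmax(-1).item()] == "CONTRADICTION"
```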
| 2,021 |
Computation and Language
|
Language Scaling for Universal Suggested Replies Model
|
We consider the problem of scaling automated suggested replies for the Outlook
email system to multiple languages. Faced with increased compute requirements
and low resources for language expansion, we build a single universal model for
improving the quality and reducing run-time costs of our production system.
However, restricted data movement across regional centers prevents joint
training across languages. To this end, we propose a multi-task continual
learning framework, with auxiliary tasks and language adapters to learn
universal language representation across regions. The experimental results show
positive cross-lingual transfer across languages while reducing catastrophic
forgetting across regions. Our online results on real user traffic show
significant gains in CTR and characters saved, as well as 65% training cost
reduction compared with per-language models. As a consequence, we have scaled
the feature to multiple languages, including low-resource markets.
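A minimal sketch of the kind of language adapter the abstract mentions: a small
residual bottleneck, trained per region on top of a shared universal encoder.
Dimensions and placement are assumptions, not the production design.

```python
# Hedged sketch of a per-language adapter over a frozen universal encoder.
import torch.nn as nn

class LanguageAdapter(nn.Module):
    """Residual bottleneck adapter, one per language/region (sizes assumed)."""
    def __init__(self, d_model=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, hidden):
        # Universal representation plus a cheap, language-specific correction;
        # only the adapter needs training in each regional center.
        return hidden + self.up(self.act(self.down(hidden)))
```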
| 2,021 |
Computation and Language
|
ERNIE-Tiny : A Progressive Distillation Framework for Pretrained
Transformer Compression
|
Pretrained language models (PLMs) such as BERT adopt a training paradigm which
first pretrains the model on general data and then finetunes it on
task-specific data, and have recently achieved great success. However, PLMs are
notorious for their enormous number of parameters and are hard to deploy in
real-life applications. Knowledge distillation has become a prevailing
technique to address this problem by transferring knowledge from a large
teacher to a much smaller student over a set of data. We argue that the
selection of three key components, namely the teacher, the training data, and
the learning objective, is crucial to the effectiveness of distillation. We,
therefore, propose a four-stage progressive distillation framework, ERNIE-Tiny,
to compress PLMs, which varies the three components gradually from the general
level to the task-specific level. Specifically, the first stage, General
Distillation, performs distillation with guidance from the pretrained teacher,
general data, and a latent distillation loss. Then, General-Enhanced
Distillation changes the teacher model from the pretrained teacher to a
finetuned teacher. After that, Task-Adaptive Distillation shifts the training
data from general data to task-specific data. In the end, Task-Specific
Distillation adds two additional losses, namely the Soft-Label and Hard-Label
losses, in the last stage. Empirical results demonstrate the effectiveness of
our framework and the generalization gain brought by ERNIE-Tiny. In particular,
experiments show that a 4-layer ERNIE-Tiny maintains over 98.0% of the
performance of its 12-layer teacher BERT base on the GLUE benchmark, surpassing
the state-of-the-art (SOTA) by 1.0% GLUE score with the same amount of
parameters. Moreover, ERNIE-Tiny achieves a new compression SOTA on five
Chinese NLP tasks, outperforming BERT base by 0.4% accuracy with 7.5x fewer
parameters and 9.4x faster inference speed.
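Reading the Soft-Label and Hard-Label losses as standard knowledge-distillation
terms, a hedged sketch of the last-stage objective follows; the temperature and
mixing weight are assumptions.

```python
# Hedged sketch of the Task-Specific Distillation losses named above.
import torch.nn.functional as F

def task_specific_losses(student_logits, teacher_logits, labels, T=2.0,
                         alpha=0.5):
    # Soft-Label loss: match the finetuned teacher's softened distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    # Hard-Label loss: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```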
| 2,021 |
Computation and Language
|
Scalable Transformers for Neural Machine Translation
|
Transformer has been widely adopted in Neural Machine Translation (NMT)
because of its large capacity and parallel training of sequence generation.
However, the deployment of Transformer is challenging because different
scenarios require models of different complexities and scales. Naively training
multiple Transformers is redundant in terms of both computation and memory. In
this paper, we propose novel Scalable Transformers, which naturally contain
sub-Transformers of different scales and share parameters. Each sub-Transformer
can be easily obtained by cropping the parameters of the largest Transformer. A
three-stage training scheme is proposed to tackle the difficulty of training
the Scalable Transformers, introducing additional supervision from word-level
and sequence-level self-distillation. Extensive experiments were conducted on
WMT En-De and En-Fr to validate the proposed Scalable Transformers.
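A minimal sketch of the parameter-cropping idea: a sub-Transformer's linear
layer is a slice of the largest model's weights, so the scales share storage.
The shapes below are illustrative assumptions.

```python
# Hedged sketch: obtain a smaller sub-layer by slicing the full weights.
import torch

def crop_linear(weight, bias, d_out_sub, d_in_sub):
    # Shared parameters: the sub-model's weight is a slice (a view) of the
    # largest Transformer's weight, so no extra storage is introduced.
    return weight[:d_out_sub, :d_in_sub], bias[:d_out_sub]

full_w, full_b = torch.randn(1024, 1024), torch.randn(1024)
sub_w, sub_b = crop_linear(full_w, full_b, 512, 512)  # a half-scale sub-layer
```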
| 2,021 |
Computation and Language
|
Knowing the No-match: Entity Alignment with Dangling Cases
|
This paper studies a new problem setting of entity alignment for knowledge
graphs (KGs). Since KGs possess different sets of entities, there could be
entities that cannot find alignment across them, leading to the problem of
dangling entities. As a first attempt at this problem, we construct a new
dataset and design a multi-task learning framework for both entity alignment
and dangling entity detection. The framework can opt to abstain from predicting
alignment for the detected dangling entities. We propose three techniques for
dangling entity detection that are based on the distribution of
nearest-neighbor distances, i.e., nearest neighbor classification, marginal
ranking and background ranking. After detecting and removing dangling entities,
an incorporated entity alignment model in our framework can provide more robust
alignment for remaining entities. Comprehensive experiments and analyses
demonstrate the effectiveness of our framework. We further discover that the
dangling entity detection module can, in turn, improve alignment learning and
the final performance. The contributed resource is publicly available to foster
further research.
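For the distance-based detection techniques, a hedged sketch of the simplest
variant (nearest-neighbour thresholding) follows; the metric and threshold are
assumptions rather than the paper's exact classifier.

```python
# Hedged sketch: flag entities whose closest cross-KG neighbour is too far.
import torch

def detect_dangling(src_emb, tgt_emb, threshold):
    """src_emb: (n, d) source-KG entity embeddings; tgt_emb: (m, d) target-KG
    embeddings. Returns a boolean mask: True = likely dangling, so abstain."""
    dist = torch.cdist(src_emb, tgt_emb)   # (n, m) pairwise distances
    nn_dist = dist.min(dim=1).values       # distance to the nearest neighbour
    return nn_dist > threshold
```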
| 2,021 |
Computation and Language
|