Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (string, 1 class) |
---|---|---|---|
AgreeSum: Agreement-Oriented Multi-Document Summarization
|
We aim to renew interest in a particular multi-document summarization (MDS)
task which we call AgreeSum: agreement-oriented multi-document summarization.
Given a cluster of articles, the goal is to provide abstractive summaries that
represent information common and faithful to all input articles. Given the lack
of existing datasets, we create a dataset for AgreeSum, and provide annotations
on article-summary entailment relations for a subset of the clusters in the
dataset. We aim to create strong baselines for the task by applying the
top-performing pretrained single-document summarization model PEGASUS to
AgreeSum, leveraging both annotated clusters (via supervised losses) and
unannotated clusters (via T5-based entailment-related and language-related
losses). Compared to other baselines, both automatic evaluation and human
evaluation show better article-summary and cluster-summary entailment in
generated summaries. On a separate note, we hope that our article-summary
entailment annotations contribute to the community's effort in improving
abstractive summarization faithfulness.
| 2021 |
Computation and Language
|
Decoupled Dialogue Modeling and Semantic Parsing for Multi-Turn
Text-to-SQL
|
Recently, Text-to-SQL for multi-turn dialogue has attracted great interest.
Here, the user input of the current turn is parsed into the corresponding SQL
query of the appropriate database, given all previous dialogue history. Current
approaches mostly employ end-to-end models and consequently face two
challenges. First, dialogue history modeling and Text-to-SQL parsing are
implicitly combined, hence it is hard to carry out interpretable analysis and
obtain targeted improvement. Second, SQL annotation of multi-turn dialogue is
very expensive, leading to training-data sparsity. In this paper, we propose a
novel decoupled multi-turn Text-to-SQL framework, in which an utterance rewrite
model first explicitly completes the dialogue context, and a single-turn
Text-to-SQL parser then follows. A dual learning approach is also
proposed for the utterance rewrite model to address the data sparsity problem.
Compared with end-to-end approaches, the proposed decoupled method can achieve
excellent performance without any annotated in-domain data. With just a few
annotated rewrite cases, the decoupled method outperforms the released
state-of-the-art end-to-end models on both SParC and CoSQL datasets.
| 2021 |
Computation and Language
|
Dutch Named Entity Recognition and De-identification Methods for the
Human Resource Domain
|
The human resource (HR) domain contains various types of privacy-sensitive
textual data, such as e-mail correspondence and performance appraisal. Doing
research on these documents brings several challenges, one of them being
anonymisation. In this paper, we evaluate current Dutch text
de-identification methods for the HR domain in four steps. First, we update
one of these methods with the latest named entity recognition (NER) models;
the NER model based on the CoNLL 2002 corpus in combination with the BERTje
transformer gives the best results for suppressing persons (recall 0.94) and
locations (recall 0.82), while DEDUCE performs best for suppressing gender
(recall 0.53). Second, we evaluate NER under strict de-identification (a
person must be suppressed as a person), and third, under a loose sense of
de-identification (no matter how a person is suppressed, as long as it is
suppressed). In the fourth and last step, a new kind of NER dataset is tested
for recognising job titles in texts.
| 2020 |
Computation and Language
|
Modeling the Unigram Distribution
|
The unigram distribution is the non-contextual probability of finding a
specific word form in a corpus. While of central importance to the study of
language, it is commonly approximated by each word's sample frequency in the
corpus. This approach, being highly dependent on sample size, assigns zero
probability to any out-of-vocabulary (oov) word form. As a result, it produces
negatively biased probabilities for any oov word form and positively biased
probabilities for in-corpus words. In this work, we argue in favor of properly
modeling the unigram distribution -- claiming it should be a central task in
natural language processing. With this in mind, we present a novel model for
estimating it in a language (a neuralization of Goldwater et al.'s (2011)
model) and show it produces much better estimates across a diverse set of 7
languages than the na\"ive use of neural character-level language models.
| 2021 |
Computation and Language
|
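The bias described in the abstract above is easy to see in code. Below is an illustrative sketch, not the paper's model (which neuralizes Goldwater et al.'s): a relative-frequency estimate assigns zero probability to any oov form, while even a tiny add-alpha-smoothed character-bigram model of word forms gives every string finite probability.

```python
# Illustrative sketch: naive sample-frequency unigram estimate vs. a tiny
# character-bigram model of word forms (a crude stand-in for a proper
# generative model; corpus and hyperparameters are toy assumptions).
from collections import Counter
import math

corpus = "the cat sat on the mat the dog sat".split()

# Naive estimator: relative frequency in the corpus.
counts = Counter(corpus)
total = sum(counts.values())

def mle_prob(word):
    return counts[word] / total  # 0.0 for any oov word form

# Character-bigram model over word forms, weighted by token frequency.
char_bigrams, char_unigrams = Counter(), Counter()
for w in counts:
    padded = "^" + w + "$"
    for a, b in zip(padded, padded[1:]):
        char_bigrams[(a, b)] += counts[w]
        char_unigrams[a] += counts[w]

def charlm_logprob(word, alpha=0.5, vocab=28):
    padded = "^" + word + "$"
    lp = 0.0
    for a, b in zip(padded, padded[1:]):
        # add-alpha smoothing gives unseen transitions nonzero mass
        lp += math.log((char_bigrams[(a, b)] + alpha) /
                       (char_unigrams[a] + alpha * vocab))
    return lp

print(mle_prob("cat"), mle_prob("cats"))             # 0.111..., 0.0 (oov)
print(charlm_logprob("cat"), charlm_logprob("cats")) # both finite
```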
Cross-language Sentence Selection via Data Augmentation and Rationale
Training
|
This paper proposes an approach to cross-language sentence selection in a
low-resource setting. It uses data augmentation and negative sampling
techniques on noisy parallel sentence data to directly learn a cross-lingual
embedding-based query relevance model. Results show that this approach performs
as well as or better than multiple state-of-the-art machine translation +
monolingual retrieval systems trained on the same parallel data. Moreover, when
a rationale training secondary objective is applied to encourage the model to
match word alignment hints from a phrase-based statistical machine translation
model, consistent improvements are seen across three language pairs
(English-Somali, English-Swahili and English-Tagalog) over a variety of
state-of-the-art baselines.
| 2021 |
Computation and Language
|
AdvPicker: Effectively Leveraging Unlabeled Data via Adversarial
Discriminator for Cross-Lingual NER
|
Neural methods have been shown to achieve high performance in Named Entity
Recognition (NER), but rely on costly high-quality labeled data for training,
which is not always available across languages. While previous works have shown
that unlabeled data in a target language can be used to improve cross-lingual
model performance, we propose a novel adversarial approach (AdvPicker) to
better leverage such data and further improve results. We design an adversarial
learning framework in which an encoder learns entity domain knowledge from
labeled source-language data and better shared features are captured via
adversarial training, where a discriminator selects less language-dependent
target-language data via similarity to the source language. Experimental
results on standard benchmark datasets demonstrate that the proposed
method benefits strongly from this data selection process and outperforms
existing state-of-the-art methods, without requiring any additional external
resources (e.g., gazetteers or machine translation). The code is available
at https://aka.ms/AdvPicker
| 2021 |
Computation and Language
|
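The data-selection step described in the AdvPicker abstract above can be sketched as follows; the function name, margin, and the assumption that the discriminator outputs P(source language | sentence) are illustrative, not taken from the paper.

```python
# Hedged sketch of discriminator-based data selection: after adversarial
# training, keep the unlabeled target-language sentences that the language
# discriminator cannot confidently assign to either language (score near
# 0.5), treating them as less language-dependent.
import numpy as np

def select_language_independent(disc_scores, margin=0.1):
    """disc_scores: P(source language | sentence) per sentence, in [0, 1]."""
    scores = np.asarray(disc_scores)
    mask = np.abs(scores - 0.5) <= margin
    return np.flatnonzero(mask)  # indices of selected sentences

scores = [0.97, 0.52, 0.48, 0.12, 0.55]    # toy discriminator outputs
print(select_language_independent(scores))  # -> [1 2 4]
```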
Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory
|
Dialogue policy learning, a subtask that determines the content of system
responses and hence the degree of task completion, is essential for
task-oriented dialogue systems. However, the unbalanced distribution of system
actions in dialogue datasets often causes difficulty in learning to generate
desired actions and responses. In this paper, we propose a
retrieve-and-memorize framework to enhance the learning of system actions.
Specifically, we first design a neural context-aware retrieval module to retrieve
multiple candidate system actions from the training set given a dialogue
context. Then, we propose a memory-augmented multi-decoder network to generate
the system actions conditioned on the candidate actions, which allows the
network to adaptively select key information in the candidate actions and
ignore noise. We conduct experiments on the large-scale multi-domain
task-oriented dialogue datasets MultiWOZ 2.0 and MultiWOZ 2.1. Experimental
results show that our method achieves competitive performance among several
state-of-the-art models in the context-to-response generation task.
| 2021 |
Computation and Language
|
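The retrieval step in the Retrieve & Memorize abstract above can be approximated with a simple nearest-neighbor lookup; the paper's retrieval module is neural and trained, so this cosine-similarity version with made-up names is only a stand-in.

```python
# Minimal sketch: retrieve the system actions of the k training contexts
# most similar to the current dialogue context (contexts assumed to be
# pre-encoded as fixed-size vectors).
import numpy as np

def retrieve_candidate_actions(query_vec, train_vecs, train_actions, k=3):
    q = query_vec / np.linalg.norm(query_vec)
    m = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarities to every context
    topk = np.argsort(-sims)[:k]      # indices of the k nearest contexts
    return [train_actions[i] for i in topk]

rng = np.random.default_rng(0)
train_vecs = rng.normal(size=(100, 64))
train_actions = [f"action_{i}" for i in range(100)]
query = rng.normal(size=64)
print(retrieve_candidate_actions(query, train_vecs, train_actions))
```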
AdaTag: Multi-Attribute Value Extraction from Product Profiles with
Adaptive Decoding
|
Automatic extraction of product attribute values is an important enabling
technology in e-Commerce platforms. This task is usually modeled using sequence
labeling architectures, with several extensions to handle multi-attribute
extraction. One line of previous work constructs attribute-specific models,
through separate decoders or entirely separate models. However, this approach
constrains knowledge sharing across different attributes. Other contributions
use a single multi-attribute model, with different techniques to embed
attribute information. But sharing the entire network parameters across all
attributes can limit the model's capacity to capture attribute-specific
characteristics. In this paper we present AdaTag, which uses adaptive decoding
to handle extraction. We parameterize the decoder with pretrained attribute
embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This
allows for separate, but semantically correlated, decoders to be generated on
the fly for different attributes. This approach facilitates knowledge sharing,
while maintaining the specificity of each attribute. Our experiments on a
real-world e-Commerce dataset show marked improvements over previous methods.
| 2021 |
Computation and Language
|
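A minimal sketch of the adaptive-decoding idea from the AdaTag abstract above, assuming a hypernetwork that maps a pretrained attribute embedding to the weights of a per-attribute tagging head; the dimensions, the BIO tag set, and the omission of the Mixture-of-Experts module are simplifications.

```python
# Hedged sketch: a hypernetwork generates the tag-projection weights from
# an attribute embedding, so each attribute gets its own (but semantically
# correlated) decoder generated on the fly.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN, ATTR_DIM, NUM_TAGS = 128, 32, 3  # tags: B, I, O

class HyperTagger(nn.Module):
    def __init__(self):
        super().__init__()
        # hypernetwork: attribute embedding -> tag projection parameters
        self.weight_gen = nn.Linear(ATTR_DIM, NUM_TAGS * HIDDEN)
        self.bias_gen = nn.Linear(ATTR_DIM, NUM_TAGS)

    def forward(self, token_states, attr_emb):
        # token_states: (batch, seq, HIDDEN); attr_emb: (ATTR_DIM,)
        w = self.weight_gen(attr_emb).view(NUM_TAGS, HIDDEN)
        b = self.bias_gen(attr_emb)
        return F.linear(token_states, w, b)  # (batch, seq, NUM_TAGS)

model = HyperTagger()
tokens = torch.randn(2, 10, HIDDEN)
brand_emb = torch.randn(ATTR_DIM)   # stands in for a pretrained embedding
logits = model(tokens, brand_emb)
print(logits.shape)                 # torch.Size([2, 10, 3])
```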
ERICA: An Empathetic Android Companion for Covid-19 Quarantine
|
Over the past year, research in various domains, including Natural Language
Processing (NLP), has been accelerated to fight against the COVID-19 pandemic,
yet such research has just started on dialogue systems. In this paper, we
introduce an end-to-end dialogue system which aims to ease the isolation of
people under self-quarantine. We conduct a controlled simulation experiment to
assess the effects of the user interface: a web-based virtual agent called Nora
vs. the android ERICA via a video call. The experimental results show that the
android offers a more valuable user experience by giving the impression of
being more empathetic and engaging in the conversation due to its nonverbal
information, such as facial expressions and body gestures.
| 2021 |
Computation and Language
|
Bi-Granularity Contrastive Learning for Post-Training in Few-Shot Scene
|
The major paradigm of applying a pre-trained language model to downstream
tasks is to fine-tune it on labeled task data, which often suffers from
instability and low performance when the labeled examples are scarce. One way
to alleviate
this problem is to apply post-training on unlabeled task data before
fine-tuning, adapting the pre-trained model to target domains by contrastive
learning that considers either token-level or sequence-level similarity.
Inspired by the success of sequence masking, we argue that both token-level and
sequence-level similarities can be captured with a pair of masked
sequences. Therefore, we propose complementary random masking (CRM) to generate
a pair of masked sequences from an input sequence for sequence-level
contrastive learning and then develop contrastive masked language modeling
(CMLM) for post-training to integrate both token-level and sequence-level
contrastive learning. Empirical results show that CMLM surpasses several
recent post-training methods in few-shot settings without the need for data
augmentation.
| 2021 |
Computation and Language
|
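Complementary random masking (CRM) from the abstract above can be sketched directly: split the positions of an input sequence into two disjoint random halves and mask a different half in each copy. The mask token and the 50/50 split are assumptions.

```python
# Minimal sketch of complementary random masking: the two masked
# sequences together cover the input and form a positive pair for
# sequence-level contrastive learning.
import random

def complementary_random_mask(tokens, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    positions = list(range(len(tokens)))
    rng.shuffle(positions)
    half = len(positions) // 2
    set_a, set_b = set(positions[:half]), set(positions[half:])
    seq_a = [mask_token if i in set_a else t for i, t in enumerate(tokens)]
    seq_b = [mask_token if i in set_b else t for i, t in enumerate(tokens)]
    return seq_a, seq_b

tokens = "the movie was surprisingly good".split()
a, b = complementary_random_mask(tokens)
print(a)
print(b)
```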
cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction
using Transformer-based Language Models pre-trained on various text corpora
|
This paper describes the performance of the team cs60075_team2 at SemEval
2021 Task 1 - Lexical Complexity Prediction. The main contribution of this
paper is to fine-tune transformer-based language models pre-trained on several
text corpora, some being general (e.g., Wikipedia, BooksCorpus), some being the
corpora from which the CompLex Dataset was extracted, and others being from
other specific domains such as Finance, Law, etc. We perform ablation studies
on selecting the transformer models and how their individual complexity scores
are aggregated to get the resulting complexity scores. Our method achieves a
best Pearson Correlation of $0.784$ in sub-task 1 (single word) and $0.836$ in
sub-task 2 (multiple word expressions).
| 2021 |
Computation and Language
|
How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social
Impact
|
Recent years have seen many breakthroughs in natural language processing
(NLP), transitioning it from a mostly theoretical field to one with many
real-world applications. Noting the rising number of applications of other
machine learning and AI techniques with pervasive societal impact, we
anticipate the rising importance of developing NLP technologies for social
good. Inspired by theories in moral philosophy and global priorities research,
we aim to promote a guideline for social good in the context of NLP. We lay the
foundations via the moral philosophy definition of social good, propose a
framework to evaluate the direct and indirect real-world impact of NLP tasks,
and adopt the methodology of global priorities research to identify priority
causes for NLP research. Finally, we use our theoretical framework to provide
some practical guidelines for future NLP research for social good. Our data and
code are available at http://github.com/zhijing-jin/nlp4sg_acl2021. In
addition, we curate a list of papers and resources on NLP for social good at
https://github.com/zhijing-jin/NLP4SocialGood_Papers.
| 2023 |
Computation and Language
|
Annotation Curricula to Implicitly Train Non-Expert Annotators
|
Annotation studies often require annotators to familiarize themselves with
the task, its annotation scheme, and the data domain. This can be overwhelming
in the beginning, mentally taxing, and induce errors into the resulting
annotations, especially in citizen science or crowdsourcing scenarios where
domain expertise is not required and only annotation guidelines are provided.
To alleviate these issues, we propose annotation curricula, a novel approach to
implicitly train annotators. Our goal is to gradually introduce annotators to
the task by ordering the instances to be annotated according to a learning
curriculum. To do so, we first formalize annotation curricula for sentence- and
paragraph-level annotation tasks, define an ordering strategy, and identify
well-performing heuristics and interactively trained models on three existing
English datasets. We then conduct a user study with 40 voluntary participants
who are asked to identify the most fitting misconception for English tweets
about the Covid-19 pandemic. Our results show that using a simple heuristic to
order instances can already significantly reduce the total annotation time
while preserving a high annotation quality. Annotation curricula thus can
provide a novel way to improve data collection. To facilitate future research,
we further share our code and data consisting of 2,400 annotations.
| 2021 |
Computation and Language
|
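A toy version of the ordering idea from the annotation-curricula abstract above, assuming ascending token count as the difficulty heuristic (the paper evaluates several heuristics as well as interactively trained models).

```python
# Illustrative sketch: order instances easiest-first under a simple
# length-based difficulty proxy, so annotators see short, simple
# instances before long, complex ones.
def order_by_curriculum(instances):
    return sorted(instances, key=lambda text: len(text.split()))

tweets = [
    "Masks do not cause oxygen deficiency even during exercise.",
    "Vaccines work.",
    "The virus was not engineered in a laboratory according to studies.",
]
for t in order_by_curriculum(tweets):
    print(t)
```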
Prediction or Comparison: Toward Interpretable Qualitative Reasoning
|
Qualitative relationships illustrate how changing one property (e.g., moving
velocity) affects another (e.g., kinetic energy) and constitute a considerable
portion of textual knowledge. Current approaches use either semantic parsers to
transform natural language inputs into logical expressions or a "black-box"
model to solve them in one step. The former has a limited application range,
while the latter lacks interpretability. In this work, we categorize
qualitative reasoning tasks into two types: prediction and comparison. In
particular, we adopt neural network modules trained in an end-to-end manner to
simulate the two reasoning processes. Experiments on two qualitative reasoning
question answering datasets, QuaRTz and QuaRel, show our methods' effectiveness
and generalization capability, and the intermediate outputs provided by the
modules make the reasoning process interpretable.
| 2021 |
Computation and Language
|
Entity Concept-enhanced Few-shot Relation Extraction
|
Few-shot relation extraction (FSRE) is of great importance for the long-tail
distribution problem, especially in specialized domains with low-resource data.
Most existing FSRE algorithms fail to accurately classify relations merely
based on the information of the sentences together with the recognized entity
pairs, due to limited samples and lack of knowledge. To address this problem,
in this paper we propose a novel entity CONCEPT-enhanced FEw-shot Relation
Extraction scheme (ConceptFERE), which introduces the inherent concepts of
entities to provide clues for relation prediction and boost relation classification
performance. Firstly, a concept-sentence attention module is developed to
select the most appropriate concept from multiple concepts of each entity by
calculating the semantic similarity between sentences and concepts. Secondly, a
self-attention-based fusion module is presented to bridge the gap between
concept embeddings and sentence embeddings from different semantic spaces. Extensive
experiments on the FSRE benchmark dataset FewRel have demonstrated the
effectiveness and the superiority of the proposed ConceptFERE scheme as
compared to the state-of-the-art baselines. Code is available at
https://github.com/LittleGuoKe/ConceptFERE.
| 2021 |
Computation and Language
|
You Only Compress Once: Towards Effective and Elastic BERT Compression
via Exploit-Explore Stochastic Nature Gradient
|
Despite superior performance on various natural language processing tasks,
pre-trained models such as BERT are challenged by deployment on
resource-constrained devices. Most existing model compression approaches require
re-compression or fine-tuning across diverse constraints to accommodate various
hardware deployments. This practically limits the further application of model
compression. Moreover, the ineffective training and searching process of
existing elastic compression paradigms [4,27] prevents the direct migration to
BERT compression. Motivated by the necessity of efficient inference across
various constraints on BERT, we propose a novel approach, YOCO-BERT, to achieve
compress once and deploy everywhere. Specifically, we first construct a huge
search space with 10^13 architectures, which covers nearly all configurations
of the BERT model. Then, we propose a novel stochastic nature gradient
optimization method to guide the generation of optimal candidate architectures
while keeping a balanced trade-off between exploration and exploitation. When a certain
resource constraint is given, a lightweight distribution optimization approach
is utilized to obtain the optimal network for target deployment without
fine-tuning. Compared with state-of-the-art algorithms, YOCO-BERT provides more
compact models, yet achieves a 2.1%-4.5% average accuracy improvement on the
GLUE benchmark. Besides, YOCO-BERT is also more effective, e.g., the training
complexity is O(1) for N different devices. Code is available at
https://github.com/MAC-AutoML/YOCO-BERT.
| 2021 |
Computation and Language
|
Language Model Metrics and Procrustes Analysis for Improved Vector
Transformation of NLP Embeddings
|
Artificial neural networks are mathematical models at their core. This truism
presents some fundamental difficulty when networks are tasked with
Natural Language Processing. A key problem lies in measuring the similarity or
distance among vectors in NLP embedding space, since the mathematical concept
of distance does not always agree with the linguistic concept. We suggest that
the best way to measure linguistic distance among vectors is by employing the
Language Model (LM) that created them. We introduce Language Model Distance
(LMD) for measuring the accuracy of vector transformations based on the
Distributional Hypothesis (LMD Accuracy). We show the efficacy of this metric
by applying it to a simple neural network learning the Procrustes algorithm for
bilingual word mapping.
| 2020 |
Computation and Language
|
COINS: Dynamically Generating COntextualized Inference Rules for
Narrative Story Completion
|
Despite recent successes of large pre-trained language models in solving
reasoning tasks, their inference capabilities remain opaque. We posit that such
models can be made more interpretable by explicitly generating interim
inference rules, and using them to guide the generation of task-specific
textual outputs. In this paper we present COINS, a recursive inference
framework that i) iteratively reads context sentences, ii) dynamically
generates contextualized inference rules, encodes them, and iii) uses them to
guide task-specific output generation. We apply COINS to a Narrative Story
Completion task that asks a model to complete a story with missing sentences,
to produce a coherent story with plausible logical connections, causal
relationships, and temporal dependencies. By modularizing inference and
sentence generation steps in a recurrent model, we aim to make reasoning steps
and their effects on next sentence generation transparent. Our automatic and
manual evaluations show that the model generates better story sentences than
SOTA baselines, especially in terms of coherence. We further demonstrate
improved performance over strong pre-trained LMs in generating commonsense
inference rules. The recursive nature of COINS holds the potential for
controlled generation of longer sequences.
| 2021 |
Computation and Language
|
Improving Computer Generated Dialog with Auxiliary Loss Functions and
Custom Evaluation Metrics
|
Although people have the ability to engage in vapid dialogue without effort,
this may not be a uniquely human trait. Since the 1960s, researchers have been
trying to create agents that can generate artificial conversation. These
programs are commonly known as chatbots. With increasing use of neural networks
for dialog generation, some conclude that this goal has been achieved. This
research joins the quest by creating a dialog generating Recurrent Neural
Network (RNN) and by enhancing the ability of this network with auxiliary loss
functions and a beam search. Our custom loss functions achieve better cohesion
and coherence by including calculations of Maximum Mutual Information (MMI) and
entropy. We demonstrate the effectiveness of this system by using a set of
custom evaluation metrics inspired by an abundance of previous research and
based on tried-and-true principles of Natural Language Processing.
| 2018 |
Computation and Language
|
CLIP: A Dataset for Extracting Action Items for Physicians from Hospital
Discharge Notes
|
Continuity of care is crucial to ensuring positive health outcomes for
patients discharged from an inpatient hospital setting, and improved
information sharing can help. To share information, caregivers write discharge
notes containing action items to share with patients and their future
caregivers, but these action items are easily lost due to the lengthiness of
the documents. In this work, we describe our creation of a dataset of clinical
action items annotated over MIMIC-III, the largest publicly available dataset
of real clinical notes. This dataset, which we call CLIP, is annotated by
physicians and covers 718 documents representing 100K sentences. We describe
the task of extracting the action items from these documents as multi-aspect
extractive summarization, with each aspect representing a type of action to be
taken. We evaluate several machine learning models on this task, and show that
the best models exploit in-domain language model pre-training on 59K
unannotated documents, and incorporate context from neighboring sentences. We
also propose an approach to pre-training data selection that allows us to
explore the trade-off between size and domain-specificity of pre-training
datasets for this task.
| 2021 |
Computation and Language
|
Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing
|
Analysing whether neural language models encode linguistic information has
become popular in NLP. One method of doing so, which is frequently cited to
support the claim that models like BERT encode syntax, is called probing;
probes are small supervised models trained to extract linguistic information
from another model's output. If a probe is able to predict a particular
structure, it is argued that the model whose output it is trained on must have
implicitly learnt to encode it. However, drawing a generalisation about a
model's linguistic knowledge of a specific phenomenon based on what a probe
is able to learn may be problematic: in this work, we show that semantic cues
in training data mean that syntactic probes do not properly isolate syntax. We
generate a new corpus of semantically nonsensical but syntactically well-formed
Jabberwocky sentences, which we use to evaluate two probes trained on normal
data. We train the probes on several popular language models (BERT, GPT, and
RoBERTa), and find that in all settings they perform worse when evaluated on
these data, for one probe by an average of 15.4 UUAS points absolute. Although
in most cases they still outperform the baselines, their lead is reduced
substantially, e.g. by 53% in the case of BERT for one probe. This begs the
question: what empirical scores constitute knowing syntax?
| 2021 |
Computation and Language
|
Great Service! Fine-grained Parsing of Implicit Arguments
|
Broad-coverage meaning representations in NLP mostly focus on explicitly
expressed content. More importantly, the scarcity of datasets annotating
diverse implicit roles limits empirical studies into their linguistic nuances.
For example, in the web review "Great service!", the provider and consumer are
implicit arguments of different types. We examine an annotated corpus of
fine-grained implicit arguments (Cui and Hershcovich, 2020) by carefully
re-annotating it, resolving several inconsistencies. Subsequently, we present
the first transition-based neural parser that can handle implicit arguments
dynamically, and experiment with two different transition systems on the
improved dataset. We find that certain types of implicit arguments are more
difficult to parse than others and that the simpler system is more accurate in
recovering implicit arguments, despite having a lower overall parsing score,
attesting to the current reasoning limitations of NLP models. This work will
facilitate a better understanding of implicit and underspecified language, by
incorporating it holistically into meaning representations.
| 2021 |
Computation and Language
|
Recurrent Neural Networks with Mixed Hierarchical Structures for Natural
Language Processing
|
Hierarchical structures exist in both linguistics and Natural Language
Processing (NLP) tasks. How to design RNNs to learn hierarchical
representations of natural languages remains a long-standing challenge. In this
paper, we define two different types of boundaries referred to as static and
dynamic boundaries, respectively, and then use them to construct a multi-layer
hierarchical structure for document classification tasks. In particular, we
focus on a three-layer hierarchical structure with static word- and
sentence-layers and a dynamic phrase-layer. LSTM cells and two boundary detectors are
used to implement the proposed structure, and the resulting network is called
the {\em Recurrent Neural Network with Mixed Hierarchical Structures}
(MHS-RNN). We further add three layers of attention mechanisms to the MHS-RNN
model. Incorporating attention mechanisms allows our model to use more
important content to construct document representation and enhance its
performance on document classification tasks. Experiments on five different
datasets show that the proposed architecture outperforms previous methods on
all five tasks.
| 2021 |
Computation and Language
|
Neural semi-Markov CRF for Monolingual Word Alignment
|
Monolingual word alignment is important for studying fine-grained editing
operations (i.e., deletion, addition, and substitution) in text-to-text
generation tasks, such as paraphrase generation, text simplification,
neutralizing biased language, etc. In this paper, we present a novel neural
semi-Markov CRF alignment model, which unifies word and phrase alignments
through variable-length spans. We also create a new benchmark with human
annotations that cover four different text genres to evaluate monolingual word
alignment models in more realistic settings. Experimental results show that our
proposed model outperforms all previous approaches for monolingual word
alignment as well as a competitive QA-based baseline, which was previously only
applied to bilingual data. Our model demonstrates good generalizability to
three out-of-domain datasets and shows great utility in two downstream
applications: automatic text simplification and sentence pair classification
tasks.
| 2021 |
Computation and Language
|
W-RST: Towards a Weighted RST-style Discourse Framework
|
Aiming for a better integration of data-driven and linguistically-inspired
approaches, we explore whether RST Nuclearity, assigning a binary assessment of
importance between text segments, can be replaced by automatically generated,
real-valued scores, in what we call a Weighted-RST framework. In particular, we
find that weighted discourse trees from auxiliary tasks can benefit key NLP
downstream applications, compared to nuclearity-centered approaches. We further
show that real-valued importance distributions partially and interestingly
align with the assessment and uncertainty of human annotators.
| 2021 |
Computation and Language
|
Emergent Communication of Generalizations
|
To build agents that can collaborate effectively with others, recent research
has trained artificial agents to communicate with each other in Lewis-style
referential games. However, this often leads to successful but uninterpretable
communication. We argue that this is due to the game objective: communicating
about a single object in a shared visual context is prone to overfitting and
does not encourage language useful beyond concrete reference. In contrast,
human language conveys a rich variety of abstract ideas. To promote such
skills, we propose games that require communicating generalizations over sets
of objects representing abstract visual concepts, optionally with separate
contexts for each agent. We find that these games greatly improve systematicity
and interpretability of the learned languages, according to several metrics in
the literature. Finally, we propose a method for identifying logical operations
embedded in the emergent languages by learning an approximate compositional
reconstruction of the language.
| 2022 |
Computation and Language
|
The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting
User Questions About Human or Non-Human Identity
|
Humans are increasingly interacting with machines through language, sometimes
in contexts where the user may not know they are talking to a machine (like
over the phone or a text chatbot). We aim to understand how system designers
and researchers might allow their systems to confirm their non-human identity. We
collect over 2,500 phrasings related to the intent of ``Are you a robot?''. This
is paired with over 2,500 adversarially selected utterances where only
confirming the system is non-human would be insufficient or disfluent. We
compare classifiers to recognize the intent and discuss the precision/recall
and model complexity tradeoffs. Such classifiers could be integrated into
dialog systems to avoid undesired deception. We then explore how both a
generative research model (Blender) as well as two deployed systems (Amazon
Alexa, Google Assistant) handle this intent, finding that systems often fail to
confirm their non-human identity. Finally, we try to understand what a good
response to the intent would be, and conduct a user study to compare the
important aspects when responding to this intent.
| 2021 |
Computation and Language
|
MultiOpEd: A Corpus of Multi-Perspective News Editorials
|
We propose MultiOpEd, an open-domain news editorial corpus that supports
various tasks pertaining to the argumentation structure in news editorials,
focusing on automatic perspective discovery. News editorials are a genre of
persuasive text, where the argumentation structure is usually implicit.
However, the arguments presented in an editorial typically center around a
concise, focused thesis, which we refer to as their perspective. MultiOpEd aims
at supporting the study of multiple tasks relevant to automatic perspective
discovery, where a system is expected to produce a single-sentence thesis
statement summarizing the arguments presented. We argue that identifying and
abstracting such natural language perspectives from editorials is a crucial
step toward studying the implicit argumentation structure in news editorials.
We first discuss the challenges and define a few conceptual tasks towards our
goal. To demonstrate the utility of MultiOpEd and the induced tasks, we study
the problem of perspective summarization in a multi-task learning setting, as a
case study. We show that, with the induced tasks as auxiliary tasks, we can
improve the quality of the perspective summary generated. We hope that
MultiOpEd will be a useful resource for future studies on argumentation in the
news editorial domain.
| 2021 |
Computation and Language
|
BiToD: A Bilingual Multi-Domain Dataset For Task-Oriented Dialogue
Modeling
|
Task-oriented dialogue (ToD) benchmarks provide an important avenue to
measure progress and develop better conversational agents. However, existing
datasets for end-to-end ToD modeling are limited to a single language,
hindering the development of robust end-to-end ToD systems for multilingual
countries and regions. Here we introduce BiToD, the first bilingual
multi-domain dataset for end-to-end task-oriented dialogue modeling. BiToD
contains over 7k multi-domain dialogues (144k utterances) with a large and
realistic bilingual knowledge base. It serves as an effective benchmark for
evaluating bilingual ToD systems and cross-lingual transfer learning
approaches. We provide state-of-the-art baselines under three evaluation
settings (monolingual, bilingual, and cross-lingual). The analysis of our
baselines in different settings highlights 1) the effectiveness of training a
bilingual ToD system compared to two independent monolingual ToD systems, and
2) the potential of leveraging a bilingual knowledge base and cross-lingual
transfer learning to improve system performance under low-resource conditions.
| 2021 |
Computation and Language
|
Weakly-Supervised Methods for Suicide Risk Assessment: Role of Related
Domains
|
Social media has become a valuable resource for the study of suicidal
ideation and the assessment of suicide risk. Among social media platforms,
Reddit has emerged as the most promising one due to its anonymity and its focus
on topic-based communities (subreddits) that can be indicative of someone's
state of mind or interest regarding mental health disorders such as
r/SuicideWatch, r/Anxiety, r/depression. A challenge for previous work on
suicide risk assessment has been the small amount of labeled data. We propose
an empirical investigation into several classes of weakly-supervised
approaches, and show that using pseudo-labeling based on related issues around
mental health (e.g., anxiety, depression) helps improve model performance for
suicide risk assessment.
| 2021 |
Computation and Language
|
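The pseudo-labeling idea in the abstract above can be sketched with off-the-shelf components; the classifier, features, example texts, and confidence threshold below are all illustrative assumptions rather than the paper's setup.

```python
# Hedged sketch: a model trained on labeled related-domain posts labels
# unlabeled posts, and confident predictions are kept as pseudo-labeled
# training data for the target task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

labeled_texts = ["i feel hopeless and alone", "great day at the park",
                 "my anxiety is unbearable", "loved the new movie"]
labels = [1, 0, 1, 0]                    # toy related-domain risk labels
unlabeled_texts = ["everything feels pointless", "went for a nice run"]

vec = TfidfVectorizer().fit(labeled_texts + unlabeled_texts)
clf = LogisticRegression().fit(vec.transform(labeled_texts), labels)

probs = clf.predict_proba(vec.transform(unlabeled_texts))
confident = np.max(probs, axis=1) >= 0.6  # keep only confident predictions
pseudo = [(t, int(np.argmax(p)))
          for t, p, keep in zip(unlabeled_texts, probs, confident) if keep]
print(pseudo)  # pseudo-labeled examples to add to the training set
```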
Lifelong Learning of Hate Speech Classification on Social Media
|
Existing work on automated hate speech classification assumes that the
dataset is fixed and the classes are pre-defined. However, the amount of data
in social media increases every day, and hot topics change rapidly,
requiring classifiers to continuously adapt to new data without
forgetting previously learned knowledge. This ability, referred to as
lifelong learning, is crucial for the real-world application of hate speech
classifiers in social media. In this work, we propose lifelong learning of hate
speech classification on social media. To alleviate catastrophic forgetting, we
propose to use Variational Representation Learning (VRL) along with a memory
module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural
Network). Experimentally, we show that combining variational representation
learning and the LB-SOINN memory module achieves better performance than the
commonly-used lifelong learning techniques.
| 2021 |
Computation and Language
|
Improving Automated Evaluation of Open Domain Dialog via Diverse
Reference Augmentation
|
Multiple different responses are often plausible for a given open domain
dialog context. Prior work has shown the importance of having multiple valid
reference responses for meaningful and robust automated evaluations. In such
cases, common practice has been to collect more human written references.
However, such collection can be expensive, time consuming, and not easily
scalable. Instead, we propose a novel technique for automatically expanding a
human generated reference to a set of candidate references. We fetch plausible
references from knowledge sources, and adapt them so that they are more fluent
in context of the dialog instance in question. More specifically, we use (1) a
commonsense knowledge base to elicit a large number of plausible reactions
given the dialog history, and (2) relevant instances retrieved from a dialog
corpus, using similar past as well as future contexts. We demonstrate that our
automatically expanded reference sets lead to large improvements in
correlations of automated metrics with human ratings of system outputs on
the DailyDialog dataset.
| 2021 |
Computation and Language
|
MergeDistill: Merging Pre-trained Language Models using Distillation
|
Pre-trained multilingual language models (LMs) have achieved state-of-the-art
results in cross-lingual transfer, but they often lead to an inequitable
representation of languages due to limited capacity, skewed pre-training data,
and sub-optimal vocabularies. This has prompted the creation of an ever-growing
pre-trained model universe, where each model is trained on large amounts of
language or domain specific data with a carefully curated, linguistically
informed vocabulary. However, doing so brings us back full circle and prevents
one from leveraging the benefits of multilinguality. To address the gaps at
both ends of the spectrum, we propose MergeDistill, a framework to merge
pre-trained LMs in a way that can best leverage their assets with minimal
dependencies, using task-agnostic knowledge distillation. We demonstrate the
applicability of our framework in a practical setting by leveraging
pre-existing teacher LMs and training student LMs that perform competitively
with or even outperform teacher LMs trained on several orders of magnitude more
data and with a fixed model capacity. We also highlight the importance of
teacher selection and its impact on student model performance.
| 2021 |
Computation and Language
|
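Task-agnostic knowledge distillation, the core mechanism named in the MergeDistill abstract above, reduces to a soft cross-entropy between teacher and student output distributions. The single-teacher setup, temperature, and dimensions below are simplifications; the paper distills from multiple teachers with vocabulary mapping.

```python
# Minimal sketch of the distillation loss: KL divergence between
# temperature-softened teacher and student distributions (e.g., over
# an MLM vocabulary).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # scale by T^2 to keep gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * (T * T)

student = torch.randn(8, 30000, requires_grad=True)  # (batch, vocab)
teacher = torch.randn(8, 30000)
loss = distillation_loss(student, teacher)
loss.backward()
print(float(loss))
```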
BERTnesia: Investigating the capture and forgetting of knowledge in BERT
|
Probing complex language models has recently revealed several insights into
linguistic and semantic patterns found in the learned representations. In this
article, we probe BERT specifically to understand and measure the relational
knowledge it captures in its parametric memory. While probing for linguistic
understanding is commonly applied to all layers of BERT as well as fine-tuned
models, this has not been done for factual knowledge. We utilize existing
knowledge base completion tasks (LAMA) to probe every layer of pre-trained as
well as fine-tuned BERT models (ranking, question answering, NER). Our findings
show that knowledge is not just contained in BERT's final layers. Intermediate
layers contribute a significant amount (17-60%) to the total knowledge found.
Probing intermediate layers also reveals how different types of knowledge
emerge at varying rates. When BERT is fine-tuned, relational knowledge is
forgotten. The extent of forgetting is impacted by the fine-tuning objective
and the training data. We found that ranking models forget the least and retain
more knowledge in their final layer compared to masked language modeling and
question-answering. However, masked language modeling performed the best at
acquiring new knowledge from the training data. When it comes to learning
facts, we found that capacity and fact density are key factors. We hope this
initial work will spur further research into understanding the parametric
memory of language models and the effect of training objectives on factual
knowledge. The code to repeat the experiments is publicly available on GitHub.
| 2021 |
Computation and Language
|
Denoising Word Embeddings by Averaging in a Shared Space
|
We introduce a new approach for smoothing and improving the quality of word
embeddings. We consider a method of fusing word embeddings that were trained on
the same corpus but with different initializations. We project all the models
to a shared vector space using an efficient implementation of the Generalized
Procrustes Analysis (GPA) procedure, previously used in multilingual word
translation. Our word representation demonstrates consistent improvements over
the raw models as well as their simplistic average, on a range of tasks. As the
new representations are more stable and reliable, there is a noticeable
improvement in rare word evaluations.
| 2021 |
Computation and Language
|
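For two models, the fusion described above can be sketched with plain orthogonal Procrustes: align one embedding matrix to the other and average. The paper uses Generalized Procrustes Analysis over several models and a shared space, so this pairwise version is only an approximation.

```python
# Hedged sketch: orthogonal Procrustes alignment of two embedding
# matrices trained with different initializations, followed by averaging.
import numpy as np

def procrustes_align(src, ref):
    """Orthogonal Q minimizing ||src @ Q - ref||_F (rows = same words)."""
    u, _, vt = np.linalg.svd(src.T @ ref)
    return u @ vt

rng = np.random.default_rng(0)
ref = rng.normal(size=(1000, 100))                 # embeddings, run 1
q_true, _ = np.linalg.qr(rng.normal(size=(100, 100)))
src = ref @ q_true + 0.01 * rng.normal(size=(1000, 100))  # rotated run 2

Q = procrustes_align(src, ref)
fused = (ref + src @ Q) / 2.0                      # denoised representation
print(np.linalg.norm(src @ Q - ref))               # small residual
```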
Meta-Learning with Variational Semantic Memory for Word Sense
Disambiguation
|
A critical challenge faced by supervised word sense disambiguation (WSD) is
the lack of large annotated datasets with sufficient coverage of words in their
diversity of senses. This inspired recent research on few-shot WSD using
meta-learning. While such work has successfully applied meta-learning to learn
new word senses from very few examples, its performance still lags behind its
fully supervised counterpart. Aiming to further close this gap, we propose a
model of semantic memory for WSD in a meta-learning setting. Semantic memory
encapsulates prior experiences seen throughout the lifetime of the model, which
aids better generalization in limited data settings. Our model is based on
hierarchical variational inference and incorporates an adaptive memory update
rule via a hypernetwork. We show our model advances the state of the art in
few-shot WSD, supports effective learning in extremely data-scarce (e.g.,
one-shot) scenarios, and produces meaning prototypes that capture similar senses
of distinct words.
| 2021 |
Computation and Language
|
Enhancing Taxonomy Completion with Concept Generation via Fusing
Relational Representations
|
Automatic construction of a taxonomy supports many applications in
e-commerce, web search, and question answering. Existing taxonomy expansion or
completion methods assume that new concepts have been accurately extracted and
their embedding vectors learned from the text corpus. However, one critical and
fundamental challenge in fixing the incompleteness of taxonomies is the
incompleteness of the extracted concepts, especially for those whose names have
multiple words and consequently low frequency in the corpus. To resolve the
limitations of extraction-based methods, we propose GenTaxo to enhance taxonomy
completion by identifying positions in existing taxonomies that need new
concepts and then generating appropriate concept names. Instead of relying on
the corpus for concept embeddings, GenTaxo learns the contextual embeddings
from their surrounding graph-based and language-based relational information,
and leverages the corpus for pre-training a concept name generator.
Experimental results demonstrate that GenTaxo improves the completeness of
taxonomies over existing methods.
| 2021 |
Computation and Language
|
Embracing Ambiguity: Shifting the Training Target of NLI Models
|
Natural Language Inference (NLI) datasets contain examples with highly
ambiguous labels. While many research works do not pay much attention to this
fact, several recent efforts have been made to acknowledge and embrace the
existence of ambiguity, such as UNLI and ChaosNLI. In this paper, we explore
the option of training directly on the estimated label distribution of the
annotators in the NLI task, using a learning loss based on this ambiguity
distribution instead of the gold-labels. We prepare AmbiNLI, a trial dataset
obtained from readily available sources, and show it is possible to reduce
ChaosNLI divergence scores when finetuning on this data, a promising first step
towards learning how to capture linguistic ambiguity. Additionally, we show
that training on the same amount of data but targeting the ambiguity
distribution instead of gold-labels can result in models that achieve higher
performance and learn better representations for downstream tasks.
| 2021 |
Computation and Language
|
Do Grammatical Error Correction Models Realize Grammatical
Generalization?
|
There has been an increased interest in data generation approaches to
grammatical error correction (GEC) using pseudo data. However, these approaches
suffer from several issues that make them inconvenient for real-world
deployment including a demand for large amounts of training data. On the other
hand, some errors based on grammatical rules may not necessarily require a
large amount of data if GEC models can realize grammatical generalization. This
study explores to what extent GEC models generalize grammatical knowledge
required for correcting errors. We introduce an analysis method using synthetic
and real GEC datasets with controlled vocabularies to evaluate whether models
can generalize to unseen errors. We found that a current standard
Transformer-based GEC model fails to realize grammatical generalization even in
simple settings with limited vocabulary and syntax, suggesting that it lacks
the generalization ability required to correct errors from provided training
examples.
| 2021 |
Computation and Language
|
Emotion-aware Chat Machine: Automatic Emotional Response Generation for
Human-like Emotional Interaction
|
The consistency of a response to a given post at the semantic and emotional
levels is essential for a dialogue system to deliver human-like
interactions. However, this challenge is not well addressed in the literature,
since most of the approaches neglect the emotional information conveyed by a
post while generating responses. This article addresses this problem by
proposing a unified end-to-end neural architecture, which is capable of
simultaneously encoding the semantics and the emotions in a post for generating
more intelligent responses with appropriately expressed emotions. Extensive
experiments on real-world data demonstrate that the proposed method outperforms
the state-of-the-art methods in terms of both content coherence and emotion
appropriateness.
| 2021 |
Computation and Language
|
Empowering Language Understanding with Counterfactual Reasoning
|
Present language understanding methods have demonstrated an extraordinary
ability to recognize patterns in texts via machine learning. However,
existing methods indiscriminately use the recognized patterns in the testing
phase, which is inherently different from us humans, who have counterfactual
thinking, e.g., to scrutinize the hard testing samples. Inspired by this,
we propose a Counterfactual Reasoning Model, which mimics the counterfactual
thinking by learning from a few counterfactual samples. In particular, we devise
a generation module to generate representative counterfactual samples for each
factual sample, and a retrospective module to retrospect the model prediction
by comparing the counterfactual and factual samples. Extensive experiments on
sentiment analysis (SA) and natural language inference (NLI) validate the
effectiveness of our method.
| 2021 |
Computation and Language
|
How Did This Get Funded?! Automatically Identifying Quirky Scientific
Achievements
|
Humor is an important social phenomenon, serving complex social and
psychological functions. However, despite being studied for millennia, humor is
computationally not well understood and is often considered an AI-complete problem.
In this work, we introduce a novel setting in humor mining: automatically
detecting funny and unusual scientific papers. We are inspired by the Ig Nobel
prize, a satirical prize awarded annually to celebrate funny scientific
achievements (example past winner: "Are cows more likely to lie down the longer
they stand?"). This challenging task has unique characteristics that make it
particularly suitable for automatic learning. We construct a dataset containing
thousands of funny papers and use it to learn classifiers, combining findings
from psychology and linguistics with recent advances in NLP. We use our models
to identify potentially funny papers in a large dataset of over 630,000
articles. The results demonstrate the potential of our methods, and more
broadly the utility of integrating state-of-the-art NLP methods with insights
from more traditional disciplines.
| 2021 |
Computation and Language
|
Semantic-Enhanced Explainable Finetuning for Open-Domain Dialogues
|
This paper proposes to combine pretrained language models with the modular
dialogue paradigm for open-domain dialogue modeling. Our method,
semantic-enhanced finetuning, instantiates conversation understanding,
planning, and response generation as a language model finetuning task. At
inference, we disentangle semantic and token variations by specifying sampling
methods and constraints for each module separately. For training and
evaluation, we present X-Weibo, a Chinese multi-turn open-domain dialogue
dataset with automatic annotations for emotions, dialogue acts (DAs), and topical words.
Experiments show that semantic-enhanced finetuning outperforms strong baselines
on non-semantic and semantic metrics, improves the human-evaluated relevance,
coherence, and informativeness, and exhibits considerable controllability over
semantic variables.
| 2022 |
Computation and Language
|
Combining Static Word Embeddings and Contextual Representations for
Bilingual Lexicon Induction
|
Bilingual Lexicon Induction (BLI) aims to map words in one language to their
translations in another, and is typically done by learning linear projections
to align monolingual word representation spaces. Two classes of word
representations have been explored for BLI: static word embeddings and
contextual representations, but no studies have combined the two. In this
paper, we propose a simple yet effective mechanism to combine the static word
embeddings and the contextual representations to utilize the advantages of both
paradigms. We test the combination mechanism on various language pairs under
the supervised and unsupervised BLI benchmark settings. Experiments show that
our mechanism consistently improves performance over robust BLI baselines on
all language pairs, with average gains of 3.2 points in the supervised setting
and 3.1 points in the unsupervised setting.
| 2021 |
Computation and Language
|
Enhancing Label Correlation Feedback in Multi-Label Text Classification
via Multi-Task Learning
|
In multi-label text classification (MLTC), each given document is associated
with a set of correlated labels. To capture label correlations, previous
classifier-chain and sequence-to-sequence models transform MLTC to a sequence
prediction task. However, they tend to suffer from label order dependency,
label combination over-fitting and error propagation problems. To address these
problems, we introduce a novel approach with multi-task learning to enhance
label correlation feedback. We first utilize a joint embedding (JE) mechanism
to obtain the text and label representations simultaneously. In the MLTC task, a
document-label cross-attention (CA) mechanism is adopted to generate a more
discriminative document representation. Furthermore, we propose two auxiliary
label co-occurrence prediction tasks to enhance label correlation learning: 1)
Pairwise Label Co-occurrence Prediction (PLCP), and 2) Conditional Label
Co-occurrence Prediction (CLCP). Experimental results on AAPD and RCV1-V2
datasets show that our method outperforms competitive baselines by a large
margin. We analyze low-frequency label performance, label dependency, label
combination diversity and coverage speed to show the effectiveness of our
proposed method on label correlation learning.
| 2021 |
Computation and Language
|
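A hedged sketch of the Pairwise Label Co-occurrence Prediction (PLCP) auxiliary task named above: from a document representation and a pair of label embeddings, predict whether both labels occur in the gold label set. The scoring function and dimensions are assumptions for illustration.

```python
# Illustrative sketch: an auxiliary binary-cross-entropy loss over all
# label pairs, encouraging the model to learn label co-occurrence.
import torch
import torch.nn as nn

NUM_LABELS, DOC_DIM = 4, 16

class PLCPHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_LABELS, DOC_DIM)
        self.scorer = nn.Linear(3 * DOC_DIM, 1)

    def forward(self, doc_vec, i, j):
        pair = torch.cat([doc_vec, self.label_emb.weight[i],
                          self.label_emb.weight[j]], dim=-1)
        return self.scorer(pair).squeeze(-1)  # co-occurrence logit

head = PLCPHead()
doc_vec = torch.randn(DOC_DIM)
gold = {0, 2}                       # labels present in this document
loss_fn = nn.BCEWithLogitsLoss()
loss = 0.0
for i in range(NUM_LABELS):
    for j in range(i + 1, NUM_LABELS):
        target = torch.tensor(float(i in gold and j in gold))
        loss = loss + loss_fn(head(doc_vec, i, j), target)
print(float(loss))
```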
Lexical Semantic Change Discovery
|
While there is a large amount of research in the field of Lexical Semantic
Change Detection, only a few approaches go beyond a standard benchmark
evaluation of existing models. In this paper, we propose a shift of focus from change
detection to change discovery, i.e., discovering novel word senses over time
from the full corpus vocabulary. By heavily fine-tuning a type-based and a
token-based approach on recently published German data, we demonstrate that
both models can successfully be applied to discover new words undergoing
meaning change. Furthermore, we provide an almost fully automated framework for
both evaluation and discovery.
| 2021 |
Computation and Language
|
Attend and select: A segment selective transformer for microblog hashtag
generation
|
Hashtag generation aims to generate short and informal topical tags from a
microblog post, in which tokens or phrases form the hashtags. These tokens or
phrases may originate from primary fragmental textual pieces (e.g., segments)
in the original text and are separated into different segments. However,
conventional sequence-to-sequence generation methods struggle to filter out
secondary information across different textual granularities and are not good
at selecting crucial tokens. Thus, they are suboptimal for generating more
condensed hashtags. In this work, we propose a modified Transformer-based
generation model that adds a segment-selection procedure between the original
encoding and decoding phases. The segment-selection phase is based on a novel
Segments Selection Mechanism (SSM) to model different textual granularity on
global text, local segments, and tokens, contributing to generating condensed
hashtags. Specifically, it first attends to primary semantic segments and then
transforms discontinuous segments from the source text into a sequence of
hashtags by selecting crucial tokens. Extensive evaluations on the two datasets
reveal our approach's superiority with significant improvements to the
extraction and generation baselines. The code and datasets are available at
https://github.com/OpenSUM/HashtagGen.
| 2022 |
Computation and Language
|
Identifying Populist Paragraphs in Text: A machine-learning approach
|
In this paper we present an approach to developing a text-classification model
able to identify populist content in text. The developed BERT-based model is
largely successful in identifying populist content and produces only a
negligible number of false negatives, which makes it well suited as a
content-analysis automation tool that shortlists potentially relevant content
for human validation.
| 2021 |
Computation and Language
|
On the Effectiveness of Adapter-based Tuning for Pretrained Language
Model Adaptation
|
Adapter-based tuning has recently arisen as an alternative to fine-tuning. It
works by adding light-weight adapter modules to a pretrained language model
(PrLM) and only updating the parameters of adapter modules when learning on a
downstream task. As such, it adds only a few trainable parameters per new task,
allowing a high degree of parameter sharing. Prior studies have shown that
adapter-based tuning often achieves comparable results to fine-tuning. However,
existing work only focuses on the parameter-efficient aspect of adapter-based
tuning while lacking further investigation on its effectiveness. In this paper,
we study the latter. We first show that adapter-based tuning better mitigates
forgetting issues than fine-tuning since it yields representations with less
deviation from those generated by the initial PrLM. We then empirically compare
the two tuning methods on several downstream NLP tasks and settings. We
demonstrate that 1) adapter-based tuning outperforms fine-tuning on
low-resource and cross-lingual tasks; 2) it is more robust to overfitting and
less sensitive to changes in learning rates.
| 2021 |
Computation and Language
|
Transient Chaos in BERT
|
Language is an outcome of our complex and dynamic human-interactions and the
technique of natural language processing (NLP) is hence built on human
linguistic activities. Bidirectional Encoder Representations from Transformers
(BERT) has recently gained popularity by establishing state-of-the-art
scores in several NLP benchmarks. A Lite BERT (ALBERT) is, as its name
suggests, a lightweight version of BERT in which the number of parameters is
reduced by repeatedly applying the same neural network, the Transformer's
encoder layer. By pre-training the parameters with a massive
amount of natural language data, ALBERT can convert input sentences into
versatile high-dimensional vectors potentially capable of solving multiple NLP
tasks. In that sense, ALBERT can be regarded as a well-designed
high-dimensional dynamical system whose operator is the Transformer's encoder,
and essential structures of human language are thus expected to be encapsulated
in its dynamics. In this study, we investigated the embedded properties of
ALBERT to reveal how NLP tasks are effectively solved by exploiting its
dynamics. We thereby aimed to explore the nature of human language from the
dynamical expressions of the NLP model. Our short-term analysis clarified that
the pre-trained model stably yields trajectories with higher dimensionality,
which would enhance the expressive capacity required for NLP tasks. Also, our
long-term analysis revealed that ALBERT intrinsically shows transient chaos, a
typical nonlinear phenomenon showing chaotic dynamics only in its transient,
and the pre-trained ALBERT model tends to produce the chaotic trajectory for a
significantly longer time period compared to a randomly-initialized one. Our
results imply that local chaoticity would contribute to improving NLP
performance, uncovering a novel aspect in the role of chaotic dynamics in human
language behaviors.
| 2,022 |
Computation and Language
|
Let's be explicit about that: Distant supervision for implicit discourse
relation classification via connective prediction
|
In implicit discourse relation classification, we want to predict the
relation between adjacent sentences in the absence of any overt discourse
connectives. This is challenging even for humans, leading to a shortage of
annotated data, a fact that makes the task even more difficult for supervised
machine learning approaches. In the current study, we perform implicit
discourse relation classification without relying on any labeled implicit
relation. We sidestep the lack of data through explicitation of implicit
relations to reduce the task to two sub-problems: language modeling and
explicit discourse relation classification, a much easier problem. Our
experimental results show that this method can even marginally outperform the
state-of-the-art, in spite of being much simpler than alternative models of
comparable performance. Moreover, we show that the achieved performance is
robust across domains as suggested by the zero-shot experiments on a completely
different domain. This indicates that recent advances in language modeling have
made language models sufficiently good at capturing inter-sentence relations
without the help of explicit discourse markers.
| 2,021 |
Computation and Language
|
The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual
Machine Translation
|
One of the biggest challenges hindering progress in low-resource and
multilingual machine translation is the lack of good evaluation benchmarks.
Current evaluation benchmarks either lack good coverage of low-resource
languages, consider only restricted domains, or are low quality because they
are constructed using semi-automatic procedures. In this work, we introduce the
FLORES-101 evaluation benchmark, consisting of 3001 sentences extracted from
English Wikipedia and covering a variety of different topics and domains. These
sentences have been translated into 101 languages by professional translators
through a carefully controlled process. The resulting dataset enables better
assessment of model quality on the long tail of low-resource languages,
including the evaluation of many-to-many multilingual translation systems, as
all translations are multilingually aligned. By publicly releasing such a
high-quality and high-coverage dataset, we hope to foster progress in the
machine translation community and beyond.
| 2,021 |
Computation and Language
|
A Targeted Assessment of Incremental Processing in Neural Language Models
and Humans
|
We present a targeted, scaled-up comparison of incremental processing in
humans and neural language models by collecting by-word reaction time data for
sixteen different syntactic test suites across a range of structural phenomena.
Human reaction time data comes from a novel online experimental paradigm called
the Interpolated Maze task. We compare human reaction times to by-word
probabilities for four contemporary language models, with different
architectures and trained on a range of data set sizes. We find that across
many phenomena, both humans and language models show increased processing
difficulty in ungrammatical sentence regions with human and model `accuracy'
scores (a la Marvin and Linzen (2018)) about equal. However, although language
model outputs match humans in direction, we show that models systematically
under-predict the difference in magnitude of incremental processing difficulty
between grammatical and ungrammatical sentences. Specifically, when models
encounter syntactic violations they fail to accurately predict the longer
reaction times observed in the human data. These results call into question
whether contemporary language models are approaching human-like performance for
sensitivity to syntactic violations.
| 2,023 |
Computation and Language
|
Extractive Research Slide Generation Using Windowed Labeling Ranking
|
Presentation slides describing the content of scientific and technical papers
are an efficient and effective way to present that work. However, manually
generating presentation slides is labor intensive. We propose a method to
automatically generate slides for scientific papers based on a corpus of 5000
paper-slide pairs compiled from conference proceedings websites. The sentence
labeling module of our method is based on SummaRuNNer, a neural sequence model
for extractive summarization. Instead of ranking sentences based on semantic
similarities in the whole document, our algorithm measures importance and
novelty of sentences by combining semantic and lexical features within a
sentence window. Our method outperforms several baseline methods including
SummaRuNNer by a significant margin in terms of ROUGE score.
| 2,021 |
Computation and Language
|
Structured Reordering for Modeling Latent Alignments in Sequence
Transduction
|
Despite success in many domains, neural models struggle in settings where
train and test examples are drawn from different distributions. In particular,
in contrast to humans, conventional sequence-to-sequence (seq2seq) models fail
to generalize systematically, i.e., interpret sentences representing novel
combinations of concepts (e.g., text segments) seen in training. Traditional
grammar formalisms excel in such settings by implicitly encoding alignments
between input and output segments, but are hard to scale and maintain. Instead
of engineering a grammar, we directly model segment-to-segment alignments as
discrete structured latent variables within a neural seq2seq model. To
efficiently explore the large space of alignments, we introduce a reorder-first
align-later framework whose central component is a neural reordering module
producing {\it separable} permutations. We present an efficient dynamic
programming algorithm performing exact marginal inference of separable
permutations, and, thus, enabling end-to-end differentiable training of our
model. The resulting seq2seq model exhibits better systematic generalization
than standard models on synthetic problems and NLP tasks (i.e., semantic
parsing and machine translation).
| 2,021 |
Computation and Language
|
Itihasa: A large-scale corpus for Sanskrit to English translation
|
This work introduces Itihasa, a large-scale translation dataset containing
93,000 pairs of Sanskrit shlokas and their English translations. The shlokas
are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We
first describe the motivation behind the curation of such a dataset and follow
up with empirical analysis to bring out its nuances. We then benchmark the
performance of standard translation models on this corpus and show that even
state-of-the-art transformer architectures perform poorly, emphasizing the
complexity of the dataset.
| 2,021 |
Computation and Language
|
Meta-learning for downstream aware and agnostic pretraining
|
Neural network pretraining is gaining attention due to its outstanding
performance in natural language processing applications. However, pretraining
usually leverages predefined task sequences to learn general linguistic clues.
The lack of mechanisms in choosing proper tasks during pretraining makes the
learning and knowledge encoding inefficient. We thus propose using
meta-learning to select tasks that provide the most informative learning
signals in each episode of pretraining. With the proposed method, we aim to
achieve better efficiency in computation and memory usage for the pretraining
process and resulting networks while maintaining the performance. In this
preliminary work, we discuss the algorithm of the method and its two
variants, downstream-aware and downstream-agnostic pretraining. We also
summarize our experimental plan; empirical results will be shared in future
work.
| 2,021 |
Computation and Language
|
On the Language Coverage Bias for Neural Machine Translation
|
Language coverage bias, which indicates the content-dependent differences
between sentence pairs originating from the source and target languages, is
important for neural machine translation (NMT) because the target-original
training data is not well exploited in current practice. By carefully designing
experiments, we provide comprehensive analyses of the language coverage bias in
the training data, and find that using only the source-original data achieves
comparable performance with using full training data. Based on these
observations, we further propose two simple and effective approaches to
alleviate the language coverage bias problem through explicitly distinguishing
between the source- and target-original training data, which consistently
improve the performance over strong baselines on six WMT20 translation tasks.
Complementary to the translationese effect, language coverage bias provides
another explanation for the performance drop caused by back-translation. We
also apply our approach to both back- and forward-translation and find that
mitigating the language coverage bias can improve the performance of both the
two representative data augmentation methods and their tagged variants.
| 2,021 |
Computation and Language
|
Semantic and Syntactic Enhanced Aspect Sentiment Triplet Extraction
|
Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from
sentences, where each triplet includes an entity, its associated sentiment, and
the opinion span explaining the reason for the sentiment. Most existing
research addresses this problem in a multi-stage pipeline manner, which
neglects the mutual information between such three elements and has the problem
of error propagation. In this paper, we propose a Semantic and Syntactic
Enhanced aspect Sentiment triplet Extraction model (S3E2) to fully exploit the
syntactic and semantic relationships between the triplet elements and jointly
extract them. Specifically, we design a Graph-Sequence dual representation and
modeling paradigm for ASTE: we represent the semantic and syntactic
relationships between word pairs in a sentence as a graph and encode it with
Graph Neural Networks (GNNs), while modeling the original sentence with an
LSTM to preserve sequential information. Under this setting, we further apply
a more efficient inference strategy for triplet extraction. Extensive
evaluations on four benchmark datasets show that S3E2 significantly
outperforms existing approaches, demonstrating its superiority and flexibility
as an end-to-end model.
| 2,021 |
Computation and Language
|
Summary Grounded Conversation Generation
|
Many conversation datasets have been constructed in recent years using
crowdsourcing. However, the data collection process can be time consuming and
presents many challenges to ensure data quality. Since language generation has
improved immensely in recent years with the advancement of pre-trained language
models, we investigate how such models can be utilized to generate entire
conversations, given only a summary of a conversation as the input. We explore
three approaches to generate summary grounded conversations, and evaluate the
generated conversations using automatic measures and human judgements. We also
show that the accuracy of conversation summarization can be improved by
augmenting a conversation summarization dataset with generated conversations.
| 2,021 |
Computation and Language
|
A Joint Model for Dropped Pronoun Recovery and Conversational Discourse
Parsing in Chinese Conversational Speech
|
In this paper, we present a neural model for joint dropped pronoun recovery
(DPR) and conversational discourse parsing (CDP) in Chinese conversational
speech. We show that DPR and CDP are closely related, and a joint model
benefits both tasks. We refer to our model as DiscProReco, and it first encodes
the tokens in each utterance in a conversation with a directed Graph
Convolutional Network (GCN). The token states for an utterance are then
aggregated to produce a single state for each utterance. The utterance states
are then fed into a biaffine classifier to construct a conversational discourse
graph. A second (multi-relational) GCN is then applied to the utterance states
to produce a discourse relation-augmented representation for the utterances,
which are then fused together with token states in each utterance as input to a
dropped pronoun recovery layer. The joint model is trained and evaluated on a
new Structure Parsing-enhanced Dropped Pronoun Recovery (SPDPR) dataset that we
annotated with both types of information. Experimental results on the SPDPR
dataset and other benchmarks show that DiscProReco significantly outperforms
the state-of-the-art baselines of both tasks.
| 2,021 |
Computation and Language
|
A Globally Normalized Neural Model for Semantic Parsing
|
In this paper, we propose a globally normalized model for context-free
grammar (CFG)-based semantic parsing. Instead of predicting a probability, our
model predicts a real-valued score at each step and does not suffer from the
label bias problem. Experiments show that our approach outperforms locally
normalized models on small datasets, but it does not yield improvement on a
large dataset.
| 2,021 |
Computation and Language
|
LAWDR: Language-Agnostic Weighted Document Representations from
Pre-trained Models
|
Cross-lingual document representations enable language understanding in
multilingual contexts and allow transfer learning from high-resource to
low-resource languages at the document level. Recently large pre-trained
language models such as BERT, XLM and XLM-RoBERTa have achieved great success
when fine-tuned on sentence-level downstream tasks. It is tempting to apply
these cross-lingual models to document representation learning. However, there
are two challenges: (1) these models impose high costs on long document
processing and thus many of them have a strict length limit; (2) model
fine-tuning requires extra data and computational resources, which is not
practical in resource-limited settings. In this work, we address these
challenges by proposing unsupervised Language-Agnostic Weighted Document
Representations (LAWDR). We study the geometry of pre-trained sentence
embeddings and leverage it to derive document representations without
fine-tuning. Evaluated on cross-lingual document alignment, LAWDR demonstrates
comparable performance to state-of-the-art models on benchmark datasets.
| 2,021 |
Computation and Language
|
Never guess what I heard... Rumor Detection in Finnish News: a Dataset
and a Baseline
|
This study presents a new dataset on rumor detection in Finnish-language news
headlines. We have evaluated two different LSTM-based models and two different
BERT models, and have found very significant differences in the results. A
fine-tuned FinBERT reaches the best overall accuracy of 94.3% and the best
rumor-label accuracy of 96.0%. However, a model fine-tuned on Multilingual
BERT reaches the best factual-label accuracy of 97.2%. Our results suggest
that the performance difference is due to a difference in the original
training data. Furthermore, we find that a regular LSTM model works better
than one trained with a pretrained word2vec model. These findings suggest that
more work needs to be done on pretrained models for the Finnish language, as
they have been trained on small and biased corpora.
| 2,021 |
Computation and Language
|
Apurin\~a Universal Dependencies Treebank
|
This paper presents and discusses the first Universal Dependencies treebank
for the Apurin\~a language. The treebank contains 76 fully annotated
sentences, uses 14 parts of speech, as well as seven augmented or new
features - some of which are unique to Apurin\~a. The construction of the
treebank has also served as an opportunity to develop a finite-state
description of the language and facilitate the transfer of open-source
infrastructure possibilities to an
endangered language of the Amazon. The source materials used in the initial
treebank represent fieldwork practices where not all tokens of all sentences
are equally annotated. For this reason, establishing regular annotation
practices for the entire Apurin\~a treebank is an ongoing project.
| 2,021 |
Computation and Language
|
Generating Relevant and Coherent Dialogue Responses using Self-separated
Conditional Variational AutoEncoders
|
Conditional Variational AutoEncoder (CVAE) effectively increases the
diversity and informativeness of responses in open-ended dialogue generation
tasks through enriching the context vector with sampled latent variables.
However, due to the inherent one-to-many and many-to-one phenomena in human
dialogues, the sampled latent variables may not correctly reflect the contexts'
semantics, leading to irrelevant and incoherent generated responses. To resolve
this problem, we propose Self-separated Conditional Variational AutoEncoder
(abbreviated as SepaCVAE) that introduces group information to regularize the
latent variables, which enhances CVAE by improving the responses' relevance and
coherence while maintaining their diversity and informativeness. SepaCVAE
actively divides the input data into groups, and then widens the absolute
difference between data pairs from distinct groups, while narrowing the
relative distance between data pairs in the same group. Empirical results from
automatic evaluation and detailed analysis demonstrate that SepaCVAE can
significantly boost responses in well-established open-domain dialogue
datasets.
| 2,021 |
Computation and Language
|
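
The group-separation objective in the SepaCVAE abstract above (widen distances between groups, narrow them within groups) resembles a standard contrastive margin loss over latent codes. The sketch below is one plausible reading of that idea, with an assumed Euclidean metric and margin; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def group_regularizer(z: torch.Tensor, groups: torch.Tensor, margin: float = 1.0):
    """Pull latent codes from the same group together and push codes from
    different groups at least `margin` apart (hinge penalty)."""
    dists = torch.cdist(z, z)                        # pairwise L2 distances
    same = groups.unsqueeze(0) == groups.unsqueeze(1)
    eye = torch.eye(len(z), dtype=torch.bool)
    pull = dists[same & ~eye].mean()                 # intra-group: minimize
    push = F.relu(margin - dists[~same]).mean()      # inter-group: enforce margin
    return pull + push

z = torch.randn(8, 16)                               # 8 latent codes, dim 16
groups = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])      # assumed group labels
print(group_regularizer(z, groups))
```
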
Attention Temperature Matters in Abstractive Summarization Distillation
|
Recent progress of abstractive text summarization largely relies on large
pre-trained sequence-to-sequence Transformer models, which are computationally
expensive. This paper aims to distill these large models into smaller ones for
faster inference and minimal performance loss. Pseudo-labeling based methods
are popular in sequence-to-sequence model distillation. In this paper, we find
simply manipulating attention temperatures in Transformers can make pseudo
labels easier to learn for student models. Our experiments on three
summarization datasets show our proposed method consistently improves over
vanilla pseudo-labeling based methods. We also find that both the pseudo labels
and summaries produced by our students are shorter and more abstractive. Our
code is available at \url{https://github.com/Shengqiang-Zhang/plate}.
| 2,022 |
Computation and Language
|
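
The key trick above, manipulating attention temperatures, is easy to state generically: scaled dot-product attention gains one extra temperature parameter. The abstract does not specify how the temperature is set, so the values below are purely illustrative.

```python
import math
import torch

def attention_with_temperature(q, k, v, tau: float = 1.0):
    """Scaled dot-product attention with an extra temperature tau.
    tau > 1 flattens the attention distribution; tau < 1 sharpens it.
    Changing tau in the teacher changes the pseudo labels it produces."""
    scores = q @ k.transpose(-2, -1) / (math.sqrt(q.size(-1)) * tau)
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 4, 8, 32)   # (batch, heads, seq, head_dim)
sharp = attention_with_temperature(q, k, v, tau=0.5)
flat = attention_with_temperature(q, k, v, tau=2.0)
print(sharp.shape, flat.shape)
```
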
Multilingual Neural Semantic Parsing for Low-Resourced Languages
|
Multilingual semantic parsing is a cost-effective method that allows a single
model to understand different languages. However, researchers face a great
imbalance of availability of training data, with English being resource rich,
and other languages having much less data. To tackle the data limitation
problem, we propose using machine translation to bootstrap multilingual
training data from the more abundant English data. To compensate for the data
quality of machine translated training data, we utilize transfer learning from
pretrained multilingual encoders to further improve the model. To evaluate our
multilingual models on human-written sentences as opposed to machine translated
ones, we introduce a new multilingual semantic parsing dataset in English,
Italian and Japanese based on the Facebook Task Oriented Parsing (TOP) dataset.
We show that joint multilingual training with pretrained encoders substantially
outperforms our baselines on the TOP dataset and outperforms the
state-of-the-art model on the public NLMaps dataset. We also establish a new
baseline for zero-shot learning on the TOP dataset. We find that a semantic
parser trained only on English data achieves a zero-shot performance of 44.9%
exact-match accuracy on Italian sentences.
| 2,021 |
Computation and Language
|
Relative Importance in Sentence Processing
|
Determining the relative importance of the elements in a sentence is a key
factor for effortless natural language understanding. For human language
processing, we can approximate patterns of relative importance by measuring
reading fixations using eye-tracking technology. In neural language models,
gradient-based saliency methods indicate the relative importance of a token for
the target objective. In this work, we compare patterns of relative importance
in English language processing by humans and models and analyze the underlying
linguistic patterns. We find that human processing patterns in English
correlate strongly with saliency-based importance in language models and not
with attention-based importance. Our results indicate that saliency could be a
cognitively more plausible metric for interpreting neural language models. The
code is available on GitHub: https://github.com/beinborn/relative_importance
| 2,021 |
Computation and Language
|
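
For reference, the gradient-based saliency that the abstract above compares against eye-tracking data can be sketched as follows: a token's relative importance is the norm of the gradient of the target score with respect to its embedding. The toy classifier here stands in for a real language model and is an assumption of this sketch.

```python
import torch
import torch.nn as nn

# Toy classifier over token embeddings; a stand-in for a language model.
embed = nn.Embedding(1000, 32)
clf = nn.Sequential(nn.Flatten(), nn.Linear(5 * 32, 2))

tokens = torch.tensor([[3, 17, 250, 9, 42]])         # one 5-token "sentence"
emb = embed(tokens).detach().requires_grad_(True)    # track grads w.r.t. embeddings

logits = clf(emb)
logits[0, logits.argmax()].backward()                # gradient of predicted class

saliency = emb.grad.norm(dim=-1).squeeze(0)          # one score per token
print(saliency / saliency.sum())                     # normalized relative importance
```
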
BERTGEN: Multi-task Generation through BERT
|
We present BERTGEN, a novel generative, decoder-only model which extends BERT
by fusing multimodal and multilingual pretrained models VL-BERT and M-BERT,
respectively. BERTGEN is auto-regressively trained for language generation
tasks, namely image captioning, machine translation and multimodal machine
translation, under a multitask setting. With a comprehensive set of
evaluations, we show that BERTGEN outperforms many strong baselines across the
tasks explored. We also show BERTGEN's ability for zero-shot language
generation, where it exhibits competitive performance to supervised
counterparts. Finally, we conduct ablation studies which demonstrate that
BERTGEN substantially benefits from multi-tasking and effectively transfers
relevant inductive biases from the pre-trained models.
| 2,021 |
Computation and Language
|
Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion
Cause Extraction
|
The Emotion Cause Extraction (ECE) task aims to identify clauses which
contain emotion-evoking information for a particular emotion expressed in text.
We observe that a widely-used ECE dataset exhibits a bias that the majority of
annotated cause clauses are either directly before their associated emotion
clauses or are the emotion clauses themselves. Existing models for ECE tend to
exploit such relative position information and suffer from the dataset bias. To
investigate the degree of reliance of existing ECE models on clause relative
positions, we propose a novel strategy to generate adversarial examples in
which the relative position information is no longer the indicative feature of
cause clauses. We test the performance of existing models on such adversarial
examples and observe a significant performance drop. To address the dataset
bias, we propose a novel graph-based method to explicitly model the emotion
triggering paths by leveraging the commonsense knowledge to enhance the
semantic dependencies between a candidate clause and an emotion clause.
Experimental results show that our proposed approach performs on par with the
existing state-of-the-art methods on the original ECE dataset, and is more
robust against adversarial attacks compared to existing models.
| 2,023 |
Computation and Language
|
RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of
Conversational Language Models
|
Text representation models are prone to exhibit a range of societal biases,
reflecting the non-controlled and biased nature of the underlying pretraining
data, which consequently leads to severe ethical issues and even bias
amplification. Recent work has predominantly focused on measuring and
mitigating bias in pretrained language models. Surprisingly, the landscape of
bias measurements and mitigation resources and methods for conversational
language models is still very scarce: it is limited to only a few types of
bias and to artificially constructed resources, and it completely ignores the
impact that debiasing methods may have on the final performance in dialog
tasks, e.g.,
conversational response generation. In this work, we present RedditBias, the
first conversational data set grounded in the actual human conversations from
Reddit, allowing for bias measurement and mitigation across four important bias
dimensions: gender, race, religion, and queerness. Further, we develop an
evaluation framework which simultaneously 1) measures bias on the developed
RedditBias resource, and 2) evaluates model capability in dialog tasks after
model debiasing. We use the evaluation framework to benchmark the widely used
conversational DialoGPT model along with the adaptations of four debiasing
methods. Our results indicate that DialoGPT is biased with respect to religious
groups and that some debiasing techniques can remove this bias while preserving
downstream task performance.
| 2,021 |
Computation and Language
|
CAiRE in DialDoc21: Data Augmentation for Information-Seeking Dialogue
System
|
Information-seeking dialogue systems, including knowledge identification and
response generation, aim to respond to users with fluent, coherent, and
informative responses based on users' needs. To tackle this challenge,
we utilize data augmentation methods and several training techniques with the
pre-trained language models to learn a general pattern of the task and thus
achieve promising performance. In DialDoc21 competition, our system achieved
74.95 F1 score and 60.74 Exact Match score in subtask 1, and 37.72 SacreBLEU
score in subtask 2. Empirical analysis is provided to explain the effectiveness
of our approaches.
| 2,021 |
Computation and Language
|
SciFive: a text-to-text transformer model for biomedical literature
|
In this report, we introduce SciFive, a domain-specific T5 model that has
been pre-trained on large biomedical corpora. Our model outperforms the current
SOTA methods (i.e., BERT, BioBERT, Base T5) on tasks in named entity recognition,
relation extraction, natural language inference, and question-answering. We
show that text-generation methods have significant potential in a broad array
of biomedical NLP tasks, particularly those requiring longer, more complex
outputs. Our results support the exploration of more difficult text generation
tasks and the development of new methods in this area.
| 2,021 |
Computation and Language
|
RoSearch: Search for Robust Student Architectures When Distilling
Pre-trained Language Models
|
Pre-trained language models achieve outstanding performance in NLP tasks.
Various knowledge distillation methods have been proposed to reduce the heavy
computation and storage requirements of pre-trained language models. However,
from our observations, student models acquired by knowledge distillation suffer
from adversarial attacks, which limits their usage in security sensitive
scenarios. In order to overcome these security problems, RoSearch is proposed
as a comprehensive framework to search the student models with better
adversarial robustness when performing knowledge distillation. A directed
acyclic graph based search space is built and an evolutionary search strategy
is utilized to guide the search. Each searched architecture is
trained by knowledge distillation from a pre-trained language model and then
evaluated under a robustness-, accuracy- and efficiency-aware metric as
environmental fitness. Experimental results show that RoSearch can improve
robustness of student models from 7%~18% up to 45.8%~47.8% on different
datasets with comparable weight compression ratio to existing distillation
methods (4.6$\times$~6.5$\times$ improvement from teacher model BERT_BASE) and
low accuracy drop. In addition, we summarize the relationship between student
architecture and robustness through statistics of searched models.
| 2,021 |
Computation and Language
|
Document-level Relation Extraction as Semantic Segmentation
|
Document-level relation extraction aims to extract relations among multiple
entity pairs from a document. Previously proposed graph-based or
transformer-based models utilize the entities independently, regardless of
global information among relational triples. This paper approaches the problem
by predicting an entity-level relation matrix to capture local and global
information, parallel to the semantic segmentation task in computer vision.
Herein, we propose a Document U-shaped Network for document-level relation
extraction. Specifically, we leverage an encoder module to capture the context
information of entities and a U-shaped segmentation module over the image-style
feature map to capture global interdependency among triples. Experimental
results show that our approach can obtain state-of-the-art performance on three
benchmark datasets DocRED, CDR, and GDA.
| 2,023 |
Computation and Language
|
Unsupervised Representation Disentanglement of Text: An Evaluation on
Synthetic Datasets
|
To highlight the challenges of achieving representation disentanglement for
text domain in an unsupervised setting, in this paper we select a
representative set of successfully applied models from the image domain. We
evaluate these models on 6 disentanglement metrics, as well as on downstream
classification tasks and homotopy. To facilitate the evaluation, we propose two
synthetic datasets with known generative factors. Our experiments highlight the
existing gap in the text domain and illustrate that certain elements such as
representation sparsity (as an inductive bias), or representation coupling with
the decoder could impact disentanglement. To the best of our knowledge, our
work is the first attempt at the intersection of unsupervised representation
disentanglement and text, and provides the experimental framework and datasets
for examining future developments in this direction.
| 2,021 |
Computation and Language
|
PROST: Physical Reasoning of Objects through Space and Time
|
We present a new probing dataset named PROST: Physical Reasoning about
Objects Through Space and Time. This dataset contains 18,736 multiple-choice
questions made from 14 manually curated templates, covering 10 physical
reasoning concepts. All questions are designed to probe both causal and masked
language models in a zero-shot setting. We conduct an extensive analysis which
demonstrates that state-of-the-art pretrained models are inadequate at physical
reasoning: they are influenced by the order in which answer options are
presented to them, they struggle when the superlative in a question is inverted
(e.g., most <-> least), and increasing the amount of pretraining data and
parameters only yields minimal improvements. These results provide support for
the hypothesis that current pretrained models' ability to reason about physical
interactions is inherently limited by a lack of real world experience. By
highlighting these limitations, we hope to motivate the development of models
with a human-like understanding of the physical world.
| 2,021 |
Computation and Language
|
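
Zero-shot probing of a masked language model, as PROST does, can be sketched by scoring each answer option at the mask position. The model, prompt, and options below are illustrative stand-ins, not items from the PROST templates.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

question = f"If you drop a glass on a rock, the {tok.mask_token} will break."
options = ["glass", "rock"]

inputs = tok(question, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = mlm(**inputs).logits[0, mask_pos]

# Rank options by their logit at the masked position.
scores = {o: logits[tok.convert_tokens_to_ids(o)].item() for o in options}
print(max(scores, key=scores.get), scores)
```
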
GTM: A Generative Triple-Wise Model for Conversational Question
Generation
|
Generating appealing questions in open-domain conversations is an effective
way to improve human-machine interactions and lead the topic in a broader or
deeper direction. To avoid dull or deviated questions, some researchers have
tried to utilize the answer - the "future" information - to guide question
generation. However, they separate a post-question-answer (PQA) triple
into two parts: post-question (PQ) and question-answer (QA) pairs, which may
hurt the overall coherence. Besides, the QA relationship is modeled as a
one-to-one mapping that is not reasonable in open-domain conversations. To
tackle these problems, we propose a generative triple-wise model with
hierarchical variations for open-domain conversational question generation
(CQG). Latent variables in three hierarchies are used to represent the shared
background of a triple and one-to-many semantic mappings in both PQ and QA
pairs. Experimental results on a large-scale CQG dataset show that our method
significantly improves the quality of questions in terms of fluency, coherence
and diversity over competitive baselines.
| 2,021 |
Computation and Language
|
A Comprehensive Assessment of Dialog Evaluation Metrics
|
Automatic evaluation metrics are a crucial component of dialog systems
research. Standard language evaluation metrics are known to be ineffective for
evaluating dialog. As such, recent research has proposed a number of novel,
dialog-specific metrics that correlate better with human judgements. Due to the
fast pace of research, many of these metrics have been assessed on different
datasets and there has as yet been no time for a systematic comparison between
them. To this end, this paper provides a comprehensive assessment of recently
proposed dialog evaluation metrics on a number of datasets. In this paper, 23
different automatic evaluation metrics are evaluated on 10 different datasets.
Furthermore, the metrics are assessed in different settings, to better qualify
their respective strengths and weaknesses. Metrics are assessed (1) on both the
turn level and the dialog level, (2) for different dialog lengths, (3) for
different dialog qualities (e.g., coherence, engagingness), (4) for different types
of response generation models (i.e., generative, retrieval, simple models and
state-of-the-art models), (5) taking into account the similarity of different
metrics and (6) exploring combinations of different metrics. This comprehensive
assessment offers several takeaways pertaining to dialog evaluation metrics in
general. It also suggests how to best assess evaluation metrics and indicates
promising directions for future work.
| 2,021 |
Computation and Language
|
Diverse Pretrained Context Encodings Improve Document Translation
|
We propose a new architecture for adapting a sentence-level
sequence-to-sequence transformer by incorporating multiple pretrained document
context signals and assess the impact on translation performance of (1)
different pretraining approaches for generating these signals, (2) the quantity
of parallel data for which document context is available, and (3) conditioning
on source, target, or source and target contexts. Experiments on the NIST
Chinese-English, and IWSLT and WMT English-German tasks support four general
conclusions: that using pretrained context representations markedly improves
sample efficiency, that adequate parallel data resources are crucial for
learning to use document context, that jointly conditioning on multiple context
representations outperforms any single representation, and that source context
is more valuable for translation performance than target side context. Our best
multi-context model consistently outperforms the best existing context-aware
transformers.
| 2,021 |
Computation and Language
|
Encouraging Neural Machine Translation to Satisfy Terminology
Constraints
|
We present a new approach to encourage neural machine translation to satisfy
lexical constraints. Our method acts at training time, thereby avoiding the
introduction of any extra computational overhead at inference time. The
proposed method combines three main ingredients. The first one consists in
augmenting the training data to specify the constraints. Intuitively, this
encourages the model to learn a copy behavior when it encounters constraint
terms. Compared to previous work, we use a simplified augmentation strategy
without source factors. The second ingredient is constraint token masking,
which makes it even easier for the model to learn the copy behavior and
generalize better. The third is a modification of the standard cross
entropy loss to bias the model towards assigning high probabilities to
constraint words. Empirical results show that our method improves upon related
baselines in terms of both BLEU score and the percentage of generated
constraint terms.
| 2,021 |
Computation and Language
|
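
The third ingredient above, biasing the loss toward constraint words, admits a compact sketch: up-weight the cross-entropy at target positions holding constraint tokens. The weighting scheme and the value of alpha are assumptions; the paper's exact modification may differ.

```python
import torch
import torch.nn.functional as F

def constraint_biased_xent(logits, targets, constraint_mask, alpha: float = 2.0):
    """Token-level cross entropy where positions holding constraint terms get
    weight alpha > 1, pushing the model to emit those tokens."""
    nll = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    weights = torch.ones_like(nll)
    weights[constraint_mask] = alpha
    return (weights * nll).mean()

logits = torch.randn(2, 6, 100)                      # (batch, seq, vocab)
targets = torch.randint(0, 100, (2, 6))
constraint_mask = torch.zeros(2, 6, dtype=torch.bool)
constraint_mask[0, 2] = True                         # one constraint position
print(constraint_biased_xent(logits, targets, constraint_mask))
```
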
X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented
Compositional Semantic Parsing
|
Task-oriented compositional semantic parsing (TCSP) handles complex nested
user queries and serves as an essential component of virtual assistants.
Current TCSP models rely on large amounts of training data to achieve decent
performance but fail to generalize to low-resource target languages or domains.
In this paper, we present X2Parser, a transferable Cross-lingual and
Cross-domain Parser for TCSP. Unlike previous models that learn to generate the
hierarchical representations for nested intents and slots, we propose to
predict flattened intents and slots representations separately and cast both
prediction tasks into sequence labeling problems. After that, we further
propose a fertility-based slot predictor that first learns to dynamically
detect the number of labels for each token, and then predicts the slot types.
Experimental results illustrate that our model can significantly outperform
existing strong baselines in cross-lingual and cross-domain settings, and our
model can also achieve a good generalization ability on target languages of
target domains. Furthermore, our model tackles the problem in an efficient
non-autoregressive way that reduces the latency by up to 66% compared to the
generative model.
| 2,021 |
Computation and Language
|
COVID-Fact: Fact Extraction and Verification of Real-World Claims on
COVID-19 Pandemic
|
We introduce a FEVER-like dataset COVID-Fact of $4,086$ claims concerning the
COVID-19 pandemic. The dataset contains claims, evidence for the claims, and
contradictory claims refuted by the evidence. Unlike previous approaches, we
automatically detect true claims and their source articles and then generate
counter-claims using automatic methods rather than employing human annotators.
Along with our constructed resource, we formally present the task of
identifying relevant evidence for the claims and verifying whether the evidence
refutes or supports a given claim. In addition to scientific claims, our data
contains simplified general claims from media sources, making it better suited
for detecting general misinformation regarding COVID-19. Our experiments
indicate that COVID-Fact will provide a challenging testbed for the development
of new systems and our approach will reduce the costs of building
domain-specific datasets for detecting misinformation.
| 2,021 |
Computation and Language
|
Deep Context- and Relation-Aware Learning for Aspect-based Sentiment
Analysis
|
Existing works for aspect-based sentiment analysis (ABSA) have adopted a
unified approach, which allows the interactive relations among subtasks.
However, we observe that these methods tend to predict polarities based on the
literal meaning of aspect and opinion terms and mainly consider relations
implicitly among subtasks at the word level. In addition, identifying multiple
aspect-opinion pairs with their polarities is much more challenging. Therefore,
a comprehensive understanding of contextual information w.r.t. the aspect and
opinion is further required in ABSA. In this paper, we propose Deep
Contextualized Relation-Aware Network (DCRAN), which allows interactive
relations among subtasks with deep contextual information based on two modules
(i.e., Aspect and Opinion Propagation and Explicit Self-Supervised Strategies).
Especially, we design novel self-supervised strategies for ABSA, which have
strengths in dealing with multiple aspects. Experimental results show that
DCRAN significantly outperforms previous state-of-the-art methods by large
margins on three widely used benchmarks.
| 2,021 |
Computation and Language
|
Diversity driven Query Rewriting in Search Advertising
|
Retrieving keywords (bidwords) with the same intent as the query, referred to as
close variant keywords, is of prime importance for effective targeted search
advertising. For head and torso search queries, sponsored search engines use a
huge repository of same intent queries and keywords, mined ahead of time.
Online, this repository is used to rewrite the query and then look up the
rewrite in a repository of bid keywords, contributing to significant revenue.
Recently generative retrieval models have been shown to be effective at the
task of generating such query rewrites. We observe two main limitations of such
generative models. First, rewrites generated by these models exhibit low
lexical diversity, and hence the rewrites fail to retrieve relevant keywords
that have diverse linguistic variations. Second, there is a misalignment
between the training objective (the likelihood of training data) and what we
desire (improved quality and coverage of rewrites). In this work, we introduce
CLOVER, a framework to generate both high-quality and diverse rewrites by
optimizing for human assessment of rewrite quality using our diversity-driven
reinforcement learning algorithm. We use an evaluation model, trained to
predict human judgments, as the reward function to finetune the generation
policy. We empirically show the effectiveness of our proposed approach through
offline experiments on search queries across geographies spanning three major
languages. We also perform online A/B experiments on Bing, a large commercial
search engine, which shows (i) better user engagement with an average increase
in clicks by 12.83% accompanied with an average defect reduction by 13.97%, and
(ii) improved revenue by 21.29%.
| 2,021 |
Computation and Language
|
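
The training signal at the heart of CLOVER, optimizing the generation policy against a learned model of human quality judgments, follows the generic policy-gradient recipe. The sketch below shows a single schematic REINFORCE term; the diversity-driven component and the actual reward model are outside this illustration.

```python
import torch

def reinforce_step(log_probs, rewards, baseline=None):
    """One REINFORCE update term: weight sequence log-probabilities by
    (reward - baseline) so that high-reward rewrites become more likely.
    log_probs: (batch,) summed token log-probs of sampled rewrites.
    rewards:   (batch,) scores from a reward model trained on human judgments."""
    if baseline is None:
        baseline = rewards.mean()                  # simple variance reduction
    return -((rewards - baseline) * log_probs).mean()

log_probs = torch.tensor([-12.3, -9.8, -15.1], requires_grad=True)
rewards = torch.tensor([0.7, 0.9, 0.2])            # e.g., predicted human quality
loss = reinforce_step(log_probs, rewards)
loss.backward()
print(loss.item(), log_probs.grad)
```
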
Narrative Question Answering with Cutting-Edge Open-Domain QA
Techniques: A Comprehensive Study
|
Recent advancements in open-domain question answering (ODQA), i.e., finding
answers from large open-domain corpus like Wikipedia, have led to human-level
performance on many datasets. However, progress in QA over book stories (Book
QA) lags behind despite its similar task formulation to ODQA. This work
provides a comprehensive and quantitative analysis about the difficulty of Book
QA: (1) We benchmark the research on the NarrativeQA dataset with extensive
experiments with cutting-edge ODQA techniques. This quantifies the challenges
Book QA poses, as well as advances the published state-of-the-art with a
$\sim$7\% absolute improvement on Rouge-L. (2) We further analyze the detailed
challenges in Book QA through human
studies.\footnote{\url{https://github.com/gorov/BookQA}.} Our findings indicate
that the event-centric questions dominate this task, which exemplifies the
inability of existing QA models to handle event-oriented scenarios.
| 2,021 |
Computation and Language
|
A Simple Recipe for Multilingual Grammatical Error Correction
|
This paper presents a simple recipe to train state-of-the-art multilingual
Grammatical Error Correction (GEC) models. We achieve this by first proposing a
language-agnostic method to generate a large number of synthetic examples. The
second ingredient is to use large-scale multilingual language models (up to 11B
parameters). Once fine-tuned on language-specific supervised sets we surpass
the previous state-of-the-art results on GEC benchmarks in four languages:
English, Czech, German and Russian. Having established a new set of baselines
for GEC, we make our results easily reproducible and accessible by releasing a
cLang-8 dataset. It is produced by using our best model, which we call gT5, to
clean the targets of a widely used yet noisy lang-8 dataset. cLang-8 greatly
simplifies typical GEC training pipelines composed of multiple fine-tuning
stages -- we demonstrate that performing a single fine-tuning step on cLang-8
with the off-the-shelf language models yields further accuracy improvements
over an already top-performing gT5 model for English.
| 2,022 |
Computation and Language
|
Measuring Conversational Uptake: A Case Study on Student-Teacher
Interactions
|
In conversation, uptake happens when a speaker builds on the contribution of
their interlocutor by, for example, acknowledging, repeating or reformulating
what they have said. In education, teachers' uptake of student contributions
has been linked to higher student achievement. Yet measuring and improving
teachers' uptake at scale is challenging, as existing methods require expensive
annotation by experts. We propose a framework for computationally measuring
uptake, by (1) releasing a dataset of student-teacher exchanges extracted from
US math classroom transcripts annotated for uptake by experts; (2) formalizing
uptake as pointwise Jensen-Shannon Divergence (pJSD), estimated via next
utterance classification; (3) conducting a linguistically-motivated comparison
of different unsupervised measures and (4) correlating these measures with
educational outcomes. We find that although repetition captures a significant
part of uptake, pJSD outperforms repetition-based baselines, as it is capable
of identifying a wider range of uptake phenomena like question answering and
reformulation. We apply our uptake measure to three different educational
datasets with outcome indicators. Unlike baseline measures, pJSD correlates
significantly with instruction quality in all three, providing evidence for its
generalizability and for its potential to serve as an automated professional
development tool for teachers.
| 2,021 |
Computation and Language
|
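
Since the uptake measure above is built on Jensen-Shannon divergence, a minimal numeric sketch of plain JSD may help. Treating p and q as next-utterance distributions is my gloss of the abstract; the paper estimates the pointwise variant via next-utterance classification rather than from explicit distributions.

```python
import numpy as np

def jensen_shannon(p, q, eps: float = 1e-12):
    """JSD(p || q) = 0.5 * KL(p || m) + 0.5 * KL(q || m) with m = (p + q) / 2.
    Symmetric and bounded above by log 2 (in nats)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

student = [0.5, 0.3, 0.2]   # toy distributions over a tiny vocabulary
teacher = [0.1, 0.2, 0.7]
print(jensen_shannon(student, teacher))   # 0 = identical, log(2) = disjoint
```
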
SIGTYP 2021 Shared Task: Robust Spoken Language Identification
|
While language identification is a fundamental speech and language processing
task, for many languages and language families it remains a challenging task.
For many low-resource and endangered languages this is in part due to resource
availability: where larger datasets exist, they may be single-speaker or have
different domains than desired application scenarios, demanding a need for
domain and speaker-invariant language identification systems. This year's
shared task on robust spoken language identification sought to investigate just
this scenario: systems were to be trained on largely single-speaker speech from
one domain, but evaluated on data in other domains recorded from speakers under
different recording circumstances, mimicking realistic low-resource scenarios.
We see that domain and speaker mismatch proves very challenging for current
methods, which can perform above 95% accuracy in-domain; domain adaptation can
address this to some degree, but these conditions merit further investigation
to make spoken language identification accessible in many scenarios.
| 2,021 |
Computation and Language
|
Measuring and Improving BERT's Mathematical Abilities by Predicting the
Order of Reasoning
|
Imagine you are in a supermarket. You have two bananas in your basket and
want to buy four apples. How many fruits do you have in total? This seemingly
straightforward question can be challenging for data-driven language models,
even if trained at scale. However, we would expect such generic language models
to possess some mathematical abilities in addition to typical linguistic
competence. Towards this goal, we investigate if a commonly used language
model, BERT, possesses such mathematical abilities and, if so, to what degree.
For that, we fine-tune BERT on a popular dataset for word math problems,
AQuA-RAT, and conduct several tests to understand learned representations
better. Since we teach models trained on natural language to do formal
mathematics, we hypothesize that such models would benefit from training on
semi-formal steps that explain how math results are derived. To better
accommodate such training, we also propose new pretext tasks for learning
mathematical rules. We call them (Neighbor) Reasoning Order Prediction (ROP or
NROP). With this new model, we achieve significantly better outcomes than
data-driven baselines and even on par with more tailored models. We also show
how to reduce positional bias in such models.
| 2,021 |
Computation and Language
|
Predicting Different Types of Subtle Toxicity in Unhealthy Online
Conversations
|
This paper investigates the use of machine learning models for the
classification of unhealthy online conversations containing one or more forms
of subtler abuse, such as hostility, sarcasm, and generalization. We leveraged
a public dataset of 44K online comments containing healthy and unhealthy
comments labeled with seven forms of subtle toxicity. We were able to
distinguish between these comments with a top micro F1-score, macro F1-score,
and ROC-AUC of 88.76%, 67.98%, and 0.71, respectively. Hostile comments were
easier to detect than other types of unhealthy comments. We also conducted a
sentiment analysis which revealed that most types of unhealthy comments were
associated with a slight negative sentiment, with hostile comments being the
most negative ones.
| 2,022 |
Computation and Language
|
Neural Abstractive Unsupervised Summarization of Online News Discussions
|
Summarization has usually relied on gold standard summaries to train
extractive or abstractive models. Social media brings a hurdle to summarization
techniques since it requires addressing a multi-document multi-author approach.
We address this challenging task by introducing a novel method that generates
abstractive summaries of online news discussions. Our method extends a
BERT-based architecture with an attention encoding that incorporates comments'
likes during the training stage. To train our model, we define a task that
consists of reconstructing high impact comments based on popularity (likes).
Accordingly, our model learns to summarize online discussions based on their
most relevant comments. Our novel approach provides a summary that represents
the most relevant aspects of a news item that users comment on, incorporating
the social context as a source of information to summarize texts in online
social networks. Our model is evaluated using ROUGE scores between the
generated summary and each comment on the thread. Our model, including the
social attention encoding, significantly outperforms both extractive and
abstractive summarization methods based on such evaluation.
| 2,021 |
Computation and Language
|
Exploiting Language Relatedness for Low Web-Resource Language Model
Adaptation: An Indic Languages Study
|
Recent research in multilingual language models (LM) has demonstrated their
ability to effectively handle multiple languages in a single model. This holds
promise for low web-resource languages (LRL) as multilingual models can enable
transfer of supervision from high resource languages to LRLs. However,
incorporating a new language in an LM still remains a challenge, particularly
for languages with limited corpora and in unseen scripts. In this paper we
argue that relatedness among languages in a language family may be exploited to
overcome some of the corpora limitations of LRLs, and propose RelateLM. We
focus on Indian languages, and exploit relatedness along two dimensions: (1)
script (since many Indic scripts originated from the Brahmic script), and (2)
sentence structure. RelateLM uses transliteration to convert the unseen script
of limited LRL text into the script of a Related Prominent Language (RPL)
(Hindi in our case). While exploiting similar sentence structures, RelateLM
utilizes readily available bilingual dictionaries to pseudo translate RPL text
into LRL corpora. Experiments on multiple real-world benchmark datasets provide
validation to our hypothesis that using a related language as pivot, along with
transliteration and pseudo translation based data augmentation, can be an
effective way to adapt LMs for LRLs, rather than direct training or pivoting
through English.
| 2,021 |
Computation and Language
|
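
Dictionary-based pseudo translation, one of RelateLM's two ingredients, reduces to word-for-word substitution when sentence structures align. The toy dictionary and placeholder tokens below are invented for illustration; the paper uses real bilingual dictionaries and transliterates scripts first.

```python
# Replace RPL (e.g., Hindi) words with LRL equivalents wherever the bilingual
# dictionary has an entry, keeping word order (assumes similar syntax).
toy_dictionary = {"water": "LRL_water", "good": "LRL_good"}  # made-up entries

def pseudo_translate(sentence: str, dictionary: dict) -> str:
    return " ".join(dictionary.get(w, w) for w in sentence.split())

print(pseudo_translate("water is good", toy_dictionary))
# -> "LRL_water is LRL_good"
```
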
Generating Hypothetical Events for Abductive Inference
|
Abductive reasoning starts from some observations and aims at finding the
most plausible explanation for these observations. To perform abduction, humans
often make use of temporal and causal inferences, and knowledge about how some
hypothetical situation can result in different outcomes. This work offers the
first study of how such knowledge impacts the Abductive NLI task -- which
consists in choosing the more likely explanation for given observations. We
train a specialized language model LMI that is tasked to generate what could
happen next from a hypothetical scenario that evolves from a given event. We
then propose a multi-task model MTL to solve the Abductive NLI task, which
predicts a plausible explanation by a) considering different possible events
emerging from candidate hypotheses -- events generated by LMI -- and b)
selecting the one that is most similar to the observed outcome. We show that
our MTL model improves over prior vanilla pre-trained LMs fine-tuned on
Abductive NLI. Our manual evaluation and analysis suggest that learning about
possible next events from different hypothetical scenarios supports abductive
inference.
| 2,021 |
Computation and Language
|
Expressivity of Emergent Language is a Trade-off between Contextual
Complexity and Unpredictability
|
Researchers are using deep learning models to explore the emergence of
language in various language games, where agents interact and develop an
emergent language to solve tasks. We focus on the factors that determine the
expressivity of emergent languages, which reflects the amount of information
about input spaces those languages are capable of encoding. We measure the
expressivity of emergent languages based on the generalisation performance
across different games, and demonstrate that the expressivity of emergent
languages is a trade-off between the complexity and unpredictability of the
context those languages emerged from. Another contribution of this work is the
discovery of message type collapse, i.e. the number of unique messages is lower
than that of inputs. We also show that using the contrastive loss proposed by
Chen et al. (2020) can alleviate this problem.
| 2,022 |
Computation and Language
|
Investigating Transfer Learning in Multilingual Pre-trained Language
Models through Chinese Natural Language Inference
|
Multilingual transformers (XLM, mT5) have been shown to have remarkable
transfer skills in zero-shot settings. Most transfer studies, however, rely on
automatically translated resources (XNLI, XQuAD), making it hard to discern the
particular linguistic knowledge that is being transferred, and the role of
expert annotated monolingual datasets when developing task-specific models. We
investigate the cross-lingual transfer abilities of XLM-R for Chinese and
English natural language inference (NLI), with a focus on the recent
large-scale Chinese dataset OCNLI. To better understand linguistic transfer, we
created 4 categories of challenge and adversarial tasks (totaling 17 new
datasets) for Chinese that build on several well-known resources for English
(e.g., HANS, NLI stress-tests). We find that cross-lingual models trained on
English NLI do transfer well across our Chinese tasks (e.g., in 3/4 of our
challenge categories, they perform as well/better than the best monolingual
models, even on 3/5 uniquely Chinese linguistic phenomena such as idioms, pro
drop). These results, however, come with important caveats: cross-lingual
models often perform best when trained on a mixture of English and high-quality
monolingual NLI data (OCNLI), and are often hindered by automatically
translated resources (XNLI-zh). For many phenomena, all models continue to
struggle, highlighting the need for our new diagnostics to help benchmark
Chinese and cross-lingual models. All new datasets/code are released at
https://github.com/huhailinguist/ChineseNLIProbing.
| 2,021 |
Computation and Language
|
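A zero-shot transfer evaluation of the kind reported above can be sketched with the Hugging Face transformers API. The checkpoint name below is hypothetical (the paper fine-tunes XLM-R itself), and the meaning of the label ids depends on the checkpoint, so this shows the shape of the experiment rather than the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical model id: any XLM-R checkpoint fine-tuned on English NLI.
CKPT = "xlm-roberta-base-nli"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT)
model.eval()

def predict(premise: str, hypothesis: str) -> int:
    # Encode the premise/hypothesis pair and take the argmax label.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# Trained on English NLI, evaluated zero-shot on a Chinese pair.
print(predict("他每天早上都去跑步。", "他有运动的习惯。"))
```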
Lexicon Learning for Few-Shot Neural Sequence Modeling
|
Sequence-to-sequence transduction is the core problem in language processing
applications as diverse as semantic parsing, machine translation, and
instruction following. The neural network models that provide the dominant
solution to these problems are brittle, especially in low-resource settings:
they fail to generalize correctly or systematically from small datasets. Past
work has shown that many failures of systematic generalization arise from
neural models' inability to disentangle lexical phenomena from syntactic ones.
To address this, we augment neural decoders with a lexical translation
mechanism that generalizes existing copy mechanisms to incorporate learned,
decontextualized, token-level translation rules. We describe how to initialize
this mechanism using a variety of lexicon learning algorithms, and show that it
improves systematic generalization on a diverse set of sequence modeling tasks
drawn from cognitive science, formal semantics, and machine translation.
| 2,021 |
Computation and Language
|
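The mechanism described above generalizes a copy mechanism: rather than copying attended source tokens verbatim, attention mass over the source is routed through token-level translation rules. A rough PyTorch sketch under simplifying assumptions (a one-to-one lexicon stored as an index tensor and a scalar copy gate; the paper's learned mechanism is richer):

```python
import torch

def lexical_copy_distribution(attn, src_ids, lexicon, vocab_size):
    # attn: (B, S) attention over source positions; src_ids: (B, S) tokens.
    # lexicon: (V,) tensor mapping each source token id to a target token id,
    # a one-to-one simplification of learned, decontextualized rules.
    p_lex = torch.zeros(attn.size(0), vocab_size)
    tgt_ids = lexicon[src_ids]             # translate each attended token
    p_lex.scatter_add_(1, tgt_ids, attn)   # accumulate mass per target token
    return p_lex

def output_distribution(p_gen, attn, src_ids, lexicon, gate):
    # Gated mixture of the decoder's own distribution and the
    # lexicon-routed copy distribution, as in pointer/copy models.
    p_lex = lexical_copy_distribution(attn, src_ids, lexicon, p_gen.size(-1))
    return gate * p_gen + (1 - gate) * p_lex

# Toy usage: one example, source length 3, vocabulary of 10 tokens.
attn = torch.tensor([[0.7, 0.2, 0.1]])
src_ids = torch.tensor([[4, 1, 2]])
lexicon = torch.arange(10)                 # identity lexicon = plain copying
p = output_distribution(torch.full((1, 10), 0.1), attn, src_ids, lexicon, 0.5)
```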
Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question
Answering
|
Disfluency is an under-studied topic in NLP, even though it is ubiquitous
in human conversation. This is largely due to the lack of datasets containing
disfluencies. In this paper, we present a new challenge question answering
dataset, Disfl-QA, a derivative of SQuAD, where humans introduce contextual
disfluencies in previously fluent questions. Disfl-QA contains a variety of
challenging disfluencies that require a more comprehensive understanding of the
text than what was necessary in prior datasets. Experiments show that the
performance of existing state-of-the-art question answering models degrades
significantly when tested on Disfl-QA in a zero-shot setting. We show that data
augmentation methods partially recover the loss in performance and also
demonstrate the efficacy of using gold data for fine-tuning. We argue that we
need large-scale disfluency datasets in order for NLP models to be robust to
them. The dataset is publicly available at:
https://github.com/google-research-datasets/disfl-qa.
| 2,021 |
Computation and Language
|
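The augmentation referenced in the abstract is not specified there, so the following is only a crude heuristic sketch of what injecting a reparandum-style disfluency into a fluent question could look like; the function name and filler list are invented for illustration.

```python
import random

FILLERS = ["uh", "I mean", "no wait", "sorry"]

def inject_disfluency(question: str, rng: random.Random) -> str:
    # Crude heuristic, not the paper's method: emit a false start, interrupt
    # it with a filler, then restate the full intended question.
    words = question.split()
    cut = rng.randrange(1, len(words))     # where the speaker breaks off
    false_start = words[:cut]
    return " ".join(false_start + [rng.choice(FILLERS) + ","] + words)

rng = random.Random(0)
print(inject_disfluency("Where did the queen live after the war?", rng))
# e.g. "Where did the queen uh, Where did the queen live after the war?"
```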
Self-supervised and Supervised Joint Training for Resource-rich Machine
Translation
|
Self-supervised pre-training of text representations has been successfully
applied to low-resource Neural Machine Translation (NMT). However, it usually
fails to achieve notable gains on resource-rich NMT. In this paper, we propose
a joint training approach, $F_2$-XEnDec, to combine self-supervised and
supervised learning to optimize NMT models. To exploit complementary
self-supervised signals for supervised learning, NMT models are trained on
examples that are interbred from monolingual and parallel sentences through a
new process called crossover encoder-decoder. Experiments on two resource-rich
translation benchmarks, WMT'14 English-German and WMT'14 English-French,
demonstrate that our approach achieves substantial improvements over several
strong baseline methods and obtains a new state of the art of 46.19 BLEU on
English-French when incorporating back translation. Results also show that our
approach is capable of improving model robustness to input perturbations such
as code-switching noise, which frequently appears on social media.
| 2,021 |
Computation and Language
|
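The abstract names a crossover encoder-decoder but gives no operational detail, so the sketch below only conveys the genetic-algorithm flavour of interbreeding examples at the token level; the actual $F_2$-XEnDec operation works inside the encoder-decoder on monolingual and parallel sentences, not on raw token lists.

```python
import random

def crossover(tokens_a, tokens_b, rng):
    # Toy crossover: splice a prefix of one parent sequence onto a suffix
    # of the other to produce a new synthetic training example.
    cut_a = rng.randrange(1, len(tokens_a))
    cut_b = rng.randrange(1, len(tokens_b))
    return tokens_a[:cut_a] + tokens_b[cut_b:]

rng = random.Random(42)
parallel = "the cat sat on the mat".split()
monolingual = "a dog slept under the table".split()
print(crossover(parallel, monolingual, rng))
```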