Titles (stringlengths 6-220) | Abstracts (stringlengths 37-3.26k) | Years (int64 1.99k-2.02k) | Categories (stringclasses, 1 value) |
---|---|---|---|
Counter-Interference Adapter for Multilingual Machine Translation
|
Developing a unified multilingual model has long been a pursuit for machine
translation. However, existing approaches suffer from performance degradation
-- a single multilingual model is inferior to separately trained bilingual ones
on rich-resource languages. We conjecture that such a phenomenon is due to
interference caused by joint training with multiple languages. To address
this issue, we propose CIAT, an adapted Transformer model with a small parameter
overhead for multilingual machine translation. We evaluate CIAT on multiple
benchmark datasets, including IWSLT, OPUS-100, and WMT. Experiments show that
CIAT consistently outperforms strong multilingual baselines on 64 of the 66
language directions, 42 of which see an improvement of more than 0.5 BLEU. Our
code is available at \url{https://github.com/Yaoming95/CIAT}.
| 2021 | Computation and Language |
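
The CIAT abstract above does not spell out the adapter design. Purely as a rough illustration of the general idea -- a small residual bottleneck module added into a Transformer layer -- here is a minimal PyTorch sketch; the class name, dimensions, and activation are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic residual bottleneck adapter (a sketch, not CIAT's exact design)."""
    def __init__(self, d_model: int = 512, d_bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)  # project down
        self.up = nn.Linear(d_bottleneck, d_model)    # project back up
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the backbone's output intact
        # when the adapter is initialized near zero.
        return hidden + self.up(self.act(self.down(hidden)))

# One small adapter per language direction adds only a modest parameter overhead.
adapter = BottleneckAdapter()
x = torch.randn(2, 10, 512)   # (batch, seq_len, d_model)
print(adapter(x).shape)       # torch.Size([2, 10, 512])
```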
Back to Square One: Artifact Detection, Training and Commonsense
Disentanglement in the Winograd Schema
|
The Winograd Schema (WS) has been proposed as a test for measuring
commonsense capabilities of models. Recently, pre-trained language model-based
approaches have boosted performance on some WS benchmarks but the source of
improvement is still not clear. This paper suggests that the apparent progress
on WS may not necessarily reflect progress in commonsense reasoning. To support
this claim, we first show that the current evaluation method of WS is
sub-optimal and propose a modification that uses twin sentences for evaluation.
We also propose two new baselines that indicate the existence of artifacts in
WS benchmarks. We then develop a method for evaluating WS-like sentences in a
zero-shot setting to account for the commonsense reasoning abilities acquired
during pretraining, and observe that popular language models perform at
random in this setting under our stricter evaluation. We conclude
that the observed progress is mostly due to the use of supervision in training
WS models, which is not likely to successfully support all the required
commonsense reasoning skills and knowledge.
| 2021 | Computation and Language |
Editing Factual Knowledge in Language Models
|
The factual knowledge acquired during pre-training and stored in the
parameters of Language Models (LMs) can be useful in downstream tasks (e.g.,
question answering or textual inference). However, some facts can be
incorrectly induced or become obsolete over time. We present KnowledgeEditor, a
method which can be used to edit this knowledge and, thus, fix 'bugs' or
unexpected predictions without the need for expensive re-training or
fine-tuning. Besides being computationally efficient, KnowledgeEditor does not
require any modifications in LM pre-training (e.g., the use of meta-learning).
In our approach, we train a hyper-network with constrained optimization to
modify a fact without affecting the rest of the knowledge; the trained
hyper-network is then used to predict the weight update at test time. We show
KnowledgeEditor's efficacy with two popular architectures and
knowledge-intensive tasks: i) a BERT model fine-tuned for fact-checking, and
ii) a sequence-to-sequence BART model for question answering. With our method,
changing a prediction on the specific wording of a query tends to result in a
consistent change in predictions also for its paraphrases. We show that this
can be further encouraged by exploiting (e.g., automatically-generated)
paraphrases during training. Interestingly, our hyper-network can be regarded
as a 'probe' revealing which components need to be changed to manipulate
factual knowledge; our analysis shows that the updates tend to be concentrated
on a small subset of components. Source code available at
https://github.com/nicola-decao/KnowledgeEditor
| 2021 | Computation and Language |
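
The abstract above says only that a hyper-network predicts a constrained weight update at test time. The sketch below illustrates just the shape of that idea -- a small network that gates the fine-tuning gradient of one weight matrix so the edit stays concentrated on a few components. Everything here (names, gating scheme, dimensions) is a hypothetical simplification, not the authors' architecture.

```python
import torch
import torch.nn as nn

class EditHyperNetwork(nn.Module):
    """Sketch: predict a gated update for one weight matrix from its gradient."""
    def __init__(self, rows: int, cols: int, hidden: int = 128):
        super().__init__()
        self.rows, self.cols = rows, cols
        self.net = nn.Sequential(
            nn.Linear(rows * cols, hidden),
            nn.Tanh(),
            nn.Linear(hidden, rows + cols),
        )

    def forward(self, grad: torch.Tensor) -> torch.Tensor:
        gates = self.net(grad.flatten())
        row_gate = torch.sigmoid(gates[: self.rows]).unsqueeze(1)
        col_gate = torch.sigmoid(gates[self.rows :]).unsqueeze(0)
        # Row/column gating keeps the predicted update concentrated on a
        # few components, mimicking a constrained, localized edit.
        return row_gate * grad * col_gate

hyper = EditHyperNetwork(rows=8, cols=16)
fact_gradient = torch.randn(8, 16)  # gradient of the loss on the edited fact
delta_w = hyper(fact_gradient)      # weight update applied at test time
print(delta_w.shape)                # torch.Size([8, 16])
```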
Word2rate: training and evaluating multiple word embeddings as
statistical transitions
|
Using pretrained word embeddings has been shown to be a very effective way of
improving the performance of natural language processing tasks. In fact, almost
every natural language task that can be thought of has been improved by these
pretrained embeddings, ranging from sentiment analysis and translation to
sequence prediction, among many others. One of the most successful word
embeddings is the Word2vec CBOW model proposed by Mikolov, trained with the
negative sampling technique. Mai et al. modify this objective to train CMOW
embeddings that are sensitive to word order. We used a modified version of the
negative sampling objective for our context words, modelling the context
embeddings as a Taylor series of rate matrices. We show that different modes of
the Taylor series produce different types of embeddings. We compare these
embeddings to their similar counterparts like CBOW and CMOW and show that they
achieve comparable performance. We also introduce a novel left-right context
split objective that improves performance for tasks sensitive to word order.
Our Word2rate model is grounded in a statistical foundation using rate matrices
while being competitive in a variety of language tasks.
| 2021 | Computation and Language |
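
The Word2rate abstract models context embeddings as a Taylor series of rate matrices but gives no formulas. As a sketch of that idea only, the snippet below composes per-word rate matrices through the truncated series exp(R) ≈ I + R + R^2/2! + ...; the shapes and the truncation order are assumptions.

```python
import numpy as np

def truncated_exp(R: np.ndarray, order: int = 2) -> np.ndarray:
    """Truncated Taylor series of the matrix exponential exp(R)."""
    d = R.shape[0]
    series, term = np.eye(d), np.eye(d)
    for k in range(1, order + 1):
        term = term @ R / k  # accumulates R^k / k!
        series = series + term
    return series

def context_representation(rate_matrices, order: int = 2) -> np.ndarray:
    """Compose a context by chaining each word's approximate transition."""
    d = rate_matrices[0].shape[0]
    out = np.eye(d)
    for R in rate_matrices:
        out = out @ truncated_exp(R, order)
    return out

rng = np.random.default_rng(0)
rates = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]  # 3 context words
print(context_representation(rates).shape)  # (4, 4)
```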
IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural
Language Generation
|
Natural language generation (NLG) benchmarks provide an important avenue to
measure progress and develop better NLG systems. Unfortunately, the lack of
publicly available NLG benchmarks for low-resource languages poses a
challenging barrier for building NLG systems that work well for languages with
limited amounts of data. Here we introduce IndoNLG, the first benchmark to
measure NLG progress in three low-resource -- yet
widely spoken -- languages of Indonesia: Indonesian, Javanese, and Sundanese.
Altogether, these languages are spoken by more than 100 million native
speakers, and hence constitute an important use case of NLG systems today.
Concretely, IndoNLG covers six tasks: summarization, question answering,
chit-chat, and three different pairs of machine translation (MT) tasks. We
collate a clean pretraining corpus of Indonesian, Sundanese, and Javanese
datasets, Indo4B-Plus, which is used to pretrain our models: IndoBART and
IndoGPT. We show that IndoBART and IndoGPT achieve competitive performance on
all tasks -- despite using only one-fifth the parameters of a larger
multilingual model, mBART-LARGE (Liu et al., 2020). This finding emphasizes the
importance of pretraining on closely related, local languages to achieve more
efficient learning and faster inference for very low-resource languages like
Javanese and Sundanese.
| 2021 | Computation and Language |
$Q^{2}$: Evaluating Factual Consistency in Knowledge-Grounded Dialogues
via Question Generation and Question Answering
|
Neural knowledge-grounded generative models for dialogue often produce
content that is factually inconsistent with the knowledge they rely on, making
them unreliable and limiting their applicability. Inspired by recent work on
evaluating factual consistency in abstractive summarization, we propose an
automatic evaluation metric for factual consistency in knowledge-grounded
dialogue using automatic question generation and question answering. Our
metric, denoted $Q^2$, compares answer spans using natural language inference
(NLI), instead of token-based matching as done in previous work. To foster
proper evaluation, we curate a novel dataset of dialogue system outputs for the
Wizard-of-Wikipedia dataset, manually annotated for factual consistency. We
perform a thorough meta-evaluation of $Q^2$ against other metrics using this
dataset and two others, where it consistently shows higher correlation with
human judgements.
| 2021 | Computation and Language |
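
The $Q^2$ pipeline is only summarized above; the skeleton below shows one plausible control flow under stated assumptions. The callables `generate_questions`, `answer`, and `entails` are hypothetical stand-ins for a question generator, a QA model, and an NLI model -- not the authors' components.

```python
from typing import Callable, List

def q2_score(response: str,
             knowledge: str,
             generate_questions: Callable[[str], List[str]],
             answer: Callable[[str, str], str],
             entails: Callable[[str, str], bool]) -> float:
    """Sketch of a QG/QA/NLI consistency score for a knowledge-grounded response."""
    questions = generate_questions(response)  # ask about spans in the response
    if not questions:
        return 0.0
    consistent = 0
    for q in questions:
        span_from_response = answer(q, response)    # what the response claims
        span_from_knowledge = answer(q, knowledge)  # what the knowledge supports
        # Compare answer spans with NLI rather than token-based matching.
        if entails(span_from_knowledge, span_from_response):
            consistent += 1
    return consistent / len(questions)
```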
Robust Open-Vocabulary Translation from Visual Text Representations
|
Machine translation models have discrete vocabularies and commonly use
subword segmentation techniques to achieve an 'open vocabulary.' This approach
relies on consistent and correct underlying unicode sequences, and makes models
susceptible to degradation from common types of noise and variation. Motivated
by the robustness of human language processing, we propose the use of visual
text representations, which dispense with a finite set of text embeddings in
favor of continuous vocabularies created by processing visually rendered text
with sliding windows. We show that models using visual text representations
approach or match performance of traditional text models on small and larger
datasets. More importantly, models with visual embeddings demonstrate
significant robustness to varied types of noise, achieving, e.g., 25.9 BLEU on a
character permuted German-English task where subword models degrade to 1.9.
| 2021 | Computation and Language |
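
As a rough sketch of the rendering step described above -- render the text to an image and slice it with sliding windows -- the snippet below uses Pillow and NumPy; the window size, stride, and per-character width heuristic are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_and_slice(text: str, height: int = 16, window: int = 16, stride: int = 8):
    """Render text to a grayscale image, then cut sliding-window 'visual tokens'."""
    font = ImageFont.load_default()
    width = max(window, 7 * len(text))  # rough per-character width heuristic
    img = Image.new("L", (width, height), color=255)
    ImageDraw.Draw(img).text((0, 2), text, fill=0, font=font)
    pixels = np.asarray(img, dtype=np.float32) / 255.0
    # Each slice plays the role of one continuous "token" input embedding.
    return [pixels[:, i:i + window] for i in range(0, width - window + 1, stride)]

slices = render_and_slice("Guten Tag!")
print(len(slices), slices[0].shape)  # 7 slices of shape (16, 16)
```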
Flexible Instance-Specific Rationalization of NLP Models
|
Recent research on model interpretability in natural language processing
extensively uses feature scoring methods for identifying which parts of the
input are the most important for a model to make a prediction (i.e. explanation
or rationale). However, previous research has shown that there is no clear best
scoring method across various text classification tasks while practitioners
typically have to make several other ad-hoc choices regarding the length and
the type of the rationale (e.g. short or long, contiguous or not). Inspired by
this, we propose a simple yet effective and flexible method that allows
selecting optimally for each data instance: (1) a feature scoring method; (2)
the length; and (3) the type of the rationale. Our method is inspired by input
erasure approaches to interpretability, which assume that the most faithful
rationale for a prediction is the one yielding the largest difference between
the model's output distributions on the full text and on the text with the
rationale removed. Evaluation on four standard text
classification datasets shows that our proposed method provides more faithful,
comprehensive and highly sufficient explanations compared to using a fixed
feature scoring method, rationale length and type. More importantly, we
demonstrate that a practitioner is not required to make any ad-hoc choices in
order to extract faithful rationales using our approach.
| 2021 | Computation and Language |
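
The selection criterion described above can be made concrete in a toy form: among candidate rationales of varying length and type, keep the one whose removal changes the model's output distribution the most. The "classifier" below is a deliberately trivial stand-in, and the exhaustive candidate search replaces the paper's feature-scoring methods.

```python
import numpy as np
from itertools import combinations

def predict_proba(tokens):
    """Toy 'classifier': probability from a fixed keyword score (illustration only)."""
    score = sum(1.0 for t in tokens if t in {"great", "awful"})
    p = 1.0 / (1.0 + np.exp(-(score - 0.5)))
    return np.array([1.0 - p, p])

def faithfulness(tokens, rationale_idx):
    """Difference between output distributions with and without the rationale."""
    full = predict_proba(tokens)
    reduced = predict_proba([t for i, t in enumerate(tokens) if i not in rationale_idx])
    return float(np.abs(full - reduced).sum())

def best_rationale(tokens, lengths=(1, 2)):
    # Exhaustive search over contiguous and non-contiguous candidates of each
    # length; real feature-scoring methods would rank tokens instead.
    candidates = [set(c) for L in lengths for c in combinations(range(len(tokens)), L)]
    return max(candidates, key=lambda c: faithfulness(tokens, c))

tokens = "the movie was great fun".split()
print(best_rationale(tokens))  # indices of the most faithful rationale
```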
Distantly Supervised Relation Extraction with Sentence Reconstruction
and Knowledge Base Priors
|
We propose a multi-task, probabilistic approach to facilitate distantly
supervised relation extraction by bringing closer the representations of
sentences that contain the same Knowledge Base pairs. To achieve this, we bias
the latent space of sentences via a Variational Autoencoder (VAE) that is
trained jointly with a relation classifier. The latent code guides the pair
representations and influences sentence reconstruction. Experimental results on
two datasets created via distant supervision indicate that multi-task learning
results in performance benefits. Further exploration of incorporating Knowledge
Base priors into the VAE reveals that the sentence space can be shifted towards
that of the Knowledge Base, offering interpretability and further improving
results.
| 2021 | Computation and Language |
An Adversarially-Learned Turing Test for Dialog Generation Models
|
The design of better automated dialogue evaluation metrics offers the
potential to accelerate evaluation research on conversational AI. However,
existing trainable dialogue evaluation models are generally restricted to
classifiers trained in a purely supervised manner, which are at significant
risk from adversarial attacks (e.g., a nonsensical response that enjoys a
high classification score). To alleviate this risk, we propose an adversarial
training approach to learn a robust model, ATT (Adversarial Turing Test), that
discriminates machine-generated responses from human-written replies. In
contrast to previous perturbation-based methods, our discriminator is trained
by iteratively generating unrestricted and diverse adversarial examples using
reinforcement learning. The key benefit of this unrestricted adversarial
training approach is allowing the discriminator to improve robustness in an
iterative attack-defense game. Our discriminator shows high accuracy on strong
attackers including DialoGPT and GPT-3.
| 2021 | Computation and Language |
What to Pre-Train on? Efficient Intermediate Task Selection
|
Intermediate task fine-tuning has been shown to culminate in large transfer
gains across many NLP tasks. With an abundance of candidate datasets as well as
pre-trained language models, it has become infeasible to run the cross-product
of all combinations to find the best transfer setting. In this work we first
establish that similar sequential fine-tuning gains can be achieved in adapter
settings, and subsequently consolidate previously proposed methods that
efficiently identify beneficial tasks for intermediate transfer learning. We
experiment with a diverse set of 42 intermediate and 11 target English
classification, multiple choice, question answering, and sequence tagging
tasks. Our results show that efficient embedding-based methods that rely solely
on the respective datasets outperform computationally expensive few-shot
fine-tuning approaches. Our best methods achieve an average Regret@3 of less
than 1% across all target tasks, demonstrating that we are able to efficiently
identify the best datasets for intermediate training.
| 2021 | Computation and Language |
proScript: Partially Ordered Scripts Generation via Pre-trained Language
Models
|
Scripts -- standardized event sequences describing typical everyday activities
-- have been shown to help understand narratives by providing expectations,
resolving ambiguity, and filling in unstated information. However, to date they
have proved hard to author or extract from text. In this work, we demonstrate
for the first time that pre-trained neural language models (LMs) can be
finetuned to generate high-quality scripts, at varying levels of granularity,
for a wide range of everyday scenarios (e.g., bake a cake). To do this, we
collected a large (6.4k) crowdsourced dataset of partially ordered scripts
(named proScript), which is substantially larger than prior datasets, and
developed models that generate scripts by combining language generation and
structure prediction. We define two complementary tasks: (i) edge prediction:
given a
scenario and unordered events, organize the events into a valid (possibly
partial-order) script, and (ii) script generation: given only a scenario,
generate events and organize them into a (possibly partial-order) script. Our
experiments show that our models perform well (e.g., F1=75.7 in task (i)),
illustrating a new approach to overcoming previous barriers to script
collection. We also show that there is still significant room for improvement
toward human level performance. Together, our tasks, dataset, and models offer
a new research direction for learning script knowledge.
| 2021 | Computation and Language |
Condenser: a Pre-training Architecture for Dense Retrieval
|
Pre-trained Transformer language models (LMs) have become go-to text
representation encoders. Prior research fine-tunes deep LMs to encode text
sequences such as sentences and passages into single dense vector
representations for efficient text comparison and retrieval. However, dense
encoders require large amounts of data and sophisticated techniques to train
effectively, and they suffer in low-data situations. This paper finds that a
key reason is that standard LMs' internal attention structure is not
ready-to-use for dense encoders, which need to aggregate text information into
the dense representation. We propose to pre-train towards a dense encoder with
a novel Transformer architecture, Condenser, in which LM prediction CONditions
on DENSE Representation. Our experiments show Condenser improves over standard
LMs by large margins on various text retrieval and similarity tasks.
| 2021 | Computation and Language |
Modeling Fuzzy Cluster Transitions for Topic Tracing
|
Twitter can be viewed as a data source for Natural Language Processing (NLP)
tasks. The continuously updating data streams on Twitter make it challenging to
trace real-time topic evolution. In this paper, we propose a framework for
modeling fuzzy transitions of topic clusters. We extend our previous work on
crisp cluster transitions by incorporating fuzzy logic in order to enrich the
underlying structures identified by the framework. We apply the methodology to
both computer generated clusters of nouns from tweets and human tweet
annotations. The obtained fuzzy transitions are compared with the crisp
transitions, on both computer generated clusters and human labeled topic sets.
| 2021 | Computation and Language |
Context-Adaptive Document-Level Neural Machine Translation
|
Most existing document-level neural machine translation (NMT) models leverage
a fixed number of previous source sentences, or all global source sentences, to
handle the context-independence problem of standard NMT. However, translating
each source sentence benefits from a different amount of context, and
inappropriate context may harm translation performance. In this work, we introduce a
data-adaptive method that enables the model to adopt the necessary and useful
context. Specifically, we introduce a light predictor into two document-level
translation models to select the explicit context. Experiments demonstrate the
proposed approach can significantly improve the performance over the previous
methods with a gain up to 1.99 BLEU points.
| 2021 | Computation and Language |
Data Augmentation for Voice-Assistant NLU using BERT-based
Interchangeable Rephrase
|
We introduce a data augmentation technique based on byte pair encoding and a
BERT-like self-attention model to boost performance on spoken language
understanding tasks. We compare and evaluate this method with a range of
augmentation techniques encompassing generative models such as VAEs and
performance-boosting techniques such as synonym replacement and
back-translation. We show that our method performs strongly on domain and intent
classification tasks for a voice assistant and in a user study focused on
utterance naturalness and semantic similarity.
| 2021 | Computation and Language |
Learning Evolved Combinatorial Symbols with a Neuro-symbolic Generative
Model
|
Humans have the ability to rapidly understand rich combinatorial concepts
from limited data. Here we investigate this ability in the context of auditory
signals, which were evolved in a cultural transmission experiment to study
the emergence of combinatorial structure in language. We propose a
neuro-symbolic generative model which combines the strengths of previous
approaches to concept learning. Our model performs fast inference drawing on
neural network methods, while still retaining the interpretability and
generalization from limited data seen in structured generative approaches. This
model outperforms a purely neural network-based approach on classification as
evaluated against both ground truth and human experimental classification
preferences, and produces superior reproductions of observed signals as well.
Our results demonstrate the power of flexible combined neural-symbolic
architectures for human-like generalization in raw perceptual domains and
offer a step towards developing precise computational models of inductive
biases in language evolution.
| 2021 | Computation and Language |
Learning to Reason for Text Generation from Scientific Tables
|
In this paper, we introduce SciGen, a new challenge dataset for the task of
reasoning-aware data-to-text generation consisting of tables from scientific
articles and their corresponding descriptions. Describing scientific tables
goes beyond the surface realization of the table content and requires reasoning
over table values. The unique properties of SciGen are that (1) tables mostly
contain numerical values, and (2) the corresponding descriptions require
arithmetic reasoning. SciGen is therefore the first dataset that assesses the
arithmetic reasoning capabilities of generation models on complex input
structures, i.e., tables from scientific articles. We study the effectiveness
of state-of-the-art data-to-text generation models on SciGen and evaluate the
results using common metrics as well as human evaluation. Our results and
analyses show that (a) while humans readily reason when describing scientific
tables, the ability of state-of-the-art models is severely limited on this
task, (b) while adding more training data improves the results, it is not the
solution for reasoning-aware text generation, and (c) one of the main
bottlenecks for this task is the lack of proper automatic evaluation metrics.
The data, code, and annotations for human evaluation will be available at
https://github.com/UKPLab/SciGen. SciGen opens new avenues for future research
in reasoning-aware text generation and evaluation.
| 2021 | Computation and Language |
Text2App: A Framework for Creating Android Apps from Text Descriptions
|
We present Text2App -- a framework that allows users to create functional
Android applications from natural language specifications. The conventional
method of source code generation tries to generate source code directly, which
is impractical for creating complex software. We overcome this limitation by
transforming natural language into an abstract intermediate formal language
representing an application with a substantially smaller number of tokens. The
intermediate formal representation is then compiled into the target source code.
This abstraction of programming details allows seq2seq networks to learn
complex application structures with less overhead. In order to train sequence
models, we introduce a data synthesis method grounded in a human survey. We
demonstrate that Text2App generalizes well to unseen combinations of app
components and is capable of handling noisy natural language instructions.
We explore the possibility of creating applications from highly abstract
instructions by coupling our system with GPT-3 -- a large pretrained language
model. We perform an extensive human evaluation and identify the capabilities
and limitations of our system. The source code, a ready-to-run demo notebook,
and a demo video are publicly available at
\url{https://github.com/text2app/Text2App}.
| 2021 | Computation and Language |
Membership Inference Attack Susceptibility of Clinical Language Models
|
Deep Neural Network (DNN) models have been shown to have high empirical
privacy leakages. Clinical language models (CLMs) trained on clinical data have
been used to improve performance in biomedical natural language processing
tasks. In this work, we investigate the risks of training-data leakage through
white-box or black-box access to CLMs. We design and employ membership
inference attacks to estimate the empirical privacy leaks for model
architectures like BERT and GPT2. We show that membership inference attacks on
CLMs lead to non-trivial privacy leakages of up to 7%. Our results show that
smaller models have lower empirical privacy leakages than larger ones, and
masked LMs have lower leakages than auto-regressive LMs. We further show that
differentially private CLMs can have improved model utility in the clinical domain
while ensuring low empirical privacy leakage. Lastly, we also study the effects
of group-level membership inference and disease rarity on CLM privacy leakages.
| 2021 | Computation and Language |
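
The simplest member of the attack family discussed above thresholds the per-example loss: a model tends to fit its training data better, so low loss hints at membership. The toy below illustrates that generic recipe on synthetic numbers; it is not the authors' attack.

```python
import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict 'member' (1) when the model's loss on the example is low."""
    return (losses < threshold).astype(int)

rng = np.random.default_rng(0)
member_losses = rng.normal(2.0, 0.5, 1000)     # training-set examples (toy)
nonmember_losses = rng.normal(2.4, 0.5, 1000)  # held-out examples (toy)

losses = np.concatenate([member_losses, nonmember_losses])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

threshold = np.median(losses)  # the attacker picks a cutoff
preds = loss_threshold_attack(losses, threshold)
accuracy = (preds == labels).mean()
# Leakage is often reported as advantage over random guessing (50%).
print(f"attack accuracy: {accuracy:.3f}, leakage: {accuracy - 0.5:+.3f}")
```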
Surface Form Competition: Why the Highest Probability Answer Isn't
Always Right
|
Large language models have shown promising results in zero-shot settings
(Brown et al.,2020; Radford et al., 2019). For example, they can perform
multiple choice tasks simply by conditioning on a question and selecting the
answer with the highest probability.
However, ranking by string probability can be problematic due to surface form
competition -- wherein different surface forms compete for probability mass, even
if they represent the same underlying concept, e.g., "computer" and "PC." Since
probability mass is finite, this lowers the probability of the correct answer,
due to competition from other strings that are valid answers (but not one of
the multiple choice options).
We introduce Domain Conditional Pointwise Mutual Information, an alternative
scoring function that directly compensates for surface form competition by
simply reweighting each option according to a term that is proportional to its a
priori likelihood within the context of the specific zero-shot task. It
achieves consistent gains in zero-shot performance over both calibrated (Zhao
et al., 2021) and uncalibrated scoring functions on all GPT-2 and GPT-3 models
over a variety of multiple choice datasets.
| 2022 | Computation and Language |
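
The scoring rule summarized above reduces to arithmetic over log-probabilities: rank options by log P(option | question) - log P(option | domain prompt) rather than by log P(option | question) alone. The sketch below assumes the caller supplies those log-probabilities (from any autoregressive LM); the toy numbers are made up for illustration.

```python
def dc_pmi_choice(options,
                  logp_given_question,  # {option: log P(option | question)}
                  logp_given_domain):   # {option: log P(option | domain prompt)}
    """Pick the option with the highest domain-conditional PMI score."""
    scores = {
        o: logp_given_question[o] - logp_given_domain[o]
        for o in options
    }
    return max(scores, key=scores.get), scores

# Toy numbers: "PC" is a priori likelier than "workstation" in this domain,
# so raw probability over-rewards it; the DC-PMI correction compensates.
options = ["PC", "workstation"]
lp_q = {"PC": -1.2, "workstation": -1.4}
lp_d = {"PC": -0.5, "workstation": -2.0}
best, scores = dc_pmi_choice(options, lp_q, lp_d)
print(best, {k: round(v, 2) for k, v in scores.items()})
# workstation {'PC': -0.7, 'workstation': 0.6}
```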
On the Importance of Effectively Adapting Pretrained Language Models for
Active Learning
|
Recent Active Learning (AL) approaches in Natural Language Processing (NLP)
proposed using off-the-shelf pretrained language models (LMs). In this paper,
we argue that these LMs are not adapted effectively to the downstream task
during AL, and we explore ways to address this issue. We propose to first adapt
the pretrained LM to the target task by continuing training with all the
available unlabeled data, and then to use it for AL. We also propose a simple yet
effective fine-tuning method to ensure that the adapted LM is properly trained
in both low and high resource scenarios during AL. Our experiments demonstrate
that our approach provides substantial data efficiency improvements compared to
the standard fine-tuning approach, suggesting that a poor training strategy can
be catastrophic for AL.
| 2022 | Computation and Language |
ESTER: A Machine Reading Comprehension Dataset for Event Semantic
Relation Reasoning
|
Understanding how events are semantically related to each other is the
essence of reading comprehension. Recent event-centric reading comprehension
datasets focus mostly on event arguments or temporal relations. While these
tasks partially evaluate machines' ability to understand narratives,
human-like reading comprehension requires the capability to process event-based
information beyond arguments and temporal reasoning. For example, to understand
causality between events, we need to infer motivation or purpose; to establish
event hierarchy, we need to understand the composition of events. To facilitate
these tasks, we introduce ESTER, a comprehensive machine reading comprehension
(MRC) dataset for Event Semantic Relation Reasoning. The dataset leverages
natural language queries to reason about the five most common event semantic
relations, provides more than 6K questions and captures 10.1K event relation
pairs. Experimental results show that the current SOTA systems achieve 22.1%,
63.3%, and 83.5% for token-based exact-match, F1, and event-based HIT@1 scores,
which are all significantly below human performance (36.0%, 79.6%, and 100%,
respectively), highlighting our dataset as a challenging benchmark.
| 2021 | Computation and Language |
Concadia: Towards Image-Based Text Generation with a Purpose
|
Current deep learning models often achieve excellent results on benchmark
image-to-text datasets but fail to generate texts that are useful in practice.
We argue that to close this gap, it is vital to distinguish descriptions from
captions based on their distinct communicative roles. Descriptions focus on
visual features and are meant to replace an image (often to increase
accessibility), whereas captions appear alongside an image to supply additional
information. To motivate this distinction and help people put it into practice,
we introduce the publicly available Wikipedia-based dataset Concadia consisting
of 96,918 images with corresponding English-language descriptions, captions,
and surrounding context. Using insights from Concadia, models trained on it,
and a preregistered human-subjects experiment with human- and model-generated
texts, we characterize the commonalities and differences between descriptions
and captions. In addition, we show that, for generating both descriptions and
captions, it is useful to augment image-to-text models with representations of
the textual context in which the image appeared.
| 2022 | Computation and Language |
"Wikily" Supervised Neural Translation Tailored to Cross-Lingual Tasks
|
We present a simple but effective approach for leveraging Wikipedia for
neural machine translation as well as cross-lingual tasks of image captioning
and dependency parsing without using any direct supervision from external
parallel data or supervised models in the target language. We show that the
first sentences and titles of linked Wikipedia pages, as well as cross-lingual
image captions, are strong signals for seed parallel data from which to
extract bilingual dictionaries and cross-lingual word embeddings for mining
parallel text from Wikipedia. Our final model achieves high BLEU scores that
are close to or sometimes higher than strong supervised baselines in
low-resource languages; e.g., a supervised BLEU of 4.0 versus 12.1 from our
model in English-to-Kazakh. Moreover, we tailor our wikily supervised
translation models to unsupervised image captioning and cross-lingual
dependency parser transfer. In image captioning, we train a multi-tasking
machine translation and image captioning pipeline for Arabic and English, in
which the Arabic training data is a translated version of the English
captioning data produced by our wikily-supervised translation models. Our
captioning results on Arabic are slightly better than those of the supervised
model. In dependency parsing, we translate a large
amount of monolingual text, and use it as artificial training data in an
annotation projection framework. We show that our model outperforms recent work
on cross-lingual transfer of dependency parsers.
| 2021 | Computation and Language |
Neural String Edit Distance
|
We propose the neural string edit distance model for string-pair matching and
string transduction based on learnable string edit distance. We modify the
original expectation-maximization learned edit distance algorithm into a
differentiable loss function, allowing us to integrate it into a neural network
providing a contextual representation of the input. We evaluate on cognate
detection, transliteration, and grapheme-to-phoneme conversion, and show that
we can trade off between performance and interpretability in a single
framework. Using contextual representations, which are difficult to interpret,
we match the performance of state-of-the-art string-pair matching models. Using
static embeddings and a slightly different loss function, we force
interpretability, at the expense of an accuracy drop.
| 2022 | Computation and Language |
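
The learnable edit distance mentioned above builds on stochastic (EM-learned) edit distance. As a sketch of the underlying recursion only, the snippet below computes the log-probability of transducing one string into another with the standard forward dynamic program over substitution, deletion, and insertion scores; a neural version would predict these scores from contextual representations rather than a fixed table.

```python
import numpy as np

def forward_log_prob(x: str, y: str, log_sub, log_del, log_ins) -> float:
    """Forward DP for a memoryless stochastic edit process transducing x -> y."""
    n, m = len(x), len(y)
    alpha = np.full((n + 1, m + 1), -np.inf)
    alpha[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0:   # delete x[i-1]
                alpha[i, j] = np.logaddexp(alpha[i, j],
                                           alpha[i - 1, j] + log_del(x[i - 1]))
            if j > 0:   # insert y[j-1]
                alpha[i, j] = np.logaddexp(alpha[i, j],
                                           alpha[i, j - 1] + log_ins(y[j - 1]))
            if i > 0 and j > 0:  # substitute (or copy) x[i-1] -> y[j-1]
                alpha[i, j] = np.logaddexp(
                    alpha[i, j], alpha[i - 1, j - 1] + log_sub(x[i - 1], y[j - 1]))
    return float(alpha[n, m])

# Toy operation scores; a trainable model would produce these per position.
log_sub = lambda a, b: np.log(0.8) if a == b else np.log(0.05)
log_del = lambda a: np.log(0.05)
log_ins = lambda b: np.log(0.05)
print(forward_log_prob("kitten", "sitting", log_sub, log_del, log_ins))
```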
Unsupervised Extractive Summarization by Human Memory Simulation
|
Summarization systems face the core challenge of identifying and selecting
important information. In this paper, we tackle the problem of content
selection in unsupervised extractive summarization of long, structured
documents. We introduce a wide range of heuristics that leverage cognitive
representations of content units and how these are retained or forgotten in
human memory. We find that properties of these representations of human memory
can be exploited to capture relevance of content units in scientific articles.
Experiments show that our proposed heuristics are effective at leveraging
cognitive structures and the organization of the document (i.e.\ sections of an
article), and automatic and human evaluations provide strong evidence that
these heuristics extract more summary-worthy content units.
| 2021 | Computation and Language |
Re-TACRED: Addressing Shortcomings of the TACRED Dataset
|
TACRED is one of the largest and most widely used sentence-level relation
extraction datasets. Proposed models that are evaluated using this dataset
consistently set new state-of-the-art performance. However, they still exhibit
large error rates despite leveraging external knowledge and unsupervised
pretraining on large text corpora. A recent study suggested that this may be
due to poor dataset quality. The study observed that over 50% of the most
challenging sentences from the development and test sets are incorrectly
labeled and account for an average drop of 8% in model F1 score.
However, this study was limited to a small biased sample of 5k (out of a total
of 106k) sentences, substantially restricting the generalizability and broader
implications of its findings. In this paper, we address these shortcomings by:
(i) performing a comprehensive study over the whole TACRED dataset, (ii)
proposing an improved crowdsourcing strategy and deploying it to re-annotate
the whole dataset, and (iii) performing a thorough analysis to understand how
correcting the TACRED annotations affects previously published results. After
verification, we observed that 23.9% of TACRED labels are incorrect. Moreover,
evaluating several models on our revised dataset yields an average F1-score
improvement of 14.3% and helps uncover significant relationships between the
different models (rather than simply offsetting or scaling their scores by a
constant factor). Finally, aside from our analysis we also release Re-TACRED, a
new completely re-annotated version of the TACRED dataset that can be used to
perform reliable evaluation of relation extraction models.
| 2021 | Computation and Language |
Structure-Aware Abstractive Conversation Summarization via Discourse and
Action Graphs
|
Abstractive conversation summarization has received much attention recently.
However, these generated summaries often suffer from insufficient, redundant,
or incorrect content, largely due to the unstructured and complex
characteristics of human-human interactions. To this end, we propose to
explicitly model the rich structures in conversations for more precise and
accurate conversation summarization, by first incorporating discourse relations
between utterances and action triples ("who-doing-what") in utterances through
structured graphs to better encode conversations, and then designing a
multi-granularity decoder to generate summaries by combining all levels of
information. Experiments show that our proposed models outperform
state-of-the-art methods and generalize well to other domains in terms of both
automatic evaluations and human judgments. We have publicly released our code
at https://github.com/GT-SALT/Structure-Aware-BART.
| 2021 | Computation and Language |
Enriching a Model's Notion of Belief using a Persistent Memory
|
Although pretrained language models (PTLMs) have been shown to contain
significant amounts of world knowledge, they can still produce inconsistent
answers to questions when probed, even after using specialized training
techniques to reduce inconsistency. As a result, it can be hard to identify
what the model actually "believes" about the world. Our goal is to reduce this
problem, so systems are more globally consistent and accurate in their answers.
Our approach is to add a memory component -- a BeliefBank -- that records a
model's answers, and two mechanisms that use it to improve consistency among
beliefs. First, a reasoning component -- a weighted SAT solver -- improves
consistency by flipping answers that significantly clash with others. Second, a
feedback component re-queries the model, using known beliefs as context. We
show that, in a controlled experimental setting, these two mechanisms improve
both accuracy and consistency. This is significant as it is a first step
towards endowing models with an evolving memory, allowing them to construct a
more coherent picture of the world.
| 2021 | Computation and Language |
LAMPRET: Layout-Aware Multimodal PreTraining for Document Understanding
|
Document layout comprises both structural and visual (e.g., font sizes)
information that is vital but often ignored by machine learning models. The few
existing models that do use layout information only consider textual contents
and overlook the existence of contents in other modalities, such as images.
Additionally, the spatial interactions of the contents presented in a layout
have never been fully exploited. To bridge this gap, we parse a document into
content blocks (e.g., text, table, image) and propose a novel layout-aware
multimodal hierarchical framework, LAMPreT, to model the blocks and the whole
document.
Our LAMPreT encodes each block with a multimodal transformer in the lower-level
and aggregates the block-level representations and connections utilizing a
specifically designed transformer at the higher-level. We design hierarchical
pretraining objectives where the lower-level model is trained similarly to
multimodal grounding models, and the higher-level model is trained with our
proposed novel layout-aware objectives. We evaluate the proposed model on two
layout-aware tasks -- text block filling and image suggestion -- and show the
effectiveness of our proposed hierarchical architecture as well as pretraining
techniques.
| 2021 | Computation and Language |
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained
Models
|
There is growing evidence that pretrained language models improve
task-specific fine-tuning not just for the languages seen in pretraining, but
also for new languages and even non-linguistic data. What is the nature of this
surprising cross-domain transfer? We offer a partial answer via a systematic
exploration of how much transfer occurs when models are denied any information
about word identity via random scrambling. In four classification tasks and two
sequence labeling tasks, we evaluate baseline models, LSTMs using GloVe
embeddings, and BERT. We find that only BERT shows high rates of transfer into
our scrambled domains, and for classification but not sequence labeling tasks.
Our analyses seek to explain why transfer succeeds for some tasks but not
others, to isolate the separate contributions of pretraining versus
fine-tuning, and to quantify the role of word frequency. These findings help
explain where and why cross-domain transfer occurs, which can guide future
studies and practical fine-tuning efforts.
| 2021 | Computation and Language |
Sequential Cross-Document Coreference Resolution
|
Relating entities and events in text is a key component of natural language
understanding. Cross-document coreference resolution, in particular, is
important for the growing interest in multi-document analysis tasks. In this
work we propose a new model that extends the efficient sequential prediction
paradigm for coreference resolution to cross-document settings and achieves
competitive results for both entity and event coreference while providing strong
evidence of the efficacy of both sequential models and higher-order inference
in cross-document settings. Our model incrementally composes mentions into
cluster representations and predicts links between a mention and the already
constructed clusters, approximating a higher-order model. In addition, we
conduct extensive ablation studies that provide new insights into the
importance of various inputs and representation types in coreference.
| 2021 | Computation and Language |
Robust Embeddings Via Distributions
|
Despite recent monumental advances in the field, many Natural Language
Processing (NLP) models still struggle to perform adequately on noisy domains.
We propose a novel probabilistic embedding-level method to improve the
robustness of NLP models. Our method, Robust Embeddings via Distributions
(RED), incorporates information from both noisy tokens and surrounding context
to obtain distributions over embedding vectors that can express uncertainty in
semantic space more fully than any deterministic method. We evaluate our method
on a number of downstream tasks using existing state-of-the-art models in the
presence of both natural and synthetic noise, and demonstrate a clear
improvement over other embedding approaches to robustness from the literature.
| 2021 | Computation and Language |
A Full Text-Dependent End to End Mispronunciation Detection and
Diagnosis with Easy Data Augmentation Techniques
|
Recently, end-to-end mispronunciation detection and diagnosis (MD&D) systems
have become a popular alternative that greatly simplifies the model-building
process of conventional hybrid DNN-HMM systems by representing complicated
modules with a single deep network architecture. In this paper, in order to
utilize the prior text in an end-to-end structure, we present a novel
text-dependent model that differs from SED-MDD: our model achieves a fully
end-to-end system by aligning the audio with the phoneme sequence of the prior
text inside the model through an attention mechanism. Moreover, using the
prior text as input introduces an imbalance between positive and negative
samples in the phoneme sequence. To alleviate this problem, we propose three
simple data augmentation methods, which effectively improve the model's
ability to capture mispronounced phonemes. We conduct experiments on
L2-ARCTIC, and our best performance improves from 49.29% to 56.08% in the
F-measure metric compared to the CNN-RNN-CTC model.
| 2021 | Computation and Language |
Are Word Embedding Methods Stable and Should We Care About It?
|
A representation learning method is considered stable if it consistently
generates similar representations of the given data across multiple runs. Word
Embedding Methods (WEMs) are a class of representation learning methods that
generate dense vector representation for each word in the given text data. The
central idea of this paper is to explore the stability measurement of WEMs
using intrinsic evaluation based on word similarity. We experiment with three
popular WEMs: Word2Vec, GloVe, and fastText. For stability measurement, we
investigate the effect of five parameters involved in training these models. We
perform experiments using four real-world datasets from different domains:
Wikipedia, News, Song lyrics, and European parliament proceedings. We also
observe the effect of WEM stability on three downstream tasks: Clustering, POS
tagging, and Fairness evaluation. Our experiments indicate that amongst the
three WEMs, fastText is the most stable, followed by GloVe and Word2Vec.
| 2021 | Computation and Language |
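
One common way to operationalize the similarity-based stability measured in the paper above is nearest-neighbor overlap between two training runs of the same method. The NumPy sketch below shows that measurement on toy vectors; it illustrates the idea, not the paper's exact protocol.

```python
import numpy as np

def top_k_neighbors(emb: np.ndarray, word_idx: int, k: int) -> set:
    """Indices of the k nearest neighbors by cosine similarity."""
    v = emb[word_idx]
    sims = emb @ v / (np.linalg.norm(emb, axis=1) * np.linalg.norm(v) + 1e-12)
    sims[word_idx] = -np.inf  # exclude the word itself
    return set(np.argsort(-sims)[:k])

def stability(emb_run1: np.ndarray, emb_run2: np.ndarray, k: int = 10) -> float:
    """Mean neighbor overlap across the vocabulary, in [0, 1]."""
    overlaps = [
        len(top_k_neighbors(emb_run1, i, k) & top_k_neighbors(emb_run2, i, k)) / k
        for i in range(emb_run1.shape[0])
    ]
    return float(np.mean(overlaps))

rng = np.random.default_rng(0)
run1 = rng.normal(size=(100, 50))                     # embeddings, training run 1
run2 = run1 + rng.normal(scale=0.1, size=(100, 50))   # a slightly perturbed run
print(f"stability@10: {stability(run1, run2):.2f}")
```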
A Graph-guided Multi-round Retrieval Method for Conversational
Open-domain Question Answering
|
In recent years, conversational agents have provided natural and convenient
access to useful information in people's daily lives, giving rise to a broad
new research topic: conversational question answering (QA). Among the popular
conversational QA tasks, conversational open-domain QA, which requires
retrieving relevant passages from the Web to extract exact answers, is more
practical but less studied. The main challenge is how to well capture and fully
explore the historical context in conversation to facilitate effective
large-scale retrieval. The current work mainly utilizes history questions to
refine the current question or to enhance its representation, yet the relations
between history answers and the current answer in a conversation, which are
also critical to the task, are totally neglected. To address this problem, we
propose a novel graph-guided retrieval method to model the relations among
answers across conversation turns. In particular, it utilizes a passage graph
derived from the hyperlink-connected passages that contain history answers and
potential current answers, to retrieve more relevant passages for subsequent
answer extraction. Moreover, in order to collect more complementary information
in the historical context, we also propose to incorporate the multi-round
relevance feedback technique to explore the impact of the retrieval context on
current question understanding. Experimental results on the public dataset
verify the effectiveness of our proposed method. Notably, the F1 score is
improved by 5% and 11% with predicted history answers and true history answers,
respectively.
| 2021 | Computation and Language |
Three-level Hierarchical Transformer Networks for Long-sequence and
Multiple Clinical Documents Classification
|
We present a Three-level Hierarchical Transformer Network (3-level-HTN) for
modeling long-term dependencies across clinical notes for the purpose of
patient-level prediction. The network is equipped with three levels of
Transformer-based encoders to learn progressively from words to sentences,
sentences to notes, and finally notes to patients. The first level from word to
sentence directly applies a pre-trained BERT model as a fully trainable
component. The second and third levels each implement a stack of
transformer-based encoders, and the final patient representation is then fed
into a classification layer for clinical predictions. Compared to conventional BERT
models, our model increases the maximum input length from 512 tokens to much
longer sequences that are appropriate for modeling large numbers of clinical
notes. We empirically examine different hyper-parameters to identify an optimal
trade-off given computational resource limits. Our experiment results on the
MIMIC-III dataset for different prediction tasks demonstrate that the proposed
Hierarchical Transformer Network outperforms previous state-of-the-art models,
including but not limited to BigBird.
| 2021 | Computation and Language |
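
The three-level composition described above can be sketched with stock PyTorch encoder layers: words to sentences, sentences to notes, notes to a patient representation, with mean-pooling between levels. Dimensions, pooling, and layer counts are illustrative assumptions; in the paper the first level is a pre-trained BERT.

```python
import torch
import torch.nn as nn

def encoder(d_model: int, nhead: int = 4, layers: int = 2) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=layers)

class ThreeLevelHTN(nn.Module):
    """Sketch of word -> sentence -> note -> patient hierarchical encoding."""
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.word_enc = encoder(d_model)   # level 1 (a pre-trained BERT in the paper)
        self.sent_enc = encoder(d_model)   # level 2: sentences -> note
        self.note_enc = encoder(d_model)   # level 3: notes -> patient
        self.classifier = nn.Linear(d_model, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (notes, sentences, words, d_model) for one patient.
        notes, sents, words, d = x.shape
        sent_repr = self.word_enc(x.reshape(-1, words, d)).mean(dim=1)
        note_repr = self.sent_enc(sent_repr.reshape(notes, sents, d)).mean(dim=1)
        patient = self.note_enc(note_repr.unsqueeze(0)).mean(dim=1)
        return self.classifier(patient)

model = ThreeLevelHTN()
x = torch.randn(3, 4, 12, 128)  # 3 notes, 4 sentences each, 12 words each
print(model(x).shape)           # torch.Size([1, 2])
```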
Joint Passage Ranking for Diverse Multi-Answer Retrieval
|
We study multi-answer retrieval, an under-explored problem that requires
retrieving passages to cover multiple distinct answers for a given question.
This task requires joint modeling of retrieved passages, as models should not
repeatedly retrieve passages containing the same answer at the cost of missing
a different valid answer. In this paper, we introduce JPR, the first joint
passage retrieval model for multi-answer retrieval. JPR makes use of an
autoregressive reranker that selects a sequence of passages, each conditioned
on previously selected passages. JPR is trained to select passages that cover
new answers at each timestep and uses a tree-decoding algorithm to enable
flexibility in the degree of diversity. Compared to prior approaches, JPR
achieves significantly better answer coverage on three multi-answer datasets.
When combined with downstream question answering, the improved retrieval
enables larger answer generation models since they need to consider fewer
passages, establishing a new state-of-the-art.
| 2021 | Computation and Language |
Data Distillation for Text Classification
|
Deep learning techniques have achieved great success in many fields, while at
the same time deep learning models are getting more complex and expensive to
compute. This severely hinders the wide application of these models. In order to
alleviate this problem, model distillation emerges as an effective means to
compress a large model into a smaller one without a significant drop in
accuracy. In this paper, we study a related but orthogonal issue, data
distillation, which aims to distill the knowledge from a large training dataset
down to a smaller and synthetic one. It has the potential to address the large
and growing neural network training problem based on the small dataset. We
develop a novel data distillation method for text classification. We evaluate
our method on eight benchmark datasets. The results are rather impressive:
distilled data amounting to only 0.1% of the original text data achieves
approximately 90% of the original performance.
| 2021 | Computation and Language |
Context-Aware Interaction Network for Question Matching
|
Impressive milestones have been achieved in text matching by adopting a
cross-attention mechanism to capture pertinent semantic connections between two
sentence representations. However, regular cross-attention focuses on
word-level links between the two input sequences, neglecting the importance of
contextual information. We propose a context-aware interaction network (COIN)
to properly align two sequences and infer their semantic relationship.
Specifically, each interaction block includes (1) a context-aware
cross-attention mechanism to effectively integrate contextual information when
aligning two sequences, and (2) a gate fusion layer to flexibly interpolate
aligned representations. We apply multiple stacked interaction blocks to
produce alignments at different levels and gradually refine the attention
results. Experiments on two question matching datasets and detailed analyses
demonstrate the effectiveness of our model.
| 2021 | Computation and Language |
R&R: Metric-guided Adversarial Sentence Generation
|
Adversarial examples are helpful for analyzing and improving the robustness
of text classifiers. Generating high-quality adversarial examples is a
challenging task as it requires generating fluent adversarial sentences that
are semantically similar to the original sentences and preserve the original
labels, while causing the classifier to misclassify them. Existing methods
prioritize misclassification by maximizing each perturbation's effectiveness at
misleading a text classifier; thus, the generated adversarial examples fall
short in terms of fluency and similarity. In this paper, we propose a rewrite
and rollback (R&R) framework for adversarial attack. It improves the quality of
adversarial examples by optimizing a critique score which combines the fluency,
similarity, and misclassification metrics. R&R generates high-quality
adversarial examples by allowing exploration of perturbations that do not have
immediate impact on the misclassification metric but can improve fluency and
similarity metrics. We evaluate our method on 5 representative datasets and 3
classifier architectures. Our method outperforms current state-of-the-art in
attack success rate by +16.2%, +12.8%, and +14.0% on the classifiers
respectively. Code is available at https://github.com/DAI-Lab/fibber
| 2022 | Computation and Language |
Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path
Grounding
|
Dialogue systems powered by large pre-trained language models (LM) exhibit an
innate ability to deliver fluent and natural-looking responses. Despite their
impressive generation performance, these models can often generate factually
incorrect statements impeding their widespread adoption. In this paper, we
focus on the task of improving the faithfulness -- and thus reducing
hallucination -- of Neural Dialogue Systems to known facts supplied by a
Knowledge Graph (KG). We propose Neural Path Hunter which follows a
generate-then-refine strategy whereby a generated response is amended using the
k-hop subgraph of a KG. Neural Path Hunter leverages a separate token-level
fact critic to identify plausible sources of hallucination followed by a
refinement stage consisting of a chain of two neural LMs that retrieves
correct entities by crafting a query signal that is propagated over the k-hop
subgraph. Our proposed model can easily be applied to the responses generated
by any dialogue model without retraining. We empirically validate our proposed
approach on the OpenDialKG dataset against a suite of metrics and report a
relative improvement in the faithfulness of dialogue responses of 20.35% based on
FeQA (Durmus et al., 2020).
| 2021 | Computation and Language |
Moving on from OntoNotes: Coreference Resolution Model Transfer
|
Academic neural models for coreference resolution (coref) are typically
trained on a single dataset, OntoNotes, and model improvements are benchmarked
on that same dataset. However, real-world applications of coref depend on the
annotation guidelines and the domain of the target dataset, which often differ
from those of OntoNotes. We aim to quantify transferability of coref models
based on the number of annotated documents available in the target dataset. We
examine eleven target datasets and find that continued training is consistently
effective and especially beneficial when there are few target documents. We
establish new benchmarks across several datasets, including state-of-the-art
results on PreCo.
| 2021 | Computation and Language |
Syntactic structures and the general Markov models
|
We study phylogenetic signal present in syntactic information by considering
the syntactic structures data from Longobardi (2017b), Collins (2010), Ceolin
et al. (2020) and Koopman (2011). Focusing first on the general Markov models,
we explore how well the syntactic structures data conform to the hypotheses
required by these models. We do this by comparing derived phylogenetic trees
against trees agreed on by the linguistics community. We then interpret the
methods of Ceolin et al. (2020) as an infinite sites evolutionary model and
compare the consistency of the data with this alternative. The ideas and
methods discussed in the present paper are more generally applicable than to
the specific setting of syntactic structures, and can be used in other
contexts when analyzing the consistency of data with hypothesized
evolutionary models.
| 2022 | Computation and Language |
A multilabel approach to morphosyntactic probing
|
We introduce a multilabel probing task to assess the morphosyntactic
representations of word embeddings from multilingual language models. We
demonstrate this task with multilingual BERT (Devlin et al., 2018), training
probes for seven typologically diverse languages of varying morphological
complexity: Afrikaans, Croatian, Finnish, Hebrew, Korean, Spanish, and Turkish.
Through this simple but robust paradigm, we show that multilingual BERT renders
many morphosyntactic features easily and simultaneously extractable (e.g.,
gender, grammatical case, pronominal type). We further evaluate the probes on
six "held-out" languages in a zero-shot transfer setting: Arabic, Chinese,
Marathi, Slovenian, Tagalog, and Yoruba. This style of probing has the added
benefit of revealing the linguistic properties that language models recognize
as being shared across languages. For instance, the probes performed well on
recognizing nouns in the held-out languages, suggesting that multilingual BERT
has a conception of noun-hood that transcends individual languages; yet, the
same was not true of adjectives.
| 2021 | Computation and Language |
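
A multilabel probe of the kind described is essentially a linear map with one sigmoid output per morphosyntactic feature, trained with binary cross-entropy on frozen embeddings. The sketch below substitutes random vectors for multilingual BERT states; the feature names and sizes are illustrative.

```python
import torch
import torch.nn as nn

FEATURES = ["Gender=Fem", "Case=Nom", "Number=Plur", "PronType=Prs"]  # illustrative

probe = nn.Linear(768, len(FEATURES))  # one logit per morphosyntactic feature
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

# Frozen contextual embeddings; random vectors stand in for mBERT states here.
embeddings = torch.randn(256, 768)
labels = torch.randint(0, 2, (256, len(FEATURES))).float()

for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(probe(embeddings), labels)
    loss.backward()
    optimizer.step()

# At test time, each feature is predicted independently and simultaneously.
preds = (torch.sigmoid(probe(embeddings)) > 0.5).int()
print(preds[0].tolist())
```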
Frequency-based Distortions in Contextualized Word Embeddings
|
How does word frequency in pre-training data affect the behavior of
similarity metrics in contextualized BERT embeddings? Are there systematic ways
in which some word relationships are exaggerated or understated? In this work,
we explore the geometric characteristics of contextualized word embeddings with
two novel tools: (1) an identity probe that predicts the identity of a word
using its embedding; (2) the minimal bounding sphere for a word's
contextualized representations. Our results reveal that words of high and low
frequency differ significantly with respect to their representational geometry.
Such differences introduce distortions: when compared to human judgments, point
estimates of embedding similarity (e.g., cosine similarity) can over- or
under-estimate the semantic similarity of two words, depending on the frequency
of those words in the training data. This has downstream societal implications:
BERT-Base has more trouble differentiating between South American and African
countries than North American and European ones. We find that these distortions
persist when using BERT-Multilingual, suggesting that they cannot be easily
fixed with additional data, which in turn introduces new distortions.
| 2021 | Computation and Language |
Sentence Concatenation Approach to Data Augmentation for Neural Machine
Translation
|
Neural machine translation (NMT) has recently gained widespread attention
because of its high translation accuracy. However, it shows poor performance in
the translation of long sentences, which is a major issue in low-resource
languages. It is assumed that this issue is caused by an insufficient number of
long sentences in the training data. Therefore, this study proposes a simple
data augmentation method to handle long sentences. In this method, we use only
the given parallel corpora as the training data and generate long sentences by
concatenating two sentences. Based on the experimental results, we confirm
improvements in long sentence translation by the proposed data augmentation
method, despite its simplicity. Moreover, the translation quality is further
improved by the proposed method, when combined with back-translation.
| 2021 | Computation and Language |
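
The augmentation itself fits in a few lines: pick pairs of parallel sentences and concatenate them on both the source and target sides. The sketch below shows one plausible variant with random pairing; the paper's exact pairing scheme may differ.

```python
import random

def concat_augment(src_sents, tgt_sents, n_pairs, seed=0):
    """Create long synthetic pairs by concatenating two parallel sentence pairs."""
    assert len(src_sents) == len(tgt_sents)
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_pairs):
        i, j = rng.randrange(len(src_sents)), rng.randrange(len(src_sents))
        augmented.append((src_sents[i] + " " + src_sents[j],
                          tgt_sents[i] + " " + tgt_sents[j]))
    return augmented

src = ["ich bin müde .", "das haus ist rot ."]
tgt = ["i am tired .", "the house is red ."]
extra = concat_augment(src, tgt, n_pairs=2)
# Train on the original corpus plus the concatenated long sentences.
print(extra[0])
```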
Learning to Share by Masking the Non-shared for Multi-domain Sentiment
Classification
|
Multi-domain sentiment classification deals with the scenario where labeled
data exists for multiple domains but is insufficient for training effective
sentiment classifiers that work across domains. Thus, fully exploiting
sentiment knowledge shared across domains is crucial for real-world
applications. While many existing works try to extract domain-invariant
features in a high-dimensional space, such models fail to explicitly
distinguish between shared and private features at the text level, which
limits their interpretability. Based on the assumption that removing
domain-related tokens from texts would help improve their domain-invariance,
we instead first transform the original sentences to be domain-agnostic. To
this end, we propose the BertMasker network, which explicitly masks
domain-related words in texts, learns domain-invariant sentiment features
from these domain-agnostic texts, and uses the masked words to form
domain-aware sentence representations. Empirical experiments on a widely
adopted multi-domain sentiment classification dataset demonstrate the
effectiveness of the proposed model in both multi-domain and cross-domain
settings, increasing accuracy by 0.94% and 1.8%, respectively. Further
analysis of masking shows that removing domain-related and
sentiment-irrelevant tokens reduces texts' domain distinctiveness, degrading
the performance of a BERT-based domain classifier by over 12%.
| 2,021 |
Computation and Language
|
Revisiting Few-shot Relation Classification: Evaluation Data and
Classification Schemes
|
We explore Few-Shot Learning (FSL) for Relation Classification (RC). Focusing
on the realistic scenario of FSL, in which a test instance might not belong to
any of the target categories (none-of-the-above, aka NOTA), we first revisit
the recent popular dataset structure for FSL, pointing out its unrealistic data
distribution. To remedy this, we propose a novel methodology for deriving more
realistic few-shot test data from available datasets for supervised RC, and
apply it to the TACRED dataset. This yields a new challenging benchmark for FSL
RC, on which state-of-the-art models show poor performance. Next, we analyze
classification schemes within the popular embedding-based nearest-neighbor
approach for FSL, with respect to constraints they impose on the embedding
space. Triggered by this analysis, we propose a novel classification scheme, in
which the NOTA category is represented as learned vectors, shown empirically to
be an appealing option for FSL.
| 2,021 |
Computation and Language
|
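One way to realize "NOTA as learned vectors" inside an embedding-based nearest-neighbor scheme is sketched below: class prototypes come from support examples, while a small bank of trainable vectors competes for the NOTA decision. The dimensions and cosine-similarity scoring are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

class NotaFewShot(torch.nn.Module):
    def __init__(self, dim=768, n_nota=5):
        super().__init__()
        # Trainable vectors that jointly represent the NOTA category.
        self.nota = torch.nn.Parameter(torch.randn(n_nota, dim))

    def forward(self, query, prototypes):
        # prototypes: (n_way, dim) mean embeddings of support examples
        cands = torch.cat([prototypes, self.nota])  # classes + NOTA vectors
        sims = F.cosine_similarity(query.unsqueeze(1), cands.unsqueeze(0), dim=-1)
        n_way = prototypes.size(0)
        nota_logit = sims[:, n_way:].max(-1, keepdim=True).values
        return torch.cat([sims[:, :n_way], nota_logit], dim=-1)  # last = NOTA

model = NotaFewShot()
logits = model(torch.randn(4, 768), torch.randn(3, 768))  # 3-way + NOTA
```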
Minimal Supervision for Morphological Inflection
|
Neural models for the various flavours of morphological inflection tasks have
proven to be extremely accurate given ample labeled data -- data that may be
slow and costly to obtain. In this work we aim to overcome this annotation
bottleneck by bootstrapping labeled data from a seed of as few as {\em five}
labeled paradigms, accompanied by a large bulk of unlabeled text. Our approach
exploits different kinds of regularities in morphological systems in a
two-phased setup, where word tagging based on {\em analogies} is followed by
word pairing based on {\em distances}. We experiment with the Paradigm Cell
Filling Problem over eight typologically different languages, and find that, in
languages with relatively simple morphology, orthographic regularities on their
own allow inflection models to achieve respectable accuracy. Combined
orthographic and semantic regularities alleviate difficulties with particularly
complex morpho-phonological systems. Our results suggest that hand-crafting
many tagged examples might be an unnecessary effort. However, more work is
needed in order to address rarely used forms.
| 2,021 |
Computation and Language
|
Multilingual and Cross-Lingual Intent Detection from Spoken Data
|
We present a systematic study on multilingual and cross-lingual intent
detection from spoken data. The study leverages a new resource put forth in
this work, termed MInDS-14, the first training and evaluation resource for the
intent detection task with spoken data. It covers 14 intents extracted from a
commercial system in the e-banking domain, associated with spoken examples in
14 diverse language varieties. Our key results indicate that combining machine
translation models with state-of-the-art multilingual sentence encoders (e.g.,
LaBSE) can yield strong intent detectors in the majority of target languages
covered in MInDS-14, and offer comparative analyses across different axes:
e.g., zero-shot versus few-shot learning, translation direction, and impact of
speech recognition. We see this work as an important step towards more
inclusive development and evaluation of multilingual intent detectors from
spoken data, in a much wider spectrum of languages compared to prior work.
| 2,021 |
Computation and Language
|
The Impact of ASR on the Automatic Analysis of Linguistic Complexity and
Sophistication in Spontaneous L2 Speech
|
In recent years, automated approaches to assessing linguistic complexity in
second language (L2) writing have made significant progress in gauging learner
performance, predicting human ratings of the quality of learner productions,
and benchmarking L2 development. In contrast, there is comparatively little
work in the area of speaking, particularly with respect to fully automated
approaches to assessing L2 spontaneous speech. While the importance of a
well-performing ASR system is widely recognized, little research has been
conducted to investigate the impact of its performance on subsequent automatic
text analysis. In this paper, we focus on this issue and examine the impact of
using a state-of-the-art ASR system for subsequent automatic analysis of
linguistic complexity in spontaneously produced L2 speech. A set of 30 selected
measures was considered, falling into four categories: syntactic, lexical,
n-gram frequency, and information-theoretic measures. The agreement between the
scores for these measures obtained from ASR-generated vs. manual
transcriptions was determined through correlation analysis. We also present a
more differentiated analysis of the effect of ASR performance on specific types
of complexity measures when controlling for task type effects.
| 2,021 |
Computation and Language
|
The Topic Confusion Task: A Novel Scenario for Authorship Attribution
|
Authorship attribution is the problem of identifying the most plausible
author of an anonymous text from a set of candidate authors. Researchers have
investigated same-topic and cross-topic scenarios of authorship attribution,
which differ according to whether new, unseen topics are used in the testing
phase. However, neither scenario allows us to explain whether errors are caused
by a failure to capture authorship writing style or by a topic shift. Motivated
by this, we propose the \emph{topic confusion} task where we switch the
author-topic configuration between the training and testing sets. This setup
allows us to distinguish two types of errors: those caused by the topic shift
and those caused by the features' inability to capture the writing styles. We
show that stylometric features with part-of-speech tags are the least
susceptible to topic variations. We further show that combining them with other
features leads to significantly lower topic confusion and higher attribution
accuracy. Finally, we show that pretrained language models such as BERT and
RoBERTa perform poorly on this task and are surpassed by simple features such
as word-level $n$-grams.
| 2,021 |
Computation and Language
|
The challenges of temporal alignment on Twitter during crises
|
Language use changes over time, and this impacts the effectiveness of NLP
systems. This phenomenon is even more prevalent in social media data during
crisis events where meaning and frequency of word usage may change over the
course of days. Contextual language models fail to adapt temporally,
emphasizing the need for temporal adaptation in models which need to be
deployed over an extended period of time. While existing approaches consider
data spanning large periods of time (from years to decades), shorter time spans
are critical for crisis data. We quantify temporal degradation for this
scenario and propose methods to cope with performance loss by leveraging
techniques from domain adaptation. To the best of our knowledge, this is the
first effort to explore the effects of rapid language change driven by adversarial
adaptations, particularly during natural and human-induced disasters. Through
extensive experimentation on diverse crisis datasets, we analyze under what
conditions our approaches outperform strong baselines while highlighting the
current limitations of temporal adaptation methods in scenarios where access to
unlabeled data is scarce.
| 2,022 |
Computation and Language
|
Multi-Perspective Abstractive Answer Summarization
|
Community Question Answering (CQA) forums such as Stack Overflow and Yahoo!
Answers contain a rich resource of answers to a wide range of questions. Each
question thread can receive a large number of answers with different
perspectives. The goal of multi-perspective answer summarization is to produce
a summary that includes all perspectives of the answer. A major obstacle for
multi-perspective, abstractive answer summarization is the absence of a dataset
to provide supervision for producing such summaries. This work introduces a
novel dataset creation method to automatically create multi-perspective,
bullet-point abstractive summaries from an existing CQA forum. Supervision
provided by this dataset trains models to inherently produce multi-perspective
summaries. Additionally, to train models to output more diverse, faithful
answer summaries while retaining multiple perspectives, we propose a
multi-reward optimization technique coupled with a sentence-relevance
prediction multi-task loss. Our methods demonstrate improved coverage of
perspectives and faithfulness as measured by automatic and human evaluations
compared to a strong baseline.
| 2,021 |
Computation and Language
|
DWUG: A large Resource of Diachronic Word Usage Graphs in Four Languages
|
Word meaning is notoriously difficult to capture, both synchronically and
diachronically. In this paper, we describe the creation of the largest resource
of graded contextualized, diachronic word meaning annotation in four different
languages, based on 100,000 human semantic proximity judgments. We thoroughly
describe the multi-round incremental annotation process, the choice of a
clustering algorithm to group usages into senses, and possible - diachronic and
synchronic - uses for this dataset.
| 2,021 |
Computation and Language
|
Multi-source Neural Topic Modeling in Multi-view Embedding Spaces
|
Though word embeddings and topics are complementary representations, several
past works have only used pretrained word embeddings in (neural) topic modeling
to address data sparsity in short texts or small collections of documents. This
work presents a novel neural topic modeling framework using multi-view
embedding spaces: (1) pretrained topic-embeddings, and (2) pretrained
word-embeddings (context-insensitive from GloVe and context-sensitive from BERT
models) jointly from one or many sources to improve topic quality and better
deal with polysemy. In doing so, we first build respective pools of pretrained
topic (i.e., TopicPool) and word embeddings (i.e., WordPool). We then identify
one or more relevant source domain(s) and transfer knowledge to guide
meaningful learning in the sparse target domain. Within neural topic modeling,
we quantify the quality of topics and document representations via
generalization (perplexity), interpretability (topic coherence) and information
retrieval (IR) using short-text, long-text, small and large document
collections from news and medical domains. By introducing multi-source
multi-view embedding spaces, we show state-of-the-art neural topic
modeling using 6 source (high-resource) and 5 target (low-resource) corpora.
| 2,021 |
Computation and Language
|
Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task
Feasibility in Interactive Visual Environments
|
In recent years, vision-language research has shifted to study tasks which
require more complex reasoning, such as interactive question answering, visual
common sense reasoning, and question-answer plausibility prediction. However,
the datasets used for these problems fail to capture the complexity of real
inputs and multimodal environments, such as ambiguous natural language requests
and diverse digital domains. We introduce Mobile app Tasks with Iterative
Feedback (MoTIF), a dataset with natural language commands for the greatest
number of interactive environments to date. MoTIF is the first to contain
natural language requests for interactive environments that are not
satisfiable, and we obtain follow-up questions on this subset to enable
research on task uncertainty resolution. We perform initial feasibility
classification experiments and only reach an F1 score of 37.3, verifying the
need for richer vision-language representations and improved architectures to
reason about task feasibility.
| 2,021 |
Computation and Language
|
Crossing the Conversational Chasm: A Primer on Natural Language
Processing for Multilingual Task-Oriented Dialogue Systems
|
In task-oriented dialogue (ToD), a user holds a conversation with an
artificial agent to complete a concrete task. Although this technology
represents one of the central objectives of AI and has been the focus of ever
more intense research and development efforts, it is currently limited to a few
narrow domains (e.g., food ordering, ticket booking) and a handful of languages
(e.g., English, Chinese). This work provides an extensive overview of existing
methods and resources in multilingual ToD as an entry point to this exciting
and emerging field. We find that the most critical factor preventing the
creation of truly multilingual ToD systems is the lack of datasets in most
languages for both training and evaluation. In fact, acquiring annotations or
human feedback for each component of modular systems or for data-hungry
end-to-end systems is expensive and tedious. Hence, state-of-the-art approaches
to multilingual ToD mostly rely on (zero- or few-shot) cross-lingual transfer
from resource-rich languages (almost exclusively English), either by means of
machine translation or multilingual representations. These approaches are
currently viable only for typologically similar languages and languages with
parallel / monolingual corpora available. On the other hand, their
effectiveness beyond these boundaries is doubtful or hard to assess due to the
lack of linguistically diverse benchmarks (especially for natural language
generation and end-to-end evaluation). To overcome this limitation, we draw
parallels between components of the ToD pipeline and other NLP tasks, which can
inspire solutions for learning in low-resource scenarios. Finally, we list
additional challenges that multilinguality poses for related areas (such as
speech and human-centred evaluation), and indicate future directions that hold
promise to further expand language coverage and dialogue capabilities of
current ToD systems.
| 2,022 |
Computation and Language
|
GupShup: An Annotated Corpus for Abstractive Summarization of
Open-Domain Code-Switched Conversations
|
Code-switching is the communication phenomenon where speakers switch between
different languages during a conversation. With the widespread adoption of
conversational agents and chat platforms, code-switching has become an integral
part of written conversations in many multi-lingual communities worldwide. This
makes it essential to develop techniques for summarizing and understanding
these conversations. Towards this objective, we introduce abstractive
summarization of Hindi-English code-switched conversations and develop the
first code-switched conversation summarization dataset - GupShup, which
contains over 6,831 conversations in Hindi-English and their corresponding
human-annotated summaries in English and Hindi-English. We present a detailed
account of the entire data collection and annotation processes. We analyze the
dataset using various code-switching statistics. We train state-of-the-art
abstractive summarization models and report their performances using both
automated metrics and human evaluation. Our results show that multi-lingual
mBART and multi-view seq2seq models obtain the best performances on the new
dataset.
| 2,021 |
Computation and Language
|
Sentence Alignment with Parallel Documents Facilitates Biomedical
Machine Translation
|
Objective: Today's neural machine translation (NMT) can achieve near
human-level translation quality and greatly facilitates international
communications, but the lack of parallel corpora poses a key problem to the
development of translation systems for highly specialized domains, such as
biomedicine. This work presents an unsupervised algorithm for deriving parallel
corpora from document-level translations by using sentence alignment and
explores how training materials affect the performance of biomedical NMT
systems. Materials and Methods: Document-level translations are mixed to train
bilingual word embeddings (BWEs) for the evaluation of cross-lingual word
similarity, and sentence distance is defined by combining semantic and
positional similarities of the sentences. The alignment of sentences is
formulated as an extended earth mover's distance problem. A Chinese-English
biomedical parallel corpus is derived with the proposed algorithm using
bilingual articles from UpToDate and translations of PubMed abstracts, which is
then used for the training and evaluation of NMT. Results: On two manually
aligned translation datasets, the proposed algorithm achieved accurate sentence
alignment in the 1-to-1 cases and outperformed competing algorithms in the
many-to-many cases. The NMT model fine-tuned on biomedical data significantly
improved the in-domain translation quality (zh-en: +17.72 BLEU; en-zh: +17.02
BLEU). Both the size of the training data and the combination of different
corpora can significantly affect the model's performance. Conclusion: The
proposed algorithm relaxes the assumption for sentence alignment and
effectively generates accurate translation pairs that facilitate training high
quality biomedical NMT models.
| 2,022 |
Computation and Language
|
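A simplified sketch of the alignment step follows, covering only the 1-to-1 case; the paper's extended earth mover's distance also handles many-to-many links. The distance here mixes semantic and positional terms, with the 0.5 weighting an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def align(src_vecs, tgt_vecs, w=0.5):
    """1-to-1 sentence alignment from semantic + positional distance."""
    semantic = cdist(src_vecs, tgt_vecs, metric="cosine")
    i = np.linspace(0, 1, len(src_vecs))[:, None]  # relative positions
    j = np.linspace(0, 1, len(tgt_vecs))[None, :]
    positional = np.abs(i - j)
    rows, cols = linear_sum_assignment(w * semantic + (1 - w) * positional)
    return list(zip(rows, cols))

pairs = align(np.random.randn(5, 300), np.random.randn(5, 300))
```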
XLEnt: Mining a Large Cross-lingual Entity Dataset with
Lexical-Semantic-Phonetic Word Alignment
|
Cross-lingual named-entity lexica are an important resource to multilingual
NLP tasks such as machine translation and cross-lingual wikification. While
knowledge bases contain a large number of entities in high-resource languages
such as English and French, corresponding entities for lower-resource languages
are often missing. To address this, we propose Lexical-Semantic-Phonetic Align
(LSP-Align), a technique to automatically mine cross-lingual entity lexica from
web-mined data. We demonstrate that LSP-Align outperforms baselines at extracting
cross-lingual entity pairs and mine 164 million entity pairs from 120 different
languages aligned with English. We release these cross-lingual entity pairs
along with the massively multilingual tagged named entity corpus as a resource
to the NLP community.
| 2,021 |
Computation and Language
|
A Stylistic Analysis of Honest Deception: The Case of Seinfeld TV Series
Sitcom
|
Language is a powerful tool if used in the correct manner. It is the major
mode of communication, and the correct choice of words and styles can have a
long-lasting impact. Stylistics is the study of the use of various language
styles in communication to pass a message with a bigger impact or to
communicate indirectly. Stylistic analysis, therefore, is the study of the use
of linguistic styles in texts to determine how a style has been used, what is
communicated, and how it is communicated. Honest deception is the use of a
choice of words to imply something different from the literal meaning. A
person listening to or reading a text in which honest deception has been used,
and who understands it literally, may completely miss the point, because the
question of honesty versus falsehood arises. However, honest deception is used
with the intention of having a lasting impact rather than deceiving the
readers, viewers, or listeners. The major styles used in honest deception are
hyperbole, litotes, irony, and sarcasm. Seinfeld was a situational TV comedy
show aired from 1990 to 1998. The show portrays the daily life of a comedian,
how he views life experiences and converts them into hilarious jokes, and
Jerry's struggle to find the right partner among the many women who come into
his life. Reflecting on honest deception in the Seinfeld sitcom TV series,
this paper investigates how honest deception has been used in the series, why
it has been used, and what is being communicated. The study uses a
recapitulative format to give a better analysis and grouping of the different
styles used in honest deception throughout the series.
| 2,021 |
Computation and Language
|
Who Responded to Whom: The Joint Effects of Latent Topics and Discourse
in Conversation Structure
|
Numerous online conversations are produced on a daily basis, resulting in a
pressing need for conversation understanding. As a basis for structuring a
discussion, we identify the responding relations in the conversation discourse,
which link response utterances to their initiations. To figure out who
responded to whom, we explore how the consistency of topic contents and the
dependency of discourse roles indicate such interactions, whereas most prior
work ignores the effects of latent factors underlying word occurrences. We
propose a model to learn latent topics and discourse in word distributions, and
predict pairwise initiation-response links by exploiting topic consistency and
discourse dependency. Experimental results on both English and Chinese
conversations show that our model significantly outperforms the previous state
of the art, e.g., 79 vs. 73 MRR on Chinese customer service dialogues. We
further probe into our outputs and shed light on how topics and discourse
indicate conversational user interactions.
| 2,021 |
Computation and Language
|
Emotion Classification in a Resource Constrained Language Using
Transformer-based Approach
|
Although research on emotion classification has progressed significantly in
high-resource languages, it is still in its infancy for resource-constrained
languages like Bengali. The unavailability of necessary language processing
tools and the deficiency of benchmark corpora make the emotion classification
task in Bengali more challenging and complicated. This work proposes a
transformer-based technique to classify Bengali text into one of the six
basic emotions: anger, fear, disgust, sadness, joy, and surprise. A Bengali
emotion corpus consisting of 6,243 texts is developed for the classification
task. Experiments were carried out using various machine learning (LR, RF, MNB,
SVM), deep neural network (CNN, BiLSTM, CNN+BiLSTM), and transformer
(Bangla-BERT, m-BERT, XLM-R) based approaches. Experimental outcomes indicate
that XLM-R outperforms all other techniques, achieving the highest weighted $f_1$-score of
$69.73\%$ on the test data. The dataset is publicly available at
https://github.com/omar-sharif03/NAACL-SRW-2021.
| 2,021 |
Computation and Language
|
Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as
a Target for NLP
|
Cryptic crosswords, the dominant crossword variety in the UK, are a promising
target for advancing NLP systems that seek to process semantically complex,
highly compositional language. Cryptic clues read like fluent natural language
but are adversarially composed of two parts: a definition and a wordplay cipher
requiring character-level manipulations. Expert humans use creative
intelligence to solve cryptics, flexibly combining linguistic, world, and
domain knowledge. In this paper, we make two main contributions. First, we
present a dataset of cryptic clues as a challenging new benchmark for NLP
systems that seek to process compositional language in more creative,
human-like ways. After showing that three non-neural approaches and T5, a
state-of-the-art neural language model, do not achieve good performance, we
make our second main contribution: a novel curriculum approach, in which the
model is first fine-tuned on related tasks such as unscrambling words. We also
introduce a challenging data split, examine the meta-linguistic capabilities of
subword-tokenized models, and investigate model systematicity by perturbing the
wordplay part of clues, showing that T5 exhibits behavior partially consistent
with human solving strategies. Although our curricular approach considerably
improves on the T5 baseline, our best-performing model still fails to
generalize to the extent that humans can. Thus, cryptic crosswords remain an
unsolved challenge for NLP systems and a potential source of future innovation.
| 2,021 |
Computation and Language
|
UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans
Detection
|
The real-world impact of polarization and toxicity in the online sphere
marked the end of 2020 and the beginning of this year in a negative way.
SemEval-2021 Task 5 - Toxic Spans Detection is based on a novel annotation of
a subset of the Jigsaw Unintended Bias dataset and is the first language
toxicity detection task dedicated to identifying toxic spans. For
this task, participants had to automatically detect character spans in short
comments that render the message as toxic. Our model considers applying Virtual
Adversarial Training in a semi-supervised setting during the fine-tuning
process of several Transformer-based models (i.e., BERT and RoBERTa), in
combination with Conditional Random Fields. Our approach leads to performance
improvements and more robust models, enabling us to achieve an F1-score of
65.73% in the official submission and an F1-score of 66.13% after further
tuning during post-evaluation.
| 2,021 |
Computation and Language
|
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages
with Adversarial Examples
|
Capturing word meaning in context and distinguishing between correspondences
and variations across languages is key to building successful multilingual and
cross-lingual text representation models. However, existing multilingual
evaluation datasets that evaluate lexical semantics "in-context" have various
limitations. In particular, 1) their language coverage is restricted to
high-resource languages and skewed in favor of only a few language families and
areas, 2) their design makes the task solvable via superficial cues, which
results in artificially inflated (and sometimes super-human) performance of
pretrained encoders on many target languages, limiting their usefulness for
model probing and diagnostics, and 3) they offer little support for cross-lingual
evaluation. In order to address these gaps, we present AM2iCo (Adversarial and
Multilingual Meaning in Context), a wide-coverage cross-lingual and
multilingual evaluation set; it aims to faithfully assess the ability of
state-of-the-art (SotA) representation models to understand the identity of
word meaning in cross-lingual contexts for 14 language pairs. We conduct a
series of experiments in a wide range of setups and demonstrate the challenging
nature of AM2iCo. The results reveal that current SotA pretrained encoders
substantially lag behind human performance, and the largest gaps are observed
for low-resource languages and languages dissimilar to English.
| 2,021 |
Computation and Language
|
Customized determination of stop words using Random Matrix Theory
approach
|
The distances between words, calculated in word units, are studied and
compared with the distributions of Random Matrix Theory (RMT). We find that
the distribution of distances between occurrences of the same word is well
described by the single-parameter Brody distribution. Using the Brody
distribution fit, we find that the distance between given words in a set of
texts can show mixed dynamics, with coexisting regular and chaotic regimes.
Words whose distance distributions are correctly fitted by the Brody
distribution, up to a certain goodness-of-fit threshold, can be identified as
stop words, usually considered the uninformative part of the text. By applying
various threshold values for the goodness of fit, we can extract uninformative
words from the texts under analysis to the desired extent. On this basis we
formulate a fully agnostic, word-based recipe for creating a customized set of
stop words for texts in any language.
| 2,021 |
Computation and Language
|
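A sketch of the Brody-distribution fit at the heart of this recipe is given below. Spacings are normalized to unit mean before fitting; the synthetic data and the goodness-of-fit threshold are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def brody(s, beta):
    """Brody distribution with unit-mean normalization."""
    b = gamma((beta + 2) / (beta + 1)) ** (beta + 1)
    return (beta + 1) * b * s**beta * np.exp(-b * s ** (beta + 1))

spacings = np.random.exponential(1.0, 5000)  # stand-in word distances
spacings /= spacings.mean()                  # normalize to unit mean
hist, edges = np.histogram(spacings, bins=50, density=True)
centers = (edges[:-1] + edges[1:]) / 2

(beta_hat,), _ = curve_fit(brody, centers, hist, p0=[0.5], bounds=(0, 1))
residual = np.mean((brody(centers, beta_hat) - hist) ** 2)
is_stopword = residual < 1e-3  # threshold value is an assumption
```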
Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training
|
Pre-trained multilingual language encoders, such as multilingual BERT and
XLM-R, show great potential for zero-shot cross-lingual transfer. However,
these multilingual encoders do not precisely align words and phrases across
languages. Especially, learning alignments in the multilingual embedding space
usually requires sentence-level or word-level parallel corpora, which are
expensive to obtain for low-resource languages. An alternative is to make
the multilingual encoders more robust; when fine-tuning the encoder on a
downstream task, we train the encoder to tolerate noise in the contextual
embedding spaces such that even if the representations of different languages
are not aligned well, the model can still achieve good performance on zero-shot
cross-lingual transfer. In this work, we propose a learning strategy for
training robust models by drawing connections between adversarial examples and
the failure cases of zero-shot cross-lingual transfer. We adopt two widely used
robust training methods, adversarial training and randomized smoothing, to
train the desired robust model. The experimental results demonstrate that
robust training improves zero-shot cross-lingual transfer on text
classification tasks. The improvement is more significant in the generalized
cross-lingual transfer setting, where the two input sentences belong to
different languages.
| 2,021 |
Computation and Language
|
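The randomized-smoothing idea can be caricatured as injecting Gaussian noise into sentence representations during fine-tuning so the classifier tolerates misaligned multilingual embeddings. The sketch below perturbs precomputed [CLS] vectors; the noise scale is an assumption, and the actual method fine-tunes the encoder end-to-end and also explores adversarial perturbations.

```python
import torch

class NoisyHead(torch.nn.Module):
    """Classifier head trained to be robust to embedding-space noise."""
    def __init__(self, hidden=768, n_labels=3, sigma=0.1):
        super().__init__()
        self.head = torch.nn.Linear(hidden, n_labels)
        self.sigma = sigma

    def forward(self, sent_embeds):  # e.g., mBERT/XLM-R [CLS] vectors
        if self.training:            # perturb only during training
            sent_embeds = sent_embeds + self.sigma * torch.randn_like(sent_embeds)
        return self.head(sent_embeds)

head = NoisyHead()
logits = head(torch.randn(4, 768))  # stand-in sentence embeddings
```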
Competency Problems: On Finding and Removing Artifacts in Language Data
|
Much recent work in NLP has documented dataset artifacts, bias, and spurious
correlations between input features and output labels. However, how to tell
which features have "spurious" instead of legitimate correlations is typically
left unspecified. In this work we argue that for complex language understanding
tasks, all simple feature correlations are spurious, and we formalize this
notion into a class of problems which we call competency problems. For example,
the word "amazing" on its own should not give information about a sentiment
label independent of the context in which it appears, which could include
negation, metaphor, sarcasm, etc. We theoretically analyze the difficulty of
creating data for competency problems when human bias is taken into account,
showing that realistic datasets will increasingly deviate from competency
problems as dataset size increases. This analysis gives us a simple statistical
test for dataset artifacts, which we use to show more subtle biases than were
described in prior work, including demonstrating that models are
inappropriately affected by these less extreme biases. Our theoretical
treatment of this problem also allows us to analyze proposed solutions, such as
making local edits to dataset instances, and to give recommendations for future
data collection and model design efforts that target competency problems.
| 2,021 |
Computation and Language
|
Question Decomposition with Dependency Graphs
|
QDMR is a meaning representation for complex questions, which decomposes
questions into a sequence of atomic steps. While state-of-the-art QDMR parsers
use the common sequence-to-sequence (seq2seq) approach, a QDMR structure
fundamentally describes labeled relations between spans in the input question,
and thus dependency-based approaches seem appropriate for this task. In this
work, we present a QDMR parser that is based on dependency graphs (DGs), where
nodes in the graph are words and edges describe logical relations that
correspond to the different computation steps. We propose (a) a
non-autoregressive graph parser, where all graph edges are computed
simultaneously, and (b) a seq2seq parser that uses gold graphs as auxiliary
supervision. We find that a graph parser leads to a moderate reduction in
performance (0.47 to 0.44), but to a 16x speed-up in inference time due to the
non-autoregressive nature of the parser, and to improved sample complexity
compared to a seq2seq model. Second, a seq2seq model trained with auxiliary
graph supervision has better generalization to new domains compared to a
seq2seq model, and also performs better on questions with long sequences of
computation steps.
| 2,021 |
Computation and Language
|
IITP@COLIEE 2019: Legal Information Retrieval using BM25 and BERT
|
Natural Language Processing (NLP) and Information Retrieval (IR) in the
judicial domain are essential tasks. With the availability of domain-specific
data in electronic form and the aid of different Artificial Intelligence (AI)
technologies, automated language processing has become more feasible, making
it practical for researchers and developers to provide various automated tools
to the legal community to reduce the human burden. The Competition on Legal
Information Extraction/Entailment (COLIEE-2019), run in association with the
International Conference on Artificial Intelligence and Law (ICAIL)-2019, has
come up with a few challenging tasks. The shared task defined four sub-tasks
(i.e., Task 1, Task 2, Task 3 and Task 4) intended to provide automated
systems to the judicial community. This paper presents our working notes on
the experiments carried out as part of our participation in all the sub-tasks
defined in this shared task. We make use of different Information Retrieval
(IR) and deep learning based approaches to tackle these problems. We obtain
encouraging results in all four sub-tasks.
| 2,021 |
Computation and Language
|
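For reference, the BM25 side of such a pipeline is a few lines with the rank_bm25 package; the toy corpus and whitespace tokenization below are placeholders, not the COLIEE data or the authors' exact setup.

```python
from rank_bm25 import BM25Okapi

cases = ["the court held that the contract was void",
         "damages were awarded for breach of contract",
         "the appeal was dismissed with costs"]
bm25 = BM25Okapi([c.split() for c in cases])  # whitespace tokenization

query = "breach of contract damages".split()
scores = bm25.get_scores(query)               # one relevance score per case
ranked = sorted(zip(scores, cases), reverse=True)
```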
DiS-ReX: A Multilingual Dataset for Distantly Supervised Relation
Extraction
|
Distant supervision (DS) is a well established technique for creating
large-scale datasets for relation extraction (RE) without using human
annotations. However, research in DS-RE has been mostly limited to the English
language. Constraining RE to a single language inhibits utilization of large
amounts of data in other languages which could allow extraction of more diverse
facts. Very recently, a dataset for multilingual DS-RE has been released.
However, our analysis reveals that the proposed dataset exhibits unrealistic
characteristics such as 1) lack of sentences that do not express any relation,
and 2) all sentences for a given entity pair expressing exactly one relation.
We show that these characteristics lead to a gross overestimation of the model
performance. In response, we propose a new dataset, DiS-ReX, which alleviates
these issues. Our dataset has more than 1.5 million sentences, spanning
4 languages with 36 relation classes + 1 no relation (NA) class. We also modify
the widely used bag attention models by encoding sentences using mBERT and
provide the first benchmark results on multilingual DS-RE. Unlike the competing
dataset, we show that our dataset is challenging and leaves enough room for
future research to take place in this field.
| 2,021 |
Computation and Language
|
Learning from Noisy Labels for Entity-Centric Information Extraction
|
Recent information extraction approaches have relied on training deep neural
models. However, such models can easily overfit noisy labels and suffer from
performance degradation. While it is very costly to filter noisy labels in
large learning resources, recent studies show that such labels take more
training steps to be memorized and are more frequently forgotten than clean
labels, therefore are identifiable in training. Motivated by such properties,
we propose a simple co-regularization framework for entity-centric information
extraction, which consists of several neural models with identical structures
but different parameter initialization. These models are jointly optimized with
the task-specific losses and are regularized to generate similar predictions
based on an agreement loss, which prevents overfitting on noisy labels.
Extensive experiments on two widely used but noisy benchmarks for information
extraction, TACRED and CoNLL03, demonstrate the effectiveness of our framework.
We release our code to the community for future research.
| 2,022 |
Computation and Language
|
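The co-regularization objective can be written compactly: each model keeps its own task loss, and a symmetric agreement term penalizes divergent predictions, which noisy labels tend to produce. The weighting alpha and the symmetric-KL form below are assumptions.

```python
import torch
import torch.nn.functional as F

def co_reg_loss(logits_a, logits_b, labels, alpha=1.0):
    """Task losses for two peer models plus an agreement penalty."""
    task = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    log_pa = F.log_softmax(logits_a, dim=-1)
    log_pb = F.log_softmax(logits_b, dim=-1)
    agree = (F.kl_div(log_pa, log_pb.exp(), reduction="batchmean")
             + F.kl_div(log_pb, log_pa.exp(), reduction="batchmean"))
    return task + alpha * agree

la = torch.randn(8, 5, requires_grad=True)  # peer model A logits
lb = torch.randn(8, 5, requires_grad=True)  # peer model B logits
loss = co_reg_loss(la, lb, torch.randint(0, 5, (8,)))
loss.backward()
```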
Monotonicity Marking from Universal Dependency Trees
|
Dependency parsing is a tool widely used in the field of Natural language
processing and computational linguistics. However, there is hardly any work
that connects dependency parsing to monotonicity, which is an essential part of
logic and linguistic semantics. In this paper, we present a system that
automatically annotates monotonicity information based on Universal Dependency
parse trees. Our system utilizes surface-level monotonicity facts about
quantifiers, lexical items, and token-level polarity information. We compared
our system's performance with existing systems in the literature, including
NatLog and ccg2mono, on a small evaluation dataset. Results show that our
system outperforms NatLog and ccg2mono.
| 2,021 |
Computation and Language
|
Explaining Answers with Entailment Trees
|
Our goal, in the context of open-domain textual question-answering (QA), is
to explain answers by showing the line of reasoning from what is known to the
answer, rather than simply showing a fragment of textual evidence (a
"rationale"). If this could be done, new opportunities for understanding and
debugging the system's reasoning become possible. Our approach is to generate
explanations in the form of entailment trees, namely a tree of multipremise
entailment steps from facts that are known, through intermediate conclusions,
to the hypothesis of interest (namely the question + answer). To train a model
with this skill, we created ENTAILMENTBANK, the first dataset to contain
multistep entailment trees. Given a hypothesis (question + answer), we define
three increasingly difficult explanation tasks: generate a valid entailment
tree given (a) all relevant sentences (b) all relevant and some irrelevant
sentences, or (c) a corpus. We show that a strong language model can partially
solve these tasks, in particular when the relevant sentences are included in
the input (e.g., 35% of trees for (a) are perfect), and with indications of
generalization to other domains. This work is significant as it provides a new
type of dataset (multistep entailments) and baselines, offering a new avenue
for the community to generate richer, more systematic explanations.
| 2,022 |
Computation and Language
|
Characterizing Idioms: Conventionality and Contingency
|
Idioms are unlike most phrases in two important ways. First, the words in an
idiom have non-canonical meanings. Second, the non-canonical meanings of words
in an idiom are contingent on the presence of other words in the idiom.
Linguistic theories differ on whether these properties depend on one another,
as well as whether special theoretical machinery is needed to accommodate
idioms. We define two measures that correspond to the properties above, and we
implement them using BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019).
We show that idioms fall at the expected intersection of the two dimensions,
but that the dimensions themselves are not correlated. Our results suggest that
special machinery to handle idioms may not be warranted.
| 2,022 |
Computation and Language
|
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language
Models
|
Numerous works have analyzed biases in vision and pre-trained language models
individually - however, less attention has been paid to how these biases
interact in multimodal settings. This work extends text-based bias analysis
methods to investigate multimodal language models, and analyzes intra- and
inter-modality associations and biases learned by these models. Specifically,
we demonstrate that VL-BERT (Su et al., 2020) exhibits gender biases, often
preferring to reinforce a stereotype over faithfully describing the visual
scene. We demonstrate these findings on a controlled case-study and extend them
for a larger set of stereotypically gendered entities.
| 2,022 |
Computation and Language
|
SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal
Conversations
|
Next generation task-oriented dialog systems need to understand
conversational contexts with their perceived surroundings, to effectively help
users in the real-world multimodal environment. Existing task-oriented dialog
datasets aimed towards virtual assistance fall short and do not situate the
dialog in the user's multimodal context. To overcome this, we present a new dataset
for Situated and Interactive Multimodal Conversations, SIMMC 2.0, which
includes 11K task-oriented user<->assistant dialogs (117K utterances) in the
shopping domain, grounded in immersive and photo-realistic scenes.
The dialogs are collected using a two-phase pipeline: (1) A novel multimodal
dialog simulator generates simulated dialog flows, with an emphasis on
diversity and richness of interactions, (2) Manual paraphrasing of the
generated utterances to collect diverse referring expressions. We provide an
in-depth analysis of the collected dataset, and describe in detail the four
main benchmark tasks we propose. Our baseline model, powered by the
state-of-the-art language model, shows promising results, and highlights new
challenges and directions for the community to study.
| 2,021 |
Computation and Language
|
Generating Related Work
|
Communicating new research ideas involves highlighting similarities and
differences with past work. Authors write fluent, often long sections to survey
how a new paper differs from related work. In this work we model
generating related work sections while being cognisant of the motivation behind
citing papers. Our content planning model generates a tree of cited papers
before a surface realization model lexicalizes this skeleton. Our model
outperforms several strong state-of-the-art summarization and multi-document
summarization models on generating related work on an ACL Anthology (AA) based
dataset which we contribute.
| 2,021 |
Computation and Language
|
When Does Pretraining Help? Assessing Self-Supervised Learning for Law
and the CaseHOLD Dataset
|
While self-supervised learning has made rapid advances in natural language
processing, it remains unclear when researchers should engage in
resource-intensive domain-specific pretraining (domain pretraining). The law,
puzzlingly, has yielded few documented instances of substantial gains to domain
pretraining in spite of the fact that legal language is widely seen to be
unique. We hypothesize that these existing results stem from the fact that
existing legal NLP tasks are too easy and fail to meet conditions for when
domain pretraining can help. To address this, we first present CaseHOLD (Case
Holdings On Legal Decisions), a new dataset comprising over 53,000 multiple
choice questions to identify the relevant holding of a cited case. This dataset
presents a fundamental task to lawyers and is both legally meaningful and
difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baseline). Second,
we assess performance gains on CaseHOLD and existing legal NLP datasets. While
a Transformer architecture (BERT) pretrained on a general corpus (Google Books
and Wikipedia) improves performance, domain pretraining (using a corpus of
approximately 3.5M decisions across all courts in the U.S. that is larger than
BERT's) with a custom legal vocabulary exhibits the most substantial
performance gains with CaseHOLD (gain of 7.2% on F1, representing a 12%
improvement on BERT) and consistent performance gains across two other legal
tasks. Third, we show that domain pretraining may be warranted when the task
exhibits sufficient similarity to the pretraining corpus: the level of
performance increase in three legal tasks was directly tied to the domain
specificity of the task. Our findings inform when researchers should engage
resource-intensive pretraining and show that Transformer-based architectures,
too, learn embeddings suggestive of distinct legal language.
| 2,021 |
Computation and Language
|
"Average" Approximates "First Principal Component"? An Empirical
Analysis on Representations from Neural Language Models
|
Contextualized representations based on neural language models have furthered
the state of the art in various NLP tasks. Despite its great success, the
nature of such representations remains a mystery. In this paper, we present an
empirical property of these representations -- "average" approximates "first
principal component". Specifically, experiments show that the average of these
representations shares almost the same direction as the first principal
component of the matrix whose columns are these representations. We believe
this explains why the average representation is always a simple yet strong
baseline. Our further examinations show that this property also holds in more
challenging scenarios, for example, when the representations are from a model
right after its random initialization. Therefore, we conjecture that this
property is intrinsic to the distribution of representations and not
necessarily related to the input structure. We realize that these
representations empirically follow a normal distribution for each dimension,
and by assuming this is true, we demonstrate that the empirical property can be
in fact derived mathematically.
| 2,022 |
Computation and Language
|
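The claimed property is easy to check numerically; in the sketch below, mean-shifted Gaussian vectors stand in for model representations, and the cosine between their average and the first principal direction of the uncentered matrix comes out close to 1.

```python
import numpy as np

reps = np.random.randn(1000, 768) + 2.0  # rows = stand-in representations
avg = reps.mean(axis=0)

# First principal component of the uncentered representation matrix.
_, _, vt = np.linalg.svd(reps, full_matrices=False)
pc1 = vt[0]  # unit-norm direction

cos = abs(avg @ pc1) / np.linalg.norm(avg)
print(f"|cos(avg, PC1)| = {cos:.4f}")  # close to 1 for such data
```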
Distributed NLI: Learning to Predict Human Opinion Distributions for
Language Reasoning
|
We introduce distributed NLI, a new NLU task with a goal to predict the
distribution of human judgements for natural language inference. We show that
by applying additional distribution estimation methods, namely, Monte Carlo
(MC) Dropout, Deep Ensemble, Re-Calibration, and Distribution Distillation,
models can capture human judgement distribution more effectively than the
softmax baseline. We show that MC Dropout is able to achieve decent performance
without any distribution annotations while Re-Calibration can give further
improvements with extra distribution annotations, suggesting the value of
multiple annotations for one example in modeling the distribution of human
judgements. Despite these improvements, the best results are still far below
the estimated human upper-bound, indicating that predicting the distribution of
human judgements is still an open, challenging problem with large room for
improvement. We showcase the common errors for MC Dropout and Re-Calibration.
Finally, we give guidelines on the usage of these methods with different levels
of data availability and encourage future work on modeling the human opinion
distribution for language reasoning. Our code and data are publicly available
at https://github.com/easonnie/ChaosNLI
| 2,022 |
Computation and Language
|
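Of the estimation methods above, MC Dropout is the simplest to sketch: keep dropout active at test time and average softmax outputs over stochastic forward passes. The toy network and sample count are placeholders.

```python
import torch

def mc_dropout_distribution(model, inputs, n_samples=32):
    """Estimate a label distribution by averaging stochastic passes."""
    model.train()  # keeps dropout layers active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(inputs), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0)

net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                          torch.nn.Dropout(0.3), torch.nn.Linear(32, 3))
dist = mc_dropout_distribution(net, torch.randn(4, 10))  # rows sum to 1
```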
From Fully Trained to Fully Random Embeddings: Improving Neural Machine
Translation with Compact Word Embedding Tables
|
Embedding matrices are key components in neural natural language processing
(NLP) models, responsible for providing numerical representations of input
tokens.\footnote{In this paper words and subwords are referred to as
\textit{tokens} and the term \textit{embedding} only refers to embeddings of
inputs.} In this paper, we analyze the impact and utility of such matrices in
the context of neural machine translation (NMT). We show that stripping
syntactic and semantic information from word embeddings and running NMT systems
with random embeddings is not as damaging as it initially sounds. We also show
how incorporating only a limited amount of task-specific knowledge from
fully-trained embeddings can boost the performance of NMT systems. Our findings
demonstrate that in exchange for negligible deterioration in performance, any
NMT model can be run with partially random embeddings. Working with such
structures means a minimal memory requirement as there is no longer need to
store large embedding tables, which is a significant gain in industrial and
on-device settings. We evaluated our embeddings in translating {English} into
{German} and {French} and achieved a $5.3$x compression rate. Despite having a
considerably smaller architecture, our models in some cases are even able to
outperform state-of-the-art baselines.
| 2,022 |
Computation and Language
|
Improving Question Answering Model Robustness with Synthetic Adversarial
Data Generation
|
Despite recent progress, state-of-the-art question answering models remain
vulnerable to a variety of adversarial attacks. While dynamic adversarial data
collection, in which a human annotator tries to write examples that fool a
model-in-the-loop, can improve model robustness, this process is expensive,
which limits the scale of the collected data. In this work, we are the first to
use synthetic adversarial data generation to make question answering models
more robust to human adversaries. We develop a data generation pipeline that
selects source passages, identifies candidate answers, generates questions,
then finally filters or re-labels them to improve quality. Using this approach,
we amplify a smaller human-written adversarial dataset to a much larger set of
synthetic question-answer pairs. By incorporating our synthetic data, we
improve the state of the art on the AdversarialQA dataset by 3.7 F1 and improve
model generalisation on nine of the twelve MRQA datasets. We further conduct a
novel human-in-the-loop evaluation to show that our models are considerably
more robust to new human-written adversarial examples: crowdworkers can fool
our model only 8.8% of the time on average, compared to 17.6% for a model
trained without synthetic data.
| 2,021 |
Computation and Language
|
Guilt by Association: Emotion Intensities in Lexical Representations
|
What do word vector representations reveal about the emotions associated with
words? In this study, we consider the task of estimating word-level emotion
intensity scores for specific emotions, exploring unsupervised, supervised, and
finally a self-supervised method of extracting emotional associations from word
vector representations. Overall, we find that word vectors carry substantial
potential for inducing fine-grained emotion intensity scores, showing a far
higher correlation with human ground truth ratings than achieved by
state-of-the-art emotion lexicons.
| 2,021 |
Computation and Language
|
Rethinking Network Pruning -- under the Pre-train and Fine-tune Paradigm
|
Transformer-based pre-trained language models have significantly improved the
performance of various natural language processing (NLP) tasks in recent
years. While effective and prevalent, these models are usually prohibitively
large for resource-limited deployment scenarios. A thread of research has thus
been working on applying network pruning techniques under the
pretrain-then-finetune paradigm widely adopted in NLP. However, the existing
pruning results on benchmark transformers, such as BERT, are not as remarkable
as the pruning results in the literature on convolutional neural networks
(CNNs). In particular, common wisdom in CNN pruning states that sparse pruning
compresses a model more than reducing the number of channels and layers (Elsen
et al., 2020; Zhu and Gupta, 2017), while existing work on sparse pruning of
BERT yields inferior results compared to small-dense counterparts such as
TinyBERT (Jiao et al., 2020). In this work, we aim to fill this gap by studying
how knowledge is transferred and lost during the pre-train, fine-tune, and
pruning process, and propose a knowledge-aware sparse pruning process that
achieves significantly superior results to the existing literature. We show for
the first time that sparse pruning compresses a BERT model significantly more
than reducing its number of channels and layers. Experiments on multiple
datasets of the GLUE benchmark show that our method outperforms the leading
competitors with a 20-times weight/FLOPs compression and negligible loss in
prediction accuracy.
| 2,022 |
Computation and Language
|
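For orientation, plain magnitude (L1 unstructured) pruning in PyTorch is shown below; the paper's knowledge-aware process additionally decides which weights to keep based on how knowledge flows through the pre-train, fine-tune, and prune stages, which this sketch does not capture.

```python
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(768, 768)
prune.l1_unstructured(layer, name="weight", amount=0.8)  # zero the 80% smallest

sparsity = (layer.weight == 0).float().mean().item()
prune.remove(layer, "weight")  # make the sparse weights permanent
print(f"sparsity: {sparsity:.2f}")
```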
Linguistic Dependencies and Statistical Dependence
|
Are pairs of words that tend to occur together also likely to stand in a
linguistic dependency? This empirical question is motivated by a long history
of literature in cognitive science, psycholinguistics, and NLP. In this work we
contribute an extensive analysis of the relationship between linguistic
dependencies and statistical dependence between words. Improving on previous
work, we introduce the use of large pretrained language models to compute
contextualized estimates of the pointwise mutual information between words
(CPMI). For multiple models and languages, we extract dependency trees which
maximize CPMI, and compare to gold standard linguistic dependencies. Overall,
we find that CPMI dependencies achieve an unlabelled undirected attachment
score of at most $\approx 0.5$. While far above chance, and consistently above
a non-contextualized PMI baseline, this score is generally comparable to a
simple baseline formed by connecting adjacent words. We analyze which kinds of
linguistic dependencies are best captured in CPMI dependencies, and also find
marked differences between the estimates of the large pretrained language
models, illustrating how their different training schemes affect the type of
dependencies they capture.
| 2,021 |
Computation and Language
|
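The tree-extraction step is sketched below: given pairwise (C)PMI scores for one sentence, take the maximum spanning tree as the induced unlabelled, undirected structure. The random score matrix stands in for real CPMI estimates, whose computation from a masked language model is the involved part omitted here.

```python
import numpy as np
import networkx as nx

words = ["the", "dog", "chased", "a", "cat"]
pmi = np.random.rand(len(words), len(words))  # stand-in CPMI scores
pmi = (pmi + pmi.T) / 2                       # symmetrize

g = nx.Graph()
for i in range(len(words)):
    for j in range(i + 1, len(words)):
        g.add_edge(i, j, weight=pmi[i, j])

tree = nx.maximum_spanning_tree(g)  # undirected CPMI dependency tree
edges = [(words[u], words[v]) for u, v in tree.edges]
```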
The Power of Scale for Parameter-Efficient Prompt Tuning
|
In this work, we explore "prompt tuning", a simple yet effective mechanism
for learning "soft prompts" to condition frozen language models to perform
specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft
prompts are learned through backpropagation and can be tuned to incorporate
signal from any number of labeled examples. Our end-to-end learned approach
outperforms GPT-3's "few-shot" learning by a large margin. More remarkably,
through ablations on model size using T5, we show that prompt tuning becomes
more competitive with scale: as models exceed billions of parameters, our
method "closes the gap" and matches the strong performance of model tuning
(where all model weights are tuned). This finding is especially relevant in
that large models are costly to share and serve, and the ability to reuse one
frozen model for multiple downstream tasks can ease this burden. Our method can
be seen as a simplification of the recently proposed "prefix tuning" of Li and
Liang (2021), and we provide a comparison to this and other similar approaches.
Finally, we show that conditioning a frozen model with soft prompts confers
benefits in robustness to domain transfer, as compared to full model tuning.
| 2,021 |
Computation and Language
|
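Mechanically, prompt tuning amounts to prepending a small matrix of trainable embeddings to the frozen model's input embeddings and backpropagating only into that matrix. The sketch below assumes a generic model that consumes input embeddings directly; the dimensions and initialization scale are placeholders.

```python
import torch

class SoftPrompt(torch.nn.Module):
    def __init__(self, model, embed_dim=768, prompt_len=20):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False  # freeze the language model
        self.prompt = torch.nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds):  # (batch, seq, dim) token embeddings
        prefix = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.model(torch.cat([prefix, input_embeds], dim=1))

wrapped = SoftPrompt(torch.nn.Identity())  # stand-in "frozen model"
out = wrapped(torch.randn(2, 16, 768))     # (2, 36, 768): 20 prompt + 16 tokens
```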
MT6: Multilingual Pretrained Text-to-Text Transformer with Translation
Pairs
|
Multilingual T5 (mT5) pretrains a sequence-to-sequence model on massive
monolingual texts, which has shown promising results on many cross-lingual
tasks. In this paper, we improve multilingual text-to-text transfer Transformer
with translation pairs (mT6). Specifically, we explore three cross-lingual
text-to-text pre-training tasks, namely, machine translation, translation pair
span corruption, and translation span corruption. In addition, we propose a
partially non-autoregressive objective for text-to-text pre-training. We
evaluate the methods on eight multilingual benchmark datasets, including
sentence classification, named entity recognition, question answering, and
abstractive summarization. Experimental results show that the proposed mT6
improves cross-lingual transferability over mT5.
| 2,021 |
Computation and Language
|
Knowledge Neurons in Pretrained Transformers
|
Large-scale pretrained language models are surprisingly good at recalling
factual knowledge presented in the training corpus. In this paper, we present
preliminary studies on how factual knowledge is stored in pretrained
Transformers by introducing the concept of knowledge neurons. Specifically, we
examine the fill-in-the-blank cloze task for BERT. Given a relational fact, we
propose a knowledge attribution method to identify the neurons that express the
fact. We find that the activation of such knowledge neurons is positively
correlated to the expression of their corresponding facts. In our case studies,
we attempt to leverage knowledge neurons to edit (e.g., update or erase)
specific factual knowledge without fine-tuning. Our results shed light on
understanding the storage of knowledge within pretrained Transformers. The code
is available at https://github.com/Hunter-DDM/knowledge-neurons.
| 2,022 |
Computation and Language
|
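A simplified sketch of the attribution step: score each intermediate FFN neuron by its activation times the gradient of the correct-answer logit. The paper's attribution is an integrated-gradients-style method; the plain gradient-times-activation form below is a cheaper stand-in, and the module paths assume a Hugging Face BERT.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()

prompt = "The capital of France is [MASK]."
answer_id = tok.convert_tokens_to_ids("Paris")
enc = tok(prompt, return_tensors="pt")
mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()

acts = {}
def save_activation(layer_idx):
    def hook(module, inputs, output):
        output.retain_grad()                 # keep gradients for attribution
        acts[layer_idx] = output
    return hook

for i, layer in enumerate(model.bert.encoder.layer):
    layer.intermediate.register_forward_hook(save_activation(i))

logits = model(**enc).logits
logits[0, mask_pos, answer_id].backward()    # gradient of the "Paris" logit

for i, a in sorted(acts.items()):
    score = (a * a.grad)[0, mask_pos]        # per-neuron attribution at [MASK]
    print(f"layer {i:2d}: top neurons {score.topk(3).indices.tolist()}")
```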
A Simple and Effective Positional Encoding for Transformers
|
Transformer models are permutation equivariant. To supply the order and type
information of the input tokens, position and segment embeddings are usually
added to the input. Recent works proposed variations of positional encodings
with relative position encodings achieving better performance. Our analysis
shows that the gain actually comes from moving positional information from the
input to the attention layer. Motivated by this, we introduce Decoupled
Positional Attention for Transformers (DIET), a simple yet effective mechanism
to encode position and segment information into the Transformer models. The
proposed method has faster training and inference time, while achieving
competitive performance on GLUE, XTREME and WMT benchmarks. We further
generalize our method to long-range transformers and show performance gain.
| 2,021 |
Computation and Language
|
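The core claim above, that the gain comes from injecting position in the attention layer rather than the input, can be illustrated with a decoupled score: a content-to-content term plus a content-to-position term. This is a generic sketch of that mechanism, not the exact DIET parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledPositionalAttention(nn.Module):
    """Self-attention whose scores add a content-to-position term, so no
    position embeddings are needed at the input."""
    def __init__(self, d_model, n_heads, max_len=512):
        super().__init__()
        self.h, self.dk = n_heads, d_model // n_heads
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.pos = nn.Embedding(max_len, d_model)     # learned position keys

    def forward(self, x):                             # x: (B, T, d_model)
        B, T, _ = x.shape
        split = lambda t: t.view(B, T, self.h, self.dk).transpose(1, 2)
        q, k, v = split(self.q(x)), split(self.k(x)), split(self.v(x))
        scores = q @ k.transpose(-1, -2)              # content-to-content
        p = self.pos(torch.arange(T, device=x.device))
        p = p.view(T, self.h, self.dk).permute(1, 0, 2)   # (h, T, dk)
        scores = scores + q @ p.transpose(-1, -2)     # content-to-position
        attn = F.softmax(scores / self.dk ** 0.5, dim=-1)
        return (attn @ v).transpose(1, 2).reshape(B, T, self.h * self.dk)
```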
Intent Features for Rich Natural Language Understanding
|
Complex natural language understanding modules in dialog systems have a
richer understanding of user utterances, and thus are critical in providing a
better user experience. However, these models are often created from scratch,
for specific clients and use cases, and require the annotation of large
datasets. This encourages the sharing of annotated data across multiple
clients. To facilitate this, we introduce the idea of intent features: domain-
and topic-agnostic properties of intents that can be learned from syntactic
cues alone, and hence can be shared. We introduce a new neural network
architecture, the Global-Local model, that shows significant improvement over
strong baselines for identifying these features in a deployed, multi-intent
natural language understanding module, and, more generally, in a classification
setting where a part of an utterance has to be classified utilizing the whole
context.
| 2,021 |
Computation and Language
|
A Token-level Reference-free Hallucination Detection Benchmark for
Free-form Text Generation
|
Large pretrained generative models like GPT-3 often suffer from hallucinating
non-existent or incorrect content, which undermines their potential merits in
real applications. Existing work usually attempts to detect these
hallucinations based on a corresponding oracle reference at a sentence or
document level. However, ground-truth references may not be readily available
for many free-form text generation applications, and sentence- or
document-level detection may fail to provide the fine-grained signals that
would prevent fallacious content in real time. As a first step to addressing
these issues, we propose a novel token-level, reference-free hallucination
detection task and an associated annotated dataset named HaDes (HAllucination
DEtection dataSet). To create this dataset, we first perturb a large number of
text segments extracted from English language Wikipedia, and then verify these
with crowd-sourced annotations. To mitigate label imbalance during annotation,
we utilize an iterative model-in-loop strategy. We conduct comprehensive data
analyses and create multiple baseline models.
| 2,022 |
Computation and Language
|
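The task format above reduces to standard token classification: each token of the generated text gets a binary hallucinated/supported label. A skeletal sketch follows; the model choice and label semantics are assumptions, and a real detector would first be finetuned on HaDes.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
detector = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=2)   # 0 = supported, 1 = hallucinated (assumed)

text = "Mount Everest is located in Canada."
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    labels = detector(**enc).logits.argmax(-1)[0]   # one label per token

for token, label in zip(tok.convert_ids_to_tokens(enc.input_ids[0]),
                        labels.tolist()):
    print(f"{token:>12} -> {label}")
```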
Simple and Efficient ways to Improve REALM
|
Dense retrieval has been shown to be effective for retrieving relevant
documents for Open Domain QA, surpassing popular sparse retrieval methods like
BM25. REALM (Guu et al., 2020) is an end-to-end dense retrieval system that
relies on MLM-based pretraining for improved downstream QA efficiency across
multiple datasets. We study the finetuning of REALM on various QA tasks and
explore the limits of various hyperparameter and supervision choices. We find
that REALM was significantly undertrained during finetuning, and that simple
improvements to the training, supervision, and inference setups can
significantly benefit QA results and exceed the performance of models
published after it. Our best model, REALM++, incorporates all of the
best-working findings and achieves significant QA accuracy improvements over
baselines (~5.5% absolute accuracy) without any model design changes.
Additionally, REALM++ matches the performance of large Open Domain QA models
which have 3x more parameters, demonstrating the efficiency of the setup.
| 2,021 |
Computation and Language
|
PaCo: Preconditions Attributed to Commonsense Knowledge
|
Humans can seamlessly reason with circumstantial preconditions of commonsense
knowledge. We understand that a glass is used for drinking water, unless the
glass is broken or the water is toxic. Despite state-of-the-art (SOTA) language
models' (LMs) impressive performance on inferring commonsense knowledge, it is
unclear whether they understand the circumstantial preconditions. To address
this gap, we propose a novel challenge of reasoning with circumstantial
preconditions. We collect a dataset, called PaCo, consisting of 12.4 thousand
preconditions of commonsense statements expressed in natural language. Based on
this dataset, we create three canonical evaluation tasks and use them to
examine the capability of existing LMs to understand situational preconditions.
Our results reveal a 10-30% gap between machine and human performance on our
tasks, which shows that reasoning with preconditions is an open challenge.
| 2,023 |
Computation and Language
|
Embedding-Enhanced GIZA++: Improving Alignment in Low- and High-Resource
Scenarios Using Embedding Space Geometry
|
A popular natural language processing task decades ago, word alignment has
been dominated until recently by GIZA++, a statistical method based on the
30-year-old IBM models. New methods that outperform GIZA++ primarily rely on
large machine translation models, massively multilingual language models, or
supervision from GIZA++ alignments themselves. We introduce Embedding-Enhanced
GIZA++, and outperform GIZA++ without any of the aforementioned factors. Taking
advantage of the monolingual embedding spaces of the source and target
languages only, we exceed GIZA++'s performance in every tested scenario for
three language pairs. In the lowest-resource setting, we outperform GIZA++ by 8.5, 10.9, and
12 AER for Ro-En, De-En, and En-Fr, respectively. We release our code at
https://github.com/kellymarchisio/ee-giza.
| 2,022 |
Computation and Language
|
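As a toy illustration of the geometric signal involved: score source-target word pairs by cosine similarity of embeddings mapped into a shared space, and link each source word to its best-scoring target above a threshold. The actual method integrates such similarities into GIZA++'s statistical model; the greedy linking and threshold below are assumptions, and the vectors are toy stand-ins.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def greedy_align(src_vecs, tgt_vecs, threshold=0.4):
    """src_vecs/tgt_vecs: word vectors already mapped into a shared space.
    Returns (src_index, tgt_index) links above the similarity threshold."""
    links = []
    for i, s in enumerate(src_vecs):
        sims = [cosine(s, t) for t in tgt_vecs]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            links.append((i, j))
    return links

# toy 3-dimensional "embeddings" for a two-word sentence pair
src = [np.array([1.0, 0.1, 0.0]), np.array([0.0, 1.0, 0.2])]
tgt = [np.array([0.9, 0.2, 0.0]), np.array([0.1, 1.0, 0.1])]
print(greedy_align(src, tgt))   # -> [(0, 0), (1, 1)]
```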
News Meets Microblog: Hashtag Annotation via Retriever-Generator
|
Hashtag annotation for microblog posts has been recently formulated as a
sequence generation problem to handle emerging hashtags that are unseen in the
training set. The state-of-the-art method leverages conversations initiated by
posts to enrich contextual information for the short posts. However, it is
unrealistic to assume the existence of conversations before the hashtag
annotation itself. Therefore, we propose to leverage news articles published
before the microblog post to generate hashtags following a Retriever-Generator
framework. Extensive experiments on English Twitter datasets demonstrate
superior performance and significant advantages of leveraging news articles to
generate hashtags.
| 2,021 |
Computation and Language
|
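A schematic of the Retriever-Generator framing described above: retrieve the news article most relevant to a post (BM25 here as a stand-in retriever), then condition a seq2seq generator on the post plus the article. Model choices and the input format are assumptions, and the generator would need finetuning on post-hashtag pairs before its output is meaningful.

```python
from rank_bm25 import BM25Okapi
from transformers import BartTokenizer, BartForConditionalGeneration

# 1) Retriever: find the news article most relevant to the post
news = ["nasa launches new mars rover after years of preparation",
        "stock markets rally after central bank announcement"]
bm25 = BM25Okapi([doc.split() for doc in news])
post = "so excited watching the rover land on mars tonight"
best = news[int(bm25.get_scores(post.split()).argmax())]

# 2) Generator: condition a seq2seq model on post + retrieved article
tok = BartTokenizer.from_pretrained("facebook/bart-base")
gen = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
ids = tok(post + " </s> " + best, return_tensors="pt").input_ids
out = gen.generate(ids, max_length=16, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))
```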