Titles | Abstracts | Years | Categories
---|---|---|---|
An Alignment-Agnostic Model for Chinese Text Error Correction
|
This paper investigates how to correct Chinese text errors involving mistaken,
missing, and redundant characters, which are common among Chinese native
speakers. Most existing models based on the detect-correct framework can
correct mistaken-character errors, but they cannot deal with missing or
redundant characters. The reason is that the lengths of sentences before and
after correction are not the same, leading to an inconsistency between model
inputs and outputs. Although Seq2Seq-based and sequence tagging methods provide
solutions to this problem and have achieved relatively good results in English
contexts, they do not perform well in Chinese contexts according to our
experimental results. In our work, we propose a novel detect-correct framework
that is alignment-agnostic, meaning that it can handle both aligned and
non-aligned text, and it can also serve as a cold-start model when no
annotated data are provided. Experimental results on three datasets
demonstrate that our method is effective and achieves the best performance
among existing published models.
| 2021 |
Computation and Language
|
Ultra-High Dimensional Sparse Representations with Binarization for
Efficient Text Retrieval
|
The semantic matching capabilities of neural information retrieval can
ameliorate synonymy and polysemy problems of symbolic approaches. However,
neural models' dense representations are more suitable for re-ranking, due to
their inefficiency. Sparse representations, either in symbolic or latent form,
are more efficient with an inverted index. Taking the merits of the sparse and
dense representations, we propose an ultra-high dimensional (UHD)
representation scheme equipped with directly controllable sparsity. UHD's large
capacity and minimal noise and interference among the dimensions allow for
binarized representations, which are highly efficient for storage and search.
Also proposed is a bucketing method, where the embeddings from multiple layers
of BERT are selected/merged to represent diverse linguistic aspects. We test
our models with MS MARCO and TREC CAR, showing that our models outperform
other sparse models.
| 2021 |
Computation and Language
|
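The abstract above hinges on two ideas: binarizing an ultra-high dimensional sparse representation and searching it with an inverted index. Below is a minimal Python sketch of those two steps; the dimensionality, the random stand-in encodings, and scoring by the count of overlapping active dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

DIM = 100_000  # ultra-high dimensional space (illustrative size)

def binarize(sparse_vec: np.ndarray) -> set:
    """Keep only the indices of non-zero (active) dimensions."""
    return set(np.flatnonzero(sparse_vec > 0).tolist())

def build_inverted_index(doc_vectors: dict) -> dict:
    """Map each active dimension to the documents that activate it."""
    index = defaultdict(set)
    for doc_id, vec in doc_vectors.items():
        for dim in binarize(vec):
            index[dim].add(doc_id)
    return index

def search(query_vec: np.ndarray, index: dict) -> list:
    """Score documents by the number of overlapping active dimensions."""
    scores = defaultdict(int)
    for dim in binarize(query_vec):
        for doc_id in index.get(dim, ()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example with random sparse activations standing in for UHD encodings.
rng = np.random.default_rng(0)
docs = {f"doc{i}": (rng.random(DIM) > 0.9995).astype(float) for i in range(3)}
query = (rng.random(DIM) > 0.9995).astype(float)
print(search(query, build_inverted_index(docs)))
```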
Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese
Pre-trained Language Models
|
Chinese pre-trained language models usually process text as a sequence of
characters while ignoring coarser granularities, e.g., words. In this work,
we propose a novel pre-training paradigm for Chinese -- Lattice-BERT -- which
explicitly incorporates word representations along with characters and can
thus model a sentence in a multi-granularity manner. Specifically, we construct a
lattice graph from the characters and words in a sentence and feed all these
text units into transformers. We design a lattice position attention mechanism
to exploit the lattice structures in self-attention layers. We further propose
a masked segment prediction task to push the model to learn from rich but
redundant information inherent in lattices, while avoiding learning unexpected
tricks. Experiments on 11 Chinese natural language understanding tasks show
that our model can bring an average increase of 1.5% under the 12-layer
setting, achieving a new state of the art among base-size models on the CLUE
benchmarks. Further analysis shows that Lattice-BERT can harness the lattice
structures, and the improvement comes from the exploration of redundant
information and multi-granularity representations. Our code will be available
at https://github.com/alibaba/pretrained-language-models/LatticeBERT.
| 2021 |
Computation and Language
|
RefSum: Refactoring Neural Summarization
|
Although some recent works show potential complementarity among different
state-of-the-art systems, few works try to investigate this problem in text
summarization. Researchers in other areas commonly refer to the techniques of
reranking or stacking to approach this problem. In this work, we highlight
several limitations of previous methods, which motivates us to present a new
framework, Refactor, that provides a unified view of text summarization and
summary combination. Experimentally, we perform a comprehensive evaluation
that involves twenty-two base systems, four datasets, and three different
application scenarios. Besides new state-of-the-art results on the CNN/DailyMail
dataset (46.18 ROUGE-1), we also elaborate on how our proposed method addresses
the limitations of traditional methods and how the effectiveness of the
Refactor model sheds light on performance improvement. Our system
can be directly used by other researchers as an off-the-shelf tool to achieve
further performance improvements. We open-source all the code and provide a
convenient interface to use it:
https://github.com/yixinL7/Refactoring-Summarization. We have also made the
demo of this work available at:
http://explainaboard.nlpedia.ai/leaderboard/task-summ/index.php.
| 2021 |
Computation and Language
|
Neural Sequence Segmentation as Determining the Leftmost Segments
|
Prior methods for text segmentation mostly operate at the token level. Despite
their adequacy, this design limits their full potential to capture long-term
dependencies among segments. In this work, we propose a novel framework that
incrementally segments natural language sentences at segment level. For every
step in segmentation, it recognizes the leftmost segment of the remaining
sequence. Implementations involve LSTM-minus technique to construct the phrase
representations and recurrent neural networks (RNN) to model the iterations of
determining the leftmost segments. We have conducted extensive experiments on
syntactic chunking and Chinese part-of-speech (POS) tagging across 3 datasets,
demonstrating that our methods have significantly outperformed all previous
baselines and achieved new state-of-the-art results. Moreover, qualitative
analysis and the study on segmenting long-length sentences verify its
effectiveness in modeling long-term dependencies.
| 2021 |
Computation and Language
|
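The abstract above relies on the LSTM-minus technique for building phrase (segment) representations. The sketch below shows that technique in isolation with a toy bidirectional LSTM in PyTorch; the hidden size, boundary handling, and example input are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb_dim, hidden = 8, 16
lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

# A toy "sentence" of 6 token embeddings.
tokens = torch.randn(1, 6, emb_dim)
outputs, _ = lstm(tokens)                    # (1, 6, 2 * hidden)
fwd, bwd = outputs[..., :hidden], outputs[..., hidden:]

def span_repr(i: int, j: int) -> torch.Tensor:
    """LSTM-minus representation of the span [i, j] (inclusive, 0-indexed):
    forward states are differenced as h_fwd[j] - h_fwd[i-1], backward states
    as h_bwd[i] - h_bwd[j+1], with zeros used at the sentence boundaries."""
    zeros = torch.zeros(hidden)
    left = fwd[0, i - 1] if i > 0 else zeros
    right = bwd[0, j + 1] if j + 1 < fwd.size(1) else zeros
    return torch.cat([fwd[0, j] - left, bwd[0, i] - right])

print(span_repr(1, 3).shape)  # torch.Size([32])
```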
Multitasking Inhibits Semantic Drift
|
When intelligent agents communicate to accomplish shared goals, how do these
goals shape the agents' language? We study the dynamics of learning in latent
language policies (LLPs), in which instructor agents generate natural-language
subgoal descriptions and executor agents map these descriptions to low-level
actions. LLPs can solve challenging long-horizon reinforcement learning
problems and provide a rich model for studying task-oriented language use. But
previous work has found that LLP training is prone to semantic drift (use of
messages in ways inconsistent with their original natural language meanings).
Here, we demonstrate theoretically and empirically that multitask training is
an effective counter to this problem: we prove that multitask training
eliminates semantic drift in a well-studied family of signaling games, and show
that multitask training of neural LLPs in a complex strategy game reduces drift
while improving sample efficiency.
| 2021 |
Computation and Language
|
A Dual-Questioning Attention Network for Emotion-Cause Pair Extraction
with Context Awareness
|
Emotion-cause pair extraction (ECPE), an emerging task in sentiment analysis,
aims at extracting pairs of emotions and their corresponding causes in
documents. This is a more challenging problem than emotion cause extraction
(ECE), since it requires no emotion signals, which have been demonstrated to
play an important role in the ECE task. Existing work follows a two-stage
pipeline that identifies emotions and causes in the first step and pairs them
in the second step. However, error propagation across steps and pair combination
without contextual information limit the effectiveness. Therefore, we propose a
Dual-Questioning Attention Network to alleviate these limitations.
Specifically, we question candidate emotions and causes against the context
independently through attention networks to obtain contextual and semantic
answers. We also explore the use of weighted loss functions to control error
propagation between steps. Empirical results show that our method performs
better than baselines in terms of multiple evaluation metrics. The source code
can be obtained at https://github.com/QixuanSun/DQAN.
| 2021 |
Computation and Language
|
Low-Resource Task-Oriented Semantic Parsing via Intrinsic Modeling
|
Task-oriented semantic parsing models typically have high resource
requirements: to support new ontologies (i.e., intents and slots),
practitioners crowdsource thousands of samples for supervised fine-tuning.
Partly, this is due to the structure of de facto copy-generate parsers; these
models treat ontology labels as discrete entities, relying on parallel data to
extrinsically derive their meaning. In our work, we instead exploit what we
intrinsically know about ontology labels; for example, the fact that
SL:TIME_ZONE has the categorical type "slot" and language-based span "time
zone". Using this motivation, we build our approach with offline and online
stages. During preprocessing, for each ontology label, we extract its intrinsic
properties into a component, and insert each component into an inventory as a
cache of sorts. During training, we fine-tune a seq2seq, pre-trained
transformer to map utterances and inventories to frames, i.e., parse trees
composed of utterance and ontology tokens. Our formulation encourages the model
to consider ontology labels as a union of their intrinsic properties, thereby
substantially bootstrapping learning in low-resource settings. Experiments show
our model is highly sample efficient: using a low-resource benchmark derived
from TOPv2, our inventory parser outperforms a copy-generate parser by +15 EM
absolute (44% relative) when fine-tuning on 10 samples from an unseen domain.
| 2021 |
Computation and Language
|
Sentence-Permuted Paragraph Generation
|
Generating paragraphs of diverse contents is important in many applications.
Existing generation models produce similar contents from homogenized contexts
due to the fixed left-to-right sentence order. Our idea is to permute the
sentence order to improve the content diversity of multi-sentence paragraphs.
We propose a novel framework, PermGen, whose objective is to maximize the
expected log-likelihood of output paragraph distributions with respect to all
possible sentence orders. PermGen uses hierarchical positional embeddings and
designs new procedures for training, decoding, and candidate ranking in the
sentence-permuted generation. Experiments on three paragraph generation
benchmarks demonstrate PermGen generates more diverse outputs with a higher
quality than existing models.
| 2021 |
Computation and Language
|
Designing a Minimal Retrieve-and-Read System for Open-Domain Question
Answering
|
In open-domain question answering (QA), the retrieve-and-read mechanism has the
inherent benefits of interpretability and the ease of adding, removing, or
editing knowledge compared to the parametric approaches of closed-book QA
models. However, it is also known to suffer from a large storage footprint
due to its document corpus and index. Here, we discuss several orthogonal
strategies to drastically reduce the footprint of a retrieve-and-read
open-domain QA system by up to 160x. Our results indicate that
retrieve-and-read can be a viable option even in a highly constrained serving
environment such as edge devices, as we show that it can achieve better
accuracy than a purely parametric model with comparable docker-level system
size.
| 2021 |
Computation and Language
|
TorontoCL at CMCL 2021 Shared Task: RoBERTa with Multi-Stage Fine-Tuning
for Eye-Tracking Prediction
|
Eye movement data during reading is a useful source of information for
understanding language comprehension processes. In this paper, we describe our
submission to the CMCL 2021 shared task on predicting human reading patterns.
Our model uses RoBERTa with a regression layer to predict 5 eye-tracking
features. We train the model in two stages: we first fine-tune on the Provo
corpus (another eye-tracking dataset), then fine-tune on the task data. We
compare different Transformer models and apply ensembling methods to improve
the performance. Our final submission achieves an MAE score of 3.929, ranking
3rd out of the 13 teams that participated in this shared task.
| 2021 |
Computation and Language
|
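As a rough illustration of the model family described above (a pre-trained encoder with a regression layer predicting 5 eye-tracking features), here is a minimal PyTorch sketch. The use of AutoModel with roberta-base, the mean-squared-error loss, and the placeholder targets are assumptions; the shared-task data handling and the two fine-tuning stages are only indicated in comments.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EyeTrackingRegressor(nn.Module):
    """Encoder with a linear regression head over token representations."""
    def __init__(self, name: str = "roberta-base", n_features: int = 5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_features)

    def forward(self, **inputs):
        hidden = self.encoder(**inputs).last_hidden_state  # (B, T, H)
        return self.head(hidden)                           # (B, T, 5)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = EyeTrackingRegressor()
loss_fn = nn.MSELoss()

batch = tokenizer(["The cat sat on the mat ."], return_tensors="pt")
preds = model(**batch)
targets = torch.zeros_like(preds)  # placeholder gold eye-tracking features
loss = loss_fn(preds, targets)
# Stage 1 would run this loop over the Provo corpus, stage 2 over the task
# data, re-using the same model and optimizer between stages.
loss.backward()
```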
Regularization for Long Named Entity Recognition
|
When performing named entity recognition (NER), entity length is variable and
dependent on a specific domain or dataset. Pre-trained language models (PLMs)
are used to solve NER tasks and tend to be biased toward dataset patterns such
as length statistics, surface form, and skewed class distribution. These biases
hinder the generalization ability of PLMs, which is necessary to address many
unseen mentions in real-world situations. We propose a novel debiasing method
RegLER to improve predictions for entities of varying lengths. To close the gap
between evaluation and real-world situations, we evaluated PLMs on partitioned
benchmark datasets containing unseen mention sets. Here, RegLER shows
significant improvement on long named entities, which it can predict through
debiasing of conjunctions and special characters within entities. Furthermore,
most NER datasets exhibit a severe class imbalance, causing easy negative
examples such as "The" to dominate during training. Our approach alleviates the
skewed class distribution by reducing the influence of easy negative examples.
Extensive experiments on the biomedical and general domains demonstrated the
generalization capabilities of our method. To facilitate reproducibility and
future work, we release our code at https://github.com/minstar/RegLER.
| 2022 |
Computation and Language
|
Integration of Pre-trained Networks with Continuous Token Interface for
End-to-End Spoken Language Understanding
|
Most End-to-End (E2E) SLU networks leverage the pre-trained ASR networks but
still lack the capability to understand the semantics of utterances, crucial
for the SLU task. To solve this, recently proposed studies use pre-trained NLU
networks. However, it is not trivial to fully utilize both pre-trained
networks; many solutions were proposed, such as Knowledge Distillation,
cross-modal shared embedding, and network integration with an interface. We
propose a simple and robust integration method for the E2E SLU network with a
novel interface, the Continuous Token Interface (CTI), the junctional
representation of the ASR and NLU networks when both networks are pre-trained
with the same vocabulary. Because the only difference is the noise level, we
directly feed the ASR network's output to the NLU network. Thus, we can train
our SLU network in an E2E manner without additional modules, such as
Gumbel-Softmax. We evaluate our model on SLURP, a challenging SLU dataset,
and achieve state-of-the-art scores on both intent classification and slot
filling tasks. We also verify that the NLU network, pre-trained with a masked
language model, can utilize a noisy textual representation of CTI. Moreover, we show that our
model can be trained with multi-task learning from heterogeneous data even
after integration with CTI.
| 2022 |
Computation and Language
|
Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic
Parsing
|
An effective recipe for building seq2seq, non-autoregressive, task-oriented
parsers to map utterances to semantic frames proceeds in three steps: encoding
an utterance $x$, predicting a frame's length $|y|$, and decoding a $|y|$-sized
frame with utterance and ontology tokens. Though empirically strong, these
models are typically bottlenecked by length prediction, as even small
inaccuracies change the syntactic and semantic characteristics of resulting
frames. In our work, we propose span pointer networks, non-autoregressive
parsers which shift the decoding task from text generation to span prediction;
that is, when imputing utterance spans into frame slots, our model produces
endpoints (e.g., [i, j]) as opposed to text (e.g., "6pm"). This natural
quantization of the output space reduces the variability of gold frames,
therefore improving length prediction and, ultimately, exact match.
Furthermore, length prediction is now responsible for frame syntax and the
decoder is responsible for frame semantics, resulting in a coarse-to-fine
model. We evaluate our approach on several task-oriented semantic parsing
datasets. Notably, we bridge the quality gap between non-autoregressive and
autoregressive parsers, achieving 87 EM on TOPv2 (Chen et al. 2020).
Furthermore, due to our more consistent gold frames, we show strong
improvements in model generalization in both cross-domain and cross-lingual
transfer in low-resource settings. Finally, due to our diminished output
vocabulary, we observe a 70% reduction in latency and an 83% reduction in memory at
beam size 5 compared to prior non-autoregressive parsers.
| 2021 |
Computation and Language
|
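To make the span-prediction idea above concrete, the sketch below scores utterance token positions with a slot representation and outputs endpoints [i, j] instead of generating text. The dot-product scoring and the random toy tensors are assumptions, not the paper's architecture.

```python
import torch

torch.manual_seed(0)
hidden = 16
utterance_len = 7

# Encoder states for the utterance tokens and decoder states for one slot.
token_states = torch.randn(utterance_len, hidden)
slot_state_start = torch.randn(hidden)
slot_state_end = torch.randn(hidden)

# Score every token position as a candidate start / end of the slot span.
start_scores = token_states @ slot_state_start   # (utterance_len,)
end_scores = token_states @ slot_state_end       # (utterance_len,)

i = int(start_scores.argmax())
j = int(end_scores[i:].argmax()) + i             # constrain end >= start
print(f"predicted span endpoints: [{i}, {j}]")
# The frame is then assembled from ontology tokens plus these endpoints,
# e.g. [IN:CREATE_REMINDER [SL:DATE_TIME [5, 6] ] ], rather than copied text.
```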
Consistency Training with Virtual Adversarial Discrete Perturbation
|
Consistency training regularizes a model by enforcing predictions of original
and perturbed inputs to be similar. Previous studies have proposed various
augmentation methods for the perturbation but are limited in that they are
agnostic to the training model. Thus, the perturbed samples may not aid in
regularization due to their ease of classification from the model. In this
context, we propose an augmentation method of adding a discrete noise that
would incur the highest divergence between predictions. This virtual
adversarial discrete noise obtained by replacing a small portion of tokens
while keeping original semantics as much as possible efficiently pushes a
training model's decision boundary. Experimental results show that our proposed
method outperforms other consistency training baselines with text editing,
paraphrasing, or a continuous noise on semi-supervised text classification
tasks and a robustness benchmark
| 2022 |
Computation and Language
|
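A minimal sketch of the general recipe described above: among candidate token replacements, keep the one whose prediction diverges most (by KL) from the prediction on the clean input, then train for consistency between the two. The tiny bag-of-embeddings classifier, the unconstrained candidate set, and the greedy single-token search are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, emb_dim, n_classes = 20, 8, 2
emb = torch.nn.Embedding(vocab, emb_dim)
clf = torch.nn.Linear(emb_dim, n_classes)

def predict(token_ids):
    """Bag-of-embeddings classifier standing in for the training model."""
    return F.log_softmax(clf(emb(token_ids).mean(dim=0)), dim=-1)

def kl(p_log, q_log):
    """KL(p || q) with both arguments given as log-probabilities."""
    return F.kl_div(q_log, p_log.exp(), reduction="sum")

sentence = torch.tensor([3, 7, 1, 12, 5])
with torch.no_grad():
    original = predict(sentence)
    best_div, best_sent = -1.0, sentence
    # Greedy search: try replacing each position with each candidate token
    # (in practice candidates would be semantics-preserving substitutes).
    for pos in range(len(sentence)):
        for cand in range(vocab):
            perturbed = sentence.clone()
            perturbed[pos] = cand
            div = kl(original, predict(perturbed)).item()
            if div > best_div:
                best_div, best_sent = div, perturbed

# Consistency loss: make the prediction on the adversarial discrete noise
# match the prediction on the clean input.
consistency_loss = kl(predict(sentence).detach(), predict(best_sent))
consistency_loss.backward()
```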
TransferNet: An Effective and Transparent Framework for Multi-hop
Question Answering over Relation Graph
|
Multi-hop Question Answering (QA) is a challenging task because it requires
precise reasoning with entity relations at every step towards the answer. The
relations can be represented as labels in a knowledge graph (e.g.,
\textit{spouse}) or as text in a text corpus (e.g., \textit{they have been
married for 26 years}). Existing models usually infer the answer by predicting the
sequential relation path or aggregating the hidden graph features. The former
is hard to optimize, and the latter lacks interpretability. In this paper, we
propose TransferNet, an effective and transparent model for multi-hop QA, which
supports both label and text relations in a unified framework. TransferNet
jumps across entities at multiple steps. At each step, it attends to different
parts of the question, computes activated scores for relations, and then
transfers the previous entity scores along activated relations in a
differentiable way. We carry out extensive experiments on three datasets and
demonstrate that TransferNet surpasses the state-of-the-art models by a large
margin. In particular, on MetaQA, it achieves 100\% accuracy in 2-hop and 3-hop
questions. By qualitative analysis, we show that TransferNet has transparent
and interpretable intermediate results.
| 2021 |
Computation and Language
|
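The step-wise transfer described above can be pictured as multiplying an entity-score vector by relation adjacency matrices weighted by the activated relation scores at each hop. The toy two-relation graph and the hard-coded per-hop relation scores below are assumptions made purely for illustration.

```python
import numpy as np

# Toy graph with 4 entities and 2 relations; adj[r][i, j] = 1 when relation r
# links entity i to entity j.
adj = np.zeros((2, 4, 4))
adj[0, 0, 1] = 1.0   # relation 0: e0 -> e1 (e.g. "spouse")
adj[1, 1, 2] = 1.0   # relation 1: e1 -> e2 (e.g. "born_in")

entity_scores = np.array([1.0, 0.0, 0.0, 0.0])   # start from the topic entity

# Activated relation scores per hop, as if produced by attending to different
# parts of the question at each step.
relation_scores_per_hop = [
    np.array([1.0, 0.0]),   # hop 1 activates relation 0
    np.array([0.0, 1.0]),   # hop 2 activates relation 1
]

for rel_scores in relation_scores_per_hop:
    # Weighted sum of relation adjacency matrices, then transfer the scores.
    transition = np.tensordot(rel_scores, adj, axes=1)   # (4, 4)
    entity_scores = np.clip(entity_scores @ transition, 0.0, 1.0)

print(entity_scores)   # entity e2 ends up with score 1.0 after two hops
```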
NT5?! Training T5 to Perform Numerical Reasoning
|
Numerical reasoning over text (NRoT) presents unique challenges that are not
well addressed by existing pre-training objectives. We explore five sequential
training schedules that adapt a pre-trained T5 model for NRoT. Our final model
is adapted from T5, but further pre-trained on three datasets designed to
strengthen skills necessary for NRoT and general reading comprehension before
being fine-tuned on the Discrete Reasoning over Text (DROP) dataset. The
training improves DROP's adjusted F1 performance (a numeracy-focused score)
from 45.90 to 70.83. Our model closes in on GenBERT (72.4), a custom BERT-Base
model using the same datasets with significantly more parameters. We show that
by training the T5 multitasking framework with multiple numerical reasoning
datasets of increasing difficulty, good performance on DROP can be achieved
without manually engineering partitioned functionality between distributed and
symbolic modules.
| 2021 |
Computation and Language
|
Adaptive Sparse Transformer for Multilingual Translation
|
Multilingual machine translation has attracted much attention recently due to
its support of knowledge transfer among languages and the low cost of training
and deployment compared with numerous bilingual models. A known challenge of
multilingual models is negative language interference. In order to enhance
translation quality, deeper and wider architectures are applied to
multilingual modeling for larger model capacity, which at the same time
increases inference cost. It has been pointed out in recent
studies that parameters shared among languages are the cause of interference
while they may also enable positive transfer. Based on these insights, we
propose an adaptive and sparse architecture for multilingual modeling, and
train the model to learn shared and language-specific parameters to improve the
positive transfer and mitigate the interference. The sparse architecture only
activates a sub-network which preserves inference efficiency, and the adaptive
design selects different sub-networks based on the input languages. Our model
outperforms strong baselines across multiple benchmarks. On the large-scale
OPUS dataset with $100$ languages, we achieve $+2.1$, $+1.3$ and $+6.2$ BLEU
improvements in one-to-many, many-to-one and zero-shot tasks respectively
compared to standard Transformer without increasing the inference cost.
| 2022 |
Computation and Language
|
BERT based Transformers lead the way in Extraction of Health Information
from Social Media
|
This paper describes our submissions to the Social Media Mining for Health
(SMM4H) 2021 shared tasks. We participated in two tasks: (1) classification,
extraction, and normalization of adverse drug effect (ADE) mentions in English
tweets (Task-1), and (2) classification of COVID-19 tweets containing
symptoms (Task-6). Our approach for the first task uses the language
representation model RoBERTa with a binary classification head. For the second
task, we use BERTweet, based on RoBERTa. Fine-tuning is performed on the
pre-trained models for both tasks. The models are placed on top of a custom
domain-specific processing pipeline. Our system ranked first among all the
submissions for subtask-1(a) with an F1-score of 61%. For subtask-1(b), our
system obtained an F1-score of 50% with improvements up to +8% F1 over the
score averaged across all submissions. The BERTweet model achieved an F1 score
of 94% on SMM4H 2021 Task-6.
| 2021 |
Computation and Language
|
UIT-E10dot3 at SemEval-2021 Task 5: Toxic Spans Detection with Named
Entity Recognition and Question-Answering Approaches
|
The increase of toxic comments in online spaces has a tremendous effect on
vulnerable users. For this reason, considerable efforts are being made to deal
with this problem, and SemEval-2021 Task 5: Toxic Spans Detection is one of
them. This task asks competitors to extract toxic spans from the given texts,
and we performed several analyses to understand its structure before running
experiments. We approach this task in two ways: Named Entity Recognition with
the spaCy library, and Question Answering with RoBERTa combined with ToxicBERT;
the former achieves the higher F1-score of 66.99%.
| 2021 |
Computation and Language
|
Tracking entities in technical procedures -- a new dataset and baselines
|
We introduce TechTrack, a new dataset for tracking entities in technical
procedures. The dataset, prepared by annotating open-domain articles from
WikiHow, consists of 1,351 procedures, e.g., "How to connect a printer", and
identifies more than 1,200 unique entities, with an average of 4.7 entities per
procedure. We evaluate the performance of state-of-the-art models on the
entity-tracking task and find that they perform well below human annotation
performance. We describe how TechTrack can be used to advance research
on understanding procedures from temporal texts.
| 2021 |
Computation and Language
|
Node Co-occurrence based Graph Neural Networks for Knowledge Graph Link
Prediction
|
We introduce a novel embedding model, named NoGE, which aims to integrate
co-occurrence among entities and relations into graph neural networks to
improve knowledge graph completion (i.e., link prediction). Given a knowledge
graph, NoGE constructs a single graph considering entities and relations as
individual nodes. NoGE then computes weights for edges among nodes based on the
co-occurrence of entities and relations. Next, NoGE proposes Dual Quaternion
Graph Neural Networks (DualQGNN) and utilizes DualQGNN to update vector
representations for entity and relation nodes. NoGE then adopts a score
function to produce the triple scores. Comprehensive experimental results show
that NoGE obtains state-of-the-art results on the three new and difficult CoDEx
benchmark datasets for knowledge graph completion.
| 2021 |
Computation and Language
|
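A minimal sketch of the graph-construction idea above: entities and relations all become nodes of a single graph, and edge weights are derived from how often two nodes co-occur in triples. The specific normalization by node totals is an assumption standing in for the paper's weighting scheme.

```python
from collections import Counter
from itertools import combinations

# Toy knowledge graph triples (head, relation, tail).
triples = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
    ("paris", "located_in", "france"),
]

# Count co-occurrences between every pair of nodes appearing in a triple.
cooccur = Counter()
for h, r, t in triples:
    for a, b in combinations((h, r, t), 2):
        cooccur[frozenset((a, b))] += 1

# Edge weights: co-occurrence count normalized by one node's total count
# (a simple choice standing in for the paper's weighting).
totals = Counter()
for pair, c in cooccur.items():
    for node in pair:
        totals[node] += c

for pair, c in sorted(cooccur.items(), key=lambda kv: -kv[1]):
    a, b = sorted(pair)
    print(f"{a:12s} -- {b:12s} weight={c / totals[a]:.2f}")
```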
Bilingual Terminology Extraction from Comparable E-Commerce Corpora
|
Bilingual terminologies are important machine translation resources in the
field of e-commerce; they are usually either manually translated or
automatically extracted from parallel data. Human translation is costly, and
e-commerce parallel corpora are very scarce. However, comparable data in
different languages in the same commodity field are abundant. In this paper, we
propose a novel framework for extracting e-commerce bilingual terminologies
from comparable data. Benefiting from cross-lingual pre-training in
e-commerce, our framework can make full use of the deep semantic relationship
between source-side terminology and target-side sentence to extract
corresponding target terminology. Experimental results on various language
pairs show that our approaches achieve significantly better performance than
various strong baselines.
| 2022 |
Computation and Language
|
Simultaneous Multi-Pivot Neural Machine Translation
|
Parallel corpora are indispensable for training neural machine translation
(NMT) models, and parallel corpora for most language pairs do not exist or are
scarce. In such cases, pivot language NMT can be helpful where a pivot language
is used such that there exist parallel corpora between the source and pivot and
pivot and target languages. Naturally, the quality of pivot language
translation is inferior to what could be achieved with a direct parallel
corpus of a reasonable size for that pair. In a real-time simultaneous
translation setting, the quality of pivot language translation deteriorates
even further given that the model has to output translations the moment a few
source words become available. To solve this issue, we propose multi-pivot
translation and apply it to a simultaneous translation setting involving pivot
languages. Our approach involves simultaneously translating a source language
into multiple pivots, which are then simultaneously translated together into
the target language by leveraging multi-source NMT. Our experiments in a
low-resource setting using the N-way parallel UN corpus for Arabic to English
NMT via French and Spanish as pivots reveal that in a simultaneous pivot NMT
setting, using two pivot languages can lead to an improvement of up to 5.8
BLEU.
| 2021 |
Computation and Language
|
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation
|
Machine learning has brought striking advances in multilingual natural
language processing capabilities over the past year. For example, the latest
techniques have improved the state-of-the-art performance on the XTREME
multilingual benchmark by more than 13 points. While a sizeable gap to
human-level performance remains, improvements have been easier to achieve in
some tasks than in others. This paper analyzes the current state of
cross-lingual transfer learning and summarizes some lessons learned. In order
to catalyze meaningful progress, we extend XTREME to XTREME-R, which consists
of an improved set of ten natural language understanding tasks, including
challenging language-agnostic retrieval tasks, and covers 50 typologically
diverse languages. In addition, we provide a massively multilingual diagnostic
suite (MultiCheckList) and fine-grained multi-dataset evaluation capabilities
through an interactive public leaderboard to gain a better understanding of
such models. The leaderboard and code for XTREME-R will be made available at
https://sites.research.google/xtreme and
https://github.com/google-research/xtreme respectively.
| 2021 |
Computation and Language
|
The Role of Context in Detecting Previously Fact-Checked Claims
|
Recent years have seen the proliferation of disinformation and fake news
online. Traditional approaches to mitigate these issues are to use manual or
automatic fact-checking. Recently, another approach has emerged: checking
whether the input claim has previously been fact-checked, which can be done
automatically, and thus fast, while also offering credibility and
explainability, thanks to the human fact-checking and explanations in the
associated fact-checking article. Here, we focus on claims made in a political
debate and we study the impact of modeling the context of the claim: both on
the source side, i.e., in the debate, as well as on the target side, i.e., in
the fact-checking explanation document. We do this by modeling the local
context, the global context, as well as by means of co-reference resolution,
and multi-hop reasoning over the sentences of the document describing the
fact-checked claim. The experimental results show that each of these represents
a valuable information source, but that modeling the source-side context is
most important, and can yield 10+ points of absolute improvement over a
state-of-the-art model.
| 2022 |
Computation and Language
|
Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution
|
Masked language models (MLMs) have contributed to drastic performance
improvements with regard to zero anaphora resolution (ZAR). To further improve
this approach, in this study, we made two proposals. The first is a new
pretraining task that trains MLMs on anaphoric relations with explicit
supervision, and the second proposal is a new finetuning method that remedies a
notorious issue, the pretrain-finetune discrepancy. Our experiments on Japanese
ZAR demonstrated that our two proposals boost the state-of-the-art performance,
and our detailed analysis provides new insights on the remaining challenges.
| 2021 |
Computation and Language
|
First the worst: Finding better gender translations during beam search
|
Neural machine translation inference procedures like beam search generate the
most likely output under the model. This can exacerbate any demographic biases
exhibited by the model. We focus on gender bias resulting from systematic
errors in grammatical gender translation, which can lead to human referents
being misrepresented or misgendered.
Most approaches to this problem adjust the training data or the model. By
contrast, we experiment with simply adjusting the inference procedure. We
experiment with reranking nbest lists using gender features obtained
automatically from the source sentence, and applying gender constraints while
decoding to improve nbest list gender diversity. We find that a combination of
these techniques allows large gains in WinoMT accuracy without requiring
additional bilingual data or an additional NMT model.
| 2022 |
Computation and Language
|
Effect of Post-processing on Contextualized Word Representations
|
Post-processing of static embeddings has been shown to improve their
performance on both lexical and sequence-level tasks. However, post-processing
for contextualized embeddings is an under-studied problem. In this work, we
question the usefulness of post-processing for contextualized embeddings
obtained from different layers of pre-trained language models. More
specifically, we standardize individual neuron activations using z-score,
min-max normalization, and by removing top principal components using the
all-but-the-top method. Additionally, we apply unit length normalization to
word representations. On a diverse set of pre-trained models, we show that
post-processing unwraps vital information present in the representations for
both lexical tasks (such as word similarity and analogy) and sequence
classification tasks. Our findings raise interesting points in relation to
the research studies that use contextualized representations, and suggest
z-score normalization as an essential step to consider when using them in an
application.
| 2022 |
Computation and Language
|
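Two of the post-processing operations named above are easy to show directly on a matrix of contextualized vectors: per-dimension z-score normalization and the all-but-the-top step (mean removal plus removal of the top principal directions). The number of removed components and the random toy embeddings below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))  # toy contextualized representations (N, D)

def zscore(X: np.ndarray) -> np.ndarray:
    """Standardize each dimension to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def all_but_the_top(X: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Remove the mean and the projections onto the top principal directions."""
    X_centered = X - X.mean(axis=0)
    # Principal directions via SVD of the centered matrix.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    top = Vt[:n_components]                      # (n_components, D)
    return X_centered - X_centered @ top.T @ top

X_post = all_but_the_top(zscore(X), n_components=2)
print(X_post.shape, X_post.mean(axis=0)[:3])
```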
Cross-Domain Label-Adaptive Stance Detection
|
Stance detection concerns the classification of a writer's viewpoint towards
a target. There are different task variants, e.g., stance of a tweet vs. a full
article, or stance with respect to a claim vs. an (implicit) topic. Moreover,
task definitions vary, which includes the label inventory, the data collection,
and the annotation protocol. All these aspects hinder cross-domain studies, as
they require changes to standard domain adaptation approaches. In this paper,
we perform an in-depth analysis of 16 stance detection datasets, and we explore
the possibility for cross-domain learning from them. Moreover, we propose an
end-to-end unsupervised framework for out-of-domain prediction of unseen,
user-defined labels. In particular, we combine domain adaptation techniques
such as mixture of experts and domain-adversarial training with label
embeddings, and we demonstrate sizable performance gains over strong baselines,
both (i) in-domain, i.e., for seen targets, and (ii) out-of-domain, i.e., for
unseen targets. Finally, we perform an exhaustive analysis of the cross-domain
results, and we highlight the important factors influencing the model
performance.
| 2021 |
Computation and Language
|
Fabula Entropy Indexing: Objective Measures of Story Coherence
|
Automated story generation remains a difficult area of research because it
lacks strong objective measures. Generated stories may be linguistically sound,
but in many cases lack the narrative coherence required for a compelling,
logically sound story. To address this, we present Fabula Entropy Indexing
(FEI), an evaluation method to assess story coherence by measuring the degree
to which human participants agree with each other when answering true/false
questions about stories. We devise two theoretically grounded measures of
reader question-answering entropy, the entropy of world coherence (EWC), and
the entropy of transitional coherence (ETC), focusing on global and local
coherence, respectively. We evaluate these metrics by testing them on
human-written stories and comparing against the same stories that have been
corrupted to introduce incoherencies. We show that in these controlled studies,
our entropy indices provide a reliable objective measure of story coherence.
| 2021 |
Computation and Language
|
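Both indices described above reduce to the entropy of reader answers to true/false questions: low entropy means readers agree, which is taken as evidence of coherence. The sketch below computes that quantity; the toy answer matrix and the simple averaging over questions are illustrative assumptions.

```python
import math
from collections import Counter

def answer_entropy(answers: list) -> float:
    """Shannon entropy (in bits) of a list of true/false answers."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def coherence_index(question_answers: list) -> float:
    """Average entropy over all questions for one story (lower = more agreement)."""
    return sum(answer_entropy(a) for a in question_answers) / len(question_answers)

# Five participants answered two questions about a coherent story and two
# about a corrupted version of it.
coherent = [[True, True, True, True, False], [True, True, True, True, True]]
corrupted = [[True, False, True, False, True], [False, True, False, True, False]]

print("coherent story:", round(coherence_index(coherent), 3))
print("corrupted story:", round(coherence_index(corrupted), 3))
```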
Unlocking Compositional Generalization in Pre-trained Models Using
Intermediate Representations
|
Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but
have been found to struggle at out-of-distribution compositional
generalization. While specialized model architectures and pre-training of
seq2seq models have been proposed to address this issue, the former often comes
at the cost of generality and the latter only shows limited success. In this
paper, we study the impact of intermediate representations on compositional
generalization in pre-trained seq2seq models, without changing the model
architecture at all, and identify key aspects for designing effective
representations. Instead of training to directly map natural language to an
executable form, we map to a reversible or lossy intermediate representation
that has stronger structural correspondence with natural language. The
combination of our proposed intermediate representations and pre-trained models
is surprisingly effective, where the best combinations obtain a new
state-of-the-art on CFQ (+14.8 accuracy points) and on the template-splits of
three text-to-SQL datasets (+15.0 to +19.4 accuracy points). This work
highlights that intermediate representations provide an important and
potentially overlooked degree of freedom for improving the compositional
generalization abilities of pre-trained seq2seq models.
| 2021 |
Computation and Language
|
IndT5: A Text-to-Text Transformer for 10 Indigenous Languages
|
Transformer language models have become fundamental components of natural
language processing based pipelines. Although several Transformer models have
been introduced to serve many languages, there is a shortage of models
pre-trained for low-resource and Indigenous languages. In this work, we
introduce IndT5, the first Transformer language model for Indigenous languages.
To train IndT5, we build IndCorpus, a new dataset for ten Indigenous languages
and Spanish. We also present the application of IndT5 to machine translation by
investigating different approaches to translate between Spanish and the
Indigenous languages as part of our contribution to the AmericasNLP 2021 Shared
Task on Open Machine Translation. IndT5 and IndCorpus are publicly available
for research.
| 2021 |
Computation and Language
|
Unmasking the Mask -- Evaluating Social Biases in Masked Language Models
|
Masked Language Models (MLMs) have shown superior performance in numerous
downstream NLP tasks when used as text encoders. Unfortunately, MLMs also
demonstrate significantly worrying levels of social biases. We show that the
previously proposed evaluation metrics for quantifying the social biases in
MLMs are problematic for the following reasons: (1) the prediction accuracy of
the masked tokens itself tends to be low in some MLMs, which raises questions
regarding the reliability of evaluation metrics that use the (pseudo)
likelihood of the predicted tokens; (2) the correlation between the
prediction accuracy of the mask and the performance in downstream NLP tasks is
not taken into consideration; and (3) high-frequency words in the training data
are masked more often, introducing noise due to this selection bias in the test
cases. To overcome the above-mentioned disfluencies, we propose All Unmasked
Likelihood (AUL), a bias evaluation measure that predicts all tokens in a test
case given the MLM embedding of the unmasked input. We find that AUL accurately
detects different types of biases in MLMs. We also propose AUL with attention
weights (AULA) to evaluate tokens based on their importance in a sentence.
However, unlike AUL and AULA, previously proposed bias evaluation measures for
MLMs systematically overestimate the measured biases, and are heavily
influenced by the unmasked tokens in the context.
| 2021 |
Computation and Language
|
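A minimal sketch of the All Unmasked Likelihood idea described above: feed the unmasked sentence to an MLM and average the log-probabilities it assigns to every token of that same sentence. The choice of bert-base-uncased and the omission of the attention-weighted variant (AULA) are assumptions for illustration.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
model.eval()

def all_unmasked_likelihood(sentence: str) -> float:
    """Average log-probability of every token given the *unmasked* input."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits                # (1, T, V)
    log_probs = torch.log_softmax(logits, dim=-1)
    ids = enc["input_ids"]                          # (1, T)
    token_log_probs = log_probs.gather(-1, ids.unsqueeze(-1)).squeeze(-1)
    # Skip [CLS] and [SEP] when averaging.
    return token_log_probs[0, 1:-1].mean().item()

# Comparing the scores of stereotypical vs. anti-stereotypical variants of a
# test case is how a bias score would then be derived.
print(all_unmasked_likelihood("The doctor finished her shift."))
print(all_unmasked_likelihood("The doctor finished his shift."))
```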
Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via
Multi-Task Training
|
Language grounding aims at linking the symbolic representation of language
(e.g., words) into the rich perceptual knowledge of the outside world. The
general approach is to embed both textual and visual information into a common
space (the grounded space) confined by an explicit relationship between both
modalities. We argue that this approach sacrifices the abstract knowledge
obtained from linguistic co-occurrence statistics in the process of acquiring
perceptual information. The focus of this paper is to solve this issue by
implicitly grounding the word embeddings. Rather than learning two mappings
into a joint space, our approach integrates modalities by determining a
reversible grounded mapping between the textual and the grounded space by means
of multi-task learning. Evaluations on intrinsic and extrinsic tasks show that
our embeddings are highly beneficial for both abstract and concrete words. They
are strongly correlated with human judgments and outperform previous works on a
wide range of benchmarks. Our grounded embeddings are publicly available here.
| 2021 |
Computation and Language
|
Natural Language Understanding with Privacy-Preserving BERT
|
Privacy preservation remains a key challenge in data mining and Natural
Language Understanding (NLU). Previous research shows that the input text or
even text embeddings can leak private information. This concern motivates our
research on effective privacy preservation approaches for pretrained Language
Models (LMs). We investigate the privacy and utility implications of applying
dx-privacy, a variant of Local Differential Privacy, to BERT fine-tuning in NLU
applications. More importantly, we further propose privacy-adaptive LM
pretraining methods and show that our approach can boost the utility of BERT
dramatically while retaining the same level of privacy protection. We also
quantify the level of privacy preservation and provide guidance on privacy
configuration. Our experiments and findings lay the groundwork for future
explorations of privacy-preserving NLU with pretrained LMs.
| 2021 |
Computation and Language
|
Quantifying Gender Bias Towards Politicians in Cross-Lingual Language
Models
|
Recent research has demonstrated that large pre-trained language models
reflect societal biases expressed in natural language. The present paper
introduces a simple method for probing language models to conduct a
multilingual study of gender bias towards politicians. We quantify the usage of
adjectives and verbs generated by language models surrounding the names of
politicians as a function of their gender. To this end, we curate a dataset of
250k politicians worldwide, including their names and gender. Our study is
conducted in seven languages across six different language modeling
architectures. The results demonstrate that pre-trained language models' stance
towards politicians varies strongly across the analyzed languages. We find that
while some words, such as dead and designated, are associated with both male and
female politicians, a few specific words, such as beautiful and divorced, are
predominantly associated with female politicians. Finally, and contrary to
previous findings, our study suggests that larger language models do not tend
to be significantly more gender-biased than smaller ones.
| 2023 |
Computation and Language
|
A Sample-Based Training Method for Distantly Supervised Relation
Extraction with Pre-Trained Transformers
|
Multiple instance learning (MIL) has become the standard learning paradigm
for distantly supervised relation extraction (DSRE). However, due to relation
extraction being performed at bag level, MIL has significant hardware
requirements for training when coupled with large sentence encoders such as
deep transformer neural networks. In this paper, we propose a novel sampling
method for DSRE that relaxes these hardware requirements. In the proposed
method, we limit the number of sentences in a batch by randomly sampling
sentences from the bags in the batch. However, this comes at the cost of losing
valid sentences from bags. To alleviate the issues caused by random sampling,
we use an ensemble of trained models for prediction. We demonstrate the
effectiveness of our approach by using our proposed learning setting to
fine-tune BERT on the widely used NYT dataset. Our approach significantly
outperforms previous state-of-the-art methods in terms of AUC and P@N metrics.
| 2021 |
Computation and Language
|
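The sampling idea above can be sketched in a few lines: cap the number of sentences per batch by randomly sampling from each bag, accepting that some valid sentences are dropped and relying on an ensemble at prediction time. The per-bag quota and the toy bags below are assumptions, not the paper's exact procedure.

```python
import random

random.seed(0)

def sample_batch(bags: dict, max_sentences: int) -> dict:
    """Randomly sample sentences from each bag so the batch stays under a cap."""
    per_bag = max(1, max_sentences // len(bags))
    sampled = {}
    for bag_id, sentences in bags.items():
        k = min(per_bag, len(sentences))
        sampled[bag_id] = random.sample(sentences, k)
    return sampled

# Toy bags: each bag holds all sentences mentioning the same entity pair.
bags = {
    ("Obama", "Hawaii"): [f"sent_{i}" for i in range(40)],
    ("Paris", "France"): [f"sent_{i}" for i in range(5)],
    ("Turing", "UK"): [f"sent_{i}" for i in range(12)],
}

batch = sample_batch(bags, max_sentences=12)
for pair, sents in batch.items():
    print(pair, len(sents), "sentences kept")
# At inference time, predictions from an ensemble of models trained with
# different random samples are combined to offset the dropped sentences.
```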
Sequence tagging for biomedical extractive question answering
|
Current studies in extractive question answering (EQA) have modeled the
single-span extraction setting, where a single answer span is a label to
predict for a given question-passage pair. This setting is natural for general
domain EQA as the majority of the questions in the general domain can be
answered with a single span. Following general domain EQA models, current
biomedical EQA (BioEQA) models utilize the single-span extraction setting with
post-processing steps. In this article, we investigate the question
distribution across the general and biomedical domains and discover biomedical
questions are more likely to require list-type answers (multiple answers) than
factoid-type answers (single answer). This necessitates models capable of
producing multiple answers to a question. Based on this preliminary study, we
propose a sequence tagging approach for BioEQA, which is a multi-span
extraction setting. Our approach directly tackles questions with a variable
number of phrases as their answer and can learn to decide the number of answers
for a question from the training data. On the BioASQ 7b and 8b list-type
questions, our approach outperformed the best-performing existing models
without requiring post-processing steps. Source code and resources are freely
available for download at https://github.com/dmis-lab/SeqTagQA
| 2022 |
Computation and Language
|
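The multi-span extraction setting above ultimately comes down to decoding a variable number of answer spans from a tag sequence. Below is a minimal BIO decoder illustrating that step; the tag set and toy tokens are assumptions, and the paper's exact label scheme may differ.

```python
def decode_spans(tokens: list, tags: list) -> list:
    """Turn a BIO tag sequence into the list of tagged answer spans."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                       # a new answer span starts
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:         # continue the open span
            current.append(token)
        else:                                # "O" closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["aspirin", ",", "ibuprofen", "and", "naproxen", "reduce", "fever"]
tags = ["B", "O", "B", "O", "B", "O", "O"]
print(decode_spans(tokens, tags))   # ['aspirin', 'ibuprofen', 'naproxen']
```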
Generating Datasets with Pretrained Language Models
|
To obtain high-quality sentence embeddings from pretrained language models
(PLMs), they must either be augmented with additional pretraining objectives or
finetuned on a large set of labeled text pairs. While the latter approach
typically outperforms the former, it requires great human effort to generate
suitable datasets of sufficient size. In this paper, we show how PLMs can be
leveraged to obtain high-quality sentence embeddings without the need for
labeled data, finetuning or modifications to the pretraining objective: We
utilize the generative abilities of large and high-performing PLMs to generate
entire datasets of labeled text pairs from scratch, which we then use for
finetuning much smaller and more efficient models. Our fully unsupervised
approach outperforms strong baselines on several semantic textual similarity
datasets.
| 2021 |
Computation and Language
|
Reward Optimization for Neural Machine Translation with Learned Metrics
|
Neural machine translation (NMT) models are conventionally trained with
token-level negative log-likelihood (NLL), which does not guarantee that the
generated translations will be optimized for a selected sequence-level
evaluation metric. Multiple approaches are proposed to train NMT with BLEU as
the reward, in order to directly improve the metric. However, it was reported
that the gain in BLEU does not translate to real quality improvement, limiting
the application in industry. Recently, it became clear to the community that
BLEU has a low correlation with human judgment when dealing with
state-of-the-art models. This has led to the emergence of model-based evaluation
metrics. These new metrics are shown to correlate much better with human judgment.
In this paper, we investigate whether it is beneficial to optimize NMT models
with the state-of-the-art model-based metric, BLEURT. We propose a
contrastive-margin loss for fast and stable reward optimization suitable for
large NMT models. In experiments, we perform automatic and human evaluations to
compare models trained with smoothed BLEU and BLEURT to the baseline models.
Results show that the reward optimization with BLEURT is able to increase the
metric scores by a large margin, in contrast to limited gain when training with
smoothed BLEU. The human evaluation shows that models trained with BLEURT
improve adequacy and coverage of translations. Code is available via
https://github.com/naver-ai/MetricMT.
| 2021 |
Computation and Language
|
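As a rough illustration of a contrastive-margin objective over metric rewards, in the spirit of the reward optimization described above, the sketch below asks the model to score the higher-reward candidate above the lower-reward one by at least a margin. The toy scores, the margin value, and the way candidates are obtained are assumptions rather than the paper's exact formulation.

```python
import torch

def contrastive_margin_loss(model_scores: torch.Tensor,
                            rewards: torch.Tensor,
                            margin: float = 1.0) -> torch.Tensor:
    """Hinge loss pushing the model to score the higher-reward candidate
    above the lower-reward one by at least `margin`."""
    better = rewards.argmax()
    worse = rewards.argmin()
    gap = model_scores[better] - model_scores[worse]
    return torch.clamp(margin - gap, min=0.0)

# Toy example: model log-probabilities of two sampled translations and their
# BLEURT-style rewards (here just made-up numbers).
model_scores = torch.tensor([-4.2, -3.1], requires_grad=True)
rewards = torch.tensor([0.71, 0.42])

loss = contrastive_margin_loss(model_scores, rewards)
loss.backward()
print(loss.item(), model_scores.grad)
```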
Hierarchical Learning for Generation with Long Source Sequences
|
One of the challenges for current sequence to sequence (seq2seq) models is
processing long sequences, such as those in summarization and document level
machine translation tasks. These tasks require the model to reason at the token
level as well as the sentence and paragraph level. We design and study a new
Hierarchical Attention Transformer-based architecture (HAT) that outperforms
standard Transformers on several sequence to sequence tasks. Furthermore, our
model achieves state-of-the-art ROUGE scores on several summarization tasks,
including PubMed, arXiv, CNN/DM, SAMSum, and AMI. Our model outperforms a
document-level machine translation baseline on the WMT20 English to German
translation task. We investigate what the hierarchical layers learn by
visualizing the hierarchical encoder-decoder attention. Finally, we study
hierarchical learning on encoder-only pre-training and analyze its performance
on classification tasks.
| 2021 |
Computation and Language
|
Zero-Shot Cross-lingual Semantic Parsing
|
Recent work in cross-lingual semantic parsing has successfully applied
machine translation to localize parsers to new languages. However, these
advances assume access to high-quality machine translation systems and word
alignment tools. We remove these assumptions and study cross-lingual semantic
parsing as a zero-shot problem, without parallel data (i.e., utterance-logical
form pairs) for new languages. We propose a multi-task encoder-decoder model to
transfer parsing knowledge to additional languages using only English-logical
form paired data and in-domain natural language corpora in each new language.
Our model encourages language-agnostic encodings by jointly optimizing for
logical-form generation with auxiliary objectives designed for cross-lingual
latent representation alignment. Our parser performs significantly above
translation-based baselines and, in some cases, competes with the supervised
upper-bound.
| 2022 |
Computation and Language
|
Data-QuestEval: A Referenceless Metric for Data-to-Text Semantic
Evaluation
|
QuestEval is a reference-less metric used in text-to-text tasks that
compares the generated summaries directly to the source text by automatically
asking and answering questions. Its adaptation to Data-to-Text tasks is not
straightforward, as it requires multimodal Question Generation and Answering
systems on the considered tasks, which are seldom available. For this purpose,
we propose a method to build synthetic multimodal corpora that enable training
multimodal components for a data-QuestEval metric. The resulting metric is
reference-less and multimodal; it obtains state-of-the-art correlations with
human judgment on the WebNLG and WikiBio benchmarks. We make data-QuestEval's
code and models available for reproducibility purposes, as part of the QuestEval
project.
| 2021 |
Computation and Language
|
Rethinking Automatic Evaluation in Sentence Simplification
|
Automatic evaluation remains an open research question in Natural Language
Generation. In the context of Sentence Simplification, this is particularly
challenging: the task by nature requires replacing complex words with simpler
ones that share the same meaning. This limits the effectiveness of n-gram
based metrics like BLEU. Going hand in hand with the recent advances in NLG,
new metrics have been proposed, such as BERTScore for Machine Translation. In
summarization, the QuestEval metric proposes to automatically compare two texts
by questioning them.
In this paper, we first propose a simple modification of QuestEval allowing
it to tackle Sentence Simplification. We then extensively evaluate the
correlations w.r.t. human judgement for several metrics including the recent
BERTScore and QuestEval, and show that the latter obtains state-of-the-art
correlations, outperforming standard metrics like BLEU and SARI. More
importantly, we also show that a large part of the correlations are actually
spurious for all the metrics. To investigate this phenomenon further, we
release a new corpus of evaluated simplifications, this time not generated by
systems but instead, written by humans. This allows us to remove the spurious
correlations and draw very different conclusions from the original ones,
resulting in a better understanding of these metrics. In particular, we raise
concerns about the very low correlations of most traditional metrics. Our
results show that the only significant measure of meaning preservation is
our adaptation of QuestEval.
| 2021 |
Computation and Language
|
Retrieval Augmentation Reduces Hallucination in Conversation
|
Despite showing increasingly human-like conversational abilities,
state-of-the-art dialogue models often suffer from factual incorrectness and
hallucination of knowledge (Roller et al., 2020). In this work we explore the
use of neural-retrieval-in-the-loop architectures - recently shown to be
effective in open-domain QA (Lewis et al., 2020b; Izacard and Grave, 2020) -
for knowledge-grounded dialogue, a task that is arguably more challenging as it
requires querying based on complex multi-turn dialogue context and generating
conversationally coherent responses. We study various types of architectures
with multiple components - retrievers, rankers, and encoder-decoders - with the
goal of maximizing knowledgeability while retaining conversational ability. We
demonstrate that our best models obtain state-of-the-art performance on two
knowledge-grounded conversational tasks. The models exhibit open-domain
conversational capabilities, generalize effectively to scenarios not within the
training data, and, as verified by human evaluations, substantially reduce the
well-known problem of knowledge hallucination in state-of-the-art chatbots.
| 2021 |
Computation and Language
|
Toward Deconfounding the Influence of Entity Demographics for Question
Answering Accuracy
|
The goal of question answering (QA) is to answer any question. However, major
QA datasets have skewed distributions over gender, profession, and nationality.
Despite that skew, model accuracy analysis reveals little evidence that
accuracy is lower for people based on gender or nationality; instead, there is
more variation on professions (question topic). But QA's lack of representation
could itself hide evidence of bias, necessitating QA datasets that better
represent global diversity.
| 2021 |
Computation and Language
|
Syntactic Perturbations Reveal Representational Correlates of
Hierarchical Phrase Structure in Pretrained Language Models
|
While vector-based language representations from pretrained language models
have set a new standard for many NLP tasks, there is not yet a complete
accounting of their inner workings. In particular, it is not entirely clear
what aspects of sentence-level syntax are captured by these representations,
nor how (if at all) they are built along the stacked layers of the network. In
this paper, we aim to address such questions with a general class of
interventional, input perturbation-based analyses of representations from
pretrained language models. Importing from computational and cognitive
neuroscience the notion of representational invariance, we perform a series of
probes designed to test the sensitivity of these representations to several
kinds of structure in sentences. Each probe involves swapping words in a
sentence and comparing the representations from perturbed sentences against the
original. We experiment with three different perturbations: (1) random
permutations of n-grams of varying width, to test the scale at which a
representation is sensitive to word position; (2) swapping of two spans which
do or do not form a syntactic phrase, to test sensitivity to global phrase
structure; and (3) swapping of two adjacent words which do or do not break
apart a syntactic phrase, to test sensitivity to local phrase structure.
Results from these probes collectively suggest that Transformers build
sensitivity to larger parts of the sentence along their layers, and that
hierarchical phrase structure plays a role in this process. More broadly, our
results also indicate that structured input perturbations widen the scope of
analyses that can be performed on often-opaque deep learning systems, and can
serve as a complement to existing tools (such as supervised linear probes) for
interpreting complex black-box models.
| 2021 |
Computation and Language
|
SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for
Text Summarization
|
Novel neural architectures, training strategies, and the availability of
large-scale corpora have been the driving force behind recent progress in
abstractive text summarization. However, due to the black-box nature of neural
models, uninformative evaluation metrics, and scarce tooling for model and data
analysis, the true performance and failure modes of summarization models remain
largely unknown. To address this limitation, we introduce SummVis, an
open-source tool for visualizing abstractive summaries that enables
fine-grained analysis of the models, data, and evaluation metrics associated
with text summarization. Through its lexical and semantic visualizations, the
tool offers an easy entry point for in-depth model prediction exploration
across important dimensions such as factual consistency or abstractiveness. The
tool together with several pre-computed model outputs is available at
https://github.com/robustness-gym/summvis.
| 2,021 |
Computation and Language
|
Planning with Learned Entity Prompts for Abstractive Summarization
|
We introduce a simple but flexible mechanism to learn an intermediate plan to
ground the generation of abstractive summaries. Specifically, we prepend (or
prompt) target summaries with entity chains -- ordered sequences of entities
mentioned in the summary. Transformer-based sequence-to-sequence models are
then trained to generate the entity chain and then continue generating the
summary conditioned on the entity chain and the input. We experimented with
both pretraining and finetuning with this content planning objective. When
evaluated on CNN/DailyMail, XSum, SAMSum and BillSum, we demonstrate
empirically that the grounded generation with the planning objective improves
entity specificity and planning in summaries for all datasets, and achieves
state-of-the-art performance on XSum and SAMSum in terms of Rouge. Moreover, we
demonstrate empirically that planning with entity chains provides a mechanism
to control hallucinations in abstractive summaries. By prompting the decoder
with a modified content plan that drops hallucinated entities, we outperform
state-of-the-art approaches for faithfulness when evaluated automatically and
by humans.
| 2,021 |
Computation and Language
|
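A minimal sketch of the entity-chain idea described in the entry above: the target is prefixed with an ordered chain of entities, and hallucination can be curbed by dropping plan entities that never occur in the source. The [ENTITYCHAIN]/[SUMMARY] markers and helper names are illustrative assumptions, not the paper's exact format.

```python
# Sketch of entity-chain prompted targets for abstractive summarization.
# Marker tokens and helpers are illustrative assumptions, not the paper's format.

ENTITY_MARKER = "[ENTITYCHAIN]"
SUMMARY_MARKER = "[SUMMARY]"


def build_target(summary: str, entities: list[str]) -> str:
    """Prepend an ordered entity chain to the summary as a content plan."""
    chain = " | ".join(entities)
    return f"{ENTITY_MARKER} {chain} {SUMMARY_MARKER} {summary}"


def drop_hallucinated(entities: list[str], source: str) -> list[str]:
    """Keep only plan entities that actually appear in the source document,
    one way to steer the decoder away from hallucinated content."""
    return [e for e in entities if e.lower() in source.lower()]


if __name__ == "__main__":
    source = "Apple reported record revenue on Tuesday, CEO Tim Cook said."
    summary = "Tim Cook announced record revenue for Apple."
    entities = ["Tim Cook", "Apple", "Samsung"]  # "Samsung" is hallucinated
    plan = drop_hallucinated(entities, source)
    print(build_target(summary, plan))
    # [ENTITYCHAIN] Tim Cook | Apple [SUMMARY] Tim Cook announced record revenue for Apple.
```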
Adapting Coreference Resolution Models through Active Learning
|
Neural coreference resolution models trained on one dataset may not transfer
to new, low-resource domains. Active learning mitigates this problem by
sampling a small subset of data for annotators to label. While active learning
is well-defined for classification tasks, its application to coreference
resolution is neither well-defined nor fully understood. This paper explores
how to actively label coreference, examining sources of model uncertainty and
document reading costs. We compare uncertainty sampling strategies and their
advantages through thorough error analysis. In both synthetic and human
experiments, labeling spans within the same document is more effective than
annotating spans across documents. The findings contribute to a more realistic
development of coreference resolution models.
| 2,022 |
Computation and Language
|
SINA-BERT: A pre-trained Language Model for Analysis of Medical Texts in
Persian
|
We have released SINA-BERT, a language model based on BERT (Devlin et
al., 2018), to address the lack of a high-quality Persian language model in the
medical domain. SINA-BERT utilizes pre-training on a large-scale corpus of
medical content including formal and informal texts collected from a variety
of online resources in order to improve the performance on health-care related
tasks. We employ SINA-BERT to complete the following representative tasks:
categorization of medical questions, medical sentiment analysis, and medical
question retrieval. For each task, we have developed Persian annotated data
sets for training and evaluation and learnt a representation for the data of
each task, especially complex and long medical questions. With the same
architecture used across tasks, SINA-BERT outperforms BERT-based models
that were previously made available in the Persian language.
| 2,021 |
Computation and Language
|
Sometimes We Want Translationese
|
Rapid progress in Neural Machine Translation (NMT) systems over the last few
years has been driven primarily towards improving translation quality, and as a
secondary focus, improved robustness to input perturbations (e.g. spelling and
grammatical mistakes). While performance and robustness are important
objectives, by over-focusing on these, we risk overlooking other important
properties. In this paper, we draw attention to the fact that for some
applications, faithfulness to the original (input) text is important to
preserve, even if it means introducing unusual language patterns in the
(output) translation. We propose a simple, novel way to quantify whether an NMT
system exhibits robustness and faithfulness, focusing on the case of word-order
perturbations. We explore a suite of functions to perturb the word order of
source sentences without deleting or injecting tokens, and measure the effects
on the target side in terms of both robustness and faithfulness. Across several
experimental conditions, we observe a strong tendency towards robustness rather
than faithfulness. These results allow us to better understand the trade-off
between faithfulness and robustness in NMT, and open up the possibility of
developing systems where users have more autonomy and control in selecting
which property is best suited for their use case.
| 2,021 |
Computation and Language
|
Time-Stamped Language Model: Teaching Language Models to Understand the
Flow of Events
|
Tracking entities throughout a procedure described in a text is challenging
due to the dynamic nature of the world described in the process. Firstly, we
propose to formulate this task as a question answering problem. This enables us
to use pre-trained transformer-based language models on other QA benchmarks by
adapting them to procedural text understanding. Secondly, since
transformer-based language models cannot encode the flow of events by
themselves, we propose a Time-Stamped Language Model~(TSLM) to encode
event information in the LM architecture by introducing a timestamp encoding.
Evaluated on the Propara dataset, our model improves on the published
state-of-the-art results with a $3.1\%$ increase in F1 score. Moreover, our
model yields better results on the location prediction task on the NPN-Cooking
dataset. This result indicates that our approach is effective for procedural
text understanding in general.
| 2,021 |
Computation and Language
|
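The entry above introduces a timestamp encoding but does not spell it out; the sketch below shows one plausible way to realize the idea, tagging each token's sentence as past, current, or future relative to the queried step and adding a learned tag embedding. It is an assumption-laden illustration, not the TSLM implementation.

```python
# Sketch of a timestamp encoding for procedural text: each token is tagged as
# past (0), current (1), or future (2) relative to the step being queried, and
# the tag embedding is added to the token embeddings before the transformer.
import torch
import torch.nn as nn


class TimestampEncoder(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.tag_embedding = nn.Embedding(3, hidden_size)  # past / current / future

    def forward(self, token_embeddings: torch.Tensor, step_ids: torch.Tensor,
                query_step: int) -> torch.Tensor:
        # step_ids: (batch, seq_len) step index of the sentence each token belongs to
        tags = torch.full_like(step_ids, 2)   # future by default
        tags[step_ids < query_step] = 0       # past
        tags[step_ids == query_step] = 1      # current
        return token_embeddings + self.tag_embedding(tags)


if __name__ == "__main__":
    enc = TimestampEncoder(hidden_size=16)
    emb = torch.randn(1, 6, 16)
    steps = torch.tensor([[0, 0, 1, 1, 2, 2]])
    print(enc(emb, steps, query_step=1).shape)  # torch.Size([1, 6, 16])
```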
The Effect of Efficient Messaging and Input Variability on Neural-Agent
Iterated Language Learning
|
Natural languages display a trade-off among different strategies to convey
syntactic structure, such as word order or inflection. This trade-off, however,
has not appeared in recent simulations of iterated language learning with
neural network agents (Chaabouni et al., 2019b). We re-evaluate this result in
light of three factors that play an important role in comparable experiments
from the Language Evolution field: (i) speaker bias towards efficient
messaging, (ii) non-systematic input languages, and (iii) a learning bottleneck.
Our simulations show that neural agents mainly strive to maintain the utterance
type distribution observed during learning, instead of developing a more
efficient or systematic language.
| 2,021 |
Computation and Language
|
Robust Optimization for Multilingual Translation with Imbalanced Data
|
Multilingual models are parameter-efficient and especially effective in
improving low-resource languages by leveraging crosslingual transfer. Despite
recent advances in massive multilingual translation with ever-growing models
and data, how to effectively train multilingual models has not been well
understood. In this paper, we show that a common situation in multilingual
training, data imbalance among languages, poses optimization tension between
high resource and low resource languages where the found multilingual solution
is often sub-optimal for low-resource languages. We show that the common
training method of upsampling low-resource languages cannot robustly optimize
the population loss, risking either underfitting high-resource languages or
overfitting low-resource ones. Drawing on recent findings on the geometry of
the loss landscape and
its effect on generalization, we propose a principled optimization algorithm,
Curvature Aware Task Scaling (CATS), which adaptively rescales gradients from
different tasks with a meta objective of guiding multilingual training to
low-curvature neighborhoods with uniformly low loss for all languages. We ran
experiments on common benchmarks (TED, WMT and OPUS-100) with varying degrees
of data imbalance. CATS effectively improved multilingual optimization and as a
result demonstrated consistent gains on low resources ($+0.8$ to $+2.2$ BLEU)
without hurting high resources. In addition, CATS is robust to
overparameterization and large batch size training, making it a promising
training method for massive multilingual models that truly improve low resource
languages.
| 2,021 |
Computation and Language
|
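To make the mechanics of per-task gradient rescaling concrete, here is a minimal PyTorch sketch in which each language's gradient is scaled before aggregation. The scaling weights are fixed placeholders here; CATS instead learns them through a curvature-aware meta objective, which this sketch does not reproduce.

```python
# Minimal illustration of per-task (per-language) gradient rescaling in PyTorch.
# The per-task weights below are fixed hyperparameters for illustration only.
import torch
import torch.nn as nn

model = nn.Linear(8, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# one toy batch per language, plus an illustrative scaling weight each
batches = {
    "hi_resource": (torch.randn(32, 8), torch.randint(0, 4, (32,)), 1.0),
    "lo_resource": (torch.randn(4, 8), torch.randint(0, 4, (4,)), 2.0),
}

opt.zero_grad()
agg = [torch.zeros_like(p) for p in model.parameters()]
for name, (x, y, weight) in batches.items():
    grads = torch.autograd.grad(loss_fn(model(x), y), model.parameters())
    for a, g in zip(agg, grads):
        a += weight * g          # rescale this task's gradient before aggregation
for p, a in zip(model.parameters(), agg):
    p.grad = a
opt.step()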
Bilingual alignment transfers to multilingual alignment for unsupervised
parallel text mining
|
This work presents methods for learning cross-lingual sentence
representations using paired or unpaired bilingual texts. We hypothesize that
the cross-lingual alignment strategy is transferable, and therefore a model
trained to align only two languages can encode representations that are more
aligned across many languages. We thus introduce dual-pivot transfer: training
on one language pair and evaluating on other pairs. To test this hypothesis,
we design
unsupervised models trained on unpaired sentences and single-pair supervised
models trained on bitexts, both based on the unsupervised language model XLM-R
with its parameters frozen. The experiments evaluate the models as universal
sentence encoders on the task of unsupervised bitext mining on two datasets,
where the unsupervised model reaches the state of the art of unsupervised
retrieval, and the alternative single-pair supervised model approaches the
performance of multilingually supervised models. The results suggest that
bilingual training techniques as proposed can be applied to get sentence
representations with multilingual alignment.
| 2,022 |
Computation and Language
|
ExplaGraphs: An Explanation Graph Generation Task for Structured
Commonsense Reasoning
|
Recent commonsense-reasoning tasks are typically discriminative in nature,
where a model answers a multiple-choice question for a certain context.
Discriminative tasks are limiting because they fail to adequately evaluate the
model's ability to reason and explain predictions with underlying commonsense
knowledge. They also allow such models to use reasoning shortcuts and not be
"right for the right reasons". In this work, we present ExplaGraphs, a new
generative and structured commonsense-reasoning task (and an associated
dataset) of explanation graph generation for stance prediction. Specifically,
given a belief and an argument, a model has to predict if the argument supports
or counters the belief and also generate a commonsense-augmented graph that
serves as a non-trivial, complete, and unambiguous explanation for the predicted
stance. We collect explanation graphs through a novel Create-Verify-And-Refine
graph collection framework that improves the graph quality (up to 90%) via
multiple rounds of verification and refinement. A significant 79% of our graphs
contain external commonsense nodes with diverse structures and reasoning
depths. Next, we propose a multi-level evaluation framework, consisting of
automatic metrics and human evaluation, that checks for the structural and
semantic correctness of the generated graphs and their degree of match with
ground-truth graphs. Finally, we present several structured,
commonsense-augmented, and text generation models as strong starting points for
this explanation graph generation task, and observe that there is a large gap
with human performance, thereby encouraging future work for this new
challenging task. ExplaGraphs will be publicly available at
https://explagraphs.github.io.
| 2,021 |
Computation and Language
|
Are Multilingual BERT models robust? A Case Study on Adversarial Attacks
for Multilingual Question Answering
|
Recent approaches have exploited weaknesses in monolingual question answering
(QA) models by adding adversarial statements to the passage. These attacks
caused a reduction in state-of-the-art performance by almost 50%. In this
paper, we are the first to explore and successfully attack a multilingual QA
(MLQA) system pre-trained on multilingual BERT using several attack strategies
for the adversarial statement, reducing performance by as much as 85%. We show
that the model gives priority to English and the language of the question
regardless of the other languages in the QA pair. Further, we also show that
adding our attack strategies during training helps alleviate the attacks.
| 2,021 |
Computation and Language
|
KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization
for Relation Extraction
|
Recently, prompt-tuning has achieved promising results for specific few-shot
classification tasks. The core idea of prompt-tuning is to insert text pieces
(i.e., templates) into the input and transform a classification task into a
masked language modeling problem. However, for relation extraction, determining
an appropriate prompt template requires domain expertise, and it is cumbersome
and time-consuming to obtain a suitable label word. Furthermore, there exists
abundant semantic and prior knowledge among the relation labels that cannot be
ignored. To this end, we focus on incorporating knowledge among relation labels
into prompt-tuning for relation extraction and propose a Knowledge-aware
Prompt-tuning approach with synergistic optimization (KnowPrompt).
Specifically, we inject latent knowledge contained in relation labels into
prompt construction with learnable virtual type words and answer words. Then,
we synergistically optimize their representation with structured constraints.
Extensive experimental results on five datasets with standard and low-resource
settings demonstrate the effectiveness of our approach. Our code and datasets
are available at https://github.com/zjunlp/KnowPrompt for reproducibility.
| 2,023 |
Computation and Language
|
Improving Gender Translation Accuracy with Filtered Self-Training
|
Targeted evaluations have found that machine translation systems often output
incorrect gender, even when the gender is clear from context. Furthermore,
these incorrectly gendered translations have the potential to reflect or
amplify social biases. We propose a gender-filtered self-training technique to
improve gender translation accuracy on unambiguously gendered inputs. This
approach uses a source monolingual corpus and an initial model to generate
gender-specific pseudo-parallel corpora which are then added to the training
data. We filter the gender-specific corpora on the source and target sides to
ensure that sentence pairs contain and correctly translate the specified
gender. We evaluate our approach on translation from English into five
languages, finding that our models improve gender translation accuracy without
any cost to generic translation quality. In addition, we show the viability of
our approach on several settings, including re-training from scratch,
fine-tuning, controlling the balance of the training data, forward translation,
and back-translation.
| 2,021 |
Computation and Language
|
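A small sketch of the filtering step described above: a pseudo-parallel pair is kept only when the source is unambiguously gendered and the target carries the matching gender. The tiny word lists are illustrative stand-ins for proper gender lexicons, and the helper names are invented for this example.

```python
# Sketch of gender-consistency filtering for pseudo-parallel pairs.
# The word lists are tiny illustrative stand-ins for real gender lexicons.

SOURCE_GENDER_WORDS = {
    "feminine": {"she", "her", "hers"},
    "masculine": {"he", "him", "his"},
}
# e.g. for English->Spanish, illustrative target-side cues only
TARGET_GENDER_WORDS = {
    "feminine": {"ella", "la", "doctora"},
    "masculine": {"él", "el", "doctor"},
}


def detect_gender(tokens, lexicon):
    hits = {g for g, words in lexicon.items() if words & set(tokens)}
    return hits.pop() if len(hits) == 1 else None  # None if absent or ambiguous


def keep_pair(source: str, target: str) -> bool:
    src_gender = detect_gender(source.lower().split(), SOURCE_GENDER_WORDS)
    tgt_gender = detect_gender(target.lower().split(), TARGET_GENDER_WORDS)
    return src_gender is not None and src_gender == tgt_gender


if __name__ == "__main__":
    print(keep_pair("She is a doctor.", "Ella es doctora."))   # True
    print(keep_pair("She is a doctor.", "Él es doctor."))      # False
```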
Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling
|
Recent models have shown that incorporating syntactic knowledge into the
semantic role labelling (SRL) task leads to a significant improvement. In this
paper, we propose Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) model,
which encodes the syntactic structure using a novel way to input graph
relations as embeddings, directly into the self-attention mechanism of
Transformer. This approach adds a soft bias towards attention patterns that
follow the syntactic structure but also allows the model to use this
information to learn alternative patterns. We evaluate our model on both
span-based and dependency-based SRL datasets, and outperform previous
alternative methods in both in-domain and out-of-domain settings on the CoNLL
2005 and CoNLL 2009 datasets.
| 2,023 |
Computation and Language
|
How to Train BERT with an Academic Budget
|
While large language models a la BERT are used ubiquitously in NLP,
pretraining them is considered a luxury that only a few well-funded industry
labs can afford. How can one train such models with a more modest budget? We
present a recipe for pretraining a masked language model in 24 hours using a
single low-end deep learning server. We demonstrate that through a combination
of software optimizations, design choices, and hyperparameter tuning, it is
possible to produce models that are competitive with BERT-base on GLUE tasks at
a fraction of the original pretraining cost.
| 2,021 |
Computation and Language
|
Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?
|
Large Transformers pretrained over clinical notes from Electronic Health
Records (EHR) have afforded substantial gains in performance on predictive
clinical tasks. The cost of training such models (and the necessity of data
access to do so) coupled with their utility motivates parameter sharing, i.e.,
the release of pretrained models such as ClinicalBERT. While most efforts have
used deidentified EHR, many researchers have access to large sets of sensitive,
non-deidentified EHR with which they might train a BERT model (or similar).
Would it be safe to release the weights of such a model if they did? In this
work, we design a battery of approaches intended to recover Personal Health
Information (PHI) from a trained BERT. Specifically, we attempt to recover
patient names and conditions with which they are associated. We find that
simple probing methods are not able to meaningfully extract sensitive
information from BERT trained over the MIMIC-III corpus of EHR. However, more
sophisticated "attacks" may succeed in doing so: To facilitate such research,
we make our experimental setup and baseline probing models available at
https://github.com/elehman16/exposing_patient_data_release
| 2,021 |
Computation and Language
|
Proteno: Text Normalization with Limited Data for Fast Deployment in
Text to Speech Systems
|
Developing Text Normalization (TN) systems for Text-to-Speech (TTS) on new
languages is hard. We propose a novel architecture to facilitate it for
multiple languages while using less than 3% of the size of the data used by
state-of-the-art systems on English. We treat TN as a sequence
classification problem and propose a granular tokenization mechanism that
enables the system to learn the majority of the classes and their normalizations
from the training data itself. This is further combined with minimal precoded
linguistic knowledge for other classes. We publish the first results on TN for
TTS in Spanish and Tamil and also demonstrate that the performance of the
approach is comparable with the previous work done on English. All annotated
datasets used for experimentation will be released at
https://github.com/amazon-research/proteno.
| 2,021 |
Computation and Language
|
Sublanguage: A Serious Issue Affects Pretrained Models in Legal Domain
|
Legal English is a sublanguage that is important for everyone but not for
everyone to understand. Pretrained models have become best practices among
current deep learning approaches for different problems. It would be a waste or
even a danger if these models were applied in practice without knowledge of the
sublanguage of the law. In this paper, we raise the issue and propose a trivial
solution by introducing BERTLaw, a legal-sublanguage pretrained model. The
paper's experiments demonstrate the superior effectiveness of the method
compared to the baseline pretrained model.
| 2,021 |
Computation and Language
|
Detect and Classify -- Joint Span Detection and Classification for
Health Outcomes
|
A health outcome is a measurement or an observation used to capture and
assess the effect of a treatment. Automatic detection of health outcomes from
text would undoubtedly speed up access to evidence necessary in healthcare
decision making. Prior work on outcome detection has modelled this task as
either (a) a sequence labelling task, where the goal is to detect which text
spans describe health outcomes, or (b) a classification task, where the goal is
to classify a text into a pre-defined set of categories depending on an outcome
that is mentioned somewhere in that text. However, this decoupling of span
detection and classification is problematic from a modelling perspective and
ignores global structural correspondences between sentence-level and word-level
information present in a given text. To address this, we propose a method that
uses both word-level and sentence-level information to simultaneously perform
outcome span detection and outcome type classification. In addition to
injecting contextual information to hidden vectors, we use label attention to
appropriately weight both word and sentence level information. Experimental
results on several benchmark datasets for health outcome detection show that
our proposed method consistently outperforms decoupled methods, reporting
competitive results.
| 2,021 |
Computation and Language
|
Towards Robust Neural Retrieval Models with Synthetic Pre-Training
|
Recent work has shown that commonly available machine reading comprehension
(MRC) datasets can be used to train high-performance neural information
retrieval (IR) systems. However, the evaluation of neural IR has so far been
limited to standard supervised learning settings, where they have outperformed
traditional term matching baselines. We conduct in-domain and out-of-domain
evaluations of neural IR, and seek to improve its robustness across different
scenarios, including zero-shot settings. We show that synthetic training
examples generated using a sequence-to-sequence generator can be effective
towards this goal: in our experiments, pre-training with synthetic examples
improves retrieval performance in both in-domain and out-of-domain evaluation
on five different test sets.
| 2,021 |
Computation and Language
|
Detecting Polarized Topics Using Partisanship-aware Contextualized Topic
Embeddings
|
Growing polarization of the news media has been blamed for fanning
disagreement, controversy and even violence. Early identification of polarized
topics is thus an urgent matter that can help mitigate conflict. However,
accurate measurement of topic-wise polarization is still an open research
challenge. To address this gap, we propose Partisanship-aware Contextualized
Topic Embeddings (PaCTE), a method to automatically detect polarized topics
from partisan news sources. Specifically, utilizing a language model that has
been finetuned on recognizing partisanship of the news articles, we represent
the ideology of a news corpus on a topic by a corpus-contextualized topic
embedding and measure the polarization using cosine distance. We apply our
method to a dataset of news articles about the COVID-19 pandemic. Extensive
experiments on different news sources and topics demonstrate the efficacy of
our method in capturing topical polarization, as indicated by its effectiveness
in retrieving the most polarized topics.
| 2,021 |
Computation and Language
|
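The sketch below illustrates the measurement idea from the entry above: average document embeddings per partisan corpus for a topic and report the cosine distance between the two corpus-level vectors. The `embed` function is a random placeholder standing in for a partisanship-finetuned language model.

```python
# Sketch of topical polarization as cosine distance between corpus-level
# topic embeddings of two partisan sources.
import numpy as np


def embed(article: str) -> np.ndarray:
    # placeholder: a real system would use a partisanship-finetuned LM encoder
    rng = np.random.default_rng(abs(hash(article)) % (2**32))
    return rng.normal(size=128)


def corpus_topic_embedding(articles_on_topic: list[str]) -> np.ndarray:
    vecs = np.stack([embed(a) for a in articles_on_topic])
    return vecs.mean(axis=0)


def polarization(left_articles: list[str], right_articles: list[str]) -> float:
    a = corpus_topic_embedding(left_articles)
    b = corpus_topic_embedding(right_articles)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cos  # cosine distance: higher means more polarized


if __name__ == "__main__":
    left = ["Mask mandates protect the public.", "Vaccine rollout praised."]
    right = ["Mask mandates called overreach.", "Lockdowns criticized by governors."]
    print(round(polarization(left, right), 3))
```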
A Method to Reveal Speaker Identity in Distributed ASR Training, and How
to Counter It
|
End-to-end Automatic Speech Recognition (ASR) models are commonly trained
over spoken utterances using optimization methods like Stochastic Gradient
Descent (SGD). In distributed settings like Federated Learning, model training
requires transmission of gradients over a network. In this work, we design the
first method for revealing the identity of the speaker of a training utterance
with access only to a gradient. We propose Hessian-Free Gradients Matching, an
input reconstruction technique that operates without second derivatives of the
loss function (required in prior works), which can be expensive to compute. We
show the effectiveness of our method using the DeepSpeech model architecture,
demonstrating that it is possible to reveal the speaker's identity with 34%
top-1 accuracy (51% top-5 accuracy) on the LibriSpeech dataset. Further, we
study the effect of two well-known techniques, Differentially Private SGD and
Dropout, on the success of our method. We show that a dropout rate of 0.2 can
reduce the speaker identity accuracy to 0% top-1 (0.5% top-5).
| 2,021 |
Computation and Language
|
A Masked Segmental Language Model for Unsupervised Natural Language
Segmentation
|
Segmentation remains an important preprocessing step both in languages where
"words" or other important syntactic/semantic units (like morphemes) are not
clearly delineated by white space, as well as when dealing with continuous
speech data, where there is often no meaningful pause between words.
Near-perfect supervised methods have been developed for use in resource-rich
languages such as Chinese, but many of the world's languages are both
morphologically complex and have no large dataset of "gold" segmentations into
meaningful units. To solve this problem, we propose a new type of Segmental
Language Model (Sun and Deng, 2018; Kawakami et al., 2019; Wang et al., 2021)
for use in both unsupervised and lightly supervised segmentation tasks. We
introduce a Masked Segmental Language Model (MSLM) built on a span-masking
transformer architecture, harnessing the power of a bi-directional masked
modeling context and attention. In a series of experiments, our model
consistently outperforms Recurrent SLMs on Chinese (PKU Corpus) in segmentation
quality, and performs similarly to the Recurrent model on English (PTB). We
conclude by discussing the different challenges posed in segmenting
phonemic-type writing systems.
| 2,021 |
Computation and Language
|
Human-like informative conversations: Better acknowledgements using
conditional mutual information
|
This work aims to build a dialogue agent that can weave new factual content
into conversations as naturally as humans. We draw insights from linguistic
principles of conversational analysis and annotate human-human conversations
from the Switchboard Dialog Act Corpus to examine human strategies for
acknowledgement, transition, detail selection and presentation. When current
chatbots (explicitly provided with new factual content) introduce facts into a
conversation, their generated responses do not acknowledge the prior turns.
This is because models trained with two contexts - new factual content and
conversational history - generate responses that are non-specific w.r.t. one of
the contexts, typically the conversational history. We show that specificity
w.r.t. conversational history is better captured by Pointwise Conditional
Mutual Information ($\text{pcmi}_h$) than by the established use of Pointwise
Mutual Information ($\text{pmi}$). Our proposed method, Fused-PCMI, trades off
$\text{pmi}$ for $\text{pcmi}_h$ and is preferred by humans for overall quality
over the Max-PMI baseline 60% of the time. Human evaluators also judge
responses with higher $\text{pcmi}_h$ better at acknowledgement 74% of the
time. The results demonstrate that systems mimicking human conversational
traits (in this case acknowledgement) improve overall quality and more broadly
illustrate the utility of linguistic principles in improving dialogue agents.
| 2,021 |
Computation and Language
|
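Below is a hedged sketch of the two quantities contrasted in the entry above, written against a generic `log_prob(response, context)` scorer; the exact conditioning format and the paper's estimation details are not reproduced.

```python
# Sketch contrasting pointwise mutual information with pointwise conditional
# mutual information given the conversational history. `log_prob(response, context)`
# stands in for log p(response | context) from a conditional language model.
from typing import Callable

LogProb = Callable[[str, str], float]


def pmi(response: str, history: str, log_prob: LogProb) -> float:
    """pmi(y; h) = log p(y | h) - log p(y)."""
    return log_prob(response, history) - log_prob(response, "")


def pcmi_h(response: str, history: str, fact: str, log_prob: LogProb) -> float:
    """pcmi_h(y; h | f) = log p(y | h, f) - log p(y | f):
    how much the history still informs the response once the fact is given."""
    return log_prob(response, fact + " " + history) - log_prob(response, fact)


if __name__ == "__main__":
    # toy stand-in scorer: favours responses that overlap their context
    def toy_log_prob(response: str, context: str) -> float:
        overlap = len(set(response.lower().split()) & set(context.lower().split()))
        return -len(response.split()) + overlap

    history = "I just got back from hiking in Yosemite."
    fact = "Yosemite was established as a national park in 1890."
    reply = "Nice, hiking there must be great, and it became a park back in 1890."
    print(pmi(reply, history, toy_log_prob))
    print(pcmi_h(reply, history, fact, toy_log_prob))
```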
Tracing Topic Transitions with Temporal Graph Clusters
|
Twitter serves as a data source for many Natural Language Processing (NLP)
tasks. It can be challenging to identify topics on Twitter due to its
continuously updating data stream. In this paper, we present an unsupervised
graph-based
framework to identify the evolution of sub-topics within two weeks of
real-world Twitter data. We first employ a Markov Clustering Algorithm (MCL)
with a node removal method to identify optimal graph clusters from temporal
Graph-of-Words (GoW). Subsequently, we model the clustering transitions between
the temporal graphs to identify the topic evolution. Finally, the transition
flows generated from both the computational approach and human annotations are
compared to ensure the validity of our framework.
| 2,021 |
Computation and Language
|
Cross-lingual Entity Alignment with Adversarial Kernel Embedding and
Adversarial Knowledge Translation
|
Cross-lingual entity alignment, which aims to precisely connect the same
entities in different monolingual knowledge bases (KBs) together, often suffers
challenges from feature inconsistency to sequence context unawareness. This
paper presents a dual adversarial learning framework for cross-lingual entity
alignment, DAEA, with two original contributions. First, in order to address
the structural and attribute feature inconsistency between entities in two
knowledge graphs (KGs), an adversarial kernel embedding technique is proposed
to extract graph-invariant information in an unsupervised manner, and project
two KGs into the common embedding space. Second, in order to further improve
the success rate of entity alignment, we propose to produce multiple random
walks through each entity to be aligned and mask these entities in random
walks. With the guidance of known aligned entities in the context of multiple
random walks, an adversarial knowledge translation model is developed to fill
and translate masked entities in pairwise random walks from two KGs. Extensive
experiments performed on real-world datasets show that DAEA can well solve the
feature inconsistency and sequence context unawareness issues and significantly
outperforms thirteen state-of-the-art entity alignment methods.
| 2,021 |
Computation and Language
|
Investigating Failures of Automatic Translation in the Case of
Unambiguous Gender
|
Transformer based models are the modern workhorses for neural machine
translation (NMT), reaching state of the art across several benchmarks. Despite
their impressive accuracy, we observe a systemic and rudimentary class of
errors made by transformer based models with regard to translating from a
language that doesn't mark gender on nouns into others that do. We find that
even when the surrounding context provides unambiguous evidence of the
appropriate grammatical gender marking, no transformer based model we tested
was able to accurately gender occupation nouns systematically. We release an
evaluation scheme and dataset for measuring the ability of transformer based
NMT models to translate gender morphology correctly in unambiguous contexts
across syntactically diverse sentences. Our dataset translates from an English
source into 20 languages from several different language families. With the
availability of this dataset, our hope is that the NMT community can iterate on
solutions for this class of especially egregious errors.
| 2,021 |
Computation and Language
|
Are Classes Clusters?
|
Sentence embedding models aim to provide general purpose embeddings for
sentences. Most of the models studied in this paper claim to perform well on
STS tasks - but they do not report on their suitability for clustering. This
paper looks at four recent sentence embedding models (Universal Sentence
Encoder (Cer et al., 2018), Sentence-BERT (Reimers and Gurevych, 2019), LASER
(Artetxe and Schwenk, 2019), and DeCLUTR (Giorgi et al., 2020)). It gives a
brief overview of the ideas behind their implementations. It then investigates
how well topic classes in two text classification datasets (Amazon Reviews (Ni
et al., 2019) and News Category Dataset (Misra, 2018)) map to clusters in their
corresponding sentence embedding space. While the performance of the resulting
classification model is far from perfect, it is better than random. This is
interesting because the classification model has been constructed in an
unsupervised way. The topic classes in these real-life topic classification
datasets can be partly reconstructed by clustering the corresponding sentence
embeddings.
| 2,021 |
Computation and Language
|
Multivalent Entailment Graphs for Question Answering
|
Drawing inferences between open-domain natural language predicates is a
necessity for true language understanding. There has been much progress in
unsupervised learning of entailment graphs for this purpose. We make three
contributions: (1) we reinterpret the Distributional Inclusion Hypothesis to
model entailment between predicates of different valencies, like DEFEAT(Biden,
Trump) entails WIN(Biden); (2) we actualize this theory by learning
unsupervised Multivalent Entailment Graphs of open-domain predicates; and (3)
we demonstrate the capabilities of these graphs on a novel question answering
task. We show that directional entailment is more helpful for inference than
bidirectional similarity on questions of fine-grained semantics. We also show
that drawing on evidence across valencies answers more questions than using
only same-valency evidence.
| 2,021 |
Computation and Language
|
Comparison of Grammatical Error Correction Using Back-Translation Models
|
Grammatical error correction (GEC) suffers from a lack of sufficient parallel
data. Therefore, GEC studies have developed various methods to generate pseudo
data, which comprise pairs of grammatical and artificially produced
ungrammatical sentences. Currently, a mainstream approach to generate pseudo
data is back-translation (BT). Most previous GEC studies using BT have employed
the same architecture for both GEC and BT models. However, GEC models have
different correction tendencies depending on their architectures. Thus, in this
study, we compare the correction tendencies of the GEC models trained on pseudo
data generated by different BT models, namely, Transformer, CNN, and LSTM. The
results confirm that the correction tendencies for each error type are
different for every BT model. Additionally, we examine the correction
tendencies when using a combination of pseudo data generated by different BT
models. As a result, we find that the combination of different BT models
improves or interpolates the F_0.5 scores of each error type compared with that
of single BT models with different seeds.
| 2,021 |
Computation and Language
|
Matching-oriented Product Quantization For Ad-hoc Retrieval
|
Product quantization (PQ) is a widely used technique for ad-hoc retrieval.
Recent studies propose supervised PQ, where the embedding and quantization
models can be jointly trained with supervised learning. However, there is a
lack of an appropriate formulation of the joint training objective; thus, the
improvements over previous non-supervised baselines are limited in reality. In
this work, we propose the Matching-oriented Product Quantization (MoPQ), where
a novel objective Multinoulli Contrastive Loss (MCL) is formulated. With the
minimization of MCL, we are able to maximize the matching probability of query
and ground-truth key, which contributes to the optimal retrieval accuracy.
Given that the exact computation of MCL is intractable due to the demand of
vast contrastive samples, we further propose the Differentiable Cross-device
Sampling (DCS), which significantly augments the contrastive samples for
precise approximation of MCL. We conduct extensive experimental studies on four
real-world datasets, whose results verify the effectiveness of MoPQ. The code
is available at https://github.com/microsoft/MoPQ.
| 2,021 |
Computation and Language
|
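As a rough illustration of the matching objective above, the sketch below scores every query against every key in the batch and applies a cross-entropy over the ground-truth (diagonal) matches; it is a plain in-batch contrastive loss that omits product quantization and the cross-device sampling used for MCL.

```python
# In-batch approximation of a matching-oriented contrastive objective:
# each query should assign maximal probability to its own key among all
# keys in the batch. Quantization and cross-device sampling are omitted.
import torch
import torch.nn.functional as F


def matching_loss(queries: torch.Tensor, keys: torch.Tensor, temperature: float = 0.05):
    # queries, keys: (batch, dim); row i of keys is the ground-truth key of query i
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature                   # similarity of every query to every key
    labels = torch.arange(q.size(0), device=q.device)  # the diagonal holds the positives
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    print(matching_loss(torch.randn(8, 64), torch.randn(8, 64)).item())
```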
Segmenting Subtitles for Correcting ASR Segmentation Errors
|
Typical ASR systems segment the input audio into utterances using purely
acoustic information, which may not resemble the sentence-like units that are
expected by conventional machine translation (MT) systems for Spoken Language
Translation. In this work, we propose a model for correcting the acoustic
segmentation of ASR models for low-resource languages to improve performance on
downstream tasks. We propose the use of subtitles as a proxy dataset for
correcting ASR acoustic segmentation, creating synthetic acoustic utterances by
modeling common error modes. We train a neural tagging model for correcting ASR
acoustic segmentation and show that it improves downstream performance on MT
and audio-document cross-language information retrieval (CLIR).
| 2,021 |
Computation and Language
|
Translational NLP: A New Paradigm and General Principles for Natural
Language Processing Research
|
Natural language processing (NLP) research combines the study of universal
principles, through basic science, with applied science targeting specific use
cases and settings. However, the process of exchange between basic NLP and
applications is often assumed to emerge naturally, resulting in many
innovations going unapplied and many important questions left unstudied. We
describe a new paradigm of Translational NLP, which aims to structure and
facilitate the processes by which basic and applied NLP research inform one
another. Translational NLP thus presents a third research paradigm, focused on
understanding the challenges posed by application needs and how these
challenges can drive innovation in basic science and technology design. We show
that many significant advances in NLP research have emerged from the
intersection of basic principles with application needs, and present a
conceptual framework outlining the stakeholders and key questions in
translational research. Our framework provides a roadmap for developing
Translational NLP as a dedicated research area, and identifies general
translational principles to facilitate exchange between basic and applied
research.
| 2,021 |
Computation and Language
|
Probing Across Time: What Does RoBERTa Know and When?
|
Models of language trained on very large corpora have been demonstrated to be
useful for NLP. As fixed artifacts, they have become the object of intense
study, with many researchers "probing" the extent to which they acquire and
readily demonstrate linguistic abstractions, factual and commonsense knowledge,
and reasoning abilities. Building on this line of work, we consider a
new question: for types of knowledge a language model learns, when during
(pre)training are they acquired? We plot probing performance across iterations,
using RoBERTa as a case study. Among our findings: linguistic knowledge is
acquired fast, stably, and robustly across domains. Facts and commonsense are
slower and more domain-sensitive. Reasoning abilities are, in general, not
stably acquired. As new datasets, pretraining protocols, and probes emerge, we
believe that probing-across-time analyses can help researchers understand the
complex, intermingled learning that these models undergo and guide us toward
more efficient approaches that accomplish necessary learning faster.
| 2,021 |
Computation and Language
|
Generating Bug-Fixes Using Pretrained Transformers
|
Detecting and fixing bugs are two of the most important yet frustrating parts
of the software development cycle. Existing bug detection tools are based
mainly on static analyzers, which rely on mathematical logic and symbolic
reasoning about the program execution to detect common types of bugs. Fixing
bugs is typically left to the developer. In this work we introduce
DeepDebug: a data-driven program repair approach which learns to detect and fix
bugs in Java methods mined from real-world GitHub repositories. We frame
bug-patching as a sequence-to-sequence learning task consisting of two steps:
(i) denoising pretraining, and (ii) supervised finetuning on the target
translation task. We show that pretraining on source code programs improves the
number of patches found by 33% as compared to supervised training from scratch,
while domain-adaptive pretraining from natural language to code further
improves the accuracy by another 32%. We refine the standard accuracy
evaluation metric into non-deletion and deletion-only fixes, and show that our
best model generates 75% more non-deletion fixes than the previous state of the
art. In contrast to prior work, we attain our best results when generating raw
code, as opposed to working with abstracted code that tends to only benefit
smaller capacity models. Finally, we observe a subtle improvement from adding
syntax embeddings along with the standard positional embeddings, as well as
with adding an auxiliary task to predict each token's syntactic class. Despite
focusing on Java, our approach is language agnostic, requiring only a
general-purpose parser such as tree-sitter.
| 2,021 |
Computation and Language
|
MetaXL: Meta Representation Transformation for Low-resource
Cross-lingual Learning
|
The combination of multilingual pre-trained representations and cross-lingual
transfer learning is one of the most effective methods for building functional
NLP systems for low-resource languages. However, for extremely low-resource
languages without large-scale monolingual corpora for pre-training or
sufficient annotated data for fine-tuning, transfer learning remains an
under-studied and challenging task. Moreover, recent work shows that
multilingual representations are surprisingly disjoint across languages,
bringing additional challenges for transfer onto extremely low-resource
languages. In this paper, we propose MetaXL, a meta-learning based framework
that learns to transform representations judiciously from auxiliary languages
to a target one and brings their representation spaces closer for effective
transfer. Extensive experiments on real-world low-resource languages - without
access to large-scale monolingual corpora or large amounts of labeled data -
for tasks like cross-lingual sentiment analysis and named entity recognition
show the effectiveness of our approach. Code for MetaXL is publicly available
at github.com/microsoft/MetaXL.
| 2,021 |
Computation and Language
|
An Empirical Study of Extrapolation in Text Generation with Scalar
Control
|
We conduct an empirical evaluation of extrapolation performance when
conditioning on scalar control inputs like desired output length, desired edit
from an input sentence, and desired sentiment across three text generation
tasks. Specifically, we examine a zero-shot setting where models are asked to
generalize to ranges of control values not seen during training. We focus on
evaluating popular embedding methods for scalar inputs, including both
learnable and sinusoidal embeddings, as well as simpler approaches.
Surprisingly, our findings indicate that the simplest strategy of using scalar
inputs directly, without further encoding, most reliably allows for successful
extrapolation.
| 2,021 |
Computation and Language
|
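A small sketch of the three families of scalar-control encodings compared above: a learnable bucket embedding, a sinusoidal encoding, and a linear projection of the raw scalar standing in for "using the scalar directly". Dimensions, bucket counts, and the projection are illustrative choices, not the paper's configurations.

```python
# Sketch of alternative embeddings for a scalar control input (e.g. desired length).
import math
import torch
import torch.nn as nn


class ScalarControl(nn.Module):
    def __init__(self, dim: int = 16, num_buckets: int = 50):
        super().__init__()
        self.dim = dim
        self.bucket_emb = nn.Embedding(num_buckets, dim)
        self.direct_proj = nn.Linear(1, dim)

    def learnable(self, value: torch.Tensor) -> torch.Tensor:
        idx = value.long().clamp(max=self.bucket_emb.num_embeddings - 1)
        return self.bucket_emb(idx)          # cannot represent unseen buckets

    def sinusoidal(self, value: torch.Tensor) -> torch.Tensor:
        i = torch.arange(self.dim // 2, dtype=torch.float32)
        freqs = torch.exp(-math.log(10000.0) * 2 * i / self.dim)
        angles = value.unsqueeze(-1).float() * freqs
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

    def direct(self, value: torch.Tensor) -> torch.Tensor:
        return self.direct_proj(value.unsqueeze(-1).float())  # raw scalar, projected


if __name__ == "__main__":
    ctrl = ScalarControl()
    lengths = torch.tensor([10.0, 120.0])  # 120 lies outside a 0-50 training range
    for name in ("learnable", "sinusoidal", "direct"):
        print(name, getattr(ctrl, name)(lengths).shape)
```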
A Comparative Study on Collecting High-Quality Implicit Reasonings at a
Large-scale
|
Explicating implicit reasoning (i.e. warrants) in arguments is a
long-standing challenge for natural language understanding systems. While
recent approaches have focused on explicating warrants via crowdsourcing or
expert annotations, the quality of warrants has been questionable due to the
extreme complexity and subjectivity of the task. In this paper, we tackle the
complex task of warrant explication and devise various methodologies for
collecting warrants. We conduct an extensive study with trained experts to
evaluate the resulting warrants of each methodology and find that our
methodologies allow for high-quality warrants to be collected. We construct a
preliminary dataset of 6,000 warrants annotated over 600 arguments for 3
debatable topics. To facilitate research in related downstream tasks, we
release our guidelines and preliminary dataset.
| 2,021 |
Computation and Language
|
A Million Tweets Are Worth a Few Points: Tuning Transformers for
Customer Service Tasks
|
In online domain-specific customer service applications, many companies
struggle to deploy advanced NLP models successfully, due to the limited
availability of and noise in their datasets. While prior research demonstrated
the potential of migrating large open-domain pretrained models for
domain-specific tasks, the appropriate (pre)training strategies have not yet
been rigorously evaluated in such social media customer service settings,
especially under multilingual conditions. We address this gap by collecting a
multilingual social media corpus containing customer service conversations
(865k tweets), comparing various pipelines of pretraining and finetuning
approaches, applying them on 5 different end tasks. We show that pretraining a
generic multilingual transformer model on our in-domain dataset, before
finetuning on specific end tasks, consistently boosts performance, especially
in non-English settings.
| 2,021 |
Computation and Language
|
Optimal Size-Performance Tradeoffs: Weighing PoS Tagger Models
|
Improvements in machine learning-based NLP performance are often presented
with bigger models and more complex code. This presents a trade-off: better
scores come at the cost of larger tools; bigger models tend to require more
time during training and inference. We present multiple methods for measuring
the size of a model, and for comparing this with the model's performance.
In a case study over part-of-speech tagging, we then apply these techniques
to taggers for eight languages and present a novel analysis identifying which
taggers are size-performance optimal. Results indicate that some classical
taggers place on the size-performance skyline across languages. Further,
although the deep models have the highest performance for multiple scores, it is
often not the most complex of these that reach peak performance.
| 2,021 |
Computation and Language
|
Language Models are Few-Shot Butlers
|
Pretrained language models demonstrate strong performance in most NLP tasks
when fine-tuned on small task-specific datasets. Hence, these autoregressive
models constitute ideal agents to operate in text-based environments where
language understanding and generative capabilities are essential. Nonetheless,
collecting expert demonstrations in such environments is a time-consuming
endeavour. We introduce a two-stage procedure to learn from a small set of
demonstrations and further improve by interacting with an environment. We show
that language models fine-tuned with only 1.2% of the expert demonstrations and
a simple reinforcement learning algorithm achieve a 51% absolute improvement in
success rate over existing methods in the ALFWorld environment.
| 2,021 |
Computation and Language
|
ProphetNet-X: Large-Scale Pre-training Models for English, Chinese,
Multi-lingual, Dialog, and Code Generation
|
Pre-training techniques are now ubiquitous in the natural language processing
field. ProphetNet is a pre-training-based natural language generation method
that shows strong performance on English text summarization and question
generation tasks. In this paper, we extend ProphetNet into other domains and
languages, and present the ProphetNet family pre-training models, named
ProphetNet-X, where X can be English, Chinese, Multi-lingual, and so on. We
pre-train a cross-lingual generation model ProphetNet-Multi, a Chinese
generation model ProphetNet-Zh, two open-domain dialog generation models
ProphetNet-Dialog-En and ProphetNet-Dialog-Zh. We also provide a PLG
(Programming Language Generation) model ProphetNet-Code to show the generation
performance besides NLG (Natural Language Generation) tasks. In our
experiments, ProphetNet-X models achieve new state-of-the-art performance on 10
benchmarks. All the models of ProphetNet-X share the same model structure,
which allows users to easily switch between different models. We make the code
and models publicly available, and we will keep updating more pre-training
models and finetuning scripts.
| 2,021 |
Computation and Language
|
Fast, Effective, and Self-Supervised: Transforming Masked Language
Models into Universal Lexical and Sentence Encoders
|
Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent
years. However, previous work has indicated that off-the-shelf MLMs are not
effective as universal lexical or sentence encoders without further
task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks
using annotated task data. In this work, we demonstrate that it is possible to
turn MLMs into effective universal lexical and sentence encoders even without
any additional data and without any supervision. We propose an extremely
simple, fast and effective contrastive learning technique, termed Mirror-BERT,
which converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30
seconds without any additional external knowledge. Mirror-BERT relies on fully
identical or slightly modified string pairs as positive (i.e., synonymous)
fine-tuning examples, and aims to maximise their similarity during identity
fine-tuning. We report huge gains over off-the-shelf MLMs with Mirror-BERT in
both lexical-level and sentence-level tasks, across different domains and
different languages. Notably, in the standard sentence semantic similarity
(STS) tasks, our self-supervised Mirror-BERT model even matches the performance
of the task-tuned Sentence-BERT models from prior work. Finally, we delve
deeper into the inner workings of MLMs, and suggest some evidence on why this
simple approach can yield effective universal lexical and sentence encoders.
| 2,021 |
Computation and Language
|
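In the spirit of the entry above, the sketch below applies identity-based contrastive fine-tuning: each string is encoded twice (dropout alone already makes the two views differ) and an InfoNCE loss pulls the two views together while pushing other strings in the batch away. The toy encoder and hyperparameters are placeholders, not the actual Mirror-BERT setup.

```python
# Sketch of identity-based contrastive fine-tuning: two stochastic views of the
# same strings are aligned with an InfoNCE loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEncoder(nn.Module):
    def __init__(self, vocab: int = 1000, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.drop = nn.Dropout(0.1)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:   # (batch, seq) -> (batch, dim)
        return self.drop(self.emb(ids)).mean(dim=1)


def mirror_loss(encoder: nn.Module, ids: torch.Tensor, temperature: float = 0.05):
    z1 = F.normalize(encoder(ids), dim=-1)   # two stochastic views of the
    z2 = F.normalize(encoder(ids), dim=-1)   # identical input strings (train-mode dropout)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(ids.size(0))       # each string is its own positive
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    enc = ToyEncoder()
    batch = torch.randint(0, 1000, (16, 12))
    print(mirror_loss(enc, batch).item())
```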
Cost-effective End-to-end Information Extraction for Semi-structured
Document Images
|
A real-world information extraction (IE) system for semi-structured document
images often involves a long pipeline of multiple modules, whose complexity
dramatically increases its development and maintenance cost. One can instead
consider an end-to-end model that directly maps the input to the target output
and simplify the entire process. However, such a generation approach is known to
lead to unstable performance if not designed carefully. Here we present our
recent effort on transitioning from our existing pipeline-based IE system to an
end-to-end system focusing on practical challenges that are associated with
replacing and deploying the system in real, large-scale production. By
carefully formulating document IE as a sequence generation task, we show that a
single end-to-end IE system can be built and still achieve competent
performance.
| 2,021 |
Computation and Language
|
Effect of Visual Extensions on Natural Language Understanding in
Vision-and-Language Models
|
A method for creating a vision-and-language (V&L) model is to extend a
language model through structural modifications and V&L pre-training. Such an
extension aims to make a V&L model inherit the capability of natural language
understanding (NLU) from the original language model. To see how well this is
achieved, we propose to evaluate V&L models using an NLU benchmark (GLUE). We
compare five V&L models, including single-stream and dual-stream models,
trained with the same pre-training. Dual-stream models, with their higher
modality independence achieved by approximately doubling the number of
parameters, are expected to preserve the NLU capability better. Our main
finding is that the dual-stream scores are not much different from the
single-stream scores, contrary to expectation. Further analysis shows that
pre-training causes the performance drop in NLU tasks with few exceptions.
These results suggest that adopting a single-stream structure and devising the
pre-training could be an effective method for improving the maintenance of
language knowledge in V&L extensions.
| 2,021 |
Computation and Language
|
To Share or not to Share: Predicting Sets of Sources for Model Transfer
Learning
|
In low-resource settings, model transfer can help to overcome a lack of
labeled data for many tasks and domains. However, predicting useful transfer
sources is a challenging problem, as even the most similar sources might lead
to unexpected negative transfer results. Thus, ranking methods based on task
and text similarity -- as suggested in prior work -- may not be sufficient to
identify promising sources. To tackle this problem, we propose a new approach
to automatically determine which and how many sources should be exploited. For
this, we study the effects of model transfer on sequence labeling across
various domains and tasks and show that our methods based on model similarity
and support vector machines are able to predict promising sources, resulting in
performance increases of up to 24 F1 points.
| 2,021 |
Computation and Language
|
Improving Zero-Shot Multi-Lingual Entity Linking
|
Entity linking -- the task of identifying references in free text to relevant
knowledge base representations -- often focuses on single languages. We
consider multilingual entity linking, where a single model is trained to link
references to same-language knowledge bases in several languages. We propose a
neural ranker architecture, which leverages multilingual transformer
representations of text to be easily applied to a multilingual setting. We then
explore how a neural ranker trained in one language (e.g. English) transfers to
an unseen language (e.g. Chinese), and find that there is a consistent
but not large drop in performance. How can this drop in performance be
alleviated? We explore adding an adversarial objective to force our model to
learn language-invariant representations. We find that using this approach
improves recall in several datasets, often matching the in-language
performance, thus alleviating some of the performance loss occurring from
zero-shot transfer.
| 2,021 |
Computation and Language
|
LU-BZU at SemEval-2021 Task 2: Word2Vec and Lemma2Vec performance in
Arabic Word-in-Context disambiguation
|
This paper presents a set of experiments to evaluate and compare the
performance of CBOW Word2Vec and Lemma2Vec models for Arabic
Word-in-Context (WiC) disambiguation without using sense inventories or sense
embeddings. As part of the SemEval-2021 Shared Task 2 on WiC disambiguation, we
used the dev.ar-ar dataset (2k sentence pairs) to decide whether two words in a
given sentence pair carry the same meaning. We used two Word2Vec models:
Wiki-CBOW, a pre-trained model on Arabic Wikipedia, and another model we
trained on large Arabic corpora of about 3 billion tokens. Two Lemma2Vec models
were also constructed based on the two Word2Vec models. Each of the four models
was then used in the WiC disambiguation task, and then evaluated on the
SemEval-2021 test.ar-ar dataset. Finally, we report the performance of the
different models and compare lemma-based with word-based models.
| 2,021 |
Computation and Language
|
Temporal Adaptation of BERT and Performance on Downstream Document
Classification: Insights from Social Media
|
Language use differs between domains, and even within a domain, language use
changes over time. For pre-trained language models like BERT, domain adaptation
through continued pre-training has been shown to improve performance on
in-domain downstream tasks. In this article, we investigate whether temporal
adaptation can bring additional benefits. For this purpose, we introduce a
corpus of social media comments sampled over three years. It contains
unlabelled data for adaptation and evaluation on an upstream masked language
modelling task as well as labelled data for fine-tuning and evaluation on a
downstream document classification task. We find that temporality matters for
both tasks: temporal adaptation improves upstream task performance and temporal
fine-tuning improves downstream task performance. Time-specific models generally perform better on
past than on future test sets, which matches evidence on the bursty usage of
topical words. However, adapting BERT to time and domain does not improve
performance on the downstream task over only adapting to domain. Token-level
analysis shows that temporal adaptation captures event-driven changes in
language use in the downstream task, but not those changes that are actually
relevant to task performance. Based on our findings, we discuss when temporal
adaptation may be more effective.
| 2,021 |
Computation and Language
|
Towards Variable-Length Textual Adversarial Attacks
|
Adversarial attacks have shown the vulnerability of machine learning models,
however, it is non-trivial to conduct textual adversarial attacks on natural
language processing tasks due to the discreteness of data. Most previous
approaches conduct attacks with the atomic \textit{replacement} operation,
which usually leads to fixed-length adversarial examples and therefore limits
the exploration on the decision space. In this paper, we propose
variable-length textual adversarial attacks~(VL-Attack) and integrate three
atomic operations, namely \textit{insertion}, \textit{deletion} and
\textit{replacement}, into a unified framework, by introducing and manipulating
a special \textit{blank} token while attacking. In this way, our approach is
able to more comprehensively find adversarial examples around the decision
boundary and effectively conduct adversarial attacks. Specifically, our method
drops the accuracy of IMDB classification by $96\%$ with only editing $1.3\%$
tokens while attacking a pre-trained BERT model. In addition, fine-tuning the
victim model with generated adversarial samples can improve the robustness of
the model without hurting the performance, especially for length-sensitive
models. On the task of non-autoregressive machine translation, our method can
achieve $33.18$ BLEU score on IWSLT14 German-English translation, achieving an
improvement of $1.47$ over the baseline model.
| 2,021 |
Computation and Language
|
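The sketch below shows how a special blank token can unify the three atomic edits used for variable-length attacks; the [BLANK] token name is illustrative, and a real attack would fill the blanks with a masked language model and search over candidate edits.

```python
# Sketch of blank-token edit operations for variable-length adversarial attacks.
BLANK = "[BLANK]"


def insert_blank(tokens: list[str], pos: int) -> list[str]:
    return tokens[:pos] + [BLANK] + tokens[pos:]


def delete_token(tokens: list[str], pos: int) -> list[str]:
    return tokens[:pos] + tokens[pos + 1:]


def replace_with_blank(tokens: list[str], pos: int) -> list[str]:
    return tokens[:pos] + [BLANK] + tokens[pos + 1:]


if __name__ == "__main__":
    sent = "the movie was surprisingly good".split()
    print(insert_blank(sent, 3))        # variable length: one token longer
    print(delete_token(sent, 3))        # one token shorter
    print(replace_with_blank(sent, 4))  # same length, candidate slot to refill
```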
Supervising Model Attention with Human Explanations for Robust Natural
Language Inference
|
Natural Language Inference (NLI) models are known to learn from biases and
artefacts within their training data, impacting how well they generalise to
other unseen datasets. Existing de-biasing approaches focus on preventing the
models from learning these biases, which can result in restrictive models and
lower performance. We instead investigate teaching the model how a human would
approach the NLI task, in order to learn features that will generalise better
to previously unseen examples. Using natural language explanations, we
supervise the model's attention weights to encourage more attention to be paid
to the words present in the explanations, significantly improving model
performance. Our experiments show that the in-distribution improvements of this
method are also accompanied by out-of-distribution improvements, with the
supervised models learning from features that generalise better to other NLI
datasets. Analysis of the model indicates that human explanations encourage
increased attention on the important words, with more attention paid to words
in the premise and less attention paid to punctuation and stop-words.
| 2,022 |
Computation and Language
|
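One plausible way to supervise attention with explanation words, as described in the entry above, is sketched below: tokens appearing in the human explanation define a target distribution, and a KL term is added to the task loss. The supervised attention row, the loss weight, and the KL formulation are assumptions rather than the paper's exact recipe.

```python
# Sketch of an attention supervision term built from explanation-word masks.
import torch
import torch.nn.functional as F


def attention_supervision_loss(attn: torch.Tensor, explanation_mask: torch.Tensor,
                               eps: float = 1e-8) -> torch.Tensor:
    # attn: (batch, seq_len) attention over input tokens (e.g. a [CLS] attention row)
    # explanation_mask: (batch, seq_len), 1.0 where the token occurs in the explanation
    # (each example is assumed to contain at least one explanation token)
    target = explanation_mask / (explanation_mask.sum(dim=-1, keepdim=True) + eps)
    return F.kl_div((attn + eps).log(), target, reduction="batchmean")


if __name__ == "__main__":
    attn = torch.softmax(torch.randn(2, 6), dim=-1)
    mask = torch.tensor([[0., 1., 1., 0., 0., 0.],
                         [1., 0., 0., 0., 1., 1.]])
    task_loss = torch.tensor(0.7)  # placeholder NLI loss
    total = task_loss + 0.5 * attention_supervision_loss(attn, mask)
    print(total.item())
```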
KI-BERT: Infusing Knowledge Context for Better Language and Domain
Understanding
|
Contextualized entity representations learned by state-of-the-art
transformer-based language models (TLMs) like BERT, GPT, T5, etc., leverage the
attention mechanism to learn the data context from training data corpus.
However, these models do not use the knowledge context. Knowledge context can
be understood as semantics about entities and their relationship with
neighboring entities in knowledge graphs. We propose a novel and effective
technique to infuse knowledge context from multiple knowledge graphs for
conceptual and ambiguous entities into TLMs during fine-tuning. It projects
knowledge graph embeddings into a homogeneous vector space, introduces new
token types for entities, aligns entity position ids, and adds a selective
attention mechanism. We take BERT as a baseline model and implement the
"Knowledge-Infused BERT" by infusing knowledge context from ConceptNet and
WordNet, which significantly outperforms BERT and other recent knowledge-aware
BERT variants like ERNIE, SenseBERT, and BERT_CS over eight different subtasks
of GLUE benchmark. The KI-BERT-base model even significantly outperforms
BERT-large for domain-specific tasks like SciTail and academic subsets of QQP,
QNLI, and MNLI.
| 2,021 |
Computation and Language
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.