Titles | Abstracts | Years | Categories
---|---|---|---|
RADDLE: An Evaluation Benchmark and Analysis Platform for Robust
Task-oriented Dialog Systems
|
For task-oriented dialog systems to be maximally useful, they must be able to
process conversations in a way that is (1) generalizable with a small number of
training examples for new task domains, and (2) robust to user input in various
styles, modalities or domains. In pursuit of these goals, we introduce the
RADDLE benchmark, a collection of corpora and tools for evaluating the
performance of models across a diverse set of domains. By including tasks with
limited training data, RADDLE is designed to favor and encourage models with a
strong generalization ability. RADDLE also includes a diagnostic checklist that
facilitates detailed robustness analysis in aspects such as language
variations, speech errors, unseen entities, and out-of-domain utterances. We
evaluate recent state-of-the-art systems based on pre-training and fine-tuning,
and find that grounded pre-training on heterogeneous dialog corpora performs
better than training a separate model per domain. Overall, existing models are
less than satisfactory in robustness evaluation, which suggests opportunities
for future improvement.
| 2,021 |
Computation and Language
|
Faster Re-translation Using Non-Autoregressive Model For Simultaneous
Neural Machine Translation
|
Recently, simultaneous translation has gathered a lot of attention since it
enables compelling applications such as subtitle translation for a live event
or real-time video-call translation. Some of these translation applications
allow editing of partial translation giving rise to re-translation approaches.
The current re-translation approaches are based on autoregressive sequence
generation models (ReTA), which generate target tokens in the (partial)
translation sequentially. The multiple re-translations with sequential
generation in ReTA models lead to an increasing inference time gap between the
incoming source input and the corresponding target output as the source input
grows. Besides, due to the large number of inference operations involved, the
ReTA models are not favorable for resource-constrained devices. In this work,
we propose a faster re-translation system based on a non-autoregressive
sequence generation model (FReTNA) to overcome the aforementioned limitations.
We evaluate the proposed model on multiple translation tasks; it reduces the
inference time by several orders of magnitude and achieves a competitive
BLEU score compared to the ReTA and streaming (Wait-k) models. The proposed model
reduces the average computation time by a factor of 20 compared to the ReTA
model, while incurring a small drop in translation quality. It also
outperforms the streaming-based Wait-k model both in terms of computation time
(1.5 times lower) and translation quality.
| 2,021 |
Computation and Language
|
CascadeBERT: Accelerating Inference of Pre-trained Language Models via
Calibrated Complete Models Cascade
|
Dynamic early exiting aims to accelerate the inference of pre-trained
language models (PLMs) by emitting predictions in internal layers without
passing through the entire model. In this paper, we empirically analyze the
working mechanism of dynamic early exiting and find that it faces a performance
bottleneck under high speed-up ratios. On one hand, the PLMs' representations
in shallow layers lack high-level semantic information and thus are not
sufficient for accurate predictions. On the other hand, the exiting decisions
made by internal classifiers are unreliable, leading to wrongly emitted early
predictions. We instead propose a new framework for accelerating the inference
of PLMs, CascadeBERT, which dynamically selects proper-sized and complete
models in a cascading manner, providing comprehensive representations for
predictions. We further devise a difficulty-aware objective, encouraging the
model to output the class probability that reflects the real difficulty of each
instance for a more reliable cascading mechanism. Experimental results show
that CascadeBERT can achieve an overall 15\% improvement under 4$\times$
speed-up compared with existing dynamic early exiting methods on six
classification tasks, yielding more calibrated and accurate predictions.
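
As a rough illustration of the cascading idea described above (not the authors' released code), the sketch below routes an input through a small complete model first and falls back to a larger model only when the small model's confidence is below a threshold; `small_model`, `large_model`, and the threshold are illustrative stand-ins, and the difficulty-aware training objective is omitted.

```python
# Confidence-gated cascade over complete models of different sizes.
import torch
import torch.nn.functional as F

def cascade_predict(x, small_model, large_model, threshold=0.9):
    """x: input batch; the models are callables returning class logits."""
    with torch.no_grad():
        probs = F.softmax(small_model(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        hard = conf < threshold          # low-confidence ("hard") examples
        if hard.any():
            pred[hard] = large_model(x[hard]).argmax(dim=-1)  # escalate them
    return pred

# Toy usage with stand-in classifiers; real use would plug in complete PLMs
# of different sizes (e.g. a 2-layer and a 12-layer model).
small = torch.nn.Linear(16, 3)
large = torch.nn.Linear(16, 3)
print(cascade_predict(torch.randn(4, 16), small, large))
```
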
| 2,021 |
Computation and Language
|
Code Summarization with Structure-induced Transformer
|
Code summarization (CS) is a promising area of language understanding that
aims to automatically generate sensible natural-language descriptions of
source code, easing the work of program developers. It is well known that
programming languages are highly structured, so previous works apply
structure-based traversal (SBT) or non-sequential models such as Tree-LSTM and
graph neural networks (GNNs) to learn structural program semantics. However, it
is surprising that incorporating SBT into an advanced encoder such as the
Transformer, rather than an LSTM, has shown no performance gain, leaving GNNs
as the only remaining means of modeling this necessary structural clue in
source code. To address this limitation, we propose the structure-induced
Transformer, which encodes sequential code inputs with multi-view structural
clues through a newly proposed structure-induced self-attention mechanism.
Extensive experiments show that our structure-induced Transformer achieves
new state-of-the-art results on benchmarks.
| 2,021 |
Computation and Language
|
LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document
Understanding
|
Pre-training of text and layout has proved effective in a variety of
visually-rich document understanding tasks due to its effective model
architecture and the advantage of large-scale unlabeled scanned/digital-born
documents. We propose LayoutLMv2 architecture with new pre-training tasks to
model the interaction among text, layout, and image in a single multi-modal
framework. Specifically, with a two-stream multi-modal Transformer encoder,
LayoutLMv2 uses not only the existing masked visual-language modeling task but
also the new text-image alignment and text-image matching tasks, which help it
better capture the cross-modality interaction in the pre-training stage.
Meanwhile, it also integrates a spatial-aware self-attention mechanism into the
Transformer architecture so that the model can fully understand the relative
positional relationship among different text blocks. Experiment results show
that LayoutLMv2 outperforms LayoutLM by a large margin and achieves new
state-of-the-art results on a wide variety of downstream visually-rich document
understanding tasks, including FUNSD (0.7895 $\to$ 0.8420), CORD (0.9493 $\to$
0.9601), SROIE (0.9524 $\to$ 0.9781), Kleister-NDA (0.8340 $\to$ 0.8520),
RVL-CDIP (0.9443 $\to$ 0.9564), and DocVQA (0.7295 $\to$ 0.8672). We made our
model and code publicly available at \url{https://aka.ms/layoutlmv2}.
| 2,022 |
Computation and Language
|
Dialogue Response Selection with Hierarchical Curriculum Learning
|
We study the learning of a matching model for dialogue response selection.
Motivated by the recent finding that models trained with random negative
samples are not ideal in real-world scenarios, we propose a hierarchical
curriculum learning framework that trains the matching model in an
"easy-to-difficult" scheme. Our learning framework consists of two
complementary curricula: (1) corpus-level curriculum (CC); and (2)
instance-level curriculum (IC). In CC, the model gradually increases its
ability in finding the matching clues between the dialogue context and a
response candidate. As for IC, it progressively strengthens the model's ability
in identifying the mismatching information between the dialogue context and a
response candidate. Empirical studies on three benchmark datasets with three
state-of-the-art matching models demonstrate that the proposed learning
framework significantly improves the model performance across various
evaluation metrics.
| 2,021 |
Computation and Language
|
CMV-BERT: Contrastive multi-vocab pretraining of BERT
|
In this work, we present CMV-BERT, which improves the pretraining of a
language model via two ingredients: (a) contrastive learning, which is well
studied in the area of computer vision; (b) multiple vocabularies, one of which
is fine-grained and the other is coarse-grained. The two methods both provide
different views of an original sentence, and both are shown to be beneficial.
Downstream tasks demonstrate that our proposed CMV-BERT is effective in
improving pretrained language models.
| 2,021 |
Computation and Language
|
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence
Learning
|
Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder
layers (instead of the uppermost layer) for sequence-to-sequence (Seq2Seq)
models, which has proven effective on various NLP tasks. However, it is still
not entirely clear why and when EncoderFusion should work. In this paper, our
main contribution is to take a step further in understanding EncoderFusion.
Many previous studies believe that the success of EncoderFusion comes from
exploiting surface and syntactic information embedded in lower encoder layers.
Unlike them, we find that the encoder embedding layer is more important than
other intermediate encoder layers. In addition, the uppermost decoder layer
consistently pays more attention to the encoder embedding layer across NLP
tasks. Based on this observation, we propose a simple fusion method,
SurfaceFusion, by fusing only the encoder embedding layer for the softmax
layer. Experimental results show that SurfaceFusion outperforms EncoderFusion
on several NLP benchmarks, including machine translation, text summarization,
and grammatical error correction. It obtains the state-of-the-art performance
on WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive
analyses reveal that SurfaceFusion learns more expressive bilingual word
embeddings by building a closer relationship between relevant source and target
embedding. Source code is freely available at
https://github.com/SunbowLiu/SurfaceFusion.
| 2,021 |
Computation and Language
|
Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces
|
Adversarial attacks on text are mostly substitution-based methods that
replace words or characters in the original texts to achieve successful attacks.
Recent methods use pre-trained language models as the substitute generator.
However, such methods are not directly applicable to Chinese, since Chinese text
must first be segmented into words. In this paper, we propose a pre-trained
language model as the substitute generator, operating on sentence-pieces, to
craft adversarial examples in Chinese. The substitutions in the generated
adversarial examples are not characters or words but \textit{'pieces'}, which
are more natural to Chinese readers. Experimental results show that the generated adversarial
samples can mislead strong target models and remain fluent and semantically
preserved.
| 2,021 |
Computation and Language
|
Generating Query Focused Summaries from Query-Free Resources
|
The availability of large-scale datasets has driven the development of neural
models that create generic summaries from single or multiple documents. In this
work we consider query focused summarization (QFS), a task for which training
data in the form of queries, documents, and summaries is not readily available.
We propose to decompose QFS into (1) query modeling (i.e., finding supportive
evidence within a set of documents for a query) and (2) conditional language
modeling (i.e., summary generation). We introduce MaRGE, a Masked ROUGE
Regression framework for evidence estimation and ranking which relies on a
unified representation for summaries and queries, so that summaries in generic
data can be converted into proxy queries for learning a query model.
Experiments across QFS benchmarks and query types show that our model achieves
state-of-the-art performance despite learning from weak supervision.
| 2,021 |
Computation and Language
|
Combining Semilattices and Semimodules
|
We describe the canonical weak distributive law $\delta \colon \mathcal S
\mathcal P \to \mathcal P \mathcal S$ of the powerset monad $\mathcal P$ over
the $S$-left-semimodule monad $\mathcal S$, for a class of semirings $S$. We
show that the composition of $\mathcal P$ with $\mathcal S$ by means of such
$\delta$ yields almost the monad of convex subsets previously introduced by
Jacobs: the only difference consists in the absence in Jacobs's monad of the
empty convex set. We provide a handy characterisation of the canonical weak
lifting of $\mathcal P$ to $\mathbb{EM}(\mathcal S)$ as well as an algebraic
theory for the resulting composed monad. Finally, we restrict the composed
monad to finitely generated convex subsets and we show that it is presented by
an algebraic theory combining semimodules and semilattices with bottom, which
are the algebras for the finite powerset monad $\mathcal P_f$.
| 2,021 |
Computation and Language
|
A Hierarchical Transformer with Speaker Modeling for Emotion Recognition
in Conversation
|
Emotion Recognition in Conversation (ERC) is a more challenging task than
conventional text emotion recognition. It can be regarded as a personalized and
interactive emotion recognition task, which is supposed to consider not only
the semantic information of text but also the influences from speakers. The
current methods model speakers' interactions by building a relation between
every pair of speakers. However, this fine-grained but complicated modeling is
computationally expensive, hard to extend, and can only consider local context.
To address this problem, we simplify the complicated modeling to a binary
version: Intra-Speaker and Inter-Speaker dependencies, without identifying
every unique speaker for the targeted speaker. To better achieve this simplified
interaction modeling of speakers in the Transformer, which excels at capturing
long-distance dependencies, we design three types of masks and respectively
utilize them in three independent Transformer blocks. The designed masks
respectively capture the conventional context, Intra-Speaker dependency, and
Inter-Speaker dependency. Furthermore, different speaker-aware
information extracted by Transformer blocks diversely contributes to the
prediction, and therefore we utilize the attention mechanism to automatically
weight them. Experiments on two ERC datasets indicate that our model is
effective and achieves better performance.
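
A minimal sketch of how the Intra-Speaker and Inter-Speaker masks described above could be built from per-utterance speaker IDs; the exact mask definitions and how they enter the three Transformer blocks follow the paper, which this toy code does not reproduce in detail.

```python
# Illustrative construction of the three attention masks (conventional
# context, Intra-Speaker, Inter-Speaker) from a list of speaker IDs.
import torch

def build_speaker_masks(speaker_ids):
    """speaker_ids: 1-D tensor of per-utterance speaker IDs."""
    same = speaker_ids.unsqueeze(0) == speaker_ids.unsqueeze(1)  # [T, T]
    conventional = torch.ones_like(same)   # every utterance attends to all
    intra = same                           # attend only to the same speaker
    inter = ~same                          # attend only to other speakers
    return conventional, intra, inter

conv, intra, inter = build_speaker_masks(torch.tensor([0, 1, 0, 2, 1]))
print(intra.int())
```
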
| 2,021 |
Computation and Language
|
Dialogue Graph Modeling for Conversational Machine Reading
|
Conversational Machine Reading (CMR) requires answering questions through
multi-turn interaction: the machine must answer user questions based on a given
rule document, user scenario, and dialogue history, and ask clarification
questions when necessary. In this paper, we propose a dialogue graph
modeling framework to improve the understanding and reasoning ability of
the machine on the CMR task. There are three types of graph in total.
Specifically, the Discourse Graph is designed to explicitly learn and extract
the discourse relations among rule texts as well as the extra knowledge of the
scenario; the Decoupling Graph is used for understanding local and
contextualized connections within rule texts. Finally, a global graph fuses
this information together and produces the reply to the user, either a final
decision of "Yes/No/Irrelevant" or a follow-up question to clarify.
| 2,021 |
Computation and Language
|
DRS at MRP 2020: Dressing up Discourse Representation Structures as
Graphs
|
Discourse Representation Theory (DRT) is a formal account for representing
the meaning of natural language discourse. Meaning in DRT is modeled via a
Discourse Representation Structure (DRS), a meaning representation with a
model-theoretic interpretation, which is usually depicted as nested boxes. In
contrast, a directed labeled graph is a common data structure used to encode
semantics of natural language texts. The paper describes the procedure of
dressing up DRSs as directed labeled graphs to include DRT as a new framework
in the 2020 shared task on Cross-Framework and Cross-Lingual Meaning
Representation Parsing. Since one of the goals of the shared task is to
encourage unified models for several semantic graph frameworks, the conversion
procedure was biased towards making the DRT graph framework somewhat similar to
other graph-based meaning representation frameworks.
| 2,021 |
Computation and Language
|
The Parallel Meaning Bank: A Framework for Semantically Annotating
Multiple Languages
|
This paper gives a general description of the ideas behind the Parallel
Meaning Bank, a framework with the aim to provide an easy way to annotate
compositional semantics for texts written in languages other than English. The
annotation procedure is semi-automatic, and comprises seven layers of
linguistic information: segmentation, symbolisation, semantic tagging, word
sense disambiguation, syntactic structure, thematic role labelling, and
co-reference. New languages can be added to the meaning bank as long as the
documents are based on translations from English, but they also introduce
interesting new challenges for the linguistic assumptions underlying the
Parallel Meaning Bank.
| 2,021 |
Computation and Language
|
Transformer Feed-Forward Layers Are Key-Value Memories
|
Feed-forward layers constitute two-thirds of a transformer model's
parameters, yet their role in the network remains under-explored. We show that
feed-forward layers in transformer-based language models operate as key-value
memories, where each key correlates with textual patterns in the training
examples, and each value induces a distribution over the output vocabulary. Our
experiments show that the learned patterns are human-interpretable, and that
lower layers tend to capture shallow patterns, while upper layers learn more
semantic ones. The values complement the keys' input patterns by inducing
output distributions that concentrate probability mass on tokens likely to
appear immediately after each pattern, particularly in the upper layers.
Finally, we demonstrate that the output of a feed-forward layer is a
composition of its memories, which is subsequently refined throughout the
model's layers via residual connections to produce the final output
distribution.
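
The key-value view of a feed-forward layer can be made concrete with a few lines of tensor algebra: the first weight matrix holds the "keys" that are matched against the input, and the second holds the "values" that are mixed into the output. This is an illustrative sketch of the interpretation, not the paper's analysis code; the dimensions are toy values.

```python
# A feed-forward block seen as key-value memory: keys = rows of W1,
# values = rows of W2, memory coefficients = activation of key matches.
import torch

d_model, d_ff = 8, 32
W1 = torch.randn(d_ff, d_model)   # each row = one "key" (pattern detector)
W2 = torch.randn(d_ff, d_model)   # each row = one "value" (output direction)

x = torch.randn(d_model)
memory_coeffs = torch.relu(x @ W1.T)   # how strongly each key fires on x
ffn_output = memory_coeffs @ W2        # weighted sum of the values
print(memory_coeffs.shape, ffn_output.shape)
```
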
| 2,021 |
Computation and Language
|
WikiTableT: A Large-Scale Data-to-Text Dataset for Generating Wikipedia
Article Sections
|
Datasets for data-to-text generation typically focus either on multi-domain,
single-sentence generation or on single-domain, long-form generation. In this
work, we cast generating Wikipedia sections as a data-to-text generation task
and create a large-scale dataset, WikiTableT, that pairs Wikipedia sections
with their corresponding tabular data and various metadata. WikiTableT contains
millions of instances, covering a broad range of topics, as well as a variety
of flavors of generation tasks with different levels of flexibility. We
benchmark several training and decoding strategies on WikiTableT. Our
qualitative analysis shows that the best approaches can generate fluent and
high quality texts but they struggle with coherence and factuality, showing the
potential for our dataset to inspire future work on long-form generation.
| 2,021 |
Computation and Language
|
Generating Natural Language Attacks in a Hard Label Black Box Setting
|
We study an important and challenging task of attacking natural language
processing models in a hard label black box setting. We propose a
decision-based attack strategy that crafts high quality adversarial examples on
text classification and entailment tasks. Our proposed attack strategy
leverages population-based optimization algorithm to craft plausible and
semantically similar adversarial examples by observing only the top label
predicted by the target model. At each iteration, the optimization procedure
allows word replacements that maximize the overall semantic similarity between
the original and the adversarial text. Further, our approach does not rely on
using substitute models or any kind of training data. We demonstrate the
efficacy of our proposed approach through extensive experimentation and
ablation studies on five state-of-the-art target models across seven benchmark
datasets. In comparison to attacks proposed in prior literature, we
achieve a higher success rate with a lower word perturbation percentage, even
in this highly restricted setting.
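
The decision-based attack can be pictured as a population-based search that only ever queries the target model's top label. The sketch below is a heavily simplified, hypothetical version: `top_label`, `replacements`, and `similarity` are stand-ins for the black-box classifier, the candidate generator, and the semantic similarity scorer used in the paper.

```python
# Schematic hard-label attack loop (hedged sketch, not the authors' algorithm):
# evolve word-replacement candidates toward label flips that stay similar
# to the original text, observing only the model's top label.
import random

def top_label(text):            # stand-in for a black-box classifier query
    return int("good" in text)

def replacements(word):         # stand-in for a synonym/candidate generator
    return {"good": ["decent", "fine"], "movie": ["film"]}.get(word, [word])

def similarity(a, b):           # stand-in for a semantic similarity scorer
    return sum(x == y for x, y in zip(a.split(), b.split())) / len(a.split())

def attack(text, iters=50, pop_size=8):
    orig = top_label(text)
    words = text.split()
    population = [list(words) for _ in range(pop_size)]
    best = None
    for _ in range(iters):
        for cand in population:
            i = random.randrange(len(cand))
            cand[i] = random.choice(replacements(words[i]))
            adv = " ".join(cand)
            if top_label(adv) != orig:   # label flipped: a successful attack
                if best is None or similarity(text, adv) > similarity(text, best):
                    best = adv           # keep the most similar adversarial text
    return best

print(attack("a good movie"))
```
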
| 2,021 |
Computation and Language
|
Few-Shot Named Entity Recognition: A Comprehensive Study
|
This paper presents a comprehensive study on efficiently building named entity
recognition (NER) systems when only a small amount of in-domain labeled data is
available. Based upon recent Transformer-based self-supervised pre-trained
language models (PLMs), we investigate three orthogonal schemes to improve the
model generalization ability for few-shot settings: (1) meta-learning to
construct prototypes for different entity types, (2) supervised pre-training on
noisy web data to extract entity-related generic representations and (3)
self-training to leverage unlabeled in-domain data. Different combinations of
these schemes are also considered. We perform extensive empirical comparisons
on 10 public NER datasets with various proportions of labeled data, suggesting
useful insights for future research. Our experiments show that (i) in the
few-shot learning setting, the proposed NER schemes significantly improve or
outperform the commonly used baseline, a PLM-based linear classifier fine-tuned
on domain labels; (ii) we achieve new state-of-the-art results in both few-shot
and training-free settings compared with existing methods. We will release our
code and pre-trained models for reproducible research.
| 2,021 |
Computation and Language
|
Reducing conversational agents' overconfidence through linguistic
calibration
|
While improving neural dialogue agents' factual accuracy is the object of
much research, another important aspect of communication, less studied in the
setting of neural dialogue, is transparency about ignorance. In this work, we
analyze to what extent state-of-the-art chit-chat models are linguistically
calibrated in the sense that their verbalized expression of doubt (or
confidence) matches the likelihood that the model's responses are factually
incorrect (or correct). We find that these models are poorly calibrated, yet we
show that likelihood of correctness can accurately be predicted. By
incorporating such metacognitive features into the training of a controllable
generation model, we obtain a dialogue agent with greatly improved linguistic
calibration.
| 2,022 |
Computation and Language
|
OpenViDial: A Large-Scale, Open-Domain Dialogue Dataset with Visual
Contexts
|
When humans converse, what a speaker will say next significantly depends on
what they see. Unfortunately, existing dialogue models generate dialogue
utterances based only on preceding textual contexts, and visual contexts are
rarely considered. This is due to the lack of a large-scale multi-modal dialogue
dataset with utterances paired with visual contexts. In this paper, we release
{\bf OpenViDial}, a large-scale multi-modal dialogue dataset. The dialogue
turns and visual contexts are extracted from movies and TV series, where each
dialogue turn is paired with the corresponding visual context in which it takes
place. OpenViDial contains a total number of 1.1 million dialogue turns, and
thus 1.1 million visual contexts stored in images. Based on this dataset, we
propose a family of encoder-decoder models leveraging both textual and visual
contexts, from coarse-grained image features extracted from CNNs to
fine-grained object features extracted from Faster R-CNNs. We observe that
visual information significantly improves dialogue generation qualities,
verifying the necessity of integrating multi-modal features for dialogue
learning. Our work marks an important step towards large-scale multi-modal
dialogue learning.
| 2,021 |
Computation and Language
|
ERICA: Improving Entity and Relation Understanding for Pre-trained
Language Models via Contrastive Learning
|
Pre-trained Language Models (PLMs) have shown superior performance on various
downstream Natural Language Processing (NLP) tasks. However, conventional
pre-training objectives do not explicitly model relational facts in text, which
are crucial for textual understanding. To address this issue, we propose a
novel contrastive learning framework ERICA to obtain a deep understanding of
the entities and their relations in text. Specifically, we define two novel
pre-training tasks to better understand entities and relations: (1) the entity
discrimination task to distinguish which tail entity can be inferred from the
given head entity and relation; (2) the relation discrimination task to
distinguish whether or not two relations are semantically close, which involves
complex relational reasoning. Experimental results demonstrate that ERICA can
improve typical PLMs (BERT and RoBERTa) on several language understanding
tasks, including relation extraction, entity typing and question answering,
especially under low-resource settings.
| 2,021 |
Computation and Language
|
Language Identification of Devanagari Poems
|
Language Identification is a very important part of several text processing
pipelines. Extensive research has been done in this field. This paper proposes
a procedure for automatic language identification of poems for the poem analysis
task, covering 10 Devanagari-based languages of India, i.e., Angika, Awadhi,
Braj, Bhojpuri, Chhattisgarhi, Garhwali, Haryanvi, Hindi, Magahi, and Maithili.
We collated corpora of poems of varying length and studied the similarity of
poems among the 10 languages at the lexical level. Finally, various language
identification systems based on supervised machine learning and deep learning
techniques are applied and evaluated.
| 2,021 |
Computation and Language
|
Reservoir Transformers
|
We demonstrate that transformers obtain impressive performance even when some
of the layers are randomly initialized and never updated. Inspired by old and
well-established ideas in machine learning, we explore a variety of non-linear
"reservoir" layers interspersed with regular transformer layers, and show
improvements in wall-clock compute time until convergence, as well as overall
performance, on various machine translation and (masked) language modelling
tasks.
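
A "reservoir" layer in this sense is simply a randomly initialized transformer layer whose parameters are frozen and never updated. A minimal PyTorch sketch follows; the layer sizes and the interleaving pattern are illustrative assumptions, not the paper's configuration.

```python
# Interleave trainable transformer layers with frozen, randomly initialized
# "reservoir" layers.
import torch.nn as nn

def make_layer(trainable):
    layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
    if not trainable:
        for p in layer.parameters():
            p.requires_grad_(False)   # never updated: acts as a fixed reservoir
    return layer

encoder = nn.ModuleList([make_layer(trainable=(i % 2 == 0)) for i in range(6)])
print(sum(p.requires_grad for p in encoder.parameters()))  # trainable tensors
```
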
| 2,021 |
Computation and Language
|
Enhancing Pre-trained Language Model with Lexical Simplification
|
For both human readers and pre-trained language models (PrLMs), lexical
diversity may lead to confusion and inaccuracy when understanding the
underlying semantic meanings of given sentences. By substituting complex words
with simple alternatives, lexical simplification (LS) is a recognized method to
reduce such lexical diversity, and therefore to improve the understandability
of sentences. In this paper, we leverage LS and propose a novel approach which
can effectively improve the performance of PrLMs in text classification. A
rule-based simplification process is applied to a given sentence. PrLMs are
encouraged to predict the real label of the given sentence with auxiliary
inputs from the simplified version. Using strong PrLMs (BERT and ELECTRA) as
baselines, our approach can still further improve the performance in various
text classification tasks.
| 2,021 |
Computation and Language
|
Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
|
While research on explaining predictions of open-domain QA systems (ODQA) to
users is gaining momentum, most works have failed to evaluate the extent to
which explanations improve user trust. The few works that evaluate explanations
with user studies employ settings that may deviate from end-users'
usage in the wild: ODQA is most ubiquitous in voice assistants, yet current
research only evaluates explanations using a visual display, and may
erroneously extrapolate conclusions about the most performant explanations to
other modalities. To alleviate these issues, we conduct user studies that
measure whether explanations help users correctly decide when to accept or
reject an ODQA system's answer. Unlike prior work, we control for explanation
modality, e.g., whether they are communicated to users through a spoken or
visual interface, and contrast effectiveness across modalities. Our results
show that explanations derived from retrieved evidence passages can outperform
strong baselines (calibrated confidence) across modalities but the best
explanation strategy in fact changes with the modality. We show common failure
cases of current explanations, emphasize end-to-end evaluation of explanations,
and caution against evaluating them in proxy modalities that are different from
deployment.
| 2,021 |
Computation and Language
|
A Subword Guided Neural Word Segmentation Model for Sindhi
|
Deep neural networks employ multiple processing layers for learning text
representations to alleviate the burden of manual feature engineering in
Natural Language Processing (NLP). Such text representations are widely used to
extract features from unlabeled data. Word segmentation is a fundamental
and inevitable prerequisite for many languages. Sindhi is an under-resourced
language, whose segmentation is challenging as it exhibits space omission,
space insertion issues, and lacks the labeled corpus for segmentation. In this
paper, we investigate supervised Sindhi Word Segmentation (SWS) using unlabeled
data with a Subword Guided Neural Word Segmenter (SGNWS) for Sindhi. In order
to learn text representations, we incorporate subword representations into a
recurrent neural architecture to capture word information at the morphemic
level, taking advantage of Bidirectional Long Short-Term Memory (BiLSTM), a
self-attention mechanism, and a Conditional Random Field (CRF). Our proposed
SGNWS model achieves an F1 value of 98.51% without relying on feature
engineering. The empirical results demonstrate the benefits of the proposed
model over the existing Sindhi word segmenters.
| 2,021 |
Computation and Language
|
Accurate Word Representations with Universal Visual Guidance
|
Word representation is a fundamental component in neural language
understanding models. Recently, pre-trained language models (PrLMs) offer a new
performant method of contextualized word representations by leveraging the
sequence-level context for modeling. Although the PrLMs generally give more
accurate contextualized word representations than non-contextualized models do,
they are still limited to the textual context alone, lacking the diverse hints
for word representation that multimodality can offer. This paper thus proposes a visual
representation method to explicitly enhance conventional word embedding with
multiple-aspect senses from visual guidance. In detail, we build a small-scale
word-image dictionary from a multimodal seed dataset where each word
corresponds to diverse related images. The texts and paired images are encoded
in parallel, followed by an attention layer to integrate the multimodal
representations. We show that the method substantially improves the accuracy of
disambiguation. Experiments on 12 natural language understanding and machine
translation tasks further verify the effectiveness and the generalization
capability of the proposed approach.
| 2,021 |
Computation and Language
|
Joint Verification and Reranking for Open Fact Checking Over Tables
|
Structured information is an important knowledge source for automatic
verification of factual claims. Nevertheless, the majority of existing research
into this task has focused on textual data, and the few recent inquiries into
structured data have been for the closed-domain setting where appropriate
evidence for each claim is assumed to have already been retrieved. In this
paper, we investigate verification over structured data in the open-domain
setting, introducing a joint reranking-and-verification model which fuses
evidence documents in the verification component. Our open-domain model
achieves performance comparable to the closed-domain state-of-the-art on the
TabFact dataset, and demonstrates performance gains from the inclusion of
multiple tables as well as a significant improvement over a heuristic retrieval
baseline.
| 2,021 |
Computation and Language
|
Improving Zero-Shot Translation by Disentangling Positional Information
|
Multilingual neural machine translation has shown the capability of directly
translating between language pairs unseen in training, i.e. zero-shot
translation. Despite being conceptually attractive, it often suffers from low
output quality. The difficulty of generalizing to new translation directions
suggests the model representations are highly specific to those language pairs
seen in training. We demonstrate that a main factor causing the
language-specific representations is the positional correspondence to input
tokens. We show that this can be easily alleviated by removing residual
connections in an encoder layer. With this modification, we gain up to 18.5
BLEU points on zero-shot translation while retaining quality on supervised
directions. The improvements are particularly prominent between related
languages, where our proposed model outperforms pivot-based translation.
Moreover, our approach allows easy integration of new languages, which
substantially expands translation coverage. By thorough inspections of the
hidden layer outputs, we show that our approach indeed leads to more
language-independent representations.
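
The proposed modification amounts to dropping the residual connection around self-attention in one (middle) encoder layer, so positional information from the input is not carried straight through. The following is a schematic PyTorch forward pass written under that assumption, not the authors' implementation; dimensions are illustrative.

```python
# One encoder layer with the self-attention residual removed; the
# feed-forward residual is kept.
import torch
import torch.nn as nn

class EncoderLayerNoResidual(nn.Module):
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, 2048), nn.ReLU(),
                                nn.Linear(2048, d_model))

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        h = self.norm(attn_out)      # note: no `x +` residual here
        return h + self.ff(h)        # feed-forward residual is kept

layer = EncoderLayerNoResidual()
print(layer(torch.randn(2, 5, 512)).shape)
```
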
| 2,021 |
Computation and Language
|
Improving BERT with Syntax-aware Local Attention
|
Pre-trained Transformer-based neural language models, such as BERT, have
achieved remarkable results on varieties of NLP tasks. Recent works have shown
that attention-based models can benefit from more focused attention over local
regions. Most of them restrict the attention scope within a linear span, or are
confined to certain tasks such as machine translation and question answering. In
this paper, we propose a syntax-aware local attention, where the attention
scopes are restrained based on the distances in the syntactic structure. The
proposed syntax-aware local attention can be integrated with pretrained
language models, such as BERT, to encourage the model to focus on syntactically
relevant words. We conduct experiments on various single-sentence benchmarks,
including sentence classification and sequence labeling tasks. Experimental
results show consistent gains over BERT on all benchmark datasets. The
extensive studies verify that our model achieves better performance owing to
more focused attention over syntactically relevant words.
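
One way to realize such a syntax-aware scope is to allow attention only between tokens whose dependency-tree distance is below a threshold. The toy head list and the distance threshold below are illustrative assumptions, not the paper's settings.

```python
# Build a boolean attention mask from pairwise dependency-tree distances.
import torch

def tree_distances(heads):
    """heads[i] = index of token i's head (-1 for the root)."""
    n = len(heads)
    dist = torch.full((n, n), float("inf"))
    for i in range(n):
        dist[i, i] = 0.0
        if heads[i] >= 0:
            dist[i, heads[i]] = dist[heads[i], i] = 1.0
    for k in range(n):              # Floyd-Warshall over the tree edges
        for i in range(n):
            for j in range(n):
                dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])
    return dist

heads = [1, -1, 1, 2]               # a tiny 4-token dependency tree
mask = tree_distances(heads) <= 2   # True = allowed to attend
print(mask.int())
```
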
| 2,021 |
Computation and Language
|
A Memory Efficient Baseline for Open Domain Question Answering
|
Recently, retrieval systems based on dense representations have led to
important improvements in open-domain question answering, and related tasks.
While very effective, this approach is also memory intensive, as the dense
vectors for the whole knowledge source need to be kept in memory. In this
paper, we study how the memory footprint of dense retriever-reader systems can
be reduced. We consider three strategies to reduce the index size: dimension
reduction, vector quantization and passage filtering. We evaluate our approach
on two question answering benchmarks: TriviaQA and NaturalQuestions, showing
that it is possible to get competitive systems using less than 6 GB of memory.
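
Dimension reduction and vector quantization of a dense index can be sketched with FAISS; the factory string and sizes below are illustrative assumptions, not the configuration used in the paper.

```python
# Shrink a dense retrieval index with PCA dimensionality reduction followed
# by product quantization.
import numpy as np
import faiss

d = 768                                    # dimension of passage embeddings
passages = np.random.rand(10000, d).astype("float32")
queries = np.random.rand(5, d).astype("float32")

# "PCA256,PQ64": reduce to 256 dims, then 64-byte product-quantized codes.
index = faiss.index_factory(d, "PCA256,PQ64")
index.train(passages)
index.add(passages)

scores, ids = index.search(queries, 10)    # top-10 passages per query
print(ids.shape)
```
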
| 2,021 |
Computation and Language
|
Synthetic Source Language Augmentation for Colloquial Neural Machine
Translation
|
Neural machine translation (NMT) is typically domain- and style-dependent, and
it requires large amounts of training data. State-of-the-art NMT
models often fall short in handling colloquial variations of their source
language, and the lack of parallel data in this regard is a challenging hurdle
in systematically improving the existing models. In this work, we develop a
novel colloquial Indonesian-English test-set collected from YouTube transcript
and Twitter. We perform synthetic style augmentation to the source of formal
Indonesian language and show that it improves the baseline Id-En models (in
BLEU) over the new test data.
| 2,021 |
Computation and Language
|
Out of Order: How Important Is The Sequential Order of Words in a
Sentence in Natural Language Understanding Tasks?
|
Do state-of-the-art natural language understanding models care about word
order - one of the most important characteristics of a sequence? Not always! We
found that 75% to 90% of the correct predictions of BERT-based classifiers,
trained on many GLUE tasks, remain constant after input words are randomly
shuffled. Although BERT embeddings are famously contextual, the contribution of
each individual word to downstream tasks is almost unchanged even after the
word's context is shuffled. BERT-based models are able to exploit superficial cues
(e.g. the sentiment of keywords in sentiment analysis; or the word-wise
similarity between sequence-pair inputs in natural language inference) to make
correct decisions when tokens are arranged in random orders. Encouraging
classifiers to capture word order information improves the performance on most
GLUE tasks, SQuAD 2.0, and out-of-sample data. Our work suggests that many GLUE
tasks do not challenge machines to understand the meaning of a sentence.
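
The shuffling analysis can be reproduced in spirit with a few lines: shuffle the input words and check how often a classifier's prediction stays the same. `predict_label` below is a stand-in for any trained BERT-based classifier.

```python
# Estimate how often a classifier's prediction survives random word shuffling.
import random

def predict_label(text):                 # stand-in classifier
    return "positive" if "great" in text else "negative"

def prediction_is_order_invariant(text, trials=20):
    original = predict_label(text)
    words = text.split()
    unchanged = 0
    for _ in range(trials):
        shuffled = words[:]
        random.shuffle(shuffled)
        unchanged += predict_label(" ".join(shuffled)) == original
    return unchanged / trials

print(prediction_is_order_invariant("the movie was great fun"))
```
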
| 2,021 |
Computation and Language
|
SemGloVe: Semantic Co-occurrences for GloVe from BERT
|
GloVe learns word embeddings by leveraging statistical information from word
co-occurrence matrices. However, word pairs in the matrices are extracted from
a predefined local context window, which might lead to limited and potentially
semantically irrelevant word pairs. In this paper, we propose SemGloVe,
which distills semantic co-occurrences from BERT into static GloVe word
embeddings. Particularly, we propose two models to extract co-occurrence
statistics based on either the masked language model or the multi-head
attention weights of BERT. Our methods can extract word pairs without being
limited by the local window assumption and can define the co-occurrence weights by
directly considering the semantic distance between word pairs. Experiments on
several word similarity datasets and four external tasks show that SemGloVe can
outperform GloVe.
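
In the spirit of the masked-language-model variant (a simplified sketch, not the exact SemGloVe procedure), one can read a semantic co-occurrence weight for a word pair off BERT's MLM head by masking one word and taking the probability the model assigns to a candidate word at that slot. The model name and example words are illustrative.

```python
# Derive a co-occurrence weight from BERT's MLM distribution at a masked slot.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def mlm_cooccurrence(sentence, slot_word, candidate_word):
    """Mask `slot_word` and return the MLM probability of `candidate_word`."""
    text = sentence.replace(slot_word, tok.mask_token, 1)
    inputs = tok(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    return probs[0, tok.convert_tokens_to_ids(candidate_word)].item()

print(mlm_cooccurrence("the cat sat on the mat", "mat", "sofa"))
```
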
| 2,021 |
Computation and Language
|
Introducing Orthogonal Constraint in Structural Probes
|
With the recent success of pre-trained models in NLP, a significant focus was
put on interpreting their representations. One of the most prominent approaches
is structural probing (Hewitt and Manning, 2019), where a linear projection of
word embeddings is performed in order to approximate the topology of dependency
structures. In this work, we introduce a new type of structural probing, where
the linear projection is decomposed into 1. isomorphic space rotation; 2.
linear scaling that identifies and scales the most relevant dimensions. In
addition to syntactic dependency, we evaluate our method on novel tasks
(lexical hypernymy and position in a sentence). We jointly train the probes for
multiple tasks and experimentally show that lexical and syntactic information
is separated in the representations. Moreover, the orthogonal constraint makes
the Structural Probes less vulnerable to memorization.
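
The decomposition can be sketched as a learned rotation matrix kept (approximately) orthogonal by a penalty term, followed by a per-dimension scaling. The dimensions, the penalty form, and the placeholder loss below are illustrative, not the exact objective of the paper.

```python
# Structural probe decomposed into rotation + per-dimension scaling, with an
# explicit orthogonality penalty on the rotation matrix.
import torch
import torch.nn as nn

class OrthogonalProbe(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.rotation = nn.Parameter(torch.eye(dim))   # 1. space rotation
        self.scale = nn.Parameter(torch.ones(dim))     # 2. per-dim scaling

    def forward(self, embeddings):                     # [n_tokens, dim]
        return (embeddings @ self.rotation) * self.scale

    def orthogonality_penalty(self):
        eye = torch.eye(self.rotation.size(0))
        return ((self.rotation.T @ self.rotation - eye) ** 2).sum()

probe = OrthogonalProbe(dim=8)
x = torch.randn(5, 8)
loss = probe(x).pow(2).sum() + 0.1 * probe.orthogonality_penalty()  # toy loss
loss.backward()
```
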
| 2,021 |
Computation and Language
|
Can Sequence-to-Sequence Models Crack Substitution Ciphers?
|
Decipherment of historical ciphers is a challenging problem. The language of
the target plaintext might be unknown, and ciphertext can have a lot of noise.
State-of-the-art decipherment methods use beam search and a neural language
model to score candidate plaintext hypotheses for a given cipher, assuming the
plaintext language is known. We propose an end-to-end multilingual model for
solving simple substitution ciphers. We test our model on synthetic and real
historical ciphers and show that our proposed method can decipher text without
explicit language identification while still being robust to noise.
| 2,021 |
Computation and Language
|
Unsupervised Label-aware Event Trigger and Argument Classification
|
Identifying events and mapping them to pre-defined event types has long been
an important natural language processing problem. Most previous work has relied
heavily on labor-intensive and domain-specific annotations while
ignoring the semantic meaning contained in the labels of the event types. As a
result, the learned models cannot effectively generalize to new domains, where
new event types could be introduced. In this paper, we propose an unsupervised
event extraction pipeline, which first identifies events with available tools
(e.g., SRL) and then automatically maps them to pre-defined event types with
our proposed unsupervised classification model. Rather than relying on
annotated data, our model matches the semantics of identified events with those
of event type labels. Specifically, we leverage pre-trained language models to
contextually represent pre-defined types for both event triggers and arguments.
After we map identified events to the target types via representation
similarity, we use the event ontology (e.g., argument type "Victim" can only
appear as the argument of event type "Attack") as global constraints to
regularize the prediction. The proposed approach is shown to be very effective
when tested on the ACE-2005 dataset, which has 33 trigger and 22 argument
types. Without using any annotation, we successfully map 83% of the triggers
and 54% of the arguments to the correct types, almost doubling the performance
of previous zero-shot approaches.
| 2,021 |
Computation and Language
|
Robustness Testing of Language Understanding in Task-Oriented Dialog
|
Most language understanding models in task-oriented dialog systems are
trained on a small amount of annotated training data, and evaluated on a small
set from the same distribution. However, these models can lead to system
failure or undesirable output when exposed to natural language
perturbation or variation in practice. In this paper, we conduct a comprehensive
evaluation and analysis with respect to the robustness of natural language
understanding models, and introduce three important aspects related to language
understanding in real-world dialog systems, namely, language variety, speech
characteristics, and noise perturbation. We propose a model-agnostic toolkit
LAUG to approximate natural language perturbations for testing the robustness
issues in task-oriented dialog. Four data augmentation approaches covering the
three aspects are assembled in LAUG, which reveals critical robustness issues
in state-of-the-art models. The augmented dataset through LAUG can be used to
facilitate future research on the robustness testing of language understanding
in task-oriented dialog.
| 2,021 |
Computation and Language
|
Predicting cross-linguistic adjective order with information gain
|
Languages vary in their placement of multiple adjectives before, after, or
surrounding the noun, but they typically exhibit strong intra-language
tendencies on the relative order of those adjectives (e.g., the preference for
`big blue box' in English, `grande bo\^{i}te bleue' in French, and
`alsund\={u}q al'azraq alkab\={\i}r' in Arabic). We advance a new quantitative
account of adjective order across typologically-distinct languages based on
maximizing information gain. Our model addresses the left-right asymmetry of
French-type ANA sequences with the same approach as AAN and NAA orderings,
without appeal to other mechanisms. We find that, across 32 languages, the
preferred order of adjectives largely mirrors an efficient algorithm of
maximizing information gain.
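
For a concrete sense of the quantity involved, the snippet below computes, from toy co-occurrence counts, how much observing one adjective reduces uncertainty about the noun it modifies; how such gains map onto linear order is the subject of the paper and is not reproduced here. The counts are invented for illustration.

```python
# Information gained about the noun by observing one adjective, from counts.
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values() if c)

def information_gain(noun_counts, noun_counts_given_adj):
    return entropy(noun_counts) - entropy(noun_counts_given_adj)

nouns = Counter({"box": 10, "ball": 10, "sky": 10})
nouns_given_blue = Counter({"box": 3, "ball": 2, "sky": 9})
print(information_gain(nouns, nouns_given_blue))
```
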
| 2,021 |
Computation and Language
|
ECONET: Effective Continual Pretraining of Language Models for Event
Temporal Reasoning
|
While pre-trained language models (PTLMs) have achieved noticeable success on
many NLP tasks, they still struggle for tasks that require event temporal
reasoning, which is essential for event-centric applications. We present a
continual pre-training approach that equips PTLMs with targeted knowledge about
event temporal relations. We design self-supervised learning objectives to
recover masked-out event and temporal indicators and to discriminate sentences
from their corrupted counterparts (where event or temporal indicators got
replaced). By further pre-training a PTLM with these objectives jointly, we
reinforce its attention to event and temporal information, yielding enhanced
capability on event temporal reasoning. This effective continual pre-training
framework for event temporal reasoning (ECONET) improves the PTLMs' fine-tuning
performances across five relation extraction and question answering tasks and
achieves new or on-par state-of-the-art performances in most of our downstream
tasks.
| 2,021 |
Computation and Language
|
Generating Landmark Navigation Instructions from Maps as a Graph-to-Text
Problem
|
Car-focused navigation services are based on turns and distances of named
streets, whereas navigation instructions naturally used by humans are centered
around physical objects called landmarks. We present a neural model that takes
OpenStreetMap representations as input and learns to generate navigation
instructions that contain visible and salient landmarks from human natural
language instructions. Routes on the map are encoded in a location- and
rotation-invariant graph representation that is decoded into natural language
instructions. Our work is based on a novel dataset of 7,672 crowd-sourced
instances that have been verified by human navigation in Street View. Our
evaluation shows that the navigation instructions generated by our system have
similar properties as human-generated instructions, and lead to successful
human navigation in Street View.
| 2,021 |
Computation and Language
|
Corrected CBOW Performs as well as Skip-gram
|
Mikolov et al. (2013a) observed that continuous bag-of-words (CBOW) word
embeddings tend to underperform Skip-gram (SG) embeddings, and this finding has
been reported in subsequent works. We find that these observations are driven
not by fundamental differences in their training objectives, but more likely by
faulty negative-sampling CBOW implementations in popular libraries such as the
official implementation, word2vec.c, and Gensim. We show that after correcting
a bug in the CBOW gradient update, one can learn CBOW word embeddings that are
fully competitive with SG on various intrinsic and extrinsic tasks, while being
many times faster to train.
| 2,021 |
Computation and Language
|
DynaSent: A Dynamic Benchmark for Sentiment Analysis
|
We introduce DynaSent ('Dynamic Sentiment'), a new English-language benchmark
task for ternary (positive/negative/neutral) sentiment analysis. DynaSent
combines naturally occurring sentences with sentences created using the
open-source Dynabench Platform, which facilitates human-and-model-in-the-loop
dataset creation. DynaSent has a total of 121,634 sentences, each validated by
five crowdworkers, and its development and test splits are designed to produce
chance performance for even the best models we have been able to develop; when
future models solve this task, we will use them to create DynaSent version 2,
continuing the dynamic evolution of this benchmark. Here, we report on the
dataset creation effort, focusing on the steps we took to increase quality and
reduce artifacts. We also present evidence that DynaSent's Neutral category is
more coherent than the comparable category in other benchmarks, and we motivate
training models from scratch for each round over successive fine-tuning.
| 2,021 |
Computation and Language
|
Deriving Contextualised Semantic Features from BERT (and Other
Transformer Model) Embeddings
|
Models based on the transformer architecture, such as BERT, have marked a
crucial step forward in the field of Natural Language Processing. Importantly,
they allow the creation of word embeddings that capture important semantic
information about words in context. However, as single entities, these
embeddings are difficult to interpret and the models used to create them have
been described as opaque. Binder and colleagues proposed an intuitive embedding
space where each dimension is based on one of 65 core semantic features.
Unfortunately, the space only exists for a small dataset of 535 words, limiting
its uses. Previous work (Utsumi, 2018, 2020, Turton, Vinson & Smith, 2020) has
shown that Binder features can be derived from static embeddings and
successfully extrapolated to a large new vocabulary. Taking the next step, this
paper demonstrates that Binder features can be derived from the BERT embedding
space. This provides contextualised Binder embeddings, which can aid in
understanding semantic differences between words in context. It additionally
provides insights into how semantic features are represented across the
different layers of the BERT model.
| 2,021 |
Computation and Language
|
Optimizing Deeper Transformers on Small Datasets
|
It is a common belief that training deep transformers from scratch requires
large datasets. Consequently, for small datasets, people usually use shallow
and simple additional layers on top of pre-trained models during fine-tuning.
This work shows that this does not always need to be the case: with proper
initialization and optimization, the benefits of very deep transformers can
carry over to challenging tasks with small datasets, including Text-to-SQL
semantic parsing and logical reading comprehension. In particular, we
successfully train $48$ layers of transformers, comprising $24$ fine-tuned
layers from pre-trained RoBERTa and $24$ relation-aware layers trained from
scratch. With fewer training steps and no task-specific pre-training, we obtain
the state-of-the-art performance on the challenging cross-domain Text-to-SQL
parsing benchmark Spider. We achieve this by deriving a novel Data-dependent
Transformer Fixed-update initialization scheme (DT-Fixup), inspired by the
prior T-Fixup work. Further error analysis shows that increasing depth can help
improve generalization on small datasets for hard cases that require reasoning
and structural understanding.
| 2,021 |
Computation and Language
|
Refine and Imitate: Reducing Repetition and Inconsistency in Persuasion
Dialogues via Reinforcement Learning and Human Demonstration
|
Persuasion dialogue systems reflect the machine's ability to make strategic
moves beyond verbal communication, and therefore differentiate themselves from
task-oriented or open-domain dialogue systems and have their own unique values.
However, the repetition and inconsistency problems still persist in dialogue
response generation and could substantially impact user experience and impede
the persuasion outcome. Besides, although reinforcement learning (RL)
approaches have achieved big success in strategic tasks such as games, they
require a sophisticated user simulator to provide real-time feedback to the
dialogue system, which limits the application of RL on persuasion dialogues. To
address these issues towards a better persuasion dialogue system, we apply RL
to refine a language model baseline without user simulators, and distill
sentence-level information about repetition, inconsistency, and task relevance
through rewards. Moreover, to better accomplish the persuasion task, the model
learns from human demonstration to imitate human persuasion behavior and
selects the most persuasive responses. Experiments show that our model
outperforms previous state-of-the-art dialogue models on both automatic metrics
and human evaluation results on a donation persuasion task, and generates more
diverse, consistent and persuasive conversations according to the user
feedback.
| 2,022 |
Computation and Language
|
UNIMO: Towards Unified-Modal Understanding and Generation via
Cross-Modal Contrastive Learning
|
Existing pre-training methods focus on either single-modal tasks or
multi-modal tasks, and cannot effectively adapt to the other setting. They can only
utilize single-modal data (i.e. text or image) or limited multi-modal data
(i.e. image-text pairs). In this work, we propose a unified-modal pre-training
architecture, namely UNIMO, which can effectively adapt to both single-modal
and multi-modal understanding and generation tasks. Large-scale free text
corpora and image collections can be utilized to improve the capability of
visual and textual understanding, and cross-modal contrastive learning (CMCL)
is leveraged to align the textual and visual information into a unified
semantic space over a corpus of image-text pairs. As the non-paired
single-modal data is very rich, our model can utilize a much larger scale of data
to learn more generalizable representations. Moreover, the textual knowledge
and visual knowledge can enhance each other in the unified semantic space. The
experimental results show that UNIMO significantly improves the performance of
several single-modal and multi-modal downstream tasks. Our code and pre-trained
models are publicly available at the UNIMO project page https://unimo-ptm.github.io/
| 2,022 |
Computation and Language
|
Directed Beam Search: Plug-and-Play Lexically Constrained Language
Generation
|
Large pre-trained language models are capable of generating realistic text.
However, controlling these models so that the generated text satisfies lexical
constraints, i.e., contains specific words, is a challenging problem. Given
that state-of-the-art language models are too large to be trained from scratch
in a manageable time, it is desirable to control these models without
re-training them. Methods capable of doing this are called plug-and-play.
Recent plug-and-play methods have been successful in constraining small
bidirectional language models as well as forward models in tasks with a
restricted search space, e.g., machine translation. However, controlling large
transformer-based models to meet lexical constraints without re-training them
remains a challenge. In this work, we propose Directed Beam Search (DBS), a
plug-and-play method for lexically constrained language generation. Our method
can be applied to any language model, is easy to implement and can be used for
general language generation. In our experiments we use DBS to control GPT-2. We
demonstrate its performance on keyword-to-phrase generation and we obtain
comparable results as a state-of-the-art non-plug-and-play model for lexically
constrained story generation.
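
A heavily simplified, hypothetical sketch of lexically guided beam scoring in the plug-and-play spirit described above (not the exact DBS algorithm): hypotheses get a bonus the first time they cover a target keyword, while `lm_topk_continuations` stands in for one decoding step of any language model such as GPT-2.

```python
# Keyword-guided rescoring of beam hypotheses during decoding.
def lm_topk_continuations(prefix, k=3):            # stand-in for an LM step
    return [(prefix + [w], -1.0) for w in ["the", "garden", "roses"][:k]]

def guided_beam_step(beams, keywords, bonus=2.0, beam_size=3):
    candidates = []
    for tokens, score in beams:
        for new_tokens, logp in lm_topk_continuations(tokens):
            newly_covered = sum(kw in new_tokens and kw not in tokens
                                for kw in keywords)
            candidates.append((new_tokens, score + logp + bonus * newly_covered))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_size]

beams = [(["plant"], 0.0)]
for _ in range(3):
    beams = guided_beam_step(beams, keywords={"roses", "garden"})
print(" ".join(beams[0][0]), beams[0][1])
```
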
| 2,021 |
Computation and Language
|
An Experimental Evaluation of Transformer-based Language Models in the
Biomedical Domain
|
With the growing amount of text in health data, there have been rapid
advances in large pre-trained models that can be applied to a wide variety of
biomedical tasks with minimal task-specific modifications. Emphasizing the cost
of these models, which renders technical replication challenging, this paper
summarizes experiments conducted in replicating BioBERT and further
pre-training and careful fine-tuning in the biomedical domain. We also
investigate the effectiveness of domain-specific and domain-agnostic
pre-trained models across downstream biomedical NLP tasks. Our findings confirm
that pre-trained models can be impactful in some downstream NLP tasks (QA and
NER) in the biomedical domain; however, this improvement may not justify the
high cost of domain-specific pre-training.
| 2,021 |
Computation and Language
|
Verb Knowledge Injection for Multilingual Event Processing
|
In parallel to their overwhelming success across NLP tasks, the language ability
of deep Transformer networks pretrained via language modeling (LM) objectives
has undergone extensive scrutiny. While probing revealed that these models
encode a range of syntactic and semantic properties of a language, they are
still prone to fall back on superficial cues and simple heuristics to solve
downstream tasks, rather than leverage deeper linguistic knowledge. In this
paper, we target one such area of their deficiency, verbal reasoning. We
investigate whether injecting explicit information on verbs' semantic-syntactic
behaviour improves the performance of LM-pretrained Transformers in event
extraction tasks -- downstream tasks for which accurate verb processing is
paramount. Concretely, we impart the verb knowledge from curated lexical
resources into dedicated adapter modules (dubbed verb adapters), allowing it to
complement, in downstream tasks, the language knowledge obtained during
LM-pretraining. We first demonstrate that injecting verb knowledge leads to
performance gains in English event extraction. We then explore the utility of
verb adapters for event extraction in other languages: we investigate (1)
zero-shot language transfer with multilingual Transformers as well as (2)
transfer via (noisy automatic) translation of English verb-based lexical
constraints. Our results show that the benefits of verb knowledge injection
indeed extend to other languages, even when verb adapters are trained on
noisily translated constraints.
| 2,021 |
Computation and Language
|
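The verb adapters above are dedicated adapter modules added to a pretrained Transformer. A generic bottleneck adapter block of that family is sketched below; the hidden sizes, activation, and placement are assumptions and do not reproduce the paper's exact configuration.

```python
# Generic bottleneck adapter sketch (down-project, nonlinearity, up-project,
# residual connection), of the kind used to inject external knowledge such as
# verb lexical information. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual connection keeps the pretrained representation intact;
        # only the small adapter parameters are trained.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = Adapter()
out = adapter(torch.randn(2, 10, 768))   # (batch, seq_len, hidden)
```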
The jsRealB Text Realizer: Organization and Use Cases -- Revised version
|
This paper describes the design principles behind jsRealB (Version 4.0), a
surface realizer written in JavaScript for English or French sentences from a
specification inspired by the constituent syntax formalism but for which a
dependency-based input notation is also available. jsRealB can be used either
within a web page or as a node.js module. We show that the seemingly simple
process of text realization involves many interesting implementation challenges
in order to take into account the specifics of each language. jsRealB has a
large coverage of English and French and has been used to develop realistic
data-to-text applications and to reproduce existing literary texts and
sentences from Universal Dependency annotations. Its source code and that of
its applications are available on GitHub. The port of this approach to Python
(pyrealb) is also presented.
| 2,022 |
Computation and Language
|
Text-Free Image-to-Speech Synthesis Using Learned Segmental Units
|
In this paper we present the first model for directly synthesizing fluent,
natural-sounding spoken audio captions for images that does not require natural
language text as an intermediate representation or source of supervision.
Instead, we connect the image captioning module and the speech synthesis module
with a set of discrete, sub-word speech units that are discovered with a
self-supervised visual grounding task. We conduct experiments on the Flickr8k
spoken caption dataset in addition to a novel corpus of spoken audio captions
collected for the popular MSCOCO dataset, demonstrating that our generated
captions also capture diverse visual semantics of the images they describe. We
investigate several different intermediate speech representations, and
empirically find that the representation must satisfy several important
properties to serve as drop-in replacements for text.
| 2,021 |
Computation and Language
|
Fully Synthetic Data Improves Neural Machine Translation with Knowledge
Distillation
|
This paper explores augmenting monolingual data for knowledge distillation in
neural machine translation. Source language monolingual text can be
incorporated as a forward translation. Interestingly, we find the best way to
incorporate target language monolingual text is to translate it to the source
language and round-trip translate it back to the target language, resulting in
a fully synthetic corpus. We find that combining monolingual data from both
source and target languages yields better performance than a corpus twice as
large only in one language. Moreover, experiments reveal that the improvement
depends upon the provenance of the test set. If the test set was originally in
the source language (with the target side written by translators), then forward
translating source monolingual data matters. If the test set was originally in
the target language (with the source written by translators), then
incorporating target monolingual data matters.
| 2,021 |
Computation and Language
|
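The round-trip construction described above can be summarized in a few lines; translate_to_source and translate_to_target below are hypothetical placeholders for whatever MT systems are available, not functions from a specific library.

```python
# Schematic of building a fully synthetic corpus from target-language monolingual
# text: translate it into the source language, then round-trip it back into the
# target language. Both translate functions are hypothetical placeholders.
def translate_to_source(target_sentences):
    return [f"<src translation of: {s}>" for s in target_sentences]   # placeholder

def translate_to_target(source_sentences):
    return [f"<tgt translation of: {s}>" for s in source_sentences]   # placeholder

def build_fully_synthetic_corpus(target_monolingual):
    synthetic_source = translate_to_source(target_monolingual)
    synthetic_target = translate_to_target(synthetic_source)   # round trip
    # Both sides of the resulting parallel corpus are machine-generated.
    return list(zip(synthetic_source, synthetic_target))

corpus = build_fully_synthetic_corpus(["Das ist ein Beispiel.", "Noch ein Satz."])
```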
CLEAR: Contrastive Learning for Sentence Representation
|
Pre-trained language models have proven their unique powers in capturing
implicit language features. However, most pre-training approaches focus on the
word-level training objective, while sentence-level objectives are rarely
studied. In this paper, we propose Contrastive LEArning for sentence
Representation (CLEAR), which employs multiple sentence-level augmentation
strategies in order to learn a noise-invariant sentence representation. These
augmentations include word and span deletion, reordering, and substitution.
Furthermore, we investigate the key reasons that make contrastive learning
effective through numerous experiments. We observe that different sentence
augmentations during pre-training lead to different performance improvements on
various downstream tasks. Our approach is shown to outperform multiple existing
methods on both SentEval and GLUE benchmarks.
| 2,021 |
Computation and Language
|
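A few of the sentence-level augmentations named above (word deletion, span deletion, reordering) are easy to sketch; substitution is omitted because it needs a synonym source. Probabilities and span sizes are illustrative assumptions, not the paper's settings.

```python
# Simple sentence-level augmentations for contrastive pre-training; parameters
# are illustrative assumptions.
import random

def word_deletion(tokens, p=0.15):
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else tokens

def span_deletion(tokens, max_span=3):
    if len(tokens) <= max_span:
        return tokens
    start = random.randrange(len(tokens) - max_span)
    return tokens[:start] + tokens[start + max_span:]

def reordering(tokens):
    shuffled = tokens[:]
    random.shuffle(shuffled)
    return shuffled

sentence = "contrastive learning builds noise invariant sentence representations".split()
print(word_deletion(sentence), span_deletion(sentence), reordering(sentence))
```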
FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale
Generation
|
Natural language (NL) explanations of model predictions are gaining
popularity as a means to understand and verify decisions made by large
black-box pre-trained models, for NLP tasks such as Question Answering (QA) and
Fact Verification. Recently, pre-trained sequence to sequence (seq2seq) models
have proven to be very effective in jointly making predictions, as well as
generating NL explanations. However, these models have many shortcomings; they
can fabricate explanations even for incorrect predictions, they are difficult
to adapt to long input documents, and their training requires a large amount of
labeled data. In this paper, we develop FiD-Ex, which addresses these
shortcomings for seq2seq models by: 1) introducing sentence markers to
eliminate explanation fabrication by encouraging extractive generation, 2)
using the fusion-in-decoder architecture to handle long input contexts, and 3)
intermediate fine-tuning on re-structured open domain QA datasets to improve
few-shot performance. FiD-Ex significantly improves over prior work in terms of
explanation metrics and task accuracy, on multiple tasks from the ERASER
explainability benchmark, both in the fully supervised and in the few-shot
settings.
| 2,021 |
Computation and Language
|
Seeing is Knowing! Fact-based Visual Question Answering using Knowledge
Graph Embeddings
|
Fact-based Visual Question Answering (FVQA), a challenging variant of VQA,
requires a QA-system to include facts from a diverse knowledge graph (KG) in
its reasoning process to produce an answer. Large KGs, especially common-sense
KGs, are known to be incomplete, i.e., not all non-existent facts are always
incorrect. Therefore, being able to reason over incomplete KGs for QA is a
critical requirement in real-world applications that has not been addressed
extensively in the literature. We develop a novel QA architecture that allows
us to reason over incomplete KGs, something current FVQA state-of-the-art
(SOTA) approaches lack due to their critical reliance on fact retrieval. We use
KG Embeddings, a technique widely used for KG completion, for the downstream
task of FVQA. We also employ a new image representation technique we call
'Image-as-Knowledge' to enable this capability, alongside a simple one-step
CoAttention mechanism to attend to text and image during QA. Our FVQA
architecture is faster during inference time, being O(m), as opposed to
existing FVQA SOTA methods which are O(N log N), where m = number of vertices,
N = number of edges = O(m^2). KG embeddings are shown to hold complementary
information to word embeddings: a combination of the two permits
performance comparable to SOTA methods in the standard answer retrieval task,
and significantly better (26% absolute) in the proposed missing-edge reasoning
task.
| 2,021 |
Computation and Language
|
Towards Zero-Shot Knowledge Distillation for Natural Language Processing
|
Knowledge Distillation (KD) is a common knowledge transfer algorithm used for
model compression across a variety of deep learning based natural language
processing (NLP) solutions. In its regular manifestations, KD requires access
to the teacher's training data for knowledge transfer to the student network.
However, privacy concerns, data regulations and proprietary reasons may prevent
access to such data. We present, to the best of our knowledge, the first work
on Zero-Shot Knowledge Distillation for NLP, where the student learns from the
much larger teacher without any task-specific data. Our solution combines
out-of-domain data and adversarial training to learn the teacher's output
distribution. We investigate six tasks from the GLUE benchmark and demonstrate
that we can achieve between 75% and 92% of the teacher's classification score
(accuracy or F1) while compressing the model 30 times.
| 2,021 |
Computation and Language
|
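The core distillation signal described above, matching the teacher's output distribution on unlabeled (here, out-of-domain) inputs, can be sketched as a temperature-scaled KL divergence; the adversarial data generation is not shown, and the temperature value is an assumption.

```python
# Sketch of the distillation loss: the student matches the teacher's softened
# output distribution. Logits below are toy placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, scaled by T^2 as is conventional for soft-label distillation
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

loss = distillation_loss(torch.randn(4, 3), torch.randn(4, 3))
```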
Continual Learning in Task-Oriented Dialogue Systems
|
Continual learning in task-oriented dialogue systems can allow us to add new
domains and functionalities through time without incurring the high cost of a
whole system retraining. In this paper, we propose a continual learning
benchmark for task-oriented dialogue systems with 37 domains to be learned
continuously in four settings, namely intent recognition, state tracking,
natural language generation, and end-to-end. Moreover, we implement and compare
multiple existing continual learning baselines, and we propose a simple yet
effective architectural method based on residual adapters. Our experiments
demonstrate that the proposed architectural method and a simple replay-based
strategy perform comparably well but they both achieve inferior performance to
the multi-task learning baseline, in which all the data are shown at once,
showing that continual learning in task-oriented dialogue systems is a
challenging task. Furthermore, we reveal several trade-offs between different
continual learning methods in terms of parameter usage and memory size, which
are important in the design of a task-oriented dialogue system. The proposed
benchmark is released together with several baselines to promote more research
in this direction.
| 2,021 |
Computation and Language
|
Neural Machine Translation: A Review of Methods, Resources, and Tools
|
Machine translation (MT) is an important sub-field of natural language
processing that aims to translate natural languages using computers. In recent
years, end-to-end neural machine translation (NMT) has achieved great success
and has become the new mainstream method in practical MT systems. In this
article, we first provide a broad review of the methods for NMT and focus on
methods relating to architectures, decoding, and data augmentation. Then we
summarize the resources and tools that are useful for researchers. Finally, we
conclude with a discussion of possible future research directions.
| 2,021 |
Computation and Language
|
AraELECTRA: Pre-Training Text Discriminators for Arabic Language
Understanding
|
Advances in English language representation enabled a more sample-efficient
pre-training task by Efficiently Learning an Encoder that Classifies Token
Replacements Accurately (ELECTRA). Instead of training a model to recover
masked tokens, ELECTRA trains a discriminator model to distinguish true
input tokens from corrupted tokens that were replaced by a generator network.
On the other hand, current Arabic language representation approaches rely only
on pretraining via masked language modeling. In this paper, we develop an
Arabic language representation model, which we name AraELECTRA. Our model is
pretrained using the replaced token detection objective on large Arabic text
corpora. We evaluate our model on multiple Arabic NLP tasks, including reading
comprehension, sentiment analysis, and named-entity recognition and we show
that AraELECTRA outperforms current state-of-the-art Arabic language
representation models, given the same pretraining data and with even a smaller
model size.
| 2,021 |
Computation and Language
|
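A minimal sketch of the replaced-token-detection objective referenced above: the discriminator predicts, per token, whether the generator replaced it. The toy tensors and the single-logit-per-token discriminator head are assumptions.

```python
# Sketch of the replaced-token-detection setup: a generator fills in masked
# positions, and the discriminator is trained to label each token as original
# or replaced. The tensors below are toy placeholders.
import torch
import torch.nn.functional as F

original_ids  = torch.tensor([[12, 7, 99, 4, 31]])
corrupted_ids = torch.tensor([[12, 7, 55, 4, 31]])   # generator replaced position 2

# Per-token binary target: 1 where the generator changed the token.
labels = (original_ids != corrupted_ids).float()

# Hypothetical discriminator output: one logit per token.
disc_logits = torch.randn(1, 5)
rtd_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)
```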
AraGPT2: Pre-Trained Transformer for Arabic Language Generation
|
Recently, pre-trained transformer-based architectures have proven to be very
efficient at language modeling and understanding, given that they are trained
on a large enough corpus. Applications in language generation for Arabic are
still lagging in comparison to other NLP advances primarily due to the lack of
advanced Arabic language generation models. In this paper, we develop the first
advanced Arabic language generation model, AraGPT2, trained from scratch on a
large Arabic corpus of internet text and news articles. Our largest model,
AraGPT2-mega, has 1.46 billion parameters, which makes it the largest Arabic
language model available. The Mega model was evaluated and showed success on
different tasks including synthetic news generation, and zero-shot question
answering. For text generation, our best model achieves a perplexity of 29.8 on
held-out Wikipedia articles. A study conducted with human evaluators showed the
significant success of AraGPT2-mega in generating news articles that are
difficult to distinguish from articles written by humans. We thus develop and
release an automatic discriminator model with 98% accuracy in
detecting model-generated text. The models are also publicly available, hoping
to encourage new research directions and applications for Arabic NLP.
| 2,021 |
Computation and Language
|
Fast WordPiece Tokenization
|
Tokenization is a fundamental preprocessing step for almost all NLP tasks. In
this paper, we propose efficient algorithms for the WordPiece tokenization used
in BERT, from single-word tokenization to general text (e.g., sentence)
tokenization. When tokenizing a single word, WordPiece uses a
longest-match-first strategy, known as maximum matching. The best known
algorithms so far are O(n^2) (where n is the input length) or O(nm) (where m is
the maximum vocabulary token length). We propose a novel algorithm whose
tokenization complexity is strictly O(n). Our method is inspired by the
Aho-Corasick algorithm. We introduce additional linkages on top of the trie
built from the vocabulary, allowing smart transitions when the trie matching
cannot continue. For general text, we further propose an algorithm that
combines pre-tokenization (splitting the text into words) and our linear-time
WordPiece method into a single pass. Experimental results show that our method
is 8.2x faster than HuggingFace Tokenizers and 5.1x faster than TensorFlow Text
on average for general text tokenization.
| 2,021 |
Computation and Language
|
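For reference, the baseline longest-match-first (MaxMatch) rule mentioned above can be written as the quadratic-time loop below; the paper's linear-time algorithm with Aho-Corasick-style failure links over the vocabulary trie is not reproduced here.

```python
# Baseline longest-match-first WordPiece tokenization for a single word.
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    tokens, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece          # continuation pieces get the ## prefix
            if piece in vocab:
                cur = piece                   # longest piece in the vocabulary wins
                break
            end -= 1
        if cur is None:
            return [unk]                      # no piece matches: emit the unknown token
        tokens.append(cur)
        start = end
    return tokens

vocab = {"un", "##aff", "##able", "##a", "##ff"}
print(wordpiece_tokenize("unaffable", vocab))   # ['un', '##aff', '##able']
```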
BANG: Bridging Autoregressive and Non-autoregressive Generation with
Large Scale Pretraining
|
In this paper, we propose BANG, a new pretraining model to Bridge the gap
between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR
generation can be uniformly characterized by the extent to which previous tokens
can be attended to, and BANG bridges AR and NAR generation by designing a novel model
structure for large-scale pretraining. The pretrained BANG model can
simultaneously support AR, NAR and semi-NAR generation to meet different
requirements. Experiments on question generation (SQuAD 1.1), summarization
(XSum) and dialogue generation (PersonaChat) show that BANG improves NAR and
semi-NAR performance significantly as well as attaining comparable performance
with strong AR pretrained models. Compared with the semi-NAR strong baselines,
BANG achieves absolute improvements of 14.01 and 5.24 in the overall scores of
SQuAD 1.1 and XSum, respectively. In addition, BANG achieves absolute
improvements of 10.73, 6.39 and 5.90 in the overall scores of SQuAD, XSUM and
PersonaChat respectively compared with the strong NAR baselines.
| 2,021 |
Computation and Language
|
HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions
|
Collecting supporting evidence from large corpora of text (e.g., Wikipedia)
is a great challenge for open-domain Question Answering (QA). In particular, for
multi-hop open-domain QA, scattered evidence pieces are required to be gathered
together to support the answer extraction. In this paper, we propose a new
retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia
for complex question answering. Specifically, the hop in this paper is defined
as the combination of a hyperlink and the corresponding outbound link document.
The hyperlink is encoded as the mention embedding which models the structured
knowledge of how the outbound link entity is mentioned in the textual context,
and the corresponding outbound link document is encoded as the document
embedding representing the unstructured knowledge within it. Accordingly, we
build HopRetriever which retrieves hops over Wikipedia to answer complex
questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever
outperforms previously published evidence retrieval methods by large margins.
Moreover, our approach also yields quantifiable interpretations of the evidence
collection process.
| 2,021 |
Computation and Language
|
XLM-T: Scaling up Multilingual Machine Translation with Pretrained
Cross-lingual Transformer Encoders
|
Multilingual machine translation enables a single model to translate between
different languages. Most existing multilingual machine translation systems
adopt a randomly initialized Transformer backbone. In this work, inspired by
the recent success of language model pre-training, we present XLM-T, which
initializes the model with an off-the-shelf pretrained cross-lingual
Transformer encoder and fine-tunes it with multilingual parallel data. This
simple method achieves significant improvements on a WMT dataset with 10
language pairs and the OPUS-100 corpus with 94 pairs. Surprisingly, the method
is also effective even on top of the strong baseline with back-translation.
Moreover, extensive analysis of XLM-T on unsupervised syntactic parsing, word
alignment, and multilingual classification explains its effectiveness for
machine translation. The code will be available at https://aka.ms/xlm-t.
| 2,021 |
Computation and Language
|
UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
|
Massively multilingual language models such as multilingual BERT offer
state-of-the-art cross-lingual transfer performance on a range of NLP tasks.
However, due to limited capacity and large differences in pretraining data
sizes, there is a profound performance gap between resource-rich and
resource-poor target languages. The ultimate challenge is dealing with
under-resourced languages not covered at all by the models and written in
scripts unseen during pretraining. In this work, we propose a series of novel
data-efficient methods that enable quick and effective adaptation of pretrained
multilingual models to such low-resource languages and unseen scripts. Relying
on matrix factorization, our methods capitalize on the existing latent
knowledge about multiple languages already available in the pretrained model's
embedding matrix. Furthermore, we show that learning of the new dedicated
embedding matrix in the target language can be improved by leveraging a small
number of vocabulary items (i.e., the so-called lexically overlapping tokens)
shared between the mBERT vocabulary and the target-language vocabulary. Our adaptation
techniques offer substantial performance gains for languages with unseen
scripts. We also demonstrate that they can yield improvements for low-resource
languages written in scripts covered by the pretrained model.
| 2,021 |
Computation and Language
|
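One way to picture the matrix-factorization idea above: factor the pretrained embedding matrix into per-token coordinates and a shared basis, then allocate new coordinates for the unseen-script vocabulary and copy coordinates for lexically overlapping tokens. The SVD recipe, dimensions, and overlap map below are assumptions for illustration, not the paper's exact method.

```python
# Illustrative matrix-factorization initialization: decompose a pretrained
# embedding matrix E (vocab x dim) into coordinates F and a shared basis B,
# then allocate new coordinates for an unseen-script vocabulary, reusing rows
# for lexically overlapping tokens. Sizes and the SVD recipe are assumptions.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 768))            # stand-in for a pretrained embedding matrix

k = 256                                     # latent dimensionality (assumed)
U, S, Vt = np.linalg.svd(E, full_matrices=False)
F_src = U[:, :k] * S[:k]                    # per-token coordinates (vocab x k)
B = Vt[:k]                                  # shared basis (k x dim), kept frozen

new_vocab_size = 500
F_tgt = rng.normal(scale=0.02, size=(new_vocab_size, k))   # to be learned
overlap = {3: 17}                           # target-token id -> source-token id (toy)
for tgt_id, src_id in overlap.items():
    F_tgt[tgt_id] = F_src[src_id]           # reuse coordinates of overlapping tokens

E_tgt = F_tgt @ B                           # new embedding matrix for the target language
```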
Coreference Reasoning in Machine Reading Comprehension
|
Coreference resolution is essential for natural language understanding and
has been long studied in NLP. In recent years, as the format of Question
Answering (QA) became a standard for machine reading comprehension (MRC), there
have been data collection efforts, e.g., Dasigi et al. (2019), that attempt to
evaluate the ability of MRC models to reason about coreference. However, as we
show, coreference reasoning in MRC is a greater challenge than earlier thought;
MRC datasets do not reflect the natural distribution and, consequently, the
challenges of coreference reasoning. Specifically, success on these datasets
does not reflect a model's proficiency in coreference reasoning. We propose a
methodology for creating MRC datasets that better reflect the challenges of
coreference reasoning and use it to create a sample evaluation set. The results
on our dataset show that state-of-the-art models still struggle with these
phenomena. Furthermore, we develop an effective way to use naturally occurring
coreference phenomena from existing coreference resolution datasets when
training MRC models. This allows us to show an improvement in the coreference
reasoning abilities of state-of-the-art models. The code and the resulting
dataset are available at https://github.com/UKPLab/coref-reasoning-in-qa.
| 2,021 |
Computation and Language
|
HateCheck: Functional Tests for Hate Speech Detection Models
|
Detecting online hate is a difficult task that even state-of-the-art models
struggle with. Typically, hate speech detection models are evaluated by
measuring their performance on held-out test data using metrics such as
accuracy and F1 score. However, this approach makes it difficult to identify
specific model weak points. It also risks overestimating generalisable model
performance due to increasingly well-evidenced systematic gaps and biases in
hate speech datasets. To enable more targeted diagnostic insights, we introduce
HateCheck, a suite of functional tests for hate speech detection models. We
specify 29 model functionalities motivated by a review of previous research and
a series of interviews with civil society stakeholders. We craft test cases for
each functionality and validate their quality through a structured annotation
process. To illustrate HateCheck's utility, we test near-state-of-the-art
transformer models as well as two popular commercial models, revealing critical
model weaknesses.
| 2,021 |
Computation and Language
|
How Good is Your Tokenizer? On the Monolingual Performance of
Multilingual Language Models
|
In this work, we provide a systematic and comprehensive empirical comparison
of pretrained multilingual language models versus their monolingual
counterparts with regard to their monolingual task performance. We study a set
of nine typologically diverse languages with readily available pretrained
monolingual models on a set of five diverse monolingual downstream tasks. We
first aim to establish, via fair and controlled comparisons, if a gap between
the multilingual and the corresponding monolingual representation of that
language exists, and subsequently investigate the reason for any performance
difference. To disentangle conflating factors, we train new monolingual models
on the same data, with monolingually and multilingually trained tokenizers. We
find that while the pretraining data size is an important factor, a designated
monolingual tokenizer plays an equally important role in the downstream
performance. Our results show that languages that are adequately represented in
the multilingual model's vocabulary exhibit negligible performance decreases
over their monolingual counterparts. We further find that replacing the
original multilingual tokenizer with the specialized monolingual tokenizer
improves the downstream performance of the multilingual model for almost every
task and language.
| 2,021 |
Computation and Language
|
Open Korean Corpora: A Practical Report
|
Korean is often referred to as a low-resource language in the research
community. While this claim is partially true, it is also because the
availability of resources is inadequately advertised and curated. This work
curates and reviews a list of Korean corpora, first describing
institution-level resource development, then further iterate through a list of
current open datasets for different types of tasks. We then propose a direction
on how open-source dataset construction and releases should be done for
less-resourced languages to promote research.
| 2,023 |
Computation and Language
|
TexSmart: A Text Understanding System for Fine-Grained NER and Enhanced
Semantic Analysis
|
This technical report introduces TexSmart, a text understanding system that
supports fine-grained named entity recognition (NER) and enhanced semantic
analysis functionalities. Compared to most previous publicly available text
understanding systems and tools, TexSmart holds some unique features. First,
the NER function of TexSmart supports over 1,000 entity types, while most other
public tools typically support several to (at most) dozens of entity types.
Second, TexSmart introduces new semantic analysis functions like semantic
expansion and deep semantic representation, that are absent in most previous
systems. Third, a spectrum of algorithms (from very fast algorithms to those
that are relatively slow but more accurate) is implemented for one function in
TexSmart, to fulfill the requirements of different academic and industrial
applications. The adoption of unsupervised or weakly-supervised algorithms is
especially emphasized, with the goal of easily updating our models to include
fresh data with less human annotation effort.
The main contents of this report include major functions of TexSmart,
algorithms for achieving these functions, how to use the TexSmart toolkit and
Web APIs, and evaluation results of some key algorithms.
| 2,021 |
Computation and Language
|
CoCoLM: COmplex COmmonsense Enhanced Language Model with Discourse
Relations
|
Large-scale pre-trained language models have demonstrated strong knowledge
representation ability. However, recent studies suggest that even though these
giant models contain rich simple commonsense knowledge (e.g., birds can fly and
fish can swim), they often struggle with the complex commonsense knowledge
that involves multiple eventualities (verb-centric phrases, e.g., identifying
the relationship between ``Jim yells at Bob'' and ``Bob is upset''). To address
this problem, in this paper, we propose to help pre-trained language models
better incorporate complex commonsense knowledge. Different from existing
fine-tuning approaches, we do not focus on a specific task and propose a
general language model named CoCoLM. Through careful training over the
large-scale eventuality knowledge graph ASER, we successfully teach
pre-trained language models (i.e., BERT and RoBERTa) rich complex commonsense
knowledge among eventualities. Experiments on multiple downstream commonsense
tasks that require the correct understanding of eventualities demonstrate the
effectiveness of CoCoLM.
| 2,022 |
Computation and Language
|
Vocabulary Learning via Optimal Transport for Neural Machine Translation
|
The choice of token vocabulary affects the performance of machine
translation. This paper aims to figure out what is a good vocabulary and
whether one can find the optimal vocabulary without trial training. To answer
these questions, we first provide an alternative understanding of the role of
vocabulary from the perspective of information theory. Motivated by this, we
formulate the quest of vocabularization -- finding the best token dictionary
with a proper size -- as an optimal transport (OT) problem. We propose VOLT, a
simple and efficient solution without trial training. Empirical results show
that VOLT outperforms widely-used vocabularies in diverse scenarios, including
WMT-14 English-German and TED's 52 translation directions. For example, VOLT
achieves almost 70% vocabulary size reduction and 0.5 BLEU gain on
English-German translation. Also, compared to BPE-search, VOLT reduces the
search time from 384 GPU hours to 30 GPU hours on English-German translation.
Codes are available at https://github.com/Jingjing-NLP/VOLT .
| 2,021 |
Computation and Language
|
ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual
Semantics with Monolingual Corpora
|
Recent studies have demonstrated that pre-trained cross-lingual models
achieve impressive performance in downstream cross-lingual tasks. This
improvement benefits from learning a large amount of monolingual and parallel
corpora. Although it is generally acknowledged that parallel corpora are
critical for improving the model performance, existing methods are often
constrained by the size of parallel corpora, especially for low-resource
languages. In this paper, we propose ERNIE-M, a new training method that
encourages the model to align the representation of multiple languages with
monolingual corpora, to overcome the constraint that the parallel corpus size
places on the model performance. Our key insight is to integrate
back-translation into the pre-training process. We generate pseudo-parallel
sentence pairs on a monolingual corpus to enable the learning of semantic
alignments between different languages, thereby enhancing the semantic modeling
of cross-lingual models. Experimental results show that ERNIE-M outperforms
existing cross-lingual models and delivers new state-of-the-art results in
various cross-lingual downstream tasks.
| 2,021 |
Computation and Language
|
A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots
Matters
|
Few-shot crosslingual transfer has been shown to outperform its zero-shot
counterpart with pretrained encoders like multilingual BERT. Despite its
growing popularity, little to no attention has been paid to standardizing and
analyzing the design of few-shot experiments. In this work, we highlight a
fundamental risk posed by this shortcoming, illustrating that the model
exhibits a high degree of sensitivity to the selection of few shots. We conduct
a large-scale experimental study on 40 sets of sampled few shots for six
diverse NLP tasks across up to 40 languages. We provide an analysis of success
and failure cases of few-shot transfer, which highlights the role of lexical
features. Additionally, we show that a straightforward full model finetuning
approach is quite effective for few-shot transfer, outperforming several
state-of-the-art few-shot approaches. As a step towards standardizing few-shot
crosslingual experimental designs, we make our sampled few shots publicly
available.
| 2,021 |
Computation and Language
|
ERNIE-Doc: A Retrospective Long-Document Modeling Transformer
|
Transformers are not suited for processing long documents, due to their
quadratically increasing memory and time consumption. Simply truncating a long
document or applying the sparse attention mechanism will incur the context
fragmentation problem or lead to an inferior modeling capability against
comparable model sizes. In this paper, we propose ERNIE-Doc, a document-level
language pretraining model based on Recurrence Transformers. Two well-designed
techniques, namely the retrospective feed mechanism and the enhanced recurrence
mechanism, enable ERNIE-Doc, which has a much longer effective context length,
to capture the contextual information of a complete document. We pretrain
ERNIE-Doc to explicitly learn the relationships among segments with an
additional document-aware segment-reordering objective. Various experiments
were conducted on both English and Chinese document-level tasks. ERNIE-Doc
improved the state-of-the-art language modeling result of perplexity to 16.8 on
WikiText-103. Moreover, it outperformed competitive pretraining models by a
large margin on most language understanding tasks, such as text classification
and question answering.
| 2,021 |
Computation and Language
|
Better Robustness by More Coverage: Adversarial Training with Mixup
Augmentation for Robust Fine-tuning
|
Pretrained language models (PLMs) perform poorly under adversarial attacks.
To improve the adversarial robustness, adversarial data augmentation (ADA) has
been widely adopted to cover more search space of adversarial attacks by adding
textual adversarial examples during training. However, the number of
adversarial examples for text augmentation is still extremely insufficient due
to the exponentially large attack search space. In this work, we propose a
simple and effective method to cover a much larger proportion of the attack
search space, called Adversarial and Mixup Data Augmentation (AMDA).
Specifically, AMDA linearly interpolates the representations of pairs of
training samples to form new virtual samples, which are more abundant and
diverse than the discrete text adversarial examples in conventional ADA.
Moreover, to fairly evaluate the robustness of different models, we adopt a
challenging evaluation setup, which generates a new set of adversarial examples
targeting each model. In text classification experiments of BERT and RoBERTa,
AMDA achieves significant robustness gains under two strong adversarial attacks
and alleviates the performance degradation of ADA on the clean data. Our code
is available at: https://github.com/thunlp/MixADA .
| 2,021 |
Computation and Language
|
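The mixup component described above interpolates pairs of training samples; a minimal sketch over pooled sentence representations and one-hot labels follows. The Beta parameter and the use of [CLS]-style vectors are assumptions.

```python
# Sketch of mixup on sentence representations and one-hot labels: new virtual
# training samples are convex combinations of pairs.
import torch

def mixup(reps, onehot_labels, alpha=0.4):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(reps.size(0))
    mixed_reps = lam * reps + (1 - lam) * reps[perm]
    mixed_labels = lam * onehot_labels + (1 - lam) * onehot_labels[perm]
    return mixed_reps, mixed_labels

reps = torch.randn(8, 768)                        # e.g., pooled [CLS] representations
labels = torch.eye(2)[torch.randint(0, 2, (8,))]  # one-hot labels
mixed_reps, mixed_labels = mixup(reps, labels)
```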
BinaryBERT: Pushing the Limit of BERT Quantization
|
The rapid development of large pre-trained language models has greatly
increased the demand for model compression techniques, among which quantization
is a popular solution. In this paper, we propose BinaryBERT, which pushes BERT
quantization to the limit by weight binarization. We find that a binary BERT is
harder to train directly than a ternary counterpart due to its complex and
irregular loss landscape. Therefore, we propose ternary weight splitting, which
initializes BinaryBERT by equivalently splitting from a half-sized ternary
network. The binary model thus inherits the good performance of the ternary
one, and can be further enhanced by fine-tuning the new architecture after
splitting. Empirical results show that our BinaryBERT has only a slight
performance drop compared with the full-precision model while being 24x
smaller, achieving the state-of-the-art compression results on the GLUE and
SQuAD benchmarks.
| 2,021 |
Computation and Language
|
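A simplified numeric illustration of splitting a ternary weight matrix into the sum of two binary matrices, related to the ternary weight splitting above; the published method also matches the latent full-precision weights, which this toy version does not.

```python
# Split a ternary weight matrix (values in {-a, 0, +a}) into the sum of two
# binary matrices (values in {-a/2, +a/2}) so the quantized values are
# preserved at initialization. Only an illustrative simplification.
import numpy as np

a = 0.5
W_ternary = np.array([[ a, 0.0, -a],
                      [0.0,  a,  a]])

W_b1 = np.where(W_ternary > 0,  a / 2, -a / 2)
W_b2 = np.where(W_ternary < 0, -a / 2,  a / 2)
# +a -> a/2 + a/2,   -a -> -a/2 - a/2,   0 -> -a/2 + a/2
assert np.allclose(W_b1 + W_b2, W_ternary)
```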
Revisiting Robust Neural Machine Translation: A Transformer Case Study
|
Transformers (Vaswani et al., 2017) have brought a remarkable improvement in
the performance of neural machine translation (NMT) systems but they could be
surprisingly vulnerable to noise. In this work, we try to investigate how noise
breaks Transformers and if there exist solutions to deal with such issues.
There is a large body of work in the NMT literature on analyzing the behavior
of conventional models for the problem of noise but Transformers are relatively
understudied in this context. Motivated by this, we introduce a novel
data-driven technique called Target Augmented Fine-tuning (TAFT) to incorporate
noise during training. This idea is comparable to the well-known fine-tuning
strategy. Moreover, we propose two other novel extensions to the original
Transformer: Controlled Denoising (CD) and Dual-Channel Decoding (DCD), that
modify the neural architecture as well as the training process to handle noise.
One important characteristic of our techniques is that they only impact the
training phase and do not impose any overhead at inference time. We evaluated
our techniques to translate the English--German pair in both directions and
observed that our models have a higher tolerance to noise. More specifically,
they show no deterioration even when up to 10% of all test words are
affected by noise.
| 2,021 |
Computation and Language
|
Beyond Offline Mapping: Learning Cross Lingual Word Embeddings through
Context Anchoring
|
Recent research on cross-lingual word embeddings has been dominated by
unsupervised mapping approaches that align monolingual embeddings. Such methods
critically rely on those embeddings having a similar structure, but it was
recently shown that the separate training in different languages causes
departures from this assumption. In this paper, we propose an alternative
approach that does not have this limitation, while requiring a weak seed
dictionary (e.g., a list of identical words) as the only form of supervision.
Rather than aligning two fixed embedding spaces, our method works by fixing the
target language embeddings, and learning a new set of embeddings for the source
language that are aligned with them. To that end, we use an extension of
skip-gram that leverages translated context words as anchor points, and
incorporates self-learning and iterative restarts to reduce the dependency on
the initial dictionary. Our approach outperforms conventional mapping methods
on bilingual lexicon induction, and obtains competitive results in the
downstream XNLI task.
| 2,021 |
Computation and Language
|
FGraDA: A Dataset and Benchmark for Fine-Grained Domain Adaptation in
Machine Translation
|
Previous research for adapting a general neural machine translation (NMT)
model into a specific domain usually neglects the diversity in translation
within the same domain, which is a core problem for domain adaptation in
real-world scenarios. One representative of such challenging scenarios is to
deploy a translation system for a conference with a specific topic, e.g.,
global warming or coronavirus, where resources are usually extremely limited
due to the limited schedule. To motivate wider investigation in such a
scenario, we present a real-world fine-grained domain adaptation task in
machine translation (FGraDA). The FGraDA dataset consists of Chinese-English
translation tasks for four sub-domains of information technology: autonomous
vehicles, AI education, real-time networks, and smartphones. Each sub-domain is
equipped with a development set and test set for evaluation purposes. To be
closer to reality, FGraDA does not employ any in-domain bilingual training data
but provides bilingual dictionaries and a wiki knowledge base, which can be
obtained more easily within a short time. We benchmark the fine-grained domain
adaptation task and present in-depth analyses showing that there are still
challenging problems to further improve the performance with heterogeneous
resources.
| 2,021 |
Computation and Language
|
Making Pre-trained Language Models Better Few-shot Learners
|
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot
performance solely by leveraging a natural-language prompt and a few task
demonstrations as input context. Inspired by their findings, we study few-shot
learning in a more practical scenario, where we use smaller language models for
which fine-tuning is computationally efficient. We present LM-BFF--better
few-shot fine-tuning of language models--a suite of simple and complementary
techniques for fine-tuning language models on a small number of annotated
examples. Our approach includes (1) prompt-based fine-tuning together with a
novel pipeline for automating prompt generation; and (2) a refined strategy for
dynamically and selectively incorporating demonstrations into each context.
Finally, we present a systematic evaluation for analyzing few-shot performance
on a range of NLP tasks, including classification and regression. Our
experiments demonstrate that our methods combine to dramatically outperform
standard fine-tuning procedures in this low resource setting, achieving up to
30% absolute improvement, and 11% on average across all tasks. Our approach
makes minimal assumptions on task resources and domain expertise, and hence
constitutes a strong task-agnostic method for few-shot learning.
| 2,021 |
Computation and Language
|
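A minimal sketch of prompt-based input construction with demonstrations in the spirit of the approach above; the template and label words are illustrative assumptions, and the automatic prompt generation pipeline is not shown.

```python
# Build a prompt with a [MASK] slot for the label word, preceded by a few
# labeled demonstrations rendered through the same template. Template and
# label words are illustrative assumptions.
TEMPLATE = "{sentence} It was [MASK]."
LABEL_WORDS = {"positive": "great", "negative": "terrible"}

def render(sentence, label=None):
    text = TEMPLATE.format(sentence=sentence)
    if label is not None:                       # demonstrations reveal the label word
        text = text.replace("[MASK]", LABEL_WORDS[label])
    return text

def build_prompt(query, demonstrations):
    parts = [render(s, y) for s, y in demonstrations] + [render(query)]
    return " ".join(parts)

prompt = build_prompt(
    "A gripping, beautifully shot film.",
    demonstrations=[("I loved every minute.", "positive"),
                    ("A dull, lifeless mess.", "negative")],
)
print(prompt)
```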
Moral Stories: Situated Reasoning about Norms, Intents, Actions, and
their Consequences
|
In social settings, much of human behavior is governed by unspoken rules of
conduct. For artificial systems to be fully integrated into social
environments, adherence to such norms is a central prerequisite. We investigate
whether contemporary NLG models can function as behavioral priors for systems
deployed in social settings by generating action hypotheses that achieve
predefined goals under moral constraints. Moreover, we examine if models can
anticipate likely consequences of (im)moral actions, or explain why certain
actions are preferable by generating relevant norms. For this purpose, we
introduce 'Moral Stories', a crowd-sourced dataset of structured, branching
narratives for the study of grounded, goal-oriented social reasoning. Finally,
we propose decoding strategies that effectively combine multiple expert models
to significantly improve the quality of generated actions, consequences, and
norms compared to strong baselines, e.g., through abductive reasoning.
| 2,021 |
Computation and Language
|
Learning from the Worst: Dynamically Generated Datasets to Improve
Online Hate Detection
|
We present a human-and-model-in-the-loop process for dynamically generating
datasets and training better performing and more robust hate detection models.
We provide a new dataset of ~40,000 entries, generated and labelled by trained
annotators over four rounds of dynamic data creation. It includes ~15,000
challenging perturbations and each hateful entry has fine-grained labels for
the type and target of hate. Hateful entries make up 54% of the dataset, which
is substantially higher than comparable datasets. We show that model
performance is substantially improved using this approach. Models trained on
later rounds of data collection perform better on test sets and are harder for
annotators to trick. They also perform better on HateCheck, a suite of
functional tests for online hate detection. We provide the code, dataset and
annotation guidelines for other researchers to use. Accepted at ACL 2021.
| 2,021 |
Computation and Language
|
Understanding Politics via Contextualized Discourse Processing
|
Politicians often have underlying agendas when reacting to events. Arguments
in contexts of various events reflect a fairly consistent set of agendas for a
given entity. In spite of recent advances in Pretrained Language Models (PLMs),
those text representations are not designed to capture such nuanced patterns.
In this paper, we propose a Compositional Reader model consisting of encoder
and composer modules, that attempts to capture and leverage such information to
generate more effective representations for entities, issues, and events. These
representations are contextualized by tweets, press releases, issues, news
articles, and participating entities. Our model can process several documents
at once and generate composed representations for multiple entities over
several issues or events. Via qualitative and quantitative empirical analysis,
we show that these representations are meaningful and effective.
| 2,021 |
Computation and Language
|
Conditional Generation of Temporally-ordered Event Sequences
|
Models of narrative schema knowledge have proven useful for a range of
event-related tasks, but they typically do not capture the temporal
relationships between events. We propose a single model that addresses both
temporal ordering, sorting given events into the order they occurred, and event
infilling, predicting new events which fit into an existing temporally-ordered
sequence. We use a BART-based conditional generation model that can capture
both temporality and common event co-occurrence, meaning it can be flexibly
applied to different tasks in this space. Our model is trained as a denoising
autoencoder: we take temporally-ordered event sequences, shuffle them, delete
some events, and then attempt to recover the original event sequence. This task
teaches the model to make inferences given incomplete knowledge about the
events in an underlying scenario. On the temporal ordering task, we show that
our model is able to unscramble event sequences from existing datasets without
access to explicitly labeled temporal training data, outperforming both a
BERT-based pairwise model and a BERT-based pointer network. On event infilling,
human evaluation shows that our model is able to generate events that fit
better temporally into the input events when compared to GPT-2 story completion
models.
| 2,021 |
Computation and Language
|
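The denoising corruption described above (delete some events, shuffle the rest, recover the original order) is easy to sketch; the deletion probability is an illustrative assumption.

```python
# Corrupt a temporally ordered event sequence for denoising-autoencoder style
# training: drop some events, shuffle the remainder. The model's target is the
# original ordered sequence.
import random

def corrupt(events, delete_prob=0.3, seed=None):
    rng = random.Random(seed)
    kept = [e for e in events if rng.random() > delete_prob] or list(events)
    rng.shuffle(kept)
    return kept

ordered = ["wake up", "eat breakfast", "commute", "attend meeting", "have lunch"]
noisy_input = corrupt(ordered, seed=0)
target = ordered          # training pair: (noisy_input -> target)
```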
Evidence-based Factual Error Correction
|
This paper introduces the task of factual error correction: performing edits
to a claim so that the generated rewrite is better supported by evidence. This
extends the well-studied task of fact verification by providing a mechanism to
correct written texts that are refuted or only partially supported by evidence.
We demonstrate that it is feasible to train factual error correction systems
from existing fact checking datasets which only contain labeled claims
accompanied by evidence, but not the correction. We achieve this by employing a
two-stage distant supervision approach that incorporates evidence into masked
claims when generating corrections. Our approach, based on the T5 transformer
and using retrieved evidence, achieved better results than existing work which
used a pointer copy network and gold evidence, producing accurate factual error
corrections for 5x more instances in human evaluation and a .125 increase in
SARI score. The evaluation is conducted on a dataset of 65,000 instances based
on a recent fact verification shared task and we release it to enable further
work on the task.
| 2,021 |
Computation and Language
|
Promoting Graph Awareness in Linearized Graph-to-Text Generation
|
Generating text from structured inputs, such as meaning representations or
RDF triples, has often involved the use of specialized graph-encoding neural
networks. However, recent applications of pretrained transformers to
linearizations of graph inputs have yielded state-of-the-art generation results
on graph-to-text tasks. Here, we explore the ability of these linearized models
to encode local graph structures, in particular their invariance to the graph
linearization strategy and their ability to reconstruct corrupted inputs. Our
findings motivate solutions to enrich the quality of models' implicit graph
encodings via scaffolding. Namely, we use graph-denoising objectives
implemented in a multi-task text-to-text framework. We find that these
denoising scaffolds lead to substantial improvements in downstream generation
in low-resource settings.
| 2,021 |
Computation and Language
|
UCCA's Foundational Layer: Annotation Guidelines v2.1
|
This is the annotation manual for Universal Conceptual Cognitive Annotation
(UCCA; Abend and Rappoport, 2013), specifically the Foundational Layer. UCCA is
a graph-based semantic annotation scheme based on typological linguistic
principles. It has been applied to several languages; for ease of exposition
these guidelines give examples mainly in English. New annotators may wish to
start with the tutorial on the UCCA framework (Abend et al., 2020). Further
resources are available at the project homepage:
https://universalconceptualcognitiveannotation.github.io
| 2,021 |
Computation and Language
|
MiniLMv2: Multi-Head Self-Attention Relation Distillation for
Compressing Pretrained Transformers
|
We generalize deep self-attention distillation in MiniLM (Wang et al., 2020)
by only using self-attention relation distillation for task-agnostic
compression of pretrained Transformers. In particular, we define multi-head
self-attention relations as scaled dot-product between the pairs of query, key,
and value vectors within each self-attention module. Then we employ the above
relational knowledge to train the student model. Besides its simplicity and
unified principle, more favorably, there is no restriction in terms of the
number of student's attention heads, while most previous work has to guarantee
the same head number between teacher and student. Moreover, the fine-grained
self-attention relations tend to fully exploit the interaction knowledge
learned by the Transformer. In addition, we thoroughly examine the layer selection
strategy for teacher models, rather than just relying on the last layer as in
MiniLM. We conduct extensive experiments on compressing both monolingual and
multilingual pretrained models. Experimental results demonstrate that our
models distilled from base-size and large-size teachers (BERT, RoBERTa and
XLM-R) outperform the state-of-the-art.
| 2,021 |
Computation and Language
|
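A simplified sketch of self-attention relation distillation as described above: scaled dot-product relation matrices are formed separately for queries, keys, and values, and the student matches the teacher's relations with a KL divergence. Head splitting, layer selection, and shapes are simplified assumptions.

```python
# Relation distillation sketch: relations are softmax-normalized scaled
# dot-products of a module's own vectors; the student mimics the teacher's
# relations for queries, keys, and values.
import math
import torch
import torch.nn.functional as F

def relation(x):
    # x: (batch, heads, seq_len, head_dim) -> (batch, heads, seq_len, seq_len)
    scores = x @ x.transpose(-1, -2) / math.sqrt(x.size(-1))
    return F.softmax(scores, dim=-1)

def relation_kl(teacher_x, student_x):
    t = relation(teacher_x)
    s = relation(student_x)
    return F.kl_div(s.log(), t, reduction="batchmean")

B, H, L, D = 2, 12, 16, 64
loss = sum(relation_kl(torch.randn(B, H, L, D), torch.randn(B, H, L, D))
           for _ in ("queries", "keys", "values"))
```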
Shortformer: Better Language Modeling using Shorter Inputs
|
Increasing the input length has been a driver of progress in language
modeling with transformers. We identify conditions where shorter inputs are not
harmful, and achieve perplexity and efficiency improvements through two new
methods that decrease input length. First, we show that initially training a
model on short subsequences before moving on to longer ones both reduces
overall training time and, surprisingly, substantially improves perplexity.
Second, we show how to improve the efficiency of recurrence methods in
transformers, which let models condition on previously processed tokens when
generating sequences that exceed the maximal length the transformer can handle
at once. Existing methods require computationally expensive relative position
embeddings; we introduce a simple alternative of adding absolute position
embeddings to queries and keys instead of to word embeddings, which efficiently
produces superior results. We show that these recurrent models also benefit
from short input lengths. Combining these techniques speeds up training by a
factor of 1.65, reduces memory usage, and substantially improves perplexity on
WikiText-103, without adding any parameters.
| 2,021 |
Computation and Language
|
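The position-embedding change described above, adding absolute positions to queries and keys rather than to the word embeddings, can be sketched with single-head attention; sizes and the absence of masking are illustrative simplifications.

```python
# Inject absolute position embeddings into queries and keys only, so values
# (and hence the residual stream) stay position-free.
import math
import torch
import torch.nn.functional as F

def attention_with_positions(x, pos_emb, Wq, Wk, Wv):
    # x: (seq_len, dim) word representations without position information
    q = (x + pos_emb) @ Wq        # positions added to queries ...
    k = (x + pos_emb) @ Wk        # ... and keys only
    v = x @ Wv                    # values remain position-free
    scores = q @ k.t() / math.sqrt(q.size(-1))
    return F.softmax(scores, dim=-1) @ v

L, D = 8, 64
out = attention_with_positions(torch.randn(L, D), torch.randn(L, D),
                               torch.randn(D, D), torch.randn(D, D), torch.randn(D, D))
```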
Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade
|
Fully non-autoregressive neural machine translation (NAT) is proposed to
predict all target tokens simultaneously with a single forward pass of the
neural network, which significantly reduces the inference latency at the expense
of a quality drop compared to the Transformer baseline. In this work, we aim to
close the performance gap while maintaining the latency advantage. We first inspect the
fundamental issues of fully NAT models, and adopt dependency reduction in the
learning space of output tokens as the basic guidance. Then, we revisit methods
in four different aspects that have been proven effective for improving NAT
models, and carefully combine these techniques with necessary modifications.
Our extensive experiments on three translation benchmarks show that the
proposed system achieves the new state-of-the-art results for fully NAT models,
and obtains comparable performance with the autoregressive and iterative NAT
systems. For instance, one of the proposed models achieves 27.49 BLEU points on
WMT14 En-De with approximately 16.5X speed up at inference time.
| 2,021 |
Computation and Language
|
Using Natural Language Relations between Answer Choices for Machine
Comprehension
|
When evaluating an answer choice for Reading Comprehension task, other answer
choices available for the question and the answers of related questions about
the same paragraph often provide valuable information. In this paper, we
propose a method to leverage the natural language relations between the answer
choices, such as entailment and contradiction, to improve the performance of
machine comprehension. We use a stand-alone question answering (QA) system to
perform QA task and a Natural Language Inference (NLI) system to identify the
relations between the choice pairs. Then we perform inference using an Integer
Linear Programming (ILP)-based relational framework to re-evaluate the
decisions made by the standalone QA system in light of the relations identified
by the NLI system. We also propose a multitask learning model that learns both
the tasks jointly.
| 2,019 |
Computation and Language
|
Studying Strategically: Learning to Mask for Closed-book QA
|
Closed-book question-answering (QA) is a challenging task that requires a
model to directly answer questions without access to external knowledge. It has
been shown that directly fine-tuning pre-trained language models with
(question, answer) examples yields surprisingly competitive performance, which
is further improved upon through adding an intermediate pre-training stage
between general pre-training and fine-tuning. Prior work used a heuristic
during this intermediate stage, whereby named entities and dates are masked,
and the model is trained to recover these tokens. In this paper, we aim to
learn the optimal masking strategy for the intermediate pre-training stage. We
first train our masking policy to extract spans that are likely to be tested,
using supervision from the downstream task itself, then deploy the learned
policy during intermediate pre-training. Thus, our policy packs task-relevant
knowledge into the parameters of a language model. Our approach is particularly
effective on TriviaQA, outperforming strong heuristics when used to pre-train
BART.
| 2,021 |
Computation and Language
|
Intrinsic Bias Metrics Do Not Correlate with Application Bias
|
Natural Language Processing (NLP) systems learn harmful societal biases that
cause them to amplify inequality as they are deployed in more and more
situations. To guide efforts at debiasing these systems, the NLP community
relies on a variety of metrics that quantify bias in models. Some of these
metrics are intrinsic, measuring bias in word embedding spaces, and some are
extrinsic, measuring bias in downstream tasks that the word embeddings enable.
Do these intrinsic and extrinsic metrics correlate with each other? We compare
intrinsic and extrinsic metrics across hundreds of trained models covering
different tasks and experimental conditions. Our results show no reliable
correlation between these metrics that holds in all scenarios across tasks and
languages. We urge researchers working on debiasing to focus on extrinsic
measures of bias, and to make using these measures more feasible via creation
of new challenge sets and annotated test data. To aid this effort, we release
code, a new intrinsic metric, and an annotated test set focused on gender bias
in hate speech.
| 2,021 |
Computation and Language
|
UnNatural Language Inference
|
Recent investigations into the inner-workings of state-of-the-art large-scale
pre-trained Transformer-based Natural Language Understanding (NLU) models
indicate that they appear to know humanlike syntax, at least to some extent. We
provide novel evidence that complicates this claim: we find that
state-of-the-art Natural Language Inference (NLI) models assign the same labels
to permuted examples as they do to the original, i.e. they are largely
invariant to random word-order permutations. This behavior notably differs from
that of humans; we struggle with ungrammatical sentences. To measure the
severity of this issue, we propose a suite of metrics and investigate which
properties of particular permutations lead models to be word-order invariant.
In the MNLI dataset, for example, we find almost all (98.7%) examples contain
at least one permutation which elicits the gold label. Models are sometimes
even able to assign gold labels to permutations that they originally failed to
predict correctly. We provide a comprehensive empirical evaluation of this
phenomenon, and further show that this issue exists for both Transformers and
pre-Transformer RNN / ConvNet based encoders, as well as across multiple
languages (English and Mandarin Chinese). Our code and data are available at
https://github.com/facebookresearch/unlu.
| 2,021 |
Computation and Language
|
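The permutation probe described above can be sketched as follows; predict is a hypothetical stand-in for a real NLI model, and the number of permutations is an assumption.

```python
# Check whether any random word-order permutation of an example still elicits
# the gold label from a classifier.
import random

def predict(premise, hypothesis):
    return "entailment"        # placeholder for a real NLI model's prediction

def permutation_accepts_gold(premise, hypothesis, gold, n_permutations=20, seed=0):
    rng = random.Random(seed)
    for _ in range(n_permutations):
        p = premise.split(); h = hypothesis.split()
        rng.shuffle(p); rng.shuffle(h)
        if predict(" ".join(p), " ".join(h)) == gold:
            return True        # at least one permutation still elicits the gold label
    return False

print(permutation_accepts_gold("A man is playing a guitar.",
                               "A person plays an instrument.", "entailment"))
```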
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
|
Recent work has demonstrated that increased training dataset diversity
improves general cross-domain knowledge and downstream generalization
capability for large-scale language models. With this in mind, we present
\textit{the Pile}: an 825 GiB English text corpus targeted at training
large-scale language models. The Pile is constructed from 22 diverse
high-quality subsets -- both existing and newly constructed -- many of which
derive from academic or professional sources. Our evaluation of the untuned
performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on
many of its components, such as academic writing. Conversely, models trained on
the Pile improve significantly over both Raw CC and CC-100 on all components of
the Pile, while improving performance on downstream evaluations. Through an
in-depth exploratory analysis, we document potentially concerning aspects of
the data for prospective users. We make publicly available the code used in its
construction.
| 2,021 |
Computation and Language
|
KART: Parameterization of Privacy Leakage Scenarios from Pre-trained
Language Models
|
For the safe sharing of pre-trained language models, no guidelines exist at
present owing to the difficulty in estimating the upper bound of the risk of
privacy leakage. One problem is that previous studies have assessed the risk
for different real-world privacy leakage scenarios and attack methods, which
reduces the portability of the findings. To tackle this problem, we represent
complex real-world privacy leakage scenarios under a universal
parameterization, \textit{Knowledge, Anonymization, Resource, and Target}
(KART). KART parameterization has two merits: (i) it clarifies the definition
of privacy leakage in each experiment and (ii) it improves the comparability of
the findings of risk assessments. We show that previous studies can be simply
reviewed by parameterizing the scenarios with KART. We also demonstrate privacy
risk assessments in different scenarios under the same attack method, which
suggests that KART helps approximate the upper bound of risk under a specific
attack or scenario. We believe that KART helps integrate past and future
findings on privacy risk and will contribute to a standard for sharing language
models.
| 2,022 |
Computation and Language
|
Towards Modelling Coherence in Spoken Discourse
|
While there has been significant progress towards modelling coherence in
written discourse, the work in modelling spoken discourse coherence has been
quite limited. Unlike the coherence in text, coherence in spoken discourse is
also dependent on the prosodic and acoustic patterns in speech. In this paper,
we model coherence in spoken discourse with audio-based coherence models. We
perform experiments with four coherence-related tasks with spoken discourses.
In our experiments, we evaluate machine-generated speech against the speech
delivered by expert human speakers. We also compare the spoken discourses
generated by human language learners of varying language proficiency levels.
Our results show that incorporating the audio modality along with the text
benefits the coherence models in performing downstream coherence related tasks
with spoken discourses.
| 2,021 |
Computation and Language
|