Titles (string) | Abstracts (string) | Years (int64) | Categories (string, 1 class)
---|---|---|---
A Disentangled Adversarial Neural Topic Model for Separating Opinions
from Plots in User Reviews
|
The flexibility of the inference process in Variational Autoencoders (VAEs)
has recently led to revising traditional probabilistic topic models giving rise
to Neural Topic Models (NTMs). Although these approaches have achieved
significant results, surprisingly very little work has been done on how to
disentangle the latent topics. Existing topic models when applied to reviews
may extract topics associated with writers' subjective opinions mixed with
those related to factual descriptions such as plot summaries in movie and book
reviews. It is thus desirable to automatically separate opinion topics from
plot/neutral ones enabling a better interpretability. In this paper, we propose
a neural topic model combined with adversarial training to disentangle opinion
topics from plot and neutral ones. We conduct an extensive experimental
assessment, introducing a new collection of movie and book reviews paired with their plots (the MOBO dataset), showing improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models.
| 2021 |
Computation and Language
|
Kwame: A Bilingual AI Teaching Assistant for Online SuaCode Courses
|
Introductory hands-on courses such as our smartphone-based coding course, SuaCode, require a lot of support for students to accomplish learning goals. Online environments make it even more difficult to get assistance, especially more recently because of COVID-19. Given the multilingual context of SuaCode
students - learners across 42 African countries that are mostly Anglophone or
Francophone - in this work, we developed a bilingual Artificial Intelligence
(AI) Teaching Assistant (TA) - Kwame - that provides answers to students'
coding questions from SuaCode courses in English and French. Kwame is a
Sentence-BERT (SBERT)-based question-answering (QA) system that we trained and
evaluated offline using question-answer pairs created from the course's
quizzes, lesson notes and students' questions in past cohorts. Kwame finds the
paragraph most semantically similar to the question via cosine similarity. We
compared the system with TF-IDF and Universal Sentence Encoder. Our results
showed that fine-tuning on the course data and returning the top 3 and 5 answers improved accuracy. Kwame will make it easy for students to
get quick and accurate answers to questions in SuaCode courses.
| 2021 |
Computation and Language
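The Kwame abstract above centers on Sentence-BERT retrieval: embed the course paragraphs once, embed the incoming question, and return the top-k paragraphs by cosine similarity. A minimal sketch of that retrieval step with the sentence-transformers library follows; the model name, toy paragraphs, and top_k value are illustrative assumptions, not the authors' actual system or data.

```python
# Hypothetical SBERT retrieval step: encode paragraphs once, rank them by cosine
# similarity against a student question, and return the top-k candidates with scores.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model choice

paragraphs = [
    "A variable stores a value that your program can reuse later.",
    "Une boucle for repete un bloc d'instructions plusieurs fois.",
]
corpus_emb = model.encode(paragraphs, convert_to_tensor=True)

def answer(question: str, top_k: int = 3):
    q_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, corpus_emb, top_k=top_k)[0]
    return [(paragraphs[h["corpus_id"]], float(h["score"])) for h in hits]

print(answer("What is a variable?"))
```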
|
Developing Real-time Streaming Transformer Transducer for Speech
Recognition on Large-scale Dataset
|
Recently, Transformer based end-to-end models have achieved great success in
many areas including speech recognition. However, compared to LSTM models, the
heavy computational cost of the Transformer during inference is a key issue preventing its application. In this work, we explored the potential of Transformer Transducer (T-T) models for first-pass decoding with low latency
and fast speed on a large-scale dataset. We combine the idea of Transformer-XL
and chunk-wise streaming processing to design a streamable Transformer
Transducer model. We demonstrate that T-T outperforms the hybrid model, RNN
Transducer (RNN-T), and streamable Transformer attention-based encoder-decoder
model in the streaming scenario. Furthermore, the runtime cost and latency can
be optimized with a relatively small look-ahead.
| 2021 |
Computation and Language
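The streaming design above combines Transformer-XL-style history with chunk-wise processing. One way to picture the chunk-wise part is an attention mask in which each frame sees its whole chunk, all previous chunks, and optionally a small look-ahead; the sketch below is an assumed illustration of such a mask, not the paper's implementation, and the chunk size and look-ahead are made-up values.

```python
import numpy as np

def chunkwise_mask(num_frames: int, chunk_size: int, lookahead: int = 0) -> np.ndarray:
    """mask[i, j] == True means frame i may attend to frame j (assumed layout)."""
    mask = np.zeros((num_frames, num_frames), dtype=bool)
    for i in range(num_frames):
        chunk_end = ((i // chunk_size) + 1) * chunk_size        # end of frame i's chunk
        visible_end = min(num_frames, chunk_end + lookahead)    # optional small look-ahead
        mask[i, :visible_end] = True                            # full history + own chunk
    return mask

# 6 frames, chunks of 2, 1 frame of look-ahead (illustrative values)
print(chunkwise_mask(num_frames=6, chunk_size=2, lookahead=1).astype(int))
```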
|
MAM: Masked Acoustic Modeling for End-to-End Speech-to-Text Translation
|
End-to-end Speech-to-text Translation (E2E-ST), which directly translates
source language speech to target language text, is widely useful in practice,
but traditional cascaded approaches (ASR+MT) often suffer from error
propagation in the pipeline. On the other hand, existing end-to-end solutions
heavily depend on the source language transcriptions for pre-training or
multi-task training with Automatic Speech Recognition (ASR). We instead propose
a simple technique to learn a robust speech encoder in a self-supervised
fashion only on the speech side, which can utilize speech data without
transcription. This technique, termed Masked Acoustic Modeling (MAM), not only
provides an alternative solution to improving E2E-ST, but also can perform
pre-training on any acoustic signals (including non-speech ones) without
annotation. We conduct our experiments over 8 different translation directions.
In the setting without using any transcriptions, our technique achieves an
average improvement of +1.1 BLEU, and +2.3 BLEU with MAM pre-training.
Pre-training MAM with arbitrary acoustic signals also yields an average improvement of +1.6 BLEU for those languages. Compared with the ASR multi-task learning solution, which relies on transcription during training, our pre-trained MAM model, which does not use transcription, achieves similar
accuracy.
| 2021 |
Computation and Language
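MAM as described above corrupts spans of the acoustic input and trains the encoder to reconstruct them, with no transcription involved. A minimal self-contained sketch of that idea on a toy spectrogram follows; the span length, masking ratio, toy encoder, and L1 reconstruction loss are my assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn as nn

def mask_spans(spec: torch.Tensor, span: int = 10, ratio: float = 0.15):
    """spec: (frames, mels). Returns a corrupted copy and the boolean frame mask."""
    frames = spec.size(0)
    mask = torch.zeros(frames, dtype=torch.bool)
    num_spans = max(1, int(frames * ratio / span))
    for _ in range(num_spans):
        start = torch.randint(0, max(1, frames - span), (1,)).item()
        mask[start:start + span] = True
    corrupted = spec.clone()
    corrupted[mask] = 0.0                         # hide the selected frames
    return corrupted, mask

encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 80))  # toy encoder
spec = torch.randn(200, 80)                       # toy utterance: 200 frames x 80 mel bins
corrupted, mask = mask_spans(spec)
recon = encoder(corrupted)
loss = nn.functional.l1_loss(recon[mask], spec[mask])  # reconstruct only the masked frames
loss.backward()
```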
|
Knowledge Distillation for BERT Unsupervised Domain Adaptation
|
A pre-trained language model, BERT, has brought significant performance
improvements across a range of natural language processing tasks. Since the
model is trained on a large corpus of diverse topics, it shows robust
performance for domain shift problems in which data distributions at training
(source data) and testing (target data) differ while sharing similarities.
Despite its great improvements compared to previous models, it still suffers
from performance degradation due to domain shifts. To mitigate such problems,
we propose a simple but effective unsupervised domain adaptation method,
adversarial adaptation with distillation (AAD), which combines the adversarial
discriminative domain adaptation (ADDA) framework with knowledge distillation.
We evaluate our approach in the task of cross-domain sentiment classification
on 30 domain pairs, advancing the state-of-the-art performance for unsupervised
domain adaptation in text sentiment classification.
| 2020 |
Computation and Language
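The AAD recipe above couples the ADDA objective (make target features indistinguishable from source features) with knowledge distillation from a source-trained teacher. The sketch below shows one assumed target-encoder update combining the two losses; the module shapes, temperature, and weighting factor are illustrative, and the discriminator's own update step is left out.

```python
import torch
import torch.nn.functional as F

def aad_target_step(target_encoder, classifier_head, discriminator, teacher,
                    tgt_x, temperature=2.0, alpha=0.5):
    tgt_feat = target_encoder(tgt_x)

    # (a) adversarial term: target features should be scored as "source" (label 1)
    adv_loss = F.binary_cross_entropy_with_logits(
        discriminator(tgt_feat), torch.ones(tgt_x.size(0), 1))

    # (b) distillation term: match the teacher's temperature-softened predictions
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(tgt_x) / temperature, dim=-1)
    student_logp = F.log_softmax(classifier_head(tgt_feat) / temperature, dim=-1)
    kd_loss = F.kl_div(student_logp, teacher_probs,
                       reduction="batchmean") * temperature ** 2

    return adv_loss + alpha * kd_loss

# toy modules standing in for BERT encoders / heads
enc = torch.nn.Sequential(torch.nn.Linear(768, 128), torch.nn.ReLU())
head, disc = torch.nn.Linear(128, 2), torch.nn.Linear(128, 1)
teacher = torch.nn.Linear(768, 2)                 # frozen source-trained model (stand-in)
aad_target_step(enc, head, disc, teacher, torch.randn(8, 768)).backward()
```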
|
On the Effects of Using word2vec Representations in Neural Networks for
Dialogue Act Recognition
|
Dialogue act recognition is an important component of a large number of
natural language processing pipelines. Many research works have been carried
out in this area, but relatively few investigate deep neural networks and word
embeddings. This is surprising, given that both of these techniques have proven
exceptionally good in most other language-related domains. We propose in this
work a new deep neural network that explores recurrent models to capture word
sequences within sentences, and further study the impact of pretrained word
embeddings. We validate this model on three languages: English, French and
Czech. The performance of the proposed approach is consistent across these
languages and it is comparable to the state-of-the-art results in English. More
importantly, we confirm that deep neural networks indeed outperform a Maximum
Entropy classifier, which was expected. However, and this is more surprising, we also found that standard word2vec embeddings do not seem to bring valuable
information for this task and the proposed model, whatever the size of the
training corpus is. We thus further analyse the resulting embeddings and
conclude that a possible explanation may be related to the mismatch between the
type of lexical-semantic information captured by the word2vec embeddings, and
the kind of relations between words that is the most useful for the dialogue
act recognition task.
| 2018 |
Computation and Language
|
Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution
Data
|
Fine-tuned pre-trained language models can suffer from severe miscalibration
for both in-distribution and out-of-distribution (OOD) data due to
over-parameterization. To mitigate this issue, we propose a regularized
fine-tuning method. Our method introduces two types of regularization for
better calibration: (1) On-manifold regularization, which generates pseudo
on-manifold samples through interpolation within the data manifold. Augmented
training with these pseudo samples imposes a smoothness regularization to
improve in-distribution calibration. (2) Off-manifold regularization, which
encourages the model to output uniform distributions for pseudo off-manifold
samples to address the over-confidence issue for OOD data. Our experiments
demonstrate that the proposed method outperforms existing calibration methods
for text classification in terms of expectation calibration error,
misclassification detection, and OOD detection on six datasets. Our code can be
found at https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning.
| 2020 |
Computation and Language
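The two regularizers described above can be read as (1) mixup-style interpolation of embeddings and labels for on-manifold smoothness and (2) a uniform-prediction penalty on pseudo off-manifold points. The sketch below is my reading of that setup, not the released code at the repository linked above; the Beta parameter, noise scale, and soft-label cross-entropy are assumptions.

```python
import torch
import torch.nn.functional as F

def calibration_losses(model_head, emb, labels, num_classes, beta=0.4, noise_scale=5.0):
    # (1) on-manifold: interpolate pairs of embeddings and their one-hot labels (mixup-style)
    lam = torch.distributions.Beta(beta, beta).sample()
    perm = torch.randperm(emb.size(0))
    mixed_emb = lam * emb + (1 - lam) * emb[perm]
    one_hot = F.one_hot(labels, num_classes).float()
    mixed_lbl = lam * one_hot + (1 - lam) * one_hot[perm]
    on_loss = F.cross_entropy(model_head(mixed_emb), mixed_lbl)   # soft-label CE

    # (2) off-manifold: push predictions on far-away noisy points toward uniform
    off_emb = emb + noise_scale * torch.randn_like(emb)
    uniform = torch.full((emb.size(0), num_classes), 1.0 / num_classes)
    off_loss = F.cross_entropy(model_head(off_emb), uniform)
    return on_loss, off_loss

head = torch.nn.Linear(32, 4)                      # toy classification head
emb = torch.randn(16, 32)                          # toy [CLS] embeddings
labels = torch.randint(0, 4, (16,))
on_l, off_l = calibration_losses(head, emb, labels, num_classes=4)
(on_l + off_l).backward()
```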
|
An Industry Evaluation of Embedding-based Entity Alignment
|
Embedding-based entity alignment has been widely investigated in recent
years, but most proposed methods still rely on an ideal supervised learning
setting with a large number of unbiased seed mappings for training and
validation, which significantly limits their usage. In this study, we evaluate
those state-of-the-art methods in an industrial context, where the impact of
seed mappings with different sizes and different biases is explored. Besides
the popular benchmarks from DBpedia and Wikidata, we contribute and evaluate a
new industrial benchmark that is extracted from two heterogeneous knowledge
graphs (KGs) under deployment for medical applications. The experimental
results enable the analysis of the advantages and disadvantages of these
alignment methods and the further discussion of suitable strategies for their
industrial deployment.
| 2020 |
Computation and Language
|
SlimIPL: Language-Model-Free Iterative Pseudo-Labeling
|
Recent results in end-to-end automatic speech recognition have demonstrated
the efficacy of pseudo-labeling for semi-supervised models trained both with
Connectionist Temporal Classification (CTC) and Sequence-to-Sequence (seq2seq)
losses. Iterative Pseudo-Labeling (IPL), which continuously trains a single
model using pseudo-labels iteratively re-generated as the model learns, has
been shown to further improve performance in ASR. We improve upon the IPL
algorithm: as the model learns, we propose to iteratively re-generate
transcriptions with hard labels (the most probable tokens), that is, without a
language model. We call this approach Language-Model-Free IPL (slimIPL) and
give a resultant training setup for low-resource settings with CTC-based
models. slimIPL features a dynamic cache for pseudo-labels which reduces
sensitivity to changes in relabeling hyperparameters and improves training stability. slimIPL is also highly efficient and requires 3.5-4x fewer
computational resources to converge than other state-of-the-art
semi/self-supervised approaches. With only 10 hours of labeled audio, slimIPL
is competitive with self-supervised approaches, and is state-of-the-art with
100 hours of labeled audio without the use of a language model both at test
time and during pseudo-label generation.
| 2021 |
Computation and Language
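A central piece of slimIPL as summarized above is the dynamic cache of hard pseudo-labels generated without a language model. The sketch below illustrates one assumed cache policy (fixed capacity, probabilistic reuse of older transcriptions); the `greedy_decode` method and the hyperparameter values are hypothetical stand-ins, not the released implementation.

```python
import random
from collections import deque

class PseudoLabelCache:
    """Fixed-size cache of (utterance, transcription) pairs (assumed policy)."""
    def __init__(self, capacity: int = 1000, reuse_prob: float = 0.5):
        self.cache = deque(maxlen=capacity)
        self.reuse_prob = reuse_prob

    def get_example(self, model, utterance):
        if self.cache and random.random() < self.reuse_prob:
            # reuse an older pseudo-labeled example instead of relabeling
            return random.choice(self.cache)
        # `greedy_decode` is a hypothetical method returning hard (argmax) labels,
        # i.e. the most probable tokens, with no language model involved
        transcription = model.greedy_decode(utterance)
        self.cache.append((utterance, transcription))
        return utterance, transcription
```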
|
Cross Copy Network for Dialogue Generation
|
In the past few years, audiences from different fields have witnessed the achievements of sequence-to-sequence models (e.g., LSTM+attention, Pointer Generator Networks, and Transformer) in enhancing dialogue content generation. While content fluency and accuracy often serve as the major indicators for model training, dialogue logics, carrying critical information for some particular domains, are often ignored. Taking customer service and court debate dialogues as examples, compatible logics can be observed across different dialogue instances, and this information can provide vital evidence for utterance generation. In this paper, we propose a novel network architecture, the Cross Copy Network (CCN), to explore the current dialog context and similar dialogue instances' logical structure simultaneously. Experiments with two tasks, court debate and customer service content generation, show that the proposed algorithm is superior to existing state-of-the-art content generation models.
| 2020 |
Computation and Language
|
Method of noun phrase detection in Ukrainian texts
|
Introduction. The area of natural language processing considers AI-complete tasks that cannot be solved using traditional algorithmic approaches. Such tasks are commonly addressed with machine learning methodology and the tools of computational linguistics. One of the preprocessing tasks for a text is the search for noun phrases. The accuracy of this task has implications for the effectiveness of many other tasks in the area of natural language processing. In spite of the active development of research in natural language processing, the investigation of noun phrase detection in Ukrainian texts is still at an early stage. Results. Different methods of noun phrase detection have been analyzed. The expediency of representing sentences as a tree structure has been justified. The key disadvantage of many noun phrase detection methods is the severe dependence of their effectiveness on the features of a particular language. Taking into account the unified format of sentence processing and the availability of a trained model for building sentence trees for Ukrainian texts, the Universal Dependencies model has been chosen. A combined method of noun phrase detection in Ukrainian texts, utilizing Universal Dependencies tools and a named-entity recognition model, has been suggested. Experimental verification of the effectiveness of the suggested method on a corpus of Ukrainian news has been performed. Different metrics of method accuracy have been calculated. Conclusions. The results obtained indicate that the suggested method can be used to find noun phrases in Ukrainian texts. The accuracy of the method can be further increased by using named-entity recognition models appropriate to the subject area.
| 2019 |
Computation and Language
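The method above builds on Universal Dependencies parses of Ukrainian sentences. As a rough illustration (not the paper's algorithm), the sketch below parses Ukrainian text with Stanza's UD-trained pipeline and groups each NOUN/PROPN head with its direct modifiers into a noun phrase candidate; the chosen dependency relations and the use of Stanza are assumptions.

```python
import stanza

stanza.download("uk")    # fetch the UD-trained Ukrainian pipeline (one-time)
nlp = stanza.Pipeline("uk", processors="tokenize,pos,lemma,depparse")

def noun_phrases(text: str):
    phrases = []
    for sent in nlp(text).sentences:
        for word in sent.words:
            if word.upos in ("NOUN", "PROPN"):
                # head noun plus its direct modifiers, ordered by surface position
                group = [w for w in sent.words
                         if w.id == word.id
                         or (w.head == word.id
                             and w.deprel in ("amod", "nmod", "det", "nummod"))]
                phrases.append(" ".join(w.text for w in sorted(group, key=lambda w: w.id)))
    return phrases

print(noun_phrases("Верховна Рада ухвалила новий закон."))
```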
|
Incorporating Stylistic Lexical Preferences in Generative Language
Models
|
While recent advances in language modeling have resulted in powerful
generation models, their generation style remains implicitly dependent on the
training data and cannot emulate a specific target style. Leveraging the generative capabilities of transformer-based language models, we present an
approach to induce certain target-author attributes by incorporating continuous
multi-dimensional lexical preferences of an author into generative language
models. We introduce rewarding strategies in a reinforcement learning framework
that encourages the use of words across multiple categorical dimensions, to
varying extents. Our experiments demonstrate that the proposed approach can
generate text that distinctively aligns with a given target author's lexical
style. We conduct quantitative and qualitative comparisons with competitive and
relevant baselines to illustrate the benefits of the proposed approach.
| 2020 |
Computation and Language
|
Bilinear Fusion of Commonsense Knowledge with Attention-Based NLI Models
|
We consider the task of incorporating real-world commonsense knowledge into
deep Natural Language Inference (NLI) models. Existing external knowledge
incorporation methods are limited to lexical level knowledge and lack
generalization across NLI models, datasets, and commonsense knowledge sources.
To address these issues, we propose a novel NLI model-independent neural
framework, BiCAM. BiCAM incorporates real-world commonsense knowledge into NLI
models. Combined with convolutional feature detectors and bilinear feature
fusion, BiCAM provides a conceptually simple mechanism that generalizes well.
Quantitative evaluations with two state-of-the-art NLI baselines on SNLI and
SciTail datasets in conjunction with ConceptNet and Aristo Tuple KGs show that
BiCAM considerably improves the accuracy of the incorporated NLI baselines. For
example, our BiECAM model, an instance of BiCAM, on the challenging SciTail
dataset, improves the accuracy of incorporated baselines by 7.0% with
ConceptNet, and 8.0% with Aristo Tuple KG.
| 2020 |
Computation and Language
|
Exploiting News Article Structure for Automatic Corpus Generation of
Entailment Datasets
|
Transformers represent the state-of-the-art in Natural Language Processing
(NLP) in recent years, proving effective even in tasks done in low-resource
languages. While pretrained transformers for these languages can be made, it is
challenging to measure their true performance and capacity due to the lack of
hard benchmark datasets, as well as the difficulty and cost of producing them.
In this paper, we present three contributions: First, we propose a methodology
for automatically producing Natural Language Inference (NLI) benchmark datasets
for low-resource languages using published news articles. Through this, we
create and release NewsPH-NLI, the first sentence entailment benchmark dataset
in the low-resource Filipino language. Second, we produce new pretrained
transformers based on the ELECTRA technique to further alleviate the resource
scarcity in Filipino, benchmarking them on our dataset against other
commonly-used transfer learning techniques. Lastly, we perform analyses on
transfer learning techniques to shed light on their true performance when
operating in low-data domains through the use of degradation tests.
| 2021 |
Computation and Language
|
Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus
|
Style transfer has been widely explored in natural language generation with
non-parallel corpus by directly or indirectly extracting a notion of style from
source and target domain corpus. A common shortcoming of existing approaches is
the prerequisite of joint annotations across all the stylistic dimensions under
consideration. Availability of such dataset across a combination of styles
limits the extension of these setups to multiple style dimensions. While
cascading single-dimensional models across multiple styles is a possibility, it
suffers from content loss, especially when the style dimensions are not
completely independent of each other. In our work, we relax this requirement of
jointly annotated data across multiple styles by using independently acquired
data across different style dimensions without any additional annotations. We
initialize an encoder-decoder setup with a transformer-based language model pre-trained on a generic corpus and enhance its re-writing capability to
multiple target style dimensions by employing multiple style-aware language
models as discriminators. Through quantitative and qualitative evaluation, we
show the ability of our model to control styles across multiple style
dimensions while preserving content of the input text. We compare it against
baselines involving cascaded state-of-the-art uni-dimensional style transfer
models.
| 2021 |
Computation and Language
|
A Technical Report: BUT Speech Translation Systems
|
The paper describes BUT's speech translation systems. The systems are
English$\longrightarrow$German offline speech translation systems. The systems
are based on our previous works \cite{Jointly_trained_transformers}. Though
End-to-End and cascade (ASR-MT) spoken language translation (SLT) systems are
reaching comparable performances, a large degradation is observed when
translating ASR hypothesis compared to the oracle input text. To reduce this
performance degradation, we have jointly-trained ASR and MT modules with ASR
objective as an auxiliary loss. Both the networks are connected through the
neural hidden representations. This model has an End-to-End differentiable path
with respect to the final objective function and also utilizes the ASR
objective for better optimization. During inference, both modules (i.e., ASR and MT) are connected through the hidden representations corresponding to the n-best hypotheses. Ensembling with independently trained ASR and MT models has further improved the performance of the system.
| 2020 |
Computation and Language
|
AI-lead Court Debate Case Investigation
|
The multi-role judicial debate composed of the plaintiff, defendant, and
judge is an important part of the judicial trial. Different from other types of dialogue, questions are raised by the judge, and the plaintiff, the plaintiff's agent, the defendant, and the defendant's agent debate so that the trial can proceed in an orderly manner. Question generation is an important task in
Natural Language Generation. In the judicial trial, it can help the judge raise
efficient questions so that the judge has a clearer understanding of the case.
In this work, we propose an innovative end-to-end question generation model, the Trial Brain Model (TBM), to build a Trial Brain that can generate the questions the judge wants to ask from the historical dialogue between the
plaintiff and the defendant. Unlike prior efforts in natural language
generation, our model can learn the judge's questioning intention through
predefined knowledge. We conduct experiments on real-world datasets; the experimental results show that our model can provide more accurate questions
in the multi-role court debate scene.
| 2020 |
Computation and Language
|
Towards Fully Bilingual Deep Language Modeling
|
Language models based on deep neural networks have facilitated great advances
in natural language processing and understanding tasks in recent years. While
models covering a large number of languages have been introduced, their
multilinguality has come at a cost in terms of monolingual performance, and the
best-performing models at most tasks not involving cross-lingual transfer
remain monolingual. In this paper, we consider the question of whether it is
possible to pre-train a bilingual model for two remotely related languages
without compromising performance at either language. We collect pre-training
data, create a Finnish-English bilingual BERT model and evaluate its
performance on datasets used to evaluate the corresponding monolingual models.
Our bilingual model performs on par with Google's original English BERT on GLUE
and nearly matches the performance of monolingual Finnish BERT on a range of
Finnish NLP tasks, clearly outperforming multilingual BERT. We find that when
the model vocabulary size is increased, the BERT-Base architecture has
sufficient capacity to learn two remotely related languages to a level where it
achieves comparable performance with monolingual models, demonstrating the
feasibility of training fully bilingual deep language models. The model and all
tools involved in its creation are freely available at
https://github.com/TurkuNLP/biBERT
| 2020 |
Computation and Language
|
Reducing Unintended Identity Bias in Russian Hate Speech Detection
|
Toxicity has become a grave problem for many online communities and has been
growing across many languages, including Russian. Hate speech creates an
environment of intimidation, discrimination, and may even incite some
real-world violence. Both researchers and social platforms have been focused on
developing models to detect toxicity in online communication for a while now. A
common problem of these models is the presence of bias towards some words (e.g.
woman, black, jew) that are not toxic, but serve as triggers for the classifier
due to model caveats. In this paper, we describe our efforts towards
classifying hate speech in Russian, and propose simple techniques of reducing
unintended bias, such as generating training data with language models using
terms and words related to protected identities as context and applying word
dropout to such words.
| 2020 |
Computation and Language
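One of the bias-reduction techniques named above is word dropout applied to protected-identity terms. A minimal sketch of that step follows; the (English) term list and dropout probability are illustrative placeholders, not the Russian terms the authors actually target.

```python
import random

IDENTITY_TERMS = {"woman", "black", "jew"}        # illustrative English stand-ins

def identity_word_dropout(tokens, p: float = 0.5):
    """Drop protected-identity tokens with probability p during training."""
    return [t for t in tokens
            if t.lower() not in IDENTITY_TERMS or random.random() > p]

print(identity_word_dropout("this woman wrote a great post".split()))
```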
|
An Analysis of Simple Data Augmentation for Named Entity Recognition
|
Simple yet effective data augmentation techniques have been proposed for
sentence-level and sentence-pair natural language processing tasks. Inspired by
these efforts, we design and compare data augmentation for named entity
recognition, which is usually modeled as a token-level sequence labeling
problem. Through experiments on two data sets from the biomedical and materials
science domains (i2b2-2010 and MaSciP), we show that simple augmentation can
boost performance for both recurrent and transformer-based models, especially
for small training sets.
| 2020 |
Computation and Language
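As one concrete example of the kind of simple token-level augmentation studied above (my illustration, not necessarily one of the exact variants in the paper), a token can be swapped for another training token that carries the same BIO label, leaving the label sequence untouched:

```python
import random
from collections import defaultdict

def build_label_vocab(dataset):
    """dataset: list of (tokens, labels) pairs -> mapping label -> tokens seen with it."""
    vocab = defaultdict(list)
    for tokens, labels in dataset:
        for tok, lab in zip(tokens, labels):
            vocab[lab].append(tok)
    return vocab

def augment(tokens, labels, label_vocab, p: float = 0.3):
    new_tokens = []
    for tok, lab in zip(tokens, labels):
        if random.random() < p and label_vocab[lab]:
            tok = random.choice(label_vocab[lab])   # same-label token replacement
        new_tokens.append(tok)
    return new_tokens, labels

data = [(["aspirin", "reduces", "fever"], ["B-DRUG", "O", "O"]),
        (["ibuprofen", "eases", "pain"], ["B-DRUG", "O", "O"])]
vocab = build_label_vocab(data)
print(augment(*data[0], vocab))
```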
|
Improving BERT Performance for Aspect-Based Sentiment Analysis
|
Aspect-Based Sentiment Analysis (ABSA) studies the consumer opinion on the
market products. It involves examining the type of sentiments as well as
sentiment targets expressed in product reviews. Analyzing the language used in
a review is a difficult task that requires a deep understanding of the
language. In recent years, deep language models, such as BERT
\cite{devlin2019bert}, have shown great progress in this regard. In this work,
we propose two simple modules called Parallel Aggregation and Hierarchical
Aggregation to be utilized on top of BERT for two main ABSA tasks namely Aspect
Extraction (AE) and Aspect Sentiment Classification (ASC) in order to improve
the model's performance. We show that applying the proposed models eliminates
the need for further training of the BERT model. The source code is available
on the Web for further research and reproduction of the results.
| 2021 |
Computation and Language
|
CUNI Systems for the Unsupervised and Very Low Resource Translation Task
in WMT20
|
This paper presents a description of CUNI systems submitted to the WMT20 task
on unsupervised and very low-resource supervised machine translation between
German and Upper Sorbian. We experimented with training on synthetic data and
pre-training on a related language pair. In the fully unsupervised scenario, we
achieved 25.5 and 23.7 BLEU translating from and into Upper Sorbian,
respectively. Our low-resource systems relied on transfer learning from
German-Czech parallel data and achieved 57.4 BLEU and 56.1 BLEU, which is an
improvement of 10 BLEU points over the baseline trained only on the available
small German-Upper Sorbian parallel corpus.
| 2020 |
Computation and Language
|
EIGEN: Event Influence GENeration using Pre-trained Language Models
|
Reasoning about events and tracking their influences is fundamental to
understanding processes. In this paper, we present EIGEN - a method to leverage
pre-trained language models to generate event influences conditioned on a
context, nature of their influence, and the distance in a reasoning chain. We
also derive a new dataset for research and evaluation of methods for event
influence generation. EIGEN outperforms strong baselines both in terms of
automated evaluation metrics (by 10 ROUGE points) and human judgments on
closeness to reference and relevance of generations. Furthermore, we show that
the event influences generated by EIGEN improve the performance on a "what-if"
Question Answering (WIQA) benchmark (over 3% F1), especially for questions that
require background knowledge and multi-hop reasoning.
| 2020 |
Computation and Language
|
Self-Alignment Pretraining for Biomedical Entity Representations
|
Despite the widespread success of self-supervised learning via masked
language models (MLM), accurately capturing fine-grained semantic relationships
in the biomedical domain remains a challenge. This is of paramount importance
for entity-level tasks such as entity linking where the ability to model entity
relations (especially synonymy) is pivotal. To address this challenge, we
propose SapBERT, a pretraining scheme that self-aligns the representation space
of biomedical entities. We design a scalable metric learning framework that can
leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts.
In contrast with previous pipeline-based hybrid systems, SapBERT offers an
elegant one-model-for-all solution to the problem of medical entity linking
(MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking
datasets. In the scientific domain, we achieve SOTA even without task-specific
supervision. With substantial improvement over various domain-specific
pretrained MLMs such as BioBERT, SciBERT, and PubMedBERT, our pretraining
scheme proves to be both effective and robust.
| 2021 |
Computation and Language
|
ConVEx: Data-Efficient and Few-Shot Slot Labeling
|
We propose ConVEx (Conversational Value Extractor), an efficient pretraining
and fine-tuning neural approach for slot-labeling dialog tasks. Instead of
relying on more general pretraining objectives from prior work (e.g., language
modeling, response selection), ConVEx's pretraining objective, a novel pairwise
cloze task using Reddit data, is well aligned with its intended usage on
sequence labeling tasks. This enables learning domain-specific slot labelers by
simply fine-tuning decoding layers of the pretrained general-purpose sequence
labeling model, while the majority of the pretrained model's parameters are
kept frozen. We report state-of-the-art performance of ConVEx across a range of
diverse domains and data sets for dialog slot-labeling, with the largest gains
in the most challenging, few-shot setups. We believe that ConVEx's reduced
pretraining times (i.e., only 18 hours on 12 GPUs) and cost, along with its
efficient fine-tuning and strong performance, promise wider portability and
scalability for data-efficient sequence-labeling tasks in general.
| 2021 |
Computation and Language
|
Compositional Generalization via Semantic Tagging
|
Although neural sequence-to-sequence models have been successfully applied to
semantic parsing, they fail at compositional generalization, i.e., they are
unable to systematically generalize to unseen compositions of seen components.
Motivated by traditional semantic parsing where compositionality is explicitly
accounted for by symbolic grammars, we propose a new decoding framework that
preserves the expressivity and generality of sequence-to-sequence models while
featuring lexicon-style alignments and disentangled information processing.
Specifically, we decompose decoding into two phases where an input utterance is
first tagged with semantic symbols representing the meaning of individual
words, and then a sequence-to-sequence model is used to predict the final
meaning representation conditioning on the utterance and the predicted tag
sequence. Experimental results on three semantic parsing datasets show that the
proposed approach consistently improves compositional generalization across
model architectures, domains, and semantic formalisms.
| 2021 |
Computation and Language
|
STAR: A Schema-Guided Dialog Dataset for Transfer Learning
|
We present STAR, a schema-guided task-oriented dialog dataset consisting of
127,833 utterances and knowledge base queries across 5,820 task-oriented
dialogs in 13 domains that is especially designed to facilitate task and domain
transfer learning in task-oriented dialog. Furthermore, we propose a scalable
crowd-sourcing paradigm to collect arbitrarily large datasets of the same
quality as STAR. Moreover, we introduce novel schema-guided dialog models that
use an explicit description of the task(s) to generalize from known to unknown
tasks. We demonstrate the effectiveness of these models, particularly for
zero-shot generalization across tasks and domains.
| 2020 |
Computation and Language
|
Detecting and Exorcising Statistical Demons from Language Models with
Anti-Models of Negative Data
|
It's been said that "Language Models are Unsupervised Multitask Learners."
Indeed, self-supervised language models trained on "positive" examples of
English text generalize in desirable ways to many natural language tasks. But
if such models can stray so far from an initial self-supervision objective, a
wayward model might generalize in undesirable ways too, say to nonsensical
"negative" examples of unnatural language. A key question in this work is: do
language models trained on (positive) training data also generalize to
(negative) test data? We use this question as a contrivance to assess the
extent to which language models learn undesirable properties of text, such as
n-grams, that might interfere with the learning of more desirable properties of
text, such as syntax. We find that within a model family, as the number of
parameters, training epochs, and data set size increase, so does a model's
ability to generalize to negative n-gram data, indicating standard
self-supervision generalizes too far. We propose a form of inductive bias that
attenuates such undesirable signals with negative data distributions
automatically learned from positive data. We apply the method to remove n-gram
signals from LSTMs and find that doing so causes them to favor syntactic
signals, as demonstrated by large error reductions (up to 46% on the hardest
cases) on a syntactic subject-verb agreement task.
| 2020 |
Computation and Language
|
XOR QA: Cross-lingual Open-Retrieval Question Answering
|
Multilingual question answering tasks typically assume answers exist in the
same language as the question. Yet in practice, many languages face both
information scarcity -- where languages have few reference articles -- and
information asymmetry -- where questions reference concepts from other
cultures. This work extends open-retrieval question answering to a
cross-lingual setting enabling questions from one language to be answered via
answer content from another language. We construct a large-scale dataset built
on questions from TyDi QA lacking same-language answers. Our task formulation,
called Cross-lingual Open Retrieval Question Answering (XOR QA), includes 40k
information-seeking questions from across 7 diverse non-English languages.
Based on this dataset, we introduce three new tasks that involve cross-lingual
document retrieval using multi-lingual and English resources. We establish
baselines with state-of-the-art machine translation systems and cross-lingual
pretrained models. Experimental results suggest that XOR QA is a challenging
task that will facilitate the development of novel techniques for multilingual
question answering. Our data and code are available at
https://nlp.cs.washington.edu/xorqa.
| 2021 |
Computation and Language
|
Not all parameters are born equal: Attention is mostly what you need
|
Transformers are widely used in state-of-the-art machine translation, but the
key to their success is still unknown. To gain insight into this, we consider
three groups of parameters: embeddings, attention, and feed forward neural
network (FFN) layers. We examine the relative importance of each by performing
an ablation study where we initialise them at random and freeze them, so that
their weights do not change over the course of the training. Through this, we
show that the attention and FFN are equally important and fulfil the same
functionality in a model. We show that the decision about whether a component
is frozen or allowed to train is at least as important for the final model
performance as its number of parameters. At the same time, the number of
parameters alone is not indicative of a component's importance. Finally, while
the embedding layer is the least essential for machine translation tasks, it is
the most important component for language modelling tasks.
| 2021 |
Computation and Language
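The ablation described above randomly initializes a parameter group and freezes it for the whole of training. A minimal sketch of how that can be done in PyTorch follows; the model, the name fragments used to select components, and the initialization scale are assumptions, not the paper's codebase.

```python
import torch.nn as nn

def freeze_component(model: nn.Module, name_fragment: str):
    """Randomly re-initialise and freeze every parameter whose name contains
    name_fragment (e.g. 'self_attn', 'linear', 'embed')."""
    for name, param in model.named_parameters():
        if name_fragment in name:
            nn.init.normal_(param, mean=0.0, std=0.02)   # random re-initialisation
            param.requires_grad = False                  # frozen for the whole run

model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2, num_decoder_layers=2)
freeze_component(model, "self_attn")    # e.g. freeze all attention blocks, train the rest
```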
|
Rewriting Meaningful Sentences via Conditional BERT Sampling and an
application on fooling text classifiers
|
Most adversarial attack methods that are designed to deceive a text
classifier change the text classifier's prediction by modifying a few words or
characters. Few try to attack classifiers by rewriting a whole sentence, due to
the difficulties inherent in sentence-level rephrasing as well as the problem
of setting the criteria for legitimate rewriting.
In this paper, we explore the problem of creating adversarial examples with
sentence-level rewriting. We design a new sampling method, named
ParaphraseSampler, to efficiently rewrite the original sentence in multiple
ways. Then we propose a new criterion for modification, called a sentence-level threat model. This criterion allows for both word- and sentence-level changes,
and can be adjusted independently in two dimensions: semantic similarity and
grammatical quality. Experimental results show that many of these rewritten
sentences are misclassified by the classifier. On all 6 datasets, our
ParaphraseSampler achieves a better attack success rate than our baseline.
| 2022 |
Computation and Language
|
Challenges in Information-Seeking QA: Unanswerable Questions and
Paragraph Retrieval
|
Recent pretrained language models "solved" many reading comprehension
benchmarks, where questions are written with access to the evidence document.
However, datasets containing information-seeking queries where evidence
documents are provided after the queries are written independently remain
challenging. We analyze why answering information-seeking queries is more
challenging and where their prevalent unanswerabilities arise, on Natural
Questions and TyDi QA. Our controlled experiments suggest two headrooms --
paragraph selection and answerability prediction, i.e. whether the paired
evidence document contains the answer to the query or not. When provided with a
gold paragraph and knowing when to abstain from answering, existing models
easily outperform a human annotator. However, predicting answerability itself
remains challenging. We manually annotate 800 unanswerable examples across six
languages on what makes them challenging to answer. With this new data, we
conduct per-category answerability prediction, revealing issues in the current
dataset collection as well as task formulation. Together, our study points to
avenues for future research in information-seeking question answering, both for
dataset creation and model development.
| 2021 |
Computation and Language
|
Scientific Claim Verification with VERT5ERINI
|
This work describes the adaptation of a pretrained sequence-to-sequence model
to the task of scientific claim verification in the biomedical domain. We
propose VERT5ERINI that exploits T5 for abstract retrieval, sentence selection
and label prediction, which are three critical sub-tasks of claim verification.
We evaluate our pipeline on SCIFACT, a newly curated dataset that requires
models to not just predict the veracity of claims but also provide relevant
sentences from a corpus of scientific literature that support this decision.
Empirically, our pipeline outperforms a strong baseline in each of the three
steps. Finally, we show VERT5ERINI's ability to generalize to two new datasets
of COVID-19 claims using evidence from the ever-expanding CORD-19 corpus.
| 2020 |
Computation and Language
|
mT5: A massively multilingual pre-trained text-to-text transformer
|
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified
text-to-text format and scale to attain state-of-the-art results on a wide
variety of English-language NLP tasks. In this paper, we introduce mT5, a
multilingual variant of T5 that was pre-trained on a new Common Crawl-based
dataset covering 101 languages. We detail the design and modified training of
mT5 and demonstrate its state-of-the-art performance on many multilingual
benchmarks. We also describe a simple technique to prevent "accidental
translation" in the zero-shot setting, where a generative model chooses to
(partially) translate its prediction into the wrong language. All of the code
and model checkpoints used in this work are publicly available.
| 2021 |
Computation and Language
|
UniCase -- Rethinking Casing in Language Models
|
In this paper, we introduce a new approach to dealing with the problem of
case-sensitiveness in Language Modelling (LM). We propose a simple architecture modification to the RoBERTa language model, accompanied by a new tokenization
strategy, which we named Unified Case LM (UniCase). We tested our solution on
the GLUE benchmark, which led to increased performance by 0.42 points.
Moreover, we prove that the UniCase model works much better when we have to deal with text data where all tokens are uppercased (+5.88 points).
| 2020 |
Computation and Language
|
A Differentially Private Text Perturbation Method Using a Regularized
Mahalanobis Metric
|
Balancing the privacy-utility tradeoff is a crucial requirement of many
practical machine learning systems that deal with sensitive customer data. A
popular approach for privacy-preserving text analysis is noise injection, in
which text data is first mapped into a continuous embedding space, perturbed by
sampling a spherical noise from an appropriate distribution, and then projected
back to the discrete vocabulary space. While this allows the perturbation to
admit the required metric differential privacy, often the utility of downstream
tasks modeled on this perturbed data is low because the spherical noise does
not account for the variability in the density around different words in the
embedding space. In particular, words in a sparse region are likely unchanged
even when the noise scale is large. Using the global sensitivity of the
mechanism can potentially add too much noise to the words in the dense regions
of the embedding space, causing a high utility loss, whereas using local
sensitivity can leak information through the scale of the noise added.
In this paper, we propose a text perturbation mechanism based on a carefully
designed regularized variant of the Mahalanobis metric to overcome this
problem. For any given noise scale, this metric adds an elliptical noise to
account for the covariance structure in the embedding space. This heterogeneity
in the noise scale along different directions helps ensure that the words in
the sparse region have sufficient likelihood of replacement without sacrificing
the overall utility. We provide a text-perturbation algorithm based on this
metric and formally prove its privacy guarantees. Additionally, we empirically
show that our mechanism improves the privacy statistics to achieve the same
level of utility as compared to the state-of-the-art Laplace mechanism.
| 2020 |
Computation and Language
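The mechanism above replaces spherical noise with elliptical noise shaped by a regularized covariance of the embedding space. The sketch below is my reading of that idea with made-up parameters: a random direction is stretched by the Cholesky factor of the regularized covariance and scaled by a Gamma-distributed radius, in the style of metric-DP text perturbation; it is not the paper's algorithm or its privacy accounting.

```python
import numpy as np

def mahalanobis_noise(embeddings: np.ndarray, scale: float, lam: float = 0.2) -> np.ndarray:
    """embeddings: (vocab, dim). Returns one elliptical noise vector of shape (dim,)."""
    dim = embeddings.shape[1]
    cov = np.cov(embeddings, rowvar=False)
    reg_cov = lam * cov + (1 - lam) * np.eye(dim)            # regularized covariance
    direction = np.random.normal(size=dim)
    direction /= np.linalg.norm(direction)                   # uniform random direction
    radius = np.random.gamma(shape=dim, scale=1.0 / scale)   # metric-DP style magnitude
    return radius * (np.linalg.cholesky(reg_cov) @ direction)

vocab_emb = np.random.randn(1000, 50)                        # toy embedding matrix
noisy_vec = vocab_emb[0] + mahalanobis_noise(vocab_emb, scale=10.0)
# a real mechanism would now map noisy_vec back to its nearest vocabulary word
```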
|
Unsupervised Data Augmentation with Naive Augmentation and without
Unlabeled Data
|
Unsupervised Data Augmentation (UDA) is a semi-supervised technique that
applies a consistency loss to penalize differences between a model's
predictions on (a) observed (unlabeled) examples; and (b) corresponding
'noised' examples produced via data augmentation. While UDA has gained
popularity for text classification, open questions linger over which design decisions are necessary and over how to extend the method to sequence labeling tasks. In this paper, we re-examine UDA and demonstrate its efficacy on several
sequential tasks. Our main contribution is an empirical study of UDA to
establish which components of the algorithm confer benefits in NLP. Notably,
although prior work has emphasized the use of clever augmentation techniques
including back-translation, we find that enforcing consistency between
predictions assigned to observed and randomly substituted words often yields
comparable (or greater) benefits compared to these complex perturbation models.
Furthermore, we find that applying its consistency loss affords meaningful
gains without any unlabeled data at all, i.e., in a standard supervised
setting. In short: UDA need not be unsupervised, and does not require complex
data augmentation to be effective.
| 2020 |
Computation and Language
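The finding above is that consistency between predictions on observed sentences and on copies with randomly substituted words is often enough. A minimal sketch of that consistency term follows; `toy_model` is a stand-in callable that maps a token list to logits, and the substitution rate is an assumed value.

```python
import random
import torch
import torch.nn.functional as F

def random_substitute(tokens, vocab, p: float = 0.15):
    """Replace each token with a random vocabulary word with probability p."""
    return [random.choice(vocab) if random.random() < p else t for t in tokens]

def consistency_loss(model, tokens, vocab):
    noised = random_substitute(tokens, vocab)
    with torch.no_grad():
        clean_probs = F.softmax(model(tokens), dim=-1)   # prediction on the observed text
    noised_logp = F.log_softmax(model(noised), dim=-1)   # prediction on the noised copy
    return F.kl_div(noised_logp, clean_probs, reduction="batchmean")

# tiny stand-in "model": bag-of-words sentiment scorer returning (1, 2) logits
vocab = ["good", "bad", "movie", "plot", "acting"]
def toy_model(tokens):
    score = sum(1.0 if t == "good" else -1.0 if t == "bad" else 0.0 for t in tokens)
    return torch.tensor([[score, -score]])

print(consistency_loss(toy_model, ["good", "movie"], vocab))
```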
|
Language Models are Open Knowledge Graphs
|
This paper shows how to construct knowledge graphs (KGs) from pre-trained
language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs
(e.g., Wikidata, NELL) are built in either a supervised or semi-supervised
manner, requiring humans to create knowledge. Recent deep language models
automatically acquire knowledge from large-scale corpora via pre-training. The
stored knowledge has enabled the language models to improve downstream NLP
tasks, e.g., answering questions, and writing code and articles. In this paper,
we propose an unsupervised method to cast the knowledge contained within
language models into KGs. We show that KGs are constructed with a single
forward pass of the pre-trained language models (without fine-tuning) over the
corpora. We demonstrate the quality of the constructed KGs by comparing to two
KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual
knowledge that is new in the existing KGs. Our code and KGs will be made
publicly available.
| 2020 |
Computation and Language
|
Rediscovering the Slavic Continuum in Representations Emerging from
Neural Models of Spoken Language Identification
|
Deep neural networks have been employed for various spoken language
recognition tasks, including tasks that are multilingual by definition such as
spoken language identification. In this paper, we present a neural model for
Slavic language identification in speech signals and analyze its emergent
representations to investigate whether they reflect objective measures of
language relatedness and/or non-linguists' perception of language similarity.
While our analysis shows that the language representation space indeed captures
language relatedness to a great extent, we find perceptual confusability
between languages in our study to be the best predictor of the language
representation similarity.
| 2020 |
Computation and Language
|
A Joint Learning Approach based on Self-Distillation for Keyphrase
Extraction from Scientific Documents
|
Keyphrase extraction is the task of extracting a small set of phrases that
best describe a document. Most existing benchmark datasets for the task
typically have limited numbers of annotated documents, making it challenging to
train increasingly complex neural networks. In contrast, digital libraries
store millions of scientific articles online, covering a wide range of topics.
While a significant portion of these articles contain keyphrases provided by
their authors, most other articles lack such annotations. Therefore, to
effectively utilize these large amounts of unlabeled articles, we propose a
simple and efficient joint learning approach based on the idea of
self-distillation. Experimental results show that our approach consistently
improves the performance of baseline models for keyphrase extraction.
Furthermore, our best models outperform previous methods for the task,
achieving new state-of-the-art results on two public benchmarks: Inspec and
SemEval-2017.
| 2020 |
Computation and Language
|
The Turking Test: Can Language Models Understand Instructions?
|
Supervised machine learning provides the learner with a set of input-output
examples of the target task. Humans, however, can also learn to perform new
tasks from instructions in natural language. Can machines learn to understand
instructions as well? We present the Turking Test, which examines a model's
ability to follow natural language instructions of varying complexity. These
range from simple tasks, like retrieving the nth word of a sentence, to ones
that require creativity, such as generating examples for SNLI and SQuAD in
place of human intelligence workers ("turkers"). Despite our lenient evaluation
methodology, we observe that a large pretrained language model performs poorly
across all tasks. Analyzing the model's error patterns reveals that the model
tends to ignore explicit instructions and often generates outputs that cannot
be construed as an attempt to solve the task. While it is not yet clear whether
instruction understanding can be captured by traditional language models, the
sheer expressivity of instruction understanding makes it an appealing
alternative to the rising few-shot inference paradigm.
| 2020 |
Computation and Language
|
MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal
Language Sequences
|
Human communication is multimodal in nature; it is through multiple
modalities such as language, voice, and facial expressions, that opinions and
emotions are expressed. Data in this domain exhibits complex multi-relational
and temporal interactions. Learning from this data is a fundamentally
challenging research problem. In this paper, we propose Modal-Temporal
Attention Graph (MTAG). MTAG is an interpretable graph-based neural model that
provides a suitable framework for analyzing multimodal sequential data. We
first introduce a procedure to convert unaligned multimodal sequence data into
a graph with heterogeneous nodes and edges that captures the rich interactions
across modalities and through time. Then, a novel graph fusion operation,
called MTAG fusion, along with a dynamic pruning and read-out technique, is
designed to efficiently process this modal-temporal graph and capture various
interactions. By learning to focus only on the important interactions within
the graph, MTAG achieves state-of-the-art performance on multimodal sentiment
analysis and emotion recognition benchmarks, while utilizing significantly
fewer model parameters.
| 2021 |
Computation and Language
|
Meta-Learning for Domain Generalization in Semantic Parsing
|
The importance of building semantic parsers which can be applied to new
domains and generate programs unseen at training has long been acknowledged,
and datasets testing out-of-domain performance are becoming increasingly
available. However, little or no attention has been devoted to learning
algorithms or objectives which promote domain generalization, with virtually
all existing approaches relying on standard supervised learning. In this work,
we use a meta-learning framework which targets zero-shot domain generalization
for semantic parsing. We apply a model-agnostic training algorithm that
simulates zero-shot parsing by constructing virtual train and test sets from
disjoint domains. The learning objective capitalizes on the intuition that
gradient steps that improve source-domain performance should also improve
target-domain performance, thus encouraging a parser to generalize to unseen
target domains. Experimental results on the (English) Spider and Chinese Spider
datasets show that the meta-learning objective significantly boosts the
performance of a baseline parser.
| 2021 |
Computation and Language
|
Towards Zero-Shot Multilingual Synthetic Question and Answer Generation
for Cross-Lingual Reading Comprehension
|
We propose a simple method to generate multilingual question and answer pairs
on a large scale through the use of a single generative model. These synthetic
samples can be used to improve the zero-shot performance of multilingual QA
models on target languages. Our proposed multi-task training of the generative
model only requires the labeled training samples in English, thus removing the
need for such samples in the target languages, making it applicable to far more
languages than those with labeled data. Human evaluations indicate the majority
of such samples are grammatically correct and sensible. Experimental results
show our proposed approach can achieve large gains on the XQuAD dataset,
reducing the gap between zero-shot and supervised performance of smaller QA
models on various languages.
| 2021 |
Computation and Language
|
Summarizing Utterances from Japanese Assembly Minutes using Political
Sentence-BERT-based Method for QA Lab-PoliInfo-2 Task of NTCIR-15
|
There are many discussions held during political meetings, and a large number of utterances on various topics are included in their transcripts. We need to read all of them if we want to follow speakers' intentions or opinions about a given topic. To avoid such a costly and time-consuming process of grasping often lengthy discussions, NLP researchers work on generating concise summaries of
utterances. Summarization subtask in QA Lab-PoliInfo-2 task of the NTCIR-15
addresses this problem for Japanese utterances in assembly minutes, and our
team (SKRA) participated in this subtask. As a first step for summarizing
utterances, we created a new pre-trained sentence embedding model, i.e. the
Japanese Political Sentence-BERT. With this model, we summarize utterances
without labelled data. This paper describes our approach to solving the task
and discusses its results.
| 2020 |
Computation and Language
|
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling
for Natural Language Understanding
|
Coarse-grained linguistic information, such as named entities or phrases,
facilitates adequate representation learning in pre-training. Previous works
mainly focus on extending the objective of BERT's Masked Language Modeling
(MLM) from masking individual tokens to contiguous sequences of n tokens. We
argue that such a contiguous masking method neglects to model the
intra-dependencies and inter-relation of coarse-grained linguistic information.
As an alternative, we propose ERNIE-Gram, an explicitly n-gram masking method
to enhance the integration of coarse-grained information into pre-training. In
ERNIE-Gram, n-grams are masked and predicted directly using explicit n-gram
identities rather than contiguous sequences of n tokens. Furthermore,
ERNIE-Gram employs a generator model to sample plausible n-gram identities as
optional n-gram masks and predict them in both coarse-grained and fine-grained
manners to enable comprehensive n-gram prediction and relation modeling. We
pre-train ERNIE-Gram on English and Chinese text corpora and fine-tune on 19
downstream tasks. Experimental results show that ERNIE-Gram outperforms
previous pre-training models like XLNet and RoBERTa by a large margin, and
achieves comparable results with state-of-the-art methods. The source codes and
pre-trained models have been released at https://github.com/PaddlePaddle/ERNIE.
| 2021 |
Computation and Language
|
Attention Transfer Network for Aspect-level Sentiment Classification
|
Aspect-level sentiment classification (ASC) aims to detect the sentiment
polarity of a given opinion target in a sentence. In neural network-based
methods for ASC, most works employ the attention mechanism to capture the
corresponding sentiment words of the opinion target, then aggregate them as
evidence to infer the sentiment of the target. However, aspect-level datasets
are all relatively small-scale due to the complexity of annotation. Data
scarcity causes the attention mechanism sometimes to fail to focus on the
corresponding sentiment words of the target, which finally weakens the
performance of neural models. To address the issue, we propose a novel
Attention Transfer Network (ATN) in this paper, which can successfully exploit
attention knowledge from resource-rich document-level sentiment classification
datasets to improve the attention capability of the aspect-level sentiment
classification task. In the ATN model, we design two different methods to
transfer attention knowledge and conduct experiments on two ASC benchmark
datasets. Extensive experimental results show that our methods consistently
outperform state-of-the-art works. Further analysis also validates the
effectiveness of ATN.
| 2020 |
Computation and Language
|
KINNEWS and KIRNEWS: Benchmarking Cross-Lingual Text Classification for
Kinyarwanda and Kirundi
|
Recent progress in text classification has been focused on high-resource
languages such as English and Chinese. For low-resource languages, amongst them
most African languages, the lack of well-annotated data and effective preprocessing is hindering the progress and the transfer of successful
methods. In this paper, we introduce two news datasets (KINNEWS and KIRNEWS)
for multi-class classification of news articles in Kinyarwanda and Kirundi, two
low-resource African languages. The two languages are mutually intelligible,
but while Kinyarwanda has been studied in Natural Language Processing (NLP) to
some extent, this work constitutes the first study on Kirundi. Along with the
datasets, we provide statistics, guidelines for preprocessing, and monolingual
and cross-lingual baseline models. Our experiments show that training
embeddings on the relatively higher-resourced Kinyarwanda yields successful
cross-lingual transfer to Kirundi. In addition, the design of the created
datasets allows for a wider use in NLP beyond text classification in future
studies, such as representation learning, cross-lingual learning with more
distant languages, or as base for new annotations for tasks such as parsing,
POS tagging, and NER. The datasets, stopwords, and pre-trained embeddings are
publicly available at https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus .
| 2020 |
Computation and Language
|
Learning Similarity between Movie Characters and Its Potential
Implications on Understanding Human Experiences
|
While many different aspects of human experiences have been studied by the
NLP community, none has captured their full richness. We propose a new task to
capture this richness based on an unlikely setting: movie characters. We sought
to capture theme-level similarities between movie characters that were
community-curated into 20,000 themes. By introducing a two-step approach that
balances performance and efficiency, we managed to achieve 9-27\% improvement
over recent paragraph-embedding based methods. Finally, we demonstrate how the
thematic information learnt from movie characters can potentially be used to
understand themes in the experience of people, as indicated on Reddit posts.
| 2,021 |
Computation and Language
|
Domain Divergences: a Survey and Empirical Analysis
|
Domain divergence plays a significant role in estimating the performance of a
model in new domains. While there is a significant literature on divergence
measures, researchers find it hard to choose an appropriate divergence for a
given NLP application. We address this shortcoming by both surveying the
literature and through an empirical study. We develop a taxonomy of divergence
measures consisting of three classes -- Information-theoretic, Geometric, and
Higher-order measures and identify the relationships between them. Further, to
understand the common use-cases of these measures, we recognise three novel
applications -- 1) Data Selection, 2) Learning Representation, and 3) Decisions
in the Wild -- and use them to organise our literature. From this, we identify
that Information-theoretic measures are prevalent for 1) and 3), and
Higher-order measures are more common for 2). To further help researchers
choose appropriate measures to predict the drop in performance -- an important
aspect of Decisions in the Wild -- we perform a correlation analysis spanning 130
domain adaptation scenarios, 3 varied NLP tasks and 12 divergence measures
identified from our survey. To calculate these divergences, we consider the
current contextual word representations (CWR) and contrast with the older
distributed representations. We find that traditional measures over word
distributions still serve as strong baselines, while higher-order measures with
CWR are effective.
| 2,021 |
Computation and Language
|
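To make the "traditional measures over word distributions" from the abstract above concrete, here is a small self-contained sketch that estimates the Jensen-Shannon divergence between the unigram distributions of two domains; the toy corpora and the smoothing constant are illustrative assumptions.
```python
from collections import Counter
import math

def unigram_dist(texts, vocab, alpha=1e-3):
    """Smoothed unigram distribution over a fixed vocabulary."""
    counts = Counter(tok for t in texts for tok in t.lower().split())
    total = sum(counts[w] + alpha for w in vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions over the same vocabulary."""
    def kl(a, b):
        return sum(a[w] * math.log(a[w] / b[w]) for w in a if a[w] > 0)
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

source_texts = ["the movie was great", "a wonderful plot"]        # placeholder source domain
target_texts = ["the patient was admitted", "clinical notes"]     # placeholder target domain
vocab = sorted({tok for t in source_texts + target_texts for tok in t.lower().split()})
print(js_divergence(unigram_dist(source_texts, vocab), unigram_dist(target_texts, vocab)))
```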
Proof-theoretic aspects of NL$\lambda$
|
We present a proof-theoretic analysis of the logic NL$\lambda$ (Barker \&
Shan 2014, Barker 2019). We notably introduce a novel calculus of proof nets
and prove it is sound and complete with respect to the sequent calculus for the
logic. We study decidability and complexity of the logic using this new
calculus, proving a new upper bound for the complexity of the logic (showing it is
in NP) and a new lower bound for the class of formal languages generated by the
formalism (mildly context-sensitive languages extended with a permutation
closure operation). Finally, thanks to this new calculus, we present a novel
comparison between NL$\lambda$ and the hybrid type-logical grammars of Kubota
\& Levine (2020). We show there is an unexpected convergence of the natural
language analyses proposed in the two formalisms. In addition to studying the
proof-theoretic properties of NL$\lambda$, we greatly extend its linguistic
coverage.
| 2,020 |
Computation and Language
|
A scalable framework for learning from implicit user feedback to improve
natural language understanding in large-scale conversational AI systems
|
Natural Language Understanding (NLU) is an established component within a
conversational AI or digital assistant system, and it is responsible for
producing semantic understanding of a user request. We propose a scalable and
automatic approach for improving NLU in a large-scale conversational AI system
by leveraging implicit user feedback, with an insight that user interaction
data and dialog context have rich information embedded from which user
satisfaction and intention can be inferred. In particular, we propose a general
domain-agnostic framework for curating new supervision data for improving NLU
from live production traffic. With an extensive set of experiments, we show the
results of applying the framework and improving NLU for a large-scale
production system and show its impact across 10 domains.
| 2,021 |
Computation and Language
|
Pre-training with Meta Learning for Chinese Word Segmentation
|
Recent research shows that pre-trained models (PTMs) are beneficial to
Chinese Word Segmentation (CWS). However, PTMs used in previous works usually
adopt language modeling as pre-training tasks, lacking task-specific prior
segmentation knowledge and ignoring the discrepancy between pre-training tasks
and downstream CWS tasks. In this paper, we propose a CWS-specific pre-trained
model METASEG, which employs a unified architecture and incorporates a
meta-learning algorithm into a multi-criteria pre-training task. Empirical results
show that METASEG could utilize common prior segmentation knowledge from
different existing criteria and alleviate the discrepancy between pre-trained
models and downstream CWS tasks. Besides, METASEG can achieve new
state-of-the-art performance on twelve widely-used CWS datasets and
significantly improve model performance in low-resource settings.
| 2,021 |
Computation and Language
|
ST-BERT: Cross-modal Language Model Pre-training For End-to-end Spoken
Language Understanding
|
Language model pre-training has shown promising results in various downstream
tasks. In this context, we introduce a cross-modal pre-trained language model,
called Speech-Text BERT (ST-BERT), to tackle end-to-end spoken language
understanding (E2E SLU) tasks. Taking phoneme posterior and subword-level text
as an input, ST-BERT learns a contextualized cross-modal alignment via our two
proposed pre-training tasks: Cross-modal Masked Language Modeling (CM-MLM) and
Cross-modal Conditioned Language Modeling (CM-CLM). Experimental results on
three benchmarks show that our approach is effective for various SLU
datasets and exhibits surprisingly marginal performance degradation even when only 1%
of the training data is available. Also, our method shows further SLU
performance gain via domain-adaptive pre-training with domain-specific
speech-text pair data.
| 2,021 |
Computation and Language
|
FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input
Representations
|
Combining several embeddings typically improves performance in downstream
tasks as different embeddings encode different information. It has been shown
that even models using embeddings from transformers still benefit from the
inclusion of standard word embeddings. However, the combination of embeddings
of different types and dimensions is challenging. As an alternative to
attention-based meta-embeddings, we propose feature-based adversarial
meta-embeddings (FAME) with an attention function that is guided by features
reflecting word-specific properties, such as shape and frequency, and show that
this is beneficial to handle subword-based embeddings. In addition, FAME uses
adversarial training to optimize the mappings of differently-sized embeddings
to the same space. We demonstrate that FAME works effectively across languages
and domains for sequence labeling and sentence classification, in particular in
low-resource settings. FAME sets the new state of the art for POS tagging in 27
languages, various NER settings and question classification in different
domains.
| 2,021 |
Computation and Language
|
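A hypothetical sketch of what a feature-guided attention over meta-embeddings could look like for the FAME-style setup above (the dimensions and the feature vector are assumptions, and the adversarial mapping objective is omitted).
```python
import torch
import torch.nn as nn

class FeatureGuidedMetaEmbedding(nn.Module):
    """Combine word embeddings of different sizes with attention weights computed
    from word-level features (e.g., shape and frequency indicators). The adversarial
    mapping objective of FAME is not shown in this sketch.
    """

    def __init__(self, input_dims, common_dim, feature_dim):
        super().__init__()
        self.projections = nn.ModuleList([nn.Linear(d, common_dim) for d in input_dims])
        self.attn_scorer = nn.Linear(feature_dim, len(input_dims))

    def forward(self, embeddings, features):
        # embeddings: list of (batch, seq, d_i) tensors; features: (batch, seq, feature_dim)
        projected = torch.stack(
            [proj(e) for proj, e in zip(self.projections, embeddings)], dim=2
        )                                                            # (batch, seq, n_emb, common_dim)
        weights = torch.softmax(self.attn_scorer(features), dim=-1)  # (batch, seq, n_emb)
        return (weights.unsqueeze(-1) * projected).sum(dim=2)        # (batch, seq, common_dim)

# Example: combine hypothetical 300-d fastText, 768-d BERT, and 50-d character embeddings.
combiner = FeatureGuidedMetaEmbedding([300, 768, 50], common_dim=256, feature_dim=8)
```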
A Survey on Recent Approaches for Natural Language Processing in
Low-Resource Scenarios
|
Deep neural networks and huge language models are becoming omnipresent in
natural language applications. As they are known for requiring large amounts of
training data, there is a growing body of work to improve the performance in
low-resource settings. Motivated by the recent fundamental changes towards
neural models and the popular pre-train and fine-tune paradigm, we survey
promising approaches for low-resource natural language processing. After a
discussion about the different dimensions of data availability, we give a
structured overview of methods that enable learning when training data is
sparse. This includes mechanisms to create additional labeled data like data
augmentation and distant supervision as well as transfer learning settings that
reduce the need for target supervision. A goal of our survey is to explain how
these methods differ in their requirements as understanding them is essential
for choosing a technique suited for a specific low-resource setting. Further
key aspects of this work are to highlight open issues and to outline promising
directions for future research.
| 2,021 |
Computation and Language
|
BARThez: a Skilled Pretrained French Sequence-to-Sequence Model
|
Inductive transfer learning has taken the entire NLP field by storm, with
models such as BERT and BART setting new state of the art on countless NLU
tasks. However, most of the available models and research have been conducted
for English. In this work, we introduce BARThez, the first large-scale
pretrained seq2seq model for French. Being based on BART, BARThez is
particularly well-suited for generative tasks. We evaluate BARThez on five
discriminative tasks from the FLUE benchmark and two generative tasks from a
novel summarization dataset, OrangeSum, that we created for this research. We
show BARThez to be very competitive with state-of-the-art BERT-based French
language models such as CamemBERT and FlauBERT. We also continue the
pretraining of a multilingual BART on BARThez' corpus, and show our resulting
model, mBARThez, to significantly boost BARThez' generative performance. Code,
data and models are publicly available.
| 2,021 |
Computation and Language
|
NLNDE at CANTEMIST: Neural Sequence Labeling and Parsing Approaches for
Clinical Concept Extraction
|
The recognition and normalization of clinical information, such as tumor
morphology mentions, is an important, but complex process consisting of
multiple subtasks. In this paper, we describe our system for the CANTEMIST
shared task, which is able to extract, normalize and rank ICD codes from
Spanish electronic health records using neural sequence labeling and parsing
approaches with context-aware embeddings. Our best system achieves 85.3 F1,
76.7 F1, and 77.0 MAP for the three tasks, respectively.
| 2,020 |
Computation and Language
|
Pretraining and Fine-Tuning Strategies for Sentiment Analysis of Latvian
Tweets
|
In this paper, we present various pre-training strategies that aid in
improving the accuracy of the sentiment classification task. We first
pre-train language representation models using these strategies and then
fine-tune them on the downstream task. Experimental results on a time-balanced
tweet evaluation set show the improvement over the previous technique. We
achieve 76% accuracy for sentiment analysis on Latvian tweets, which is a
substantial improvement over previous work.
| 2,020 |
Computation and Language
|
Unsupervised Cross-lingual Adaptation for Sequence Tagging and Beyond
|
Cross-lingual adaptation with multilingual pre-trained language models
(mPTLMs) mainly consists of two lines of works: zero-shot approach and
translation-based approach, which have been studied extensively on the
sequence-level tasks. We further verify the efficacy of these cross-lingual
adaptation approaches by evaluating their performances on more fine-grained
sequence tagging tasks. After re-examining their strengths and drawbacks, we
propose a novel framework to consolidate the zero-shot approach and the
translation-based approach for better adaptation performance. Instead of simply
augmenting the source data with the machine-translated data, we tailor-make a
warm-up mechanism to quickly update the mPTLMs with the gradients estimated on
a small amount of translated data. Then, the adaptation approach is applied to the refined
parameters and the cross-lingual transfer is performed in a warm-start way. The
experimental results on nine target languages demonstrate that our method is
beneficial to the cross-lingual adaptation of various sequence tagging tasks.
| 2,021 |
Computation and Language
|
UNER: Universal Named-Entity Recognition Framework
|
We introduce the Universal Named-Entity Recognition (UNER) framework, a
4-level classification hierarchy, and the methodology that is being adopted to
create the first multilingual UNER corpus: the SETimes parallel corpus annotated
for named entities. First, the English SETimes corpus will be annotated using
existing tools and knowledge bases. After evaluating the resulting annotations
through crowdsourcing campaigns, they will be propagated automatically to other
languages within the SETimes corpora. Finally, as an extrinsic evaluation, the
UNER multilingual dataset will be used to train and test available NER tools.
As part of future research directions, we aim to increase the number of
languages in the UNER corpus and to investigate possible ways of integrating
UNER with available knowledge graphs to improve named-entity recognition.
| 2,020 |
Computation and Language
|
SmBoP: Semi-autoregressive Bottom-up Semantic Parsing
|
The de-facto standard decoding method for semantic parsing in recent years
has been to autoregressively decode the abstract syntax tree of the target
program using a top-down depth-first traversal. In this work, we propose an
alternative approach: a Semi-autoregressive Bottom-up Parser (SmBoP) that
constructs at decoding step $t$ the top-$K$ sub-trees of height $\leq t$. Our
parser enjoys several benefits compared to top-down autoregressive parsing.
From an efficiency perspective, bottom-up parsing allows decoding all
sub-trees of a certain height in parallel, leading to logarithmic runtime
complexity rather than linear. From a modeling perspective, a bottom-up parser
learns representations for meaningful semantic sub-programs at each step,
rather than for semantically-vacuous partial trees. We apply SmBoP on Spider, a
challenging zero-shot semantic parsing benchmark, and show that SmBoP leads to
a 2.2x speed-up in decoding time and a $\sim$5x speed-up in training time,
compared to a semantic parser that uses autoregressive decoding. SmBoP obtains
71.1 denotation accuracy on Spider, establishing a new state-of-the-art, and
69.5 exact match, comparable to the 69.6 exact match of the autoregressive
RAT-SQL+GraPPa.
| 2,021 |
Computation and Language
|
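A toy illustration of the bottom-up beam idea from the SmBoP abstract above (a deliberately simplified sketch with a made-up two-operator grammar and scores, not the actual SmBoP architecture or its SQL grammar): at step t, candidate trees are formed from the current beam, scored, and pruned to the top-K, so all trees of a given height can in principle be scored in parallel.
```python
import heapq
from itertools import product

# Toy "grammar": leaves are column predicates, binary operators join two sub-trees.
LEAVES = {"age > 30": 0.9, "salary > 50k": 0.7, "name = 'Bob'": 0.2}   # made-up leaf scores
OPS = {"AND": 0.5, "OR": 0.3}                                          # made-up operator scores
K = 4                                                                  # beam size

def bottom_up_beam(max_height):
    # Height 0: the beam holds the top-K scored leaves.
    beam = heapq.nlargest(K, ((s, leaf) for leaf, s in LEAVES.items()), key=lambda x: x[0])
    for _ in range(max_height):
        candidates = list(beam)                           # shorter trees stay on the beam
        for (s1, t1), (s2, t2) in product(beam, repeat=2):
            for op, s_op in OPS.items():
                candidates.append((s1 + s2 + s_op, (op, t1, t2)))
        beam = heapq.nlargest(K, candidates, key=lambda x: x[0])   # prune to the top-K trees
    return beam

for score, tree in bottom_up_beam(max_height=2):
    print(round(score, 2), tree)
```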
Deep Learning Framework for Measuring the Digital Strategy of Companies
from Earnings Calls
|
Companies today are racing to leverage the latest digital technologies, such
as artificial intelligence, blockchain, and cloud computing. However, many
companies report that their strategies did not achieve the anticipated business
results. This study is the first to apply state-of-the-art NLP models to
unstructured data to understand the different clusters of digital strategy
patterns that companies are adopting. We achieve this by analyzing earnings
calls from Fortune Global 500 companies between 2015 and 2019. We use a
Transformer-based architecture for text classification, which shows a better
understanding of the conversational context. We then investigate digital strategy
patterns by applying clustering analysis. Our findings suggest that Fortune 500
companies use four distinct strategies which are product led, customer
experience led, service led, and efficiency led. This work provides an
empirical baseline for companies and researchers to enhance our understanding
of the field.
| 2,020 |
Computation and Language
|
TweetEval: Unified Benchmark and Comparative Evaluation for Tweet
Classification
|
The experimental landscape in natural language processing for social media is
too fragmented. Each year, new shared tasks and datasets are proposed, ranging
from classics like sentiment analysis to irony detection or emoji prediction.
Therefore, it is unclear what the current state of the art is, as there is no
standardized evaluation protocol, nor a strong set of baselines trained on
such domain-specific data. In this paper, we propose a new evaluation framework
(TweetEval) consisting of seven heterogeneous Twitter-specific classification
tasks. We also provide a strong set of baselines as a starting point, and compare
different language modeling pre-training strategies. Our initial experiments
show the effectiveness of starting off with existing pre-trained generic
language models and continuing to train them on Twitter corpora.
| 2,020 |
Computation and Language
|
Evaluating Language Tools for Fifteen EU-official Under-resourced
Languages
|
This article presents the results of the evaluation campaign of language
tools available for fifteen EU-official under-resourced languages. The
evaluation was conducted within the MSC ITN CLEOPATRA action that aims at
building the cross-lingual event-centric knowledge processing on top of the
application of linguistic processing chains (LPCs) for at least 24 EU-official
languages. In this campaign, we concentrated on three existing NLP platforms
(Stanford CoreNLP, NLP Cube, UDPipe) that all provide models for
under-resourced languages and in this first run we covered 15 under-resourced
languages for which the models were available. We present the design of the
evaluation campaign and present the results as well as discuss them. We
considered the difference between reported and our tested results within a
single percentage point as being within the limits of acceptable tolerance and
thus consider this result as reproducible. However, for a number of languages,
the results are below what was reported in the literature, and in some cases,
our testing results are even better than the ones reported previously.
Particularly problematic was the evaluation of NERC systems. One of the reasons
is the absence of a universally or cross-lingually applicable named-entity
classification scheme that would serve the NERC task in different languages,
analogous to the Universal Dependencies scheme in the parsing task. Building such a
scheme has become one of our future research directions.
| 2,020 |
Computation and Language
|
Natural Language Processing Chains Inside a Cross-lingual Event-Centric
Knowledge Pipeline for European Union Under-resourced Languages
|
This article presents the strategy for developing a platform containing
Language Processing Chains for European Union languages, consisting of
Tokenization to Parsing, also including Named Entity Recognition and, in
addition, Sentiment Analysis. These chains are part of the first step of an
event-centric knowledge processing pipeline whose aim is to process
multilingual media information about major events that can cause an impact in
Europe and the rest of the world. Due to the differences in terms of
availability of language resources for each language, we have built this
strategy in three steps, starting with processing chains for the well-resourced
languages and finishing with the development of new modules for the
under-resourced ones. In order to classify all European Union official
languages in terms of resources, we have analysed the size of annotated corpora
as well as the existence of pre-trained models in mainstream Language
Processing tools, and we have combined this information with the proposed
classification published in the META-NET whitepaper series.
| 2,020 |
Computation and Language
|
HateBERT: Retraining BERT for Abusive Language Detection in English
|
In this paper, we introduce HateBERT, a re-trained BERT model for abusive
language detection in English. The model was trained on RAL-E, a large-scale
dataset of Reddit comments in English from communities banned for being
offensive, abusive, or hateful that we have collected and made available to the
public. We present the results of a detailed comparison between a general
pre-trained language model and the abuse-inclined version obtained by
retraining with posts from the banned communities on three English datasets for
offensive, abusive language and hate speech detection tasks. In all datasets,
HateBERT outperforms the corresponding general BERT model. We also discuss a
battery of experiments comparing the portability of the generic pre-trained
language model and its corresponding abusive language-inclined counterpart
across the datasets, indicating that portability is affected by compatibility
of the annotated phenomena.
| 2,021 |
Computation and Language
|
Intrinsic Quality Assessment of Arguments
|
Several quality dimensions of natural language arguments have been
investigated. Some are likely to be reflected in linguistic features (e.g., an
argument's arrangement), whereas others depend on context (e.g., relevance) or
topic knowledge (e.g., acceptability). In this paper, we study the intrinsic
computational assessment of 15 dimensions, i.e., only learning from an
argument's text. In systematic experiments with eight feature types on an
existing corpus, we observe moderate but significant learning success for most
dimensions. Rhetorical quality seems hardest to assess, and subjectivity
features turn out to be strong, although length bias in the corpus impedes full
validity. We also find that human assessors differ more clearly from each other
than from our approach.
| 2,020 |
Computation and Language
|
Understanding the Extent to which Summarization Evaluation Metrics
Measure the Information Quality of Summaries
|
Reference-based metrics such as ROUGE or BERTScore evaluate the content
quality of a summary by comparing the summary to a reference. Ideally, this
comparison should measure the summary's information quality by calculating how
much information the summaries have in common. In this work, we analyze the
token alignments used by ROUGE and BERTScore to compare summaries and argue
that their scores largely cannot be interpreted as measuring information
overlap, but rather the extent to which they discuss the same topics. Further,
we provide evidence that this result holds true for many other summarization
evaluation metrics. The consequence is that the summarization community has not
yet found a reliable automatic metric that aligns with its research goal of
generating summaries with high-quality information. We then propose a simple
and interpretable method of evaluating
summaries which does directly measure information overlap and demonstrate how
it can be used to gain insights into model behavior that could not be provided
by other methods alone.
| 2,020 |
Computation and Language
|
Helping users discover perspectives: Enhancing opinion mining with joint
topic models
|
Support or opposition concerning a debated claim, such as "abortion should be
legal", can have different underlying reasons, which we call perspectives. This
paper explores how opinion mining can be enhanced with joint topic modeling, to
identify distinct perspectives within the topic, providing an informative
overview from unstructured text. We evaluate four joint topic models (TAM, JST,
VODUM, and LAM) in a user study assessing human understandability of the
extracted perspectives. Based on the results, we conclude that joint topic
models such as TAM can discover perspectives that align with human judgments.
Moreover, our results suggest that users are not influenced by their
pre-existing stance on the topic of abortion when interpreting the output of
topic models.
| 2,020 |
Computation and Language
|
Improving Robustness by Augmenting Training Sentences with
Predicate-Argument Structures
|
Existing NLP datasets contain various biases, and models tend to quickly
learn those biases, which in turn limits their robustness. Existing approaches
to improve robustness against dataset biases mostly focus on changing the
training objective so that models learn less from biased examples. Besides,
they mostly focus on addressing a specific bias, and while they improve the
performance on adversarial evaluation sets of the targeted bias, they may bias
the model in other ways, and therefore, hurt the overall robustness. In this
paper, we propose to augment the input sentences in the training data with
their corresponding predicate-argument structures, which provide a higher-level
abstraction over different realizations of the same meaning and help the model
to recognize important parts of sentences. We show that without targeting a
specific bias, our sentence augmentation improves the robustness of transformer
models against multiple biases. In addition, we show that models can still be
vulnerable to the lexical overlap bias, even when the training data does not
contain this bias, and that the sentence augmentation also improves the
robustness in this scenario. We will release our adversarial datasets to
evaluate bias in such a scenario as well as our augmentation scripts at
https://github.com/UKPLab/data-augmentation-for-robustness.
| 2,020 |
Computation and Language
|
Generating Plausible Counterfactual Explanations for Deep Transformers
in Financial Text Classification
|
Corporate mergers and acquisitions (M&A) account for billions of dollars of
investment globally every year, and offer an interesting and challenging domain
for artificial intelligence. However, in these highly sensitive domains, it is
crucial to not only have a highly robust and accurate model, but be able to
generate useful explanations to garner a user's trust in the automated system.
Regrettably, eXplainable AI (XAI) in financial text classification has received
little to no attention in recent research, and many current
methods for generating textual-based explanations result in highly implausible
explanations, which damage a user's trust in the system. To address these
issues, this paper proposes a novel methodology for producing plausible
counterfactual explanations, whilst exploring the regularization benefits of
adversarial training on language models in the domain of FinTech. Exhaustive
quantitative experiments demonstrate that not only does this approach improve
the model accuracy when compared to the current state-of-the-art and human
performance, but it also generates counterfactual explanations which are
significantly more plausible based on human trials.
| 2,020 |
Computation and Language
|
Neural Passage Retrieval with Improved Negative Contrast
|
In this paper we explore the effects of negative sampling in dual encoder
models used to retrieve passages for automatic question answering. We explore
four negative sampling strategies that complement the straightforward random
sampling of negatives, typically used to train dual encoder models. Out of the
four strategies, three are based on retrieval and one on heuristics. Our
retrieval-based strategies are based on the semantic similarity and the lexical
overlap between questions and passages. We train the dual encoder models in two
stages: pre-training with synthetic data and fine tuning with domain-specific
data. We apply negative sampling to both stages. The approach is evaluated in
two passage retrieval tasks. Even though it is not evident that there is one
single sampling strategy that works best in all the tasks, it is clear that our
strategies contribute to improving the contrast between the response and all
the other passages. Furthermore, mixing the negatives from different strategies
achieves performance on par with the best-performing strategy in all tasks. Our
results establish a new state-of-the-art level of performance on two of the
open-domain question answering datasets that we evaluated.
| 2,020 |
Computation and Language
|
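A minimal sketch of one lexical-overlap heuristic in the spirit of the retrieval-based strategies above (the corpus, scoring function, and number of negatives are illustrative assumptions, not the paper's exact procedure): passages that share many tokens with the question but are not the gold passage become hard negatives for the dual encoder.
```python
def lexical_overlap(a, b):
    """Jaccard overlap between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def mine_hard_negatives(question, gold_passage, corpus, n_neg=3):
    """Pick the passages most lexically similar to the question, excluding the gold one."""
    scored = [(lexical_overlap(question, p), p) for p in corpus if p != gold_passage]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in scored[:n_neg]]

corpus = [
    "Paris is the capital of France.",
    "The capital of Germany is Berlin.",
    "France borders Spain and Italy.",
    "Bananas are rich in potassium.",
]
print(mine_hard_negatives("What is the capital of France?", corpus[0], corpus))
```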
Answering Open-Domain Questions of Varying Reasoning Steps from Text
|
We develop a unified system to answer directly from text open-domain
questions that may require a varying number of retrieval steps. We employ a
single multi-task transformer model to perform all the necessary subtasks --
retrieving supporting facts, reranking them, and predicting the answer from all
retrieved documents -- in an iterative fashion. We avoid crucial assumptions of
previous work that do not transfer well to real-world settings, including
exploiting knowledge of the fixed number of retrieval steps required to answer
each question or using structured metadata like knowledge bases or web links
that have limited availability. Instead, we design a system that can answer
open-domain questions on any text collection without prior knowledge of
reasoning complexity. To emulate this setting, we construct a new benchmark,
called BeerQA, by combining existing one- and two-step datasets with a new
collection of 530 questions that require three Wikipedia pages to answer,
unifying Wikipedia corpora versions in the process. We show that our model
demonstrates competitive performance on both existing benchmarks and this new
benchmark. We make the new benchmark available at https://beerqa.github.io/.
| 2,021 |
Computation and Language
|
GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight
Gated Injection Method
|
Large pre-trained language models such as BERT have been the driving force
behind recent improvements across many NLP tasks. However, BERT is only trained
to predict missing words - either behind masks or in the next sentence - and
has no knowledge of lexical, syntactic or semantic information beyond what it
picks up through unsupervised pre-training. We propose a novel method to
explicitly inject linguistic knowledge in the form of word embeddings into any
layer of a pre-trained BERT. Our performance improvements on multiple semantic
similarity datasets when injecting dependency-based and counter-fitted
embeddings indicate that such information is beneficial and currently missing
from the original model. Our qualitative analysis shows that counter-fitted
embedding injection particularly helps with cases involving synonym pairs.
| 2,020 |
Computation and Language
|
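A hypothetical sketch of a lightweight gated injection of external word embeddings into a transformer hidden layer, in the spirit of the abstract above (the gating formulation and dimensions are assumptions, not the paper's exact implementation).
```python
import torch
import torch.nn as nn

class GatedInjection(nn.Module):
    """Inject external word embeddings (e.g., dependency-based or counter-fitted)
    into the hidden states of a pre-trained encoder layer through a learned gate.
    Illustrative sketch only.
    """

    def __init__(self, hidden_dim, ext_dim):
        super().__init__()
        self.project = nn.Linear(ext_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden_states, ext_embeddings):
        # hidden_states: (batch, seq, hidden_dim); ext_embeddings: (batch, seq, ext_dim)
        injected = self.project(ext_embeddings)
        g = torch.sigmoid(self.gate(torch.cat([hidden_states, injected], dim=-1)))
        return hidden_states + g * injected    # the gate controls how much external signal enters

layer_injection = GatedInjection(hidden_dim=768, ext_dim=300)
```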
Multilingual BERT Post-Pretraining Alignment
|
We propose a simple method to align multilingual contextual embeddings as a
post-pretraining step for improved zero-shot cross-lingual transferability of
the pretrained models. Using parallel data, our method aligns embeddings on the
word level through the recently proposed Translation Language Modeling
objective as well as on the sentence level via contrastive learning and random
input shuffling. We also perform sentence-level code-switching with English
when finetuning on downstream tasks. On XNLI, our best model (initialized from
mBERT) improves over mBERT by 4.7% in the zero-shot setting and achieves
comparable results to XLM for translate-train while using less than 18% of the
same parallel data and 31% fewer model parameters. On MLQA, our model
outperforms XLM-R_Base, which has 57% more parameters than ours.
| 2,021 |
Computation and Language
|
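A minimal sketch of the sentence-level contrastive piece described above, using in-batch negatives over parallel sentence pairs (the temperature and the symmetric formulation are assumptions; the word-level TLM objective and input shuffling are not shown).
```python
import torch
import torch.nn.functional as F

def parallel_contrastive_loss(src_emb, tgt_emb, temperature=0.05):
    """InfoNCE-style loss aligning embeddings of parallel sentences.

    src_emb, tgt_emb: (batch, dim) sentence embeddings for the two sides of a
    parallel corpus; row i of src_emb is the translation of row i of tgt_emb.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature            # (batch, batch) similarity matrix
    labels = torch.arange(src.size(0), device=src.device)
    # Each source sentence should be closest to its own translation (and vice versa).
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```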
On the Transformer Growth for Progressive BERT Training
|
Due to the excessive cost of large-scale language model pre-training,
considerable efforts have been made to train BERT progressively -- start from
an inferior but low-cost model and gradually grow the model to increase the
computational complexity. Our objective is to advance the understanding of
Transformer growth and discover principles that guide progressive training.
First, we find that similar to network architecture search, Transformer growth
also favors compound scaling. Specifically, while existing methods only conduct
network growth in a single dimension, we observe that it is beneficial to use
compound growth operators and balance multiple dimensions (e.g., depth, width,
and input length of the model). Moreover, we explore alternative growth
operators in each dimension via controlled comparison to give operator
selection practical guidance. In light of our analyses, the proposed method
speeds up BERT pre-training by 73.6% and 82.2% for the base and large models,
respectively, while achieving comparable performance.
| 2,021 |
Computation and Language
|
Concealed Data Poisoning Attacks on NLP Models
|
Adversarial attacks alter NLP model predictions by perturbing test-time
inputs. However, it is much less understood whether, and how, predictions can
be manipulated with small, concealed changes to the training data. In this
work, we develop a new data poisoning attack that allows an adversary to
control model predictions whenever a desired trigger phrase is present in the
input. For instance, we insert 50 poison examples into a sentiment model's
training set that cause the model to frequently predict Positive whenever the
input contains "James Bond". Crucially, we craft these poison examples using a
gradient-based procedure so that they do not mention the trigger phrase. We
also apply our poison attack to language modeling ("Apple iPhone" triggers
negative generations) and machine translation ("iced coffee" mistranslated as
"hot coffee"). We conclude by proposing three defenses that can mitigate our
attack at some cost in prediction accuracy or extra human annotation.
| 2,021 |
Computation and Language
|
DICT-MLM: Improved Multilingual Pre-Training using Bilingual
Dictionaries
|
Pre-trained multilingual language models such as mBERT have shown immense
gains for several natural language processing (NLP) tasks, especially in the
zero-shot cross-lingual setting. Most, if not all, of these pre-trained models
rely on the masked-language modeling (MLM) objective as the key language
learning objective. The principle behind these approaches is that predicting
the masked words with the help of the surrounding text helps learn potent
contextualized representations. Despite the strong representation learning
capability enabled by MLM, we demonstrate an inherent limitation of MLM for
multilingual representation learning. In particular, by requiring the model to
predict the language-specific token, the MLM objective disincentivizes learning
a language-agnostic representation -- which is a key goal of multilingual
pre-training. Therefore to encourage better cross-lingual representation
learning we propose the DICT-MLM method. DICT-MLM works by incentivizing the
model to be able to predict not just the original masked word, but potentially
any of its cross-lingual synonyms as well. Our empirical analysis on multiple
downstream tasks spanning 30+ languages, demonstrates the efficacy of the
proposed approach and its ability to learn better multilingual representations.
| 2,020 |
Computation and Language
|
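A minimal sketch of how an MLM loss could credit any cross-lingual synonym of a masked word, as described above (the dictionary lookup and the data layout are assumptions, not the authors' implementation): the loss is the negative log of the probability mass assigned to the whole acceptable set.
```python
import torch

def dict_mlm_loss(logits, masked_positions, synonym_sets):
    """Loss for masked positions that accepts the original token or any of its
    cross-lingual synonyms as a correct prediction.

    logits: (batch, seq, vocab) MLM output scores
    masked_positions: list of (batch_idx, seq_idx) for masked tokens
    synonym_sets: list (parallel to masked_positions) of lists of acceptable token ids
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    losses = []
    for (b, i), ids in zip(masked_positions, synonym_sets):
        ids = torch.tensor(ids, device=logits.device)
        # -log of the summed probability over the acceptable set (original word + synonyms).
        losses.append(-torch.logsumexp(log_probs[b, i, ids], dim=0))
    return torch.stack(losses).mean()
```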
Ranking Creative Language Characteristics in Small Data Scenarios
|
The ability to rank creative natural language provides an important general
tool for downstream language understanding and generation. However, current
deep ranking models require substantial amounts of labeled data that are
difficult and expensive to obtain for different domains, languages and creative
characteristics. A recent neural approach, the DirectRanker, promises to reduce
the amount of training data needed, but its application to text has not been fully
explored. We therefore adapt the DirectRanker to provide a new deep model for
ranking creative language with small data. We compare DirectRanker with a
Bayesian approach, Gaussian process preference learning (GPPL), which has
previously been shown to work well with sparse data. Our experiments with
sparse training data show that while the performance of standard neural ranking
approaches collapses with small training datasets, DirectRanker remains
effective. We find that combining DirectRanker with GPPL increases performance
across different settings by leveraging the complementary benefits of both
models. Our combined approach outperforms the previous state-of-the-art on
humor and metaphor novelty tasks, increasing Spearman's $\rho$ by 14% and 16%
on average.
| 2,020 |
Computation and Language
|
Unsupervised Multi-hop Question Answering by Question Generation
|
Obtaining training data for multi-hop question answering (QA) is
time-consuming and resource-intensive. We explore the possibility of training a
well-performing multi-hop QA model without referencing any human-labeled
multi-hop question-answer pairs, i.e., unsupervised multi-hop QA. We propose
MQA-QG, an unsupervised framework that can generate human-like multi-hop
training data from both homogeneous and heterogeneous data sources. MQA-QG
generates questions by first selecting/generating relevant information from
each data source and then integrating the multiple information to form a
multi-hop question. Using only generated training data, we can train a
competent multi-hop QA model which achieves 61% and 83% of the supervised learning
performance for the HybridQA and the HotpotQA dataset, respectively. We also
show that pretraining the QA system with the generated data would greatly
reduce the demand for human-annotated training data. Our codes are publicly
available at https://github.com/teacherpeterpan/Unsupervised-Multi-hop-QA.
| 2,021 |
Computation and Language
|
Topic Modeling with Contextualized Word Representation Clusters
|
Clustering token-level contextualized word representations produces output
that shares many similarities with topic models for English text collections.
Unlike clusterings of vocabulary-level word embeddings, the resulting models
more naturally capture polysemy and can be used as a way of organizing
documents. We evaluate token clusterings trained from several different output
layers of popular contextualized language models. We find that BERT and GPT-2
produce high quality clusterings, but RoBERTa does not. These cluster models
are simple, reliable, and can perform as well as, if not better than, LDA topic
models, maintaining high topic quality even when the number of topics is large
relative to the size of the local collection.
| 2,020 |
Computation and Language
|
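A compact sketch of the underlying recipe from the abstract above, assuming the Hugging Face transformers and scikit-learn libraries (the model name, toy corpus, and number of clusters are placeholders): embed every token with a contextualized model and cluster the token vectors.
```python
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

documents = ["the bank raised interest rates", "we sat on the river bank"]  # toy corpus

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

enc = tokenizer(documents, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    hidden = model(**enc).last_hidden_state        # (n_docs, seq_len, hidden_dim)

# Keep only real (non-padding) token vectors.
mask = enc["attention_mask"].bool()
token_vectors = hidden[mask].numpy()

# Each cluster of token occurrences plays the role of a topic; a document can then be
# described by the cluster frequencies of its tokens.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(token_vectors)
print(kmeans.labels_[: int(mask[0].sum())])        # cluster ids for the first document's tokens
```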
Anchor-based Bilingual Word Embeddings for Low-Resource Languages
|
Good quality monolingual word embeddings (MWEs) can be built for languages
which have large amounts of unlabeled text. MWEs can be aligned to bilingual
spaces using only a few thousand word translation pairs. For low-resource
languages, training MWEs monolingually results in MWEs of poor quality, and thus
poor bilingual word embeddings (BWEs) as well. This paper proposes a new
approach for building BWEs in which the vector space of the high resource
source language is used as a starting point for training an embedding space for
the low resource target language. By using the source vectors as anchors the
vector spaces are automatically aligned during training. We experiment on
English-German, English-Hiligaynon and English-Macedonian. We show that our
approach results not only in improved BWEs and bilingual lexicon induction
performance, but also in improved target language MWE quality as measured using
monolingual word similarity.
| 2,021 |
Computation and Language
|
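A schematic PyTorch sketch of the anchoring idea from the abstract above (a simplified skip-gram-style setup, not the paper's training code): the source-language vectors are loaded and frozen, and only the target-language vectors are updated, so the two spaces stay aligned by construction.
```python
import torch
import torch.nn as nn

class AnchoredSkipGram(nn.Module):
    """Target-language embeddings trained against frozen source-language anchors."""

    def __init__(self, source_vectors, target_vocab_size):
        super().__init__()
        dim = source_vectors.size(1)
        # The source vectors act as fixed anchors: they receive no gradient updates.
        self.source = nn.Embedding.from_pretrained(source_vectors, freeze=True)
        self.target = nn.Embedding(target_vocab_size, dim)

    def score(self, target_ids, context_ids, context_in_source):
        """Dot-product score between target words and their contexts; the context may
        come from the frozen source space or the trainable target space."""
        t = self.target(target_ids)
        ctx_table = self.source if context_in_source else self.target
        return (t * ctx_table(context_ids)).sum(dim=-1)   # feed into a logistic / negative-sampling loss

# Usage sketch: English vectors as anchors, a low-resource target vocabulary to be trained.
english_vectors = torch.randn(5000, 300)                  # placeholder for real pre-trained vectors
model = AnchoredSkipGram(english_vectors, target_vocab_size=2000)
```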
Did You Ask a Good Question? A Cross-Domain Question Intention
Classification Benchmark for Text-to-SQL
|
Neural models have achieved significant results on the text-to-SQL task, in
which most current work assumes all the input questions are legal and generates
a SQL query for any input. However, in the real scenario, users can input any
text that may not be able to be answered by a SQL query. In this work, we
propose TriageSQL, the first cross-domain text-to-SQL question intention
classification benchmark that requires models to distinguish four types of
unanswerable questions from answerable questions. The baseline RoBERTa model
achieves a 60% F1 score on the test set, demonstrating the need for further
improvement on this task. Our dataset is available at
https://github.com/chatc/TriageSQL.
| 2,020 |
Computation and Language
|
Comparative analysis of word embeddings in assessing semantic similarity
of complex sentences
|
Semantic textual similarity is one of the open research challenges in the
field of Natural Language Processing. Extensive research has been carried out
in this field and near-perfect results are achieved by recent transformer-based
models in existing benchmark datasets like the STS dataset and the SICK
dataset. In this paper, we study the sentences in these datasets and analyze
the sensitivity of various word embeddings with respect to the complexity of
the sentences. We build a complex sentences dataset comprising 50 sentence
pairs with associated semantic similarity values provided by 15 human
annotators. Readability analysis is performed to highlight the increase in
complexity of the sentences in the existing benchmark datasets and those in the
proposed dataset. Further, we perform a comparative analysis of the performance
of various word embeddings and language models on the existing benchmark
datasets and the proposed dataset. The results show that the increase in complexity
of the sentences has a significant impact on the performance of the embedding
models, resulting in a 10-20% decrease in Pearson's and Spearman's correlations.
| 2,021 |
Computation and Language
|
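A minimal sketch of the evaluation protocol implied above: correlate cosine similarities of sentence embeddings with human ratings (the toy bag-of-words embedding and the example scores are placeholders for a real embedding model and the annotated pairs).
```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def evaluate_embeddings(embed, sentence_pairs, human_scores):
    """Correlate model similarities with human similarity judgments.
    embed: callable mapping a sentence to a vector (placeholder for any embedding model).
    """
    model_scores = [cosine(embed(a), embed(b)) for a, b in sentence_pairs]
    return pearsonr(model_scores, human_scores)[0], spearmanr(model_scores, human_scores)[0]

# Toy embedding (bag of words over a tiny vocabulary) just to make the sketch runnable.
vocab = ["the", "cat", "dog", "sat", "ran"]
embed = lambda s: np.array([s.lower().split().count(w) for w in vocab], dtype=float)
pairs = [("the cat sat", "the dog sat"), ("the cat ran", "the dog sat"), ("the cat sat", "the cat sat")]
print(evaluate_embeddings(embed, pairs, human_scores=[3.5, 2.0, 5.0]))
```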
Posterior Differential Regularization with f-divergence for Improving
Model Robustness
|
We address the problem of enhancing model robustness through regularization.
Specifically, we focus on methods that regularize the model posterior
difference between clean and noisy inputs. Theoretically, we provide a
connection of two recent methods, Jacobian Regularization and Virtual
Adversarial Training, under this framework. Additionally, we generalize the
posterior differential regularization to the family of $f$-divergences and
characterize the overall regularization framework in terms of Jacobian matrix.
Empirically, we systematically compare those regularizations and standard BERT
training on a diverse set of tasks to provide a comprehensive profile of their
effect on model in-domain and out-of-domain generalization. For both fully
supervised and semi-supervised settings, our experiments show that regularizing
the posterior differential with $f$-divergence can result in well-improved
model robustness. In particular, with a proper $f$-divergence, a BERT-base
model can achieve comparable generalization as its BERT-large counterpart for
in-domain, adversarial and domain shift scenarios, indicating the great
potential of the proposed framework for boosting model generalization for NLP
models.
| 2,021 |
Computation and Language
|
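As one concrete member of the family described above, the sketch below regularizes the posterior difference between clean and perturbed inputs with a symmetrized KL divergence, a simple f-divergence (the Gaussian embedding noise, the weighting, and the Hugging Face-style classifier interface that accepts inputs_embeds are assumptions).
```python
import torch
import torch.nn.functional as F

def posterior_differential_kl(clean_logits, noisy_logits):
    """Symmetrized KL divergence between the posteriors of clean and perturbed inputs."""
    p = F.log_softmax(clean_logits, dim=-1)
    q = F.log_softmax(noisy_logits, dim=-1)
    kl_pq = F.kl_div(q, p, log_target=True, reduction="batchmean")   # KL(P || Q)
    kl_qp = F.kl_div(p, q, log_target=True, reduction="batchmean")   # KL(Q || P)
    return 0.5 * (kl_pq + kl_qp)

def training_step(model, inputs_embeds, labels, noise_std=1e-3, reg_weight=1.0):
    """Task loss plus the posterior differential regularizer (assumes a classifier that
    accepts inputs_embeds and returns an object with a .logits attribute)."""
    clean_logits = model(inputs_embeds=inputs_embeds).logits
    noisy_embeds = inputs_embeds + noise_std * torch.randn_like(inputs_embeds)
    noisy_logits = model(inputs_embeds=noisy_embeds).logits
    task_loss = F.cross_entropy(clean_logits, labels)
    return task_loss + reg_weight * posterior_differential_kl(clean_logits, noisy_logits)
```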
Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question
Answering
|
Coupled with the availability of large scale datasets, deep learning
architectures have enabled rapid progress on the Question Answering task.
However, most of those datasets are in English, and the performances of
state-of-the-art multilingual models are significantly lower when evaluated on
non-English data. Due to high data collection costs, it is not realistic to
obtain annotated data for each language one desires to support.
We propose a method to improve the Cross-lingual Question Answering
performance without requiring additional annotated data, leveraging Question
Generation models to produce synthetic samples in a cross-lingual fashion. We
show that the proposed method allows us to significantly outperform the baselines
trained on English data only. We report a new state-of-the-art on four
multilingual datasets: MLQA, XQuAD, SQuAD-it and PIAF (fr).
| 2,021 |
Computation and Language
|
Rapid Domain Adaptation for Machine Translation with Monolingual Data
|
One challenge of machine translation is how to quickly adapt to unseen
domains in the face of surging events like COVID-19, in which case timely and
accurate translation of in-domain information into multiple languages is
critical but little parallel data is available yet. In this paper, we propose
an approach that enables rapid domain adaptation from the perspective of
unsupervised translation. Our proposed approach only requires in-domain
monolingual data and can be quickly applied to a preexisting translation system
trained on general domain, reaching significant gains on in-domain translation
quality with little or no drop on general-domain. We also propose an effective
procedure of simultaneous adaptation for multiple domains and languages. To the
best of our knowledge, this is the first attempt that aims to address
unsupervised multilingual domain adaptation.
| 2,020 |
Computation and Language
|
Generating Adequate Distractors for Multiple-Choice Questions
|
This paper presents a novel approach to automatic generation of adequate
distractors for a given question-answer pair (QAP) generated from a given
article to form an adequate multiple-choice question (MCQ). Our method is a
combination of part-of-speech tagging, named-entity tagging, semantic-role
labeling, regular expressions, domain knowledge bases, word embeddings, word
edit distance, WordNet, and other algorithms. We use the US SAT (Scholastic
Assessment Test) practice reading tests as a dataset to produce QAPs and
generate three distractors for each QAP to form an MCQ. We show that, via
experiments and evaluations by human judges, each MCQ has at least one adequate
distractor and 84\% of MCQs have three adequate distractors.
| 2,020 |
Computation and Language
|
On Minimum Word Error Rate Training of the Hybrid Autoregressive
Transducer
|
Hybrid Autoregressive Transducer (HAT) is a recently proposed end-to-end
acoustic model that extends the standard Recurrent Neural Network Transducer
(RNN-T) for the purpose of the external language model (LM) fusion. In HAT, the
blank probability and the label probability are estimated using two separate
probability distributions, which provides a more accurate solution for internal
LM score estimation, and thus works better when combining with an external LM.
Previous work mainly focuses on HAT model training with the negative
log-likelihood loss, while in this paper, we study the minimum word error rate
(MWER) training of HAT -- a criterion that is closer to the evaluation metric
for speech recognition, and has been successfully applied to other types of
end-to-end models such as sequence-to-sequence (S2S) and RNN-T models. From
experiments with around 30,000 hours of training data, we show that MWER
training can improve the accuracy of HAT models, while at the same time,
improving the robustness of the model against the decoding hyper-parameters
such as length normalization and decoding beam during inference.
| 2,021 |
Computation and Language
|
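A schematic sketch of an MWER-style loss over an n-best list (a simplified illustration, not the HAT training recipe): hypothesis probabilities are renormalized over the list and weighted by how far each hypothesis' word error deviates from the list average.
```python
import torch

def word_errors(hyp, ref):
    """Levenshtein distance between word sequences (substitutions, insertions, deletions)."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(ref) + 1)] for i in range(len(hyp) + 1)]
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1]))
    return d[len(hyp)][len(ref)]

def mwer_loss(nbest_log_probs, nbest_hyps, reference):
    """Expected (mean-subtracted) word error over the n-best list.

    nbest_log_probs: tensor of shape (n,) with model log-probabilities of each hypothesis
    nbest_hyps: list of n hypotheses (each a list of words); reference: list of words
    """
    errors = torch.tensor([float(word_errors(h, reference)) for h in nbest_hyps])
    probs = torch.softmax(nbest_log_probs, dim=0)       # renormalize over the n-best list
    return (probs * (errors - errors.mean())).sum()     # mean subtraction reduces variance

# Toy usage with hand-made hypotheses and scores.
log_probs = torch.tensor([-1.0, -1.5, -3.0], requires_grad=True)
hyps = [["hello", "world"], ["hello", "word"], ["yellow", "world"]]
print(mwer_loss(log_probs, hyps, reference=["hello", "world"]))
```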
Overcoming Conflicting Data when Updating a Neural Semantic Parser
|
In this paper, we explore how to use a small amount of new data to update a
task-oriented semantic parsing model when the desired output for some examples
has changed. When making updates in this way, one potential problem that arises
is the presence of conflicting data, or out-of-date labels in the original
training set. To evaluate the impact of this understudied problem, we propose
an experimental setup for simulating changes to a neural semantic parser. We
show that the presence of conflicting data greatly hinders learning of an
update, then explore several methods to mitigate its effect. Our multi-task and
data selection methods lead to large improvements in model accuracy compared to
a naive data-mixing strategy, and our best method closes 86% of the accuracy
gap between this baseline and an oracle upper bound.
| 2,021 |
Computation and Language
|
A Differentiable Relaxation of Graph Segmentation and Alignment for AMR
Parsing
|
Abstract Meaning Representations (AMR) are a broad-coverage semantic
formalism which represents sentence meaning as a directed acyclic graph. To
train most AMR parsers, one needs to segment the graph into subgraphs and align
each such subgraph to a word in a sentence; this is normally done at
preprocessing, relying on hand-crafted rules. In contrast, we treat both
alignment and segmentation as latent variables in our model and induce them as
part of end-to-end training.
As marginalizing over the structured latent variables is infeasible, we use
the variational autoencoding framework.
To ensure end-to-end differentiable optimization, we introduce a
differentiable relaxation of the segmentation and alignment problems. We
observe that inducing segmentation yields substantial gains over using a
`greedy' segmentation heuristic. The performance of our method also approaches
that of a model that relies on the segmentation rules of
\citet{lyu-titov-2018-amr}, which were hand-crafted to handle individual AMR
constructions.
| 2,022 |
Computation and Language
|
Robust Document Representations using Latent Topics and Metadata
|
Task specific fine-tuning of a pre-trained neural language model using a
custom softmax output layer is the de facto approach of late when dealing with
document classification problems. This technique is not adequate when labeled
examples are not available at training time and when the metadata artifacts in
a document must be exploited. We address these challenges by generating
document representations that capture both text and metadata artifacts in a
task agnostic manner. Instead of traditional auto-regressive or auto-encoding
based training, our novel self-supervised approach learns a soft-partition of
the input space when generating text embeddings. Specifically, we employ a
pre-learned topic model distribution as surrogate labels and construct a loss
function based on KL divergence. Our solution also incorporates metadata
explicitly rather than just augmenting them with text. The generated document
embeddings exhibit compositional characteristics and are directly used by
downstream classification tasks to create decision boundaries from a small
number of labeled examples, thereby eschewing complicated recognition methods.
We demonstrate through extensive evaluation that our proposed cross-model
fusion solution outperforms several competitive baselines on multiple datasets.
| 2,020 |
Computation and Language
|
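A minimal sketch of the surrogate-label idea above: a KL-divergence loss between the encoder's predicted soft partition and a pre-learned topic distribution (the encoder head, dimensions, and placeholder tensors are assumptions; metadata handling is omitted).
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftPartitionHead(nn.Module):
    """Maps a document embedding to a distribution over K latent partitions."""

    def __init__(self, embed_dim, num_topics):
        super().__init__()
        self.proj = nn.Linear(embed_dim, num_topics)

    def forward(self, doc_embedding):
        return F.log_softmax(self.proj(doc_embedding), dim=-1)

def surrogate_topic_loss(predicted_log_dist, topic_dist):
    """KL(topic-model surrogate || predicted soft partition), averaged over the batch.

    predicted_log_dist: (batch, K) log-probabilities from the text encoder head
    topic_dist: (batch, K) document-topic proportions from a pre-learned topic model
    """
    return F.kl_div(predicted_log_dist, topic_dist, reduction="batchmean")

# Usage sketch with placeholder tensors (a real setup would feed encoder outputs and
# document-topic proportions from a topic model fitted on the corpus).
head = SoftPartitionHead(embed_dim=768, num_topics=50)
doc_emb = torch.randn(8, 768)
lda_topics = torch.softmax(torch.randn(8, 50), dim=-1)
loss = surrogate_topic_loss(head(doc_emb), lda_topics)
```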
Dynamic Contextualized Word Embeddings
|
Static word embeddings that represent words by a single vector cannot capture
the variability of word meaning in different linguistic and extralinguistic
contexts. Building on prior work on contextualized and dynamic word embeddings,
we introduce dynamic contextualized word embeddings that represent words as a
function of both linguistic and extralinguistic context. Based on a pretrained
language model (PLM), dynamic contextualized word embeddings model time and
social space jointly, which makes them attractive for a range of NLP tasks
involving semantic variability. We highlight potential application scenarios by
means of qualitative and quantitative analyses on four English datasets.
| 2,021 |
Computation and Language
|
Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced
Language Model Pre-training
|
Prior work on Data-To-Text Generation, the task of converting knowledge graph
(KG) triples into natural text, focused on domain-specific benchmark datasets.
In this paper, however, we verbalize the entire English Wikidata KG, and
discuss the unique challenges associated with a broad, open-domain, large-scale
verbalization. We further show that verbalizing a comprehensive, encyclopedic
KG like Wikidata can be used to integrate structured KGs and natural language
corpora. In contrast to the many architectures that have been developed to
integrate these two sources, our approach converts the KG into natural text,
allowing it to be seamlessly integrated into existing language models. It
carries the further advantages of improved factual accuracy and reduced
toxicity in the resulting language model. We evaluate this approach by
augmenting the retrieval corpus in a retrieval language model and showing
significant improvements on the knowledge intensive tasks of open domain QA and
the LAMA knowledge probe.
| 2,021 |
Computation and Language
|
AQuaMuSe: Automatically Generating Datasets for Query-Based
Multi-Document Summarization
|
Summarization is the task of compressing source document(s) into coherent and
succinct passages. This is a valuable tool to present users with concise and
accurate sketch of the top ranked documents related to their queries.
Query-based multi-document summarization (qMDS) addresses this pervasive need,
but the research is severely limited due to lack of training and evaluation
datasets as existing single-document and multi-document summarization datasets
are inadequate in form and scale. We propose a scalable approach called
AQuaMuSe to automatically mine qMDS examples from question answering datasets
and large document corpora. Our approach is unique in the sense that it can
generate a dual dataset -- with both extractive and abstractive summaries. We
publicly release a specific instance of an AQuaMuSe dataset with 5,519
query-based summaries, each associated with an average of 6 input documents
selected from an index of 355M documents from Common Crawl. Extensive
evaluation of the dataset along with baseline summarization model experiments
are provided.
| 2,020 |
Computation and Language
|
Applying Occam's Razor to Transformer-Based Dependency Parsing: What
Works, What Doesn't, and What is Really Necessary
|
The introduction of pre-trained transformer-based contextualized word
embeddings has led to considerable improvements in the accuracy of graph-based
parsers for frameworks such as Universal Dependencies (UD). However, previous
works differ in various dimensions, including their choice of pre-trained
language models and whether they use LSTM layers. With the aims of
disentangling the effects of these choices and identifying a simple yet widely
applicable architecture, we introduce STEPS, a new modular graph-based
dependency parser. Using STEPS, we perform a series of analyses on the UD
corpora of a diverse set of languages. We find that the choice of pre-trained
embeddings has by far the greatest impact on parser performance and identify
XLM-R as a robust choice across the languages in our study. Adding LSTM layers
provides no benefits when using transformer-based embeddings. A multi-task
training setup outputting additional UD features may distort results. Taking
these insights together, we propose a simple but widely applicable parser
architecture and configuration, achieving new state-of-the-art results (in
terms of LAS) for 10 out of 12 diverse languages.
| 2,021 |
Computation and Language
|
Learning to Recognize Dialect Features
|
Building NLP systems that serve everyone requires accounting for dialect
differences. But dialects are not monolithic entities: rather, distinctions
between and within dialects are captured by the presence, absence, and
frequency of dozens of dialect features in speech and text, such as the
deletion of the copula in "He {} running". In this paper, we introduce the task
of dialect feature detection, and present two multitask learning approaches,
both based on pretrained transformers. For most dialects, large-scale annotated
corpora for these features are unavailable, making it difficult to train
recognizers. We train our models on a small number of minimal pairs, building
on how linguists typically define dialect features. Evaluation on a test set of
22 dialect features of Indian English demonstrates that these models learn to
recognize many features with high accuracy, and that a few minimal pairs can be
as effective for training as thousands of labeled examples. We also demonstrate
the downstream applicability of dialect feature detection both as a measure of
dialect density and as a dialect classifier.
| 2,021 |
Computation and Language
|
Improving Classification through Weak Supervision in Context-specific
Conversational Agent Development for Teacher Education
|
Machine learning techniques applied to the Natural Language Processing (NLP)
component of conversational agent development show promising results for
improved accuracy and quality of feedback that a conversational agent can
provide. The effort required to develop an educational scenario specific
conversational agent is time consuming as it requires domain experts to label
and annotate noisy data sources such as classroom videos. Previous approaches
to modeling annotations have relied on labeling thousands of examples and
calculating inter-annotator agreement and majority votes in order to model the
necessary scenarios. This method, while proven successful, ignores individual
annotator strengths in labeling a data point and under-utilizes examples that
do not have a majority vote for labeling. We propose using a multi-task weak
supervision method combined with active learning to address these concerns.
This approach requires less labeling than traditional methods and shows
significant improvements in precision, efficiency, and time requirements over
the majority vote method (Ratner 2019). We demonstrate the validity of this
method on the Google Jigsaw data set and then propose a scenario to apply this
method using the Instructional Quality Assessment (IQA) to define the categories
for labeling. We propose using probabilistic modeling of annotator labeling to
generate active learning examples to further label the data. Active learning is
able to iteratively improve the training performance and accuracy of the
original classification model. This approach combines state-of-the-art labeling
techniques of weak supervision and active learning to optimize results in the
educational domain and could be further used to lessen the data requirements
for expanded scenarios within the education domain through transfer learning.
| 2,020 |
Computation and Language
|
Can images help recognize entities? A study of the role of images for
Multimodal NER
|
Multimodal named entity recognition (MNER) requires to bridge the gap between
language understanding and visual context. While many multimodal neural
techniques have been proposed to incorporate images into the MNER task, the
model's ability to leverage multimodal interactions remains poorly understood.
In this work, we conduct in-depth analyses of existing multimodal fusion
techniques from different perspectives and describe the scenarios where adding
information from the image does not always boost performance. We also study the
use of captions as a way to enrich the context for MNER. Experiments on three
datasets from popular social platforms expose the bottleneck of existing
multimodal models and the situations where using captions is beneficial.
| 2,021 |
Computation and Language
|