Titles | Abstracts | Years | Categories |
---|---|---|---|
EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets
|
Heavily overparameterized language models such as BERT, XLNet and T5 have
achieved impressive success in many NLP tasks. However, their high model
complexity requires enormous computation resources and extremely long training
time for both pre-training and fine-tuning. Many works have studied model
compression for large NLP models, but they focus only on reducing inference time
while still requiring an expensive training process. Other works use extremely
large batch sizes to shorten the pre-training time, at the expense of higher
computational resource demands. In this paper, inspired by the Early-Bird
Lottery Tickets recently studied for computer vision tasks, we propose
EarlyBERT, a general computationally-efficient training algorithm applicable to
both pre-training and fine-tuning of large-scale language models. By slimming
the self-attention and fully-connected sub-layers inside a transformer, we are
the first to identify structured winning tickets in the early stage of BERT
training. We apply those tickets towards efficient BERT training, and conduct
comprehensive pre-training and fine-tuning experiments on GLUE and SQuAD
downstream tasks. Our results show that EarlyBERT achieves comparable
performance to standard BERT, with 35~45% less training time. Code is available
at https://github.com/VITA-Group/EarlyBERT.
| 2021 |
Computation and Language
|
Controlled Analyses of Social Biases in Wikipedia Bios
|
Social biases on Wikipedia, a widely-read global platform, could greatly
influence public opinion. While prior research has examined man/woman gender
bias in biography articles, possible influences of other demographic attributes
limit conclusions. In this work, we present a methodology for analyzing
Wikipedia pages about people that isolates dimensions of interest (e.g.,
gender) from other attributes (e.g., occupation). Given a target corpus for
analysis (e.g., biographies about women), we present a method for constructing a
comparison corpus that matches the target corpus in as many attributes as
possible, except the target one. We develop evaluation metrics to measure how
well the comparison corpus aligns with the target corpus and then examine how
articles about gender and racial minorities (cis. women, non-binary people,
transgender women, and transgender men; African American, Asian American, and
Hispanic/Latinx American people) differ from other articles. In addition to
identifying suspect social biases, our results show that failing to control for
covariates can result in different conclusions and veil biases. Our
contributions include methodology that facilitates further analyses of bias in
Wikipedia articles, findings that can aid Wikipedia editors in reducing biases,
and a framework and evaluation metrics to guide future work in this area.
| 2022 |
Computation and Language
|
Multi-task Retrieval for Knowledge-Intensive Tasks
|
Retrieving relevant contexts from a large corpus is a crucial step for tasks
such as open-domain question answering and fact checking. Although neural
retrieval outperforms traditional methods like tf-idf and BM25, its performance
degrades considerably when applied to out-of-domain data.
Driven by the question of whether a neural retrieval model can be universal
and perform robustly on a wide variety of problems, we propose a multi-task
trained model. Our approach not only outperforms previous methods in the
few-shot setting, but also rivals specialised neural retrievers, even when
in-domain training data is abundant. With the help of our retriever, we improve
existing models for downstream tasks and closely match or improve the state of
the art on multiple benchmarks.
| 2021 |
Computation and Language
|
WARP: Word-level Adversarial ReProgramming
|
Transfer learning from pretrained language models recently became the
dominant approach for solving many NLP tasks. A common approach to transfer
learning for multiple tasks that maximizes parameter sharing trains one or more
task-specific layers on top of the language model. In this paper, we present an
alternative approach based on adversarial reprogramming, which extends earlier
work on automatic prompt generation. Adversarial reprogramming attempts to
learn task-specific word embeddings that, when concatenated to the input text,
instruct the language model to solve the specified task. Using up to 25K
trainable parameters per task, this approach outperforms all existing methods
with up to 25M trainable parameters on the public leaderboard of the GLUE
benchmark. Our method, initialized with task-specific human-readable prompts,
also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks
with just 32 training samples.
| 2021 |
Computation and Language
|
Intent Classification and Slot Filling for Privacy Policies
|
Understanding privacy policies is crucial for users as it empowers them to
learn about the information that matters to them. Sentences written in a
privacy policy document explain privacy practices, and the constituent text
spans convey further specific information about that practice. We refer to
predicting the privacy practice explained in a sentence as intent
classification and identifying the text spans sharing specific information as
slot filling. In this work, we propose PolicyIE, an English corpus consisting
of 5,250 intent and 11,788 slot annotations spanning 31 privacy policies of
websites and mobile applications. The PolicyIE corpus is a challenging real-world
benchmark with limited labeled examples reflecting the cost of collecting
large-scale annotations from domain experts. We present two alternative neural
approaches as baselines: (1) modeling intent classification and slot filling as a
joint sequence tagging task, and (2) modeling them as a sequence-to-sequence
(Seq2Seq) learning task. The experimental results show that both approaches perform
comparably in intent classification, while the Seq2Seq method outperforms the
sequence tagging approach in slot filling by a large margin. We perform a
detailed error analysis to reveal the challenges of the proposed corpus.
| 2021 |
Computation and Language
|
Discourse-level Relation Extraction via Graph Pooling
|
The ability to capture complex linguistic structures and long-term
dependencies among words in the passage is essential for discourse-level
relation extraction (DRE) tasks. Graph neural networks (GNNs), one of the
methods to encode dependency graphs, have been shown effective in prior works
for DRE. However, relatively little attention has been paid to receptive fields
of GNNs, which can be crucial for cases with extremely long text that requires
discourse understanding. In this work, we leverage the idea of graph pooling
and propose to use a pooling-unpooling framework for DRE tasks. The pooling branch
reduces the graph size and enables the GNNs to obtain larger receptive fields
within fewer layers; the unpooling branch restores the pooled graph to its
original resolution so that representations for entity mentions can be
extracted. We propose Clause Matching (CM), a novel linguistically inspired
graph pooling method for NLP tasks. Experiments on two DRE datasets demonstrate
that our models significantly improve over baselines when modeling long-term
dependencies is required, which shows the effectiveness of the
pooling-unpooling framework and our CM pooling method.
| 2021 |
Computation and Language
|
Sensei: Self-Supervised Sensor Name Segmentation
|
A sensor name, typically an alphanumeric string, encodes the key context
(e.g., function and location) of a sensor needed for deploying smart building
applications. Sensor names, however, are curated in a building vendor-specific
manner using different structures and vocabularies that are often esoteric.
They thus require tremendous manual effort to annotate on a per-building basis,
even just to segment these sensor names into meaningful chunks. In this paper,
we propose a fully automated self-supervised framework, Sensei, which can learn
to segment sensor names without any human annotation. Specifically, we employ a
neural language model to capture the underlying sensor naming structure and
then induce self-supervision based on information from the language model to
build the segmentation model. Extensive experiments on five real-world
buildings comprising thousands of sensors demonstrate the superiority of Sensei
over baseline methods.
| 2021 |
Computation and Language
|
NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons
Learned
|
We review the EfficientQA competition from NeurIPS 2020. The competition
focused on open-domain question answering (QA), where systems take natural
language questions as input and return natural language answers. The aim of the
competition was to build systems that can predict correct answers while also
satisfying strict on-disk memory budgets. These memory budgets were designed to
encourage contestants to explore the trade-off between storing retrieval
corpora or the parameters of learned models. In this report, we describe the
motivation and organization of the competition, review the best submissions,
and analyze system predictions to inform a discussion of evaluation for
open-domain QA.
| 2021 |
Computation and Language
|
De-identifying Australian Hospital Discharge Summaries: An End-to-End
Framework using Ensemble of Deep Learning Models
|
Electronic Medical Records (EMRs) contain clinical narrative text that is of
great potential value to medical researchers. However, this information is
mixed with Personally Identifiable Information (PII) that presents risks to
patient and clinician confidentiality. This paper presents an end-to-end
de-identification framework to automatically remove PII from Australian hospital
discharge summaries. Our corpus included 600 hospital discharge summaries which
were extracted from the EMRs of two principal referral hospitals in Sydney,
Australia. Our end-to-end de-identification framework consists of three
components: 1) Annotation: labelling of PII in the 600 hospital discharge
summaries using five pre-defined categories: person, address, date of birth,
individual identification number, phone/fax number; 2) Modelling: training six
named entity recognition (NER) deep learning base-models on balanced and
imbalanced datasets; and evaluating ensembles that combine all six base-models,
the three base-models with the best F1 scores and the three base-models with
the best recall scores respectively, using token-level majority voting and
stacking methods; and 3) De-identification: removing PII from the hospital
discharge summaries. Our results showed that the ensemble model combined using
the stacking Support Vector Machine (SVM) method on the three base-models with
the best F1 scores achieved excellent results with an F1 score of 99.16% on the
test set of our corpus. We also evaluated the robustness of our modelling
component on the 2014 i2b2 de-identification dataset. Our ensemble model, which
uses the token-level majority voting method on all six base-models, achieved the
highest F1 score of 96.24% at strict entity matching and the highest F1 score
of 98.64% at binary token-level matching compared to two state-of-the-art
methods.
| 2022 |
Computation and Language
|
Bilingual Lexicon Induction via Unsupervised Bitext Construction and
Word Alignment
|
Bilingual lexicons map words in one language to their translations in
another, and are typically induced by learning linear projections to align
monolingual word embedding spaces. In this paper, we show it is possible to
produce much higher quality lexicons with methods that combine (1) unsupervised
bitext mining and (2) unsupervised word alignment. Directly applying a pipeline
that uses recent algorithms for both subproblems significantly improves induced
lexicon quality, and further gains are possible by learning to filter the
resulting lexical entries, with both unsupervised and semi-supervised schemes.
Our final model outperforms the state of the art on the BUCC 2020 shared task
by 14 $F_1$ points averaged over 12 language pairs, while also providing a more
interpretable approach that allows for rich reasoning of word meaning in
context. Further analysis of our output and the standard reference lexicons
suggests they are of comparable quality, and new benchmarks may be needed to
measure further progress on this task.
| 2021 |
Computation and Language
|
Graphmax for Text Generation
|
In text generation, a large language model (LM) chooses each new word based
only on its previously generated context, using the softmax function.
However, the co-occurrence statistics of words in a scene-specific corpus are
valuable for choosing the next word, as they can help keep the topic of the
generated text aligned with the current task. To fully exploit this
co-occurrence information, we propose a graphmax function for task-specific
text generation. Using graph-based regularization, graphmax enables the final
word choice to be determined by both the global knowledge from the LM and the
local knowledge from the scene-specific corpus. The traditional softmax
function is regularized with a graph total variation (GTV) term, which
incorporates the local knowledge into the LM and encourages the model to
consider the statistical relationships between words in the scene-specific
corpus. The proposed graphmax is versatile and can be readily plugged into any
large pre-trained LM for text generation and machine translation. Through
extensive experiments, we demonstrate that the new GTV-based regularization
improves performance on various natural language processing tasks in
comparison with existing methods. Moreover, through human experiments, we
observe that participants can easily distinguish between text generated with
graphmax and text generated with softmax.
| 2023 |
Computation and Language
|
DISCOS: Bridging the Gap between Discourse Knowledge and Commonsense
Knowledge
|
Commonsense knowledge is crucial for artificial intelligence systems to
understand natural language. Previous commonsense knowledge acquisition
approaches typically rely on human annotations (for example, ATOMIC) or text
generation models (for example, COMET). Human annotation could provide
high-quality commonsense knowledge, yet its high cost often results in
relatively small scale and low coverage. On the other hand, generation models
have the potential to automatically generate more knowledge. Nonetheless,
machine learning models often fit the training data well and thus struggle to
generate high-quality novel knowledge. To address the limitations of previous
approaches, in this paper, we propose an alternative commonsense knowledge
acquisition framework DISCOS (from DIScourse to COmmonSense), which
automatically populates expensive complex commonsense knowledge to more
affordable linguistic knowledge resources. Experiments demonstrate that we can
successfully convert discourse knowledge about eventualities from ASER, a
large-scale discourse knowledge graph, into if-then commonsense knowledge
defined in ATOMIC without any additional annotation effort. Further study
suggests that DISCOS significantly outperforms previous supervised approaches
in terms of novelty and diversity with comparable quality. In total, we can
acquire 3.4M ATOMIC-like inferential commonsense knowledge by populating ATOMIC
on the core part of ASER. Codes and data are available at
https://github.com/HKUST-KnowComp/DISCOS-commonsense.
| 2021 |
Computation and Language
|
How Do Your Biomedical Named Entity Recognition Models Generalize to
Novel Entities?
|
The amount of biomedical literature on new biomedical concepts is rapidly
increasing, which necessitates a reliable biomedical named entity recognition
(BioNER) model for identifying new and unseen entity mentions. However, it is
questionable whether existing models can effectively handle them. In this work,
we systematically analyze the three types of recognition abilities of BioNER
models: memorization, synonym generalization, and concept generalization. We
find that although the current best models achieve state-of-the-art overall
performance on benchmarks, they have limitations in identifying
synonyms and new biomedical concepts, indicating they are overestimated in
terms of their generalization abilities. We also investigate failure cases of
models and identify several difficulties in recognizing unseen mentions in
biomedical literature as follows: (1) models tend to exploit dataset biases,
which hinders the models' abilities to generalize, and (2) several biomedical
names have novel morphological patterns with weak name regularity, and models
fail to recognize them. We apply a statistics-based debiasing method to our
problem as a simple remedy and show the improvement in generalization to unseen
mentions. We hope that our analyses and findings will facilitate
further research into the generalization capabilities of NER models in a domain
where their reliability is of utmost importance.
| 2022 |
Computation and Language
|
Unifying Discourse Resources with Dependency Framework
|
For text-level discourse analysis, there are various discourse schemes but
relatively few labeled data, because discourse research is still immature and
it is labor-intensive to annotate the inner logic of a text. In this paper, we
attempt to unify multiple Chinese discourse corpora under different annotation
schemes with a discourse dependency framework by designing semi-automatic methods
to convert them into dependency structures. We also implement several benchmark
dependency parsers and study how they can leverage the unified data to
improve performance.
| 2021 |
Computation and Language
|
UnitedQA: A Hybrid Approach for Open Domain Question Answering
|
To date, most recent work under the retrieval-reader framework for
open-domain QA focuses exclusively on either extractive or generative readers.
In this paper, we study a hybrid approach for leveraging the strengths of both
models. We apply novel techniques to enhance both extractive and generative
readers built upon recent pretrained neural language models, and find that
proper training methods can provide large improvement over previous
state-of-the-art models. We demonstrate that a simple hybrid approach that
combines answers from both readers can efficiently take advantage of
extractive and generative answer inference strategies and outperforms single
models as well as homogeneous ensembles. Our approach outperforms previous
state-of-the-art models by 3.3 and 2.7 points in exact match on
NaturalQuestions and TriviaQA respectively.
| 2021 |
Computation and Language
|
Transformer based Automatic COVID-19 Fake News Detection System
|
Recent rapid technological advancements in online social networks such as
Twitter have led to a great increase in the spread of false information and fake
news. Misinformation is especially prevalent in the ongoing coronavirus disease
(COVID-19) pandemic, leading to individuals accepting bogus and potentially
deleterious claims and articles. Quick detection of fake news can reduce the
spread of panic and confusion among the public. For our analysis in this paper,
we report a methodology to analyze the reliability of information shared on
social media pertaining to the COVID-19 pandemic. Our best approach is based on
an ensemble of three transformer models (BERT, ALBERT, and XLNET) for detecting
fake news. This model was trained and evaluated in the context of the
ConstraintAI 2021 shared task COVID19 Fake News Detection in English. Our
system obtained an F1-score of 0.9855 on the test set and ranked 5th among 160 teams.
| 2021 |
Computation and Language
|
Prefix-Tuning: Optimizing Continuous Prompts for Generation
|
Fine-tuning is the de facto way to leverage large pretrained language models
to perform downstream tasks. However, it modifies all the language model
parameters and therefore necessitates storing a full copy for each task. In
this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning
for natural language generation tasks, which keeps language model parameters
frozen, but optimizes a small continuous task-specific vector (called the
prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent
tokens to attend to this prefix as if it were "virtual tokens". We apply
prefix-tuning to GPT-2 for table-to-text generation and to BART for
summarization. We find that by learning only 0.1\% of the parameters,
prefix-tuning obtains comparable performance in the full data setting,
outperforms fine-tuning in low-data settings, and extrapolates better to
examples with topics unseen during training.
| 2021 |
Computation and Language
|
On Explaining Your Explanations of BERT: An Empirical Study with
Sequence Classification
|
BERT, as one of the pretrained language models, has attracted much attention
in recent years for setting new benchmarks across GLUE tasks via fine-tuning.
One pressing issue is to open up the black box and explain the decision making
of BERT. A number of attribution techniques have been proposed to explain BERT
models, but they are often limited to sequence-to-sequence tasks. In this paper, we
adapt existing attribution methods to explaining the decision making of BERT in
sequence classification tasks. We conduct extensive analyses of four existing
attribution methods by applying them to four different datasets in sentiment
analysis. We compare the reliability and robustness of each method via various
ablation studies. Furthermore, we test whether attribution methods explain
generalized semantics across semantically similar tasks. Our work provides
solid guidance for using attribution methods to explain the decision making of
BERT for downstream classification tasks.
| 2021 |
Computation and Language
|
BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource
Language Understanding Evaluation in Bangla
|
In this work, we introduce BanglaBERT, a BERT-based Natural Language
Understanding (NLU) model pretrained in Bangla, a widely spoken yet
low-resource language in the NLP literature. To pretrain BanglaBERT, we collect
27.5 GB of Bangla pretraining data (dubbed `Bangla2B+') by crawling 110 popular
Bangla sites. We introduce two downstream task datasets on natural language
inference and question answering and benchmark on four diverse NLU tasks
covering text classification, sequence labeling, and span prediction. In the
process, we bring them under the first-ever Bangla Language Understanding
Benchmark (BLUB). BanglaBERT achieves state-of-the-art results outperforming
multilingual and monolingual models. We are making the models, datasets, and a
leaderboard publicly available at https://github.com/csebuetnlp/banglabert to
advance Bangla NLP.
| 2022 |
Computation and Language
|
Subformer: Exploring Weight Sharing for Parameter Efficiency in
Generative Transformers
|
Transformers have shown improved performance when compared to previous
architectures for sequence processing such as RNNs. Despite their sizeable
performance gains, as recently suggested, the model is computationally
expensive to train and has a high parameter budget. In light of this, we
explore parameter-sharing methods in Transformers with a specific focus on
generative models. We perform an analysis of different parameter
sharing/reduction methods and develop the Subformer. Our model combines
sandwich-style parameter sharing, which overcomes naive cross-layer parameter
sharing in generative models, and self-attentive embedding factorization
(SAFE). Experiments on machine translation, abstractive summarization and
language modeling show that the Subformer can outperform the Transformer even
when using significantly fewer parameters.
| 2021 |
Computation and Language
|
Code Generation from Natural Language with Less Prior and More
Monolingual Data
|
Training datasets for semantic parsing are typically small due to the higher
expertise required for annotation than in most other NLP tasks. As a result,
models for this application usually need additional prior knowledge to be built
into the architecture or algorithm. The increased dependency on human experts
hinders automation and raises the development and maintenance costs in
practice. This work investigates whether a generic transformer-based seq2seq
model can achieve competitive performance with minimal code-generation-specific
inductive bias design. By exploiting a relatively sizeable monolingual corpus
of the target programming language, which is cheap to mine from the web, we
achieved 81.03% exact match accuracy on Django and 32.57 BLEU score on CoNaLa.
Both are SOTA to the best of our knowledge. This positive evidence highlights a
potentially easier path toward building accurate semantic parsers in practice.
| 2021 |
Computation and Language
|
Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and
Improving Models
|
While counterfactual examples are useful for analysis and training of NLP
models, current generation methods either rely on manual labor to create very
few counterfactuals, or only instantiate limited types of perturbations such as
paraphrases or word substitutions. We present Polyjuice, a general-purpose
counterfactual generator that allows for control over perturbation types and
locations, trained by finetuning GPT-2 on multiple datasets of paired
sentences. We show that Polyjuice produces diverse sets of realistic
counterfactuals, which in turn are useful in various distinct applications:
improving training and evaluation on three different tasks (with around 70%
less annotation effort than manual generation), augmenting state-of-the-art
explanation techniques, and supporting systematic counterfactual error analysis
by revealing behaviors easily missed by human experts.
| 2021 |
Computation and Language
|
Rider: Reader-Guided Passage Reranking for Open-Domain Question
Answering
|
Current open-domain question answering systems often follow a
Retriever-Reader architecture, where the retriever first retrieves relevant
passages and the reader then reads the retrieved passages to form an answer. In
this paper, we propose a simple and effective passage reranking method, named
Reader-guIDEd Reranker (RIDER), which does not involve training and reranks the
retrieved passages solely based on the top predictions of the reader before
reranking. We show that RIDER, despite its simplicity, achieves 10 to 20 point
absolute gains in top-1 retrieval accuracy and 1 to 4 point Exact Match (EM) gains
without refining the retriever or reader. In addition, RIDER, without any
training, outperforms state-of-the-art transformer-based supervised rerankers.
Remarkably, RIDER achieves 48.3 EM on the Natural Questions dataset and 66.4 EM
on the TriviaQA dataset when only 1,024 tokens (7.8 passages on average) are
used as the reader input after passage reranking.
| 2021 |
Computation and Language
|
Analyzing Commonsense Emergence in Few-shot Knowledge Models
|
Recently, commonsense knowledge models - pretrained language models (LMs)
fine-tuned on knowledge graph (KG) tuples - showed that considerable amounts of
commonsense knowledge can be encoded in the parameters of large language
models. However, as parallel studies show that LMs are poor hypothesizers of
declarative commonsense relationships on their own, it remains unclear whether
this knowledge is learned during pretraining or from fine-tuning on KG
examples. To investigate this question, we train commonsense knowledge models
in few-shot settings to study the emergence of their commonsense representation
abilities. Our results show that commonsense knowledge models can rapidly adapt
from limited examples, indicating that KG fine-tuning serves to learn an
interface to encoded knowledge learned during pretraining. Importantly, our
analysis of absolute, angular, and distributional parameter changes during
few-shot fine-tuning provides novel insights into how this interface is
learned.
| 2021 |
Computation and Language
|
Modeling Fine-Grained Entity Types with Box Embeddings
|
Neural entity typing models typically represent fine-grained entity types as
vectors in a high-dimensional space, but such spaces are not well-suited to
modeling these types' complex interdependencies. We study the ability of box
embeddings, which embed concepts as d-dimensional hyperrectangles, to capture
hierarchies of types even when these relationships are not defined explicitly
in the ontology. Our model represents both types and entity mentions as boxes.
Each mention and its context are fed into a BERT-based model to embed that
mention in our box space; essentially, this model leverages typological clues
present in the surface text to hypothesize a type representation for the
mention. Box containment can then be used to derive both the posterior
probability of a mention exhibiting a given type and the conditional
probability relations between types themselves. We compare our approach with a
vector-based typing model and observe state-of-the-art performance on several
entity typing benchmarks. In addition to competitive typing performance, our
box-based model shows better performance in prediction consistency (predicting
a supertype and a subtype together) and confidence (i.e., calibration),
demonstrating that the box-based model captures the latent type hierarchies
better than the vector-based model does.
| 2021 |
Computation and Language
|
On-the-Fly Attention Modulation for Neural Generation
|
Despite considerable advancements with deep neural language models (LMs),
neural text generation still suffers from degeneration: the generated text is
repetitive, generic, self-contradictory, and often lacks commonsense. Our
analyses on sentence-level attention patterns in LMs reveal that neural
degeneration may be associated with insufficient learning of task-specific
characteristics by the attention mechanism. This finding motivates on-the-fly
attention modulation -- a simple but effective method that enables the
injection of priors into attention computation during inference. Automatic and
human evaluation results on three text generation benchmarks demonstrate that
attention modulation helps LMs generate text with enhanced fluency, creativity,
and commonsense reasoning, in addition to significantly reducing sentence-level
repetition.
| 2021 |
Computation and Language
|
RiddleSense: Reasoning about Riddle Questions Featuring Linguistic
Creativity and Commonsense Knowledge
|
Question: I have five fingers but I am not alive. What am I? Answer: a glove.
Answering such a riddle-style question is a challenging cognitive process, in
that it requires complex commonsense reasoning abilities, an understanding of
figurative language, and counterfactual reasoning skills, which are all
important abilities for advanced natural language understanding (NLU). However,
there are currently no dedicated datasets aiming to test these abilities.
Herein, we present RiddleSense, a new multiple-choice question answering task,
which comes with the first large dataset (5.7k examples) for answering
riddle-style commonsense questions. We systematically evaluate a wide range of
models over the challenge, and point out that there is a large gap between the
best supervised model and human performance -- suggesting intriguing future
research in the direction of higher-order commonsense reasoning and linguistic
creativity towards building advanced NLU systems.
| 2021 |
Computation and Language
|
Investigating Memorization of Conspiracy Theories in Text Generation
|
The adoption of natural language generation (NLG) models can leave
individuals vulnerable to the generation of harmful information memorized by
the models, such as conspiracy theories. While previous studies examine
conspiracy theories in the context of social media, they have not evaluated
their presence in the new space of generative language models. In this work, we
investigate the capability of language models to generate conspiracy theory
text. Specifically, we aim to answer: can we test pretrained generative
language models for the memorization and elicitation of conspiracy theories
without access to the model's training data? We highlight the difficulties of
this task and discuss it in the context of memorization, generalization, and
hallucination. Utilizing a new dataset consisting of conspiracy theory topics
and machine-generated conspiracy theories helps us discover that many
conspiracy theories are deeply rooted in the pretrained language models. Our
experiments demonstrate a relationship between model parameters such as size
and temperature and their propensity to generate conspiracy theory text. These
results indicate the need for a more thorough review of NLG applications before
release and an in-depth discussion of the drawbacks of memorization in
generative language models.
| 2021 |
Computation and Language
|
What all do audio transformer models hear? Probing Acoustic
Representations for Language Delivery and its Structure
|
In recent times, BERT based transformer models have become an inseparable
part of the 'tech stack' of text processing models. Similar progress is being
observed in the speech domain with a multitude of models observing
state-of-the-art results by using audio transformer models to encode speech.
This raises the question of what these audio transformer models are learning.
Moreover, the standard methodology is to choose the last-layer
embedding for any downstream task, but is it the optimal choice? We try to
answer these questions for two recent audio transformer models, Mockingjay
and wav2vec 2.0. We compare them on a comprehensive set of language delivery
and structure features including audio, fluency and pronunciation features.
Additionally, we probe the audio models' understanding of textual surface,
syntax, and semantic features and compare them to BERT. We do this over
exhaustive settings for native, non-native, synthetic, read and spontaneous
speech datasets.
| 2021 |
Computation and Language
|
A Robust and Domain-Adaptive Approach for Low-Resource Named Entity
Recognition
|
Recently, building reliable named entity recognition (NER) systems using
limited annotated data has attracted much attention. Nearly all existing
works heavily rely on domain-specific resources, such as external lexicons and
knowledge bases. However, such domain-specific resources are often not
available; meanwhile, it is difficult and expensive to construct them,
which has become a key obstacle to wider adoption. To tackle this problem, in
this work, we propose a novel robust and domain-adaptive approach RDANER for
low-resource NER, which only uses cheap and easily obtainable resources.
Extensive experiments on three benchmark datasets demonstrate that our approach
achieves the best performance when only using cheap and easily obtainable
resources, and delivers competitive results against state-of-the-art methods
that use domain-specific resources which are difficult to obtain. All our code and
corpora can be found at https://github.com/houking-can/RDANER.
| 2020 |
Computation and Language
|
Multitask Learning for Class-Imbalanced Discourse Classification
|
Small class-imbalanced datasets, common in many high-level semantic tasks
like discourse analysis, present a particular challenge to current
deep-learning architectures. In this work, we perform an extensive analysis on
sentence-level classification approaches for the News Discourse dataset, one of
the largest high-level semantic discourse datasets recently published. We show
that a multitask approach can improve Micro F1-score by 7% over current
state-of-the-art benchmarks, due in part to label corrections across tasks,
which improve performance for underrepresented classes. We also offer a
comparative review of additional techniques proposed to address resource-poor
problems in NLP, and show that none of these approaches can improve
classification accuracy in such a setting.
| 2021 |
Computation and Language
|
VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation
|
We introduce VoxPopuli, a large-scale multilingual corpus providing 100K
hours of unlabelled speech data in 23 languages. It is the largest open dataset to
date for unsupervised representation learning as well as semi-supervised
learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16
languages and their aligned oral interpretations into 5 other languages
totaling 5.1K hours. We provide speech recognition baselines and validate the
versatility of VoxPopuli unlabelled data in semi-supervised learning under
challenging out-of-domain settings. We will release the corpus at
https://github.com/facebookresearch/voxpopuli under an open license.
| 2021 |
Computation and Language
|
Which Linguist Invented the Lightbulb? Presupposition Verification for
Question-Answering
|
Many Question-Answering (QA) datasets contain unanswerable questions, but
their treatment in QA systems remains primitive. Our analysis of the Natural
Questions (Kwiatkowski et al. 2019) dataset reveals that a substantial portion
of unanswerable questions ($\sim$21%) can be explained based on the presence of
unverifiable presuppositions. We discuss the shortcomings of current models in
handling such questions, and describe how an improved system could handle them.
Through a user preference study, we demonstrate that the oracle behavior of our
proposed system that provides responses based on presupposition failure is
preferred over the oracle behavior of existing QA systems. Then we discuss how
our proposed system could be implemented, presenting a novel framework that
breaks down the problem into three steps: presupposition generation,
presupposition verification and explanation generation. We report our progress
in tackling each subproblem, and present a preliminary approach to integrating
these steps into an existing QA system. We find that adding presuppositions and
their verifiability to an existing model yields modest gains in downstream
performance and unanswerability detection. The biggest bottleneck is the
verification component, which needs to be substantially improved for the
integrated system to approach ideal behavior -- even transfer from the best
entailment models currently falls short.
| 2021 |
Computation and Language
|
End-to-end Semantic Role Labeling with Neural Transition-based Model
|
End-to-end semantic role labeling (SRL) has received increasing
interest. It jointly performs the two subtasks of SRL: predicate identification and
argument role labeling. Recent work is mostly focused on graph-based
neural models, while the transition-based framework with neural networks, which
has been widely used in a number of closely related tasks, has not yet been studied
for the joint task. In this paper, we present the first work on
transition-based neural models for end-to-end SRL. Our transition model
incrementally discovers all sentential predicates as well as their arguments by
a set of transition actions. The actions of the two subtasks are executed
mutually to allow full interaction. In addition, we suggest high-order compositions to
extract non-local features, which can further enhance the proposed transition
model. Experimental results on CoNLL09 and Universal Proposition Bank show
that our final model produces state-of-the-art performance while remaining
highly efficient in decoding. We also conduct detailed experimental
analysis for a deep understanding of our proposed model.
| 2021 |
Computation and Language
|
Lex-BERT: Enhancing BERT based NER with lexicons
|
In this work, we present Lex-BERT, which incorporates the lexicon
information into Chinese BERT for named entity recognition (NER) tasks in a
natural manner. Instead of using word embeddings and a newly designed
transformer layer as in FLAT, we identify the boundary of words in the
sentences using special tokens, and the modified sentence will be encoded
directly by BERT. Our model does not introduce any new parameters and is more
efficient than FLAT. In addition, we do not require any word embeddings
accompanying the lexicon collection. Experiments on Ontonotes and ZhCrossNER
show that our model outperforms FLAT and other baselines.
| 2021 |
Computation and Language
|
Superbizarre Is Not Superb: Derivational Morphology Improves BERT's
Interpretation of Complex Words
|
How does the input segmentation of pretrained language models (PLMs) affect
their interpretations of complex words? We present the first study
investigating this question, taking BERT as the example PLM and focusing on its
semantic representations of English derivatives. We show that PLMs can be
interpreted as serial dual-route models, i.e., the meanings of complex words
are either stored or else need to be computed from the subwords, which implies
that maximally meaningful input tokens should allow for the best generalization
on new words. This hypothesis is confirmed by a series of semantic probing
tasks on which DelBERT (Derivation leveraging BERT), a model with derivational
input segmentation, substantially outperforms BERT with WordPiece segmentation.
Our results suggest that the generalization capabilities of PLMs could be
further improved if a morphologically-informed vocabulary of input tokens were
used.
| 2021 |
Computation and Language
|
CDLM: Cross-Document Language Modeling
|
We introduce a new pretraining approach geared for multi-document language
modeling, incorporating two key ideas into the masked language modeling
self-supervised objective. First, instead of considering documents in
isolation, we pretrain over sets of multiple related documents, encouraging the
model to learn cross-document relationships. Second, we improve over recent
long-range transformers by introducing dynamic global attention that has access
to the entire input to predict masked tokens. We release CDLM (Cross-Document
Language Model), a new general language model for the multi-document setting that
can be easily applied to downstream tasks. Our extensive analysis shows that
both ideas are essential for the success of CDLM, and work in synergy to set
new state-of-the-art results for several multi-text tasks. Code and models are
available at https://github.com/aviclu/CDLM.
| 2021 |
Computation and Language
|
End-to-End Training of Neural Retrievers for Open-Domain Question
Answering
|
Recent work on training neural retrievers for open-domain question answering
(OpenQA) has employed both supervised and unsupervised approaches. However, it
remains unclear how unsupervised and supervised methods can be used most
effectively for neural retrievers. In this work, we systematically study
retriever pre-training. We first propose an approach of unsupervised
pre-training with the Inverse Cloze Task and masked salient spans, followed by
supervised finetuning using question-context pairs. This approach leads to
absolute gains of 2+ points over the previous best result in the top-20
retrieval accuracy on Natural Questions and TriviaQA datasets.
We also explore two approaches for end-to-end supervised training of the
reader and retriever components in OpenQA models. In the first approach, the
reader considers each retrieved document separately while in the second
approach, the reader considers all the retrieved documents together. Our
experiments demonstrate the effectiveness of these approaches as we obtain new
state-of-the-art results. On the Natural Questions dataset, we obtain a top-20
retrieval accuracy of 84, an improvement of 5 points over the recent DPR model.
In addition, we achieve good results on answer extraction, outperforming recent
models like REALM and RAG by 3+ points. We further scale up end-to-end training
to large models and show consistent gains in performance over smaller models.
| 2021 |
Computation and Language
|
Substructure Substitution: Structured Data Augmentation for NLP
|
We study a family of data augmentation methods, substructure substitution
(SUB2), for natural language processing (NLP) tasks. SUB2 generates new
examples by substituting substructures (e.g., subtrees or subsequences) with
ones with the same label, which can be applied to many structured NLP tasks
such as part-of-speech tagging and parsing. For more general tasks (e.g., text
classification) which do not have explicitly annotated substructures, we
present variations of SUB2 based on constituency parse trees, introducing
structure-aware data augmentation methods to general NLP tasks. In most cases,
training on the dataset augmented by SUB2 achieves better performance than
training on the original training set. Further experiments show that SUB2 has
more consistent performance than other investigated augmentation methods,
across different tasks and sizes of the seed dataset.
| 2021 |
Computation and Language
|
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
|
In this paper, we generalize text infilling (e.g., masked language models) by
proposing Sequence Span Rewriting (SSR) as a self-supervised
sequence-to-sequence (seq2seq) pre-training objective. SSR provides more
fine-grained learning signals for text representations by supervising the model
to rewrite imperfect spans to ground truth, and it is more consistent than text
infilling with many downstream seq2seq tasks that rewrite a source sentence
into a target sentence. Our experiments with T5 models on various seq2seq tasks
show that SSR can substantially improve seq2seq pre-training. Moreover, we
observe that SSR is especially helpful for improving the pre-training of a small seq2seq
model with a powerful imperfect span generator, which indicates a new
perspective of transferring knowledge from a large model to a smaller model for
seq2seq pre-training.
| 2021 |
Computation and Language
|
KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense
Generation
|
We present Knowledge Enhanced Multimodal BART (KM-BART), which is a
Transformer-based sequence-to-sequence model capable of reasoning about
commonsense knowledge from multimodal inputs of images and texts. We adapt the
generative BART architecture to a multimodal model with visual and textual
inputs. We further develop novel pretraining tasks to improve the model
performance on the Visual Commonsense Generation (VCG) task. In particular, our
pretraining task of Knowledge-based Commonsense Generation (KCG) boosts model
performance on the VCG task by leveraging commonsense knowledge from a large
language model pretrained on external commonsense knowledge graphs. To the best
of our knowledge, we are the first to propose a dedicated task for improving
model performance on the VCG task. Experimental results show that our model
reaches state-of-the-art performance on the VCG task by applying these novel
pretraining tasks.
| 2021 |
Computation and Language
|
Learning to Generate Task-Specific Adapters from Task Description
|
Pre-trained text-to-text transformers such as BART have achieved impressive
performance across a range of NLP tasks. Recent studies further show that they
can learn to generalize to novel tasks, by including task descriptions as part
of the source sequence and training the model with (source, target) examples.
At test time, these fine-tuned models can make inferences on new tasks using
the new task descriptions as part of the input. However, this approach has
potential limitations, as the model learns to solve individual (source, target)
examples (i.e., at the instance level), instead of learning to solve tasks by
taking all examples within a task as a whole (i.e., at the task level). To this
end, we introduce Hypter, a framework that improves text-to-text transformer's
generalization ability to unseen tasks by training a hypernetwork to generate
task-specific, lightweight adapters from task descriptions. Experiments on
the ZEST dataset and a synthetic SQuAD dataset demonstrate that Hypter improves
upon fine-tuning baselines. Notably, when using BART-Large as the main network,
Hypter brings an 11.3% comparative improvement on the ZEST dataset.
| 2021 |
Computation and Language
|
The Highs and Lows of Simple Lexical Domain Adaptation Approaches for
Neural Machine Translation
|
Machine translation systems are vulnerable to domain mismatch, especially in
a low-resource scenario. Out-of-domain translations are often of poor quality
and prone to hallucinations, due to exposure bias and the decoder acting as a
language model. We adopt two approaches to alleviate this problem: lexical
shortlisting restricted by IBM statistical alignments, and hypothesis
re-ranking based on similarity. The methods are computationally cheap and widely
known, but have not been extensively tested for domain adaptation. We demonstrate
success on low-resource out-of-domain test sets; however, the methods are
ineffective when there is sufficient data or the domain mismatch is too great. This is
due to both the IBM model losing its advantage over the implicitly learned
neural alignment, and issues with subword segmentation of out-of-domain words.
| 2021 |
Computation and Language
|
Assessing Emoji Use in Modern Text Processing Tools
|
Emojis have become ubiquitous in digital communication, due to their visual
appeal as well as their ability to vividly convey human emotion, among other
factors. The growing prominence of emojis in social media and other instant
messaging also leads to an increased need for systems and tools to operate on
text containing emojis. In this study, we assess this support by considering
test sets of tweets with emojis, based on which we perform a series of
experiments investigating the ability of prominent NLP and text processing
tools to adequately process them. In particular, we consider tokenization,
part-of-speech tagging, as well as sentiment analysis. Our findings show that
many tools still have notable shortcomings when operating on text containing
emojis.
| 2021 |
Computation and Language
|
Modeling Disclosive Transparency in NLP Application Descriptions
|
Broader disclosive transparency (truth and clarity in communication
regarding the function of AI systems) is widely considered desirable.
Unfortunately, it is a nebulous concept, difficult to both define and quantify.
This is problematic, as previous work has demonstrated possible trade-offs and
negative consequences to disclosive transparency, such as a confusion effect,
where "too much information" clouds a reader's understanding of what a system
description means. Disclosive transparency's subjective nature has rendered
deep study into these problems and their remedies difficult. To improve this
state of affairs, we introduce neural language model-based probabilistic
metrics to directly model disclosive transparency, and demonstrate that they
correlate with user and expert opinions of system transparency, making them a
valid objective proxy. Finally, we demonstrate the use of these metrics in a
pilot study quantifying the relationships between transparency, confusion, and
user perceptions in a corpus of real NLP system descriptions.
| 2021 |
Computation and Language
|
Coreference Resolution without Span Representations
|
The introduction of pretrained language models has reduced many complex
task-specific NLP models to simple lightweight layers. An exception to this
trend is coreference resolution, where a sophisticated task-specific model is
appended to a pretrained transformer encoder. While highly effective, the model
has a very large memory footprint -- primarily due to dynamically-constructed
span and span-pair representations -- which hinders the processing of complete
documents and the ability to train on multiple instances in a single batch. We
introduce a lightweight end-to-end coreference model that removes the
dependency on span representations, handcrafted features, and heuristics. Our
model performs competitively with the current standard model, while being
simpler and more efficient.
| 2021 |
Computation and Language
|
Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval
|
Multi-hop reasoning (i.e., reasoning across two or more documents) is a key
ingredient for NLP models that leverage large corpora to exhibit broad
knowledge. To retrieve evidence passages, multi-hop models must contend with a
fast-growing search space across the hops, represent complex queries that
combine multiple information needs, and resolve ambiguity about the best order
in which to hop between training passages. We tackle these problems via Baleen,
a system that improves the accuracy of multi-hop retrieval while learning
robustly from weak training signals in the many-hop setting. To tame the search
space, we propose condensed retrieval, a pipeline that summarizes the retrieved
passages after each hop into a single compact context. To model complex
queries, we introduce a focused late interaction retriever that allows
different parts of the same query representation to match disparate relevant
passages. Lastly, to infer the hopping dependencies among unordered training
passages, we devise latent hop ordering, a weak-supervision strategy in which
the trained retriever itself selects the sequence of hops. We evaluate Baleen
on retrieval for two-hop question answering and many-hop claim verification,
establishing state-of-the-art performance.
| 2022 |
Computation and Language
|
Few-Shot Question Answering by Pretraining Span Selection
|
In several question answering benchmarks, pretrained models have reached
human parity through fine-tuning on the order of 100,000 annotated questions and
answers. We explore the more realistic few-shot setting, where only a few
hundred training examples are available, and observe that standard models
perform poorly, highlighting the discrepancy between current pretraining
objectives and question answering. We propose a new pretraining scheme tailored
for question answering: recurring span selection. Given a passage with multiple
sets of recurring spans, we mask in each set all recurring spans but one, and
ask the model to select the correct span in the passage for each masked span.
Masked spans are replaced with a special token, viewed as a question
representation, that is later used during fine-tuning to select the answer
span. The resulting model obtains surprisingly good results on multiple
benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while
maintaining competitive performance in the high-resource setting.
| 2021 |
Computation and Language
|
Attentive Tree-structured Network for Monotonicity Reasoning
|
Many state-of-the-art neural models designed for monotonicity reasoning perform
poorly on downward inference. To address this shortcoming, we developed an
attentive tree-structured neural network. It consists of a tree-based
long short-term memory network (Tree-LSTM) with soft attention. It is designed
to model the syntactic parse tree information from the sentence pair of a
reasoning task. A self-attentive aggregator is used for aligning the
representations of the premise and the hypothesis. We present our model and
evaluate it using the Monotonicity Entailment Dataset (MED). We show that our
model outperforms existing models on MED and attempt to explain why.
| 2021 |
Computation and Language
|
An Efficient Transformer Decoder with Compressed Sub-layers
|
The large attention-based encoder-decoder network (Transformer) has become
prevalent recently due to its effectiveness. However, the high computational
complexity of its decoder raises efficiency concerns. By examining the
mathematical formulation of the decoder, we show that under some mild conditions,
the architecture can be simplified by compressing its sub-layers, the basic
building blocks of the Transformer, and achieve higher parallelism. We thereby
propose Compressed Attention Network, whose decoder layer consists of only one
sub-layer instead of three. Extensive experiments on 14 WMT machine translation
tasks show that our model is 1.42x faster with performance on par with a strong
baseline. This strong baseline is already 2x faster than the widely used
standard baseline without loss in performance.
| 2023 |
Computation and Language
|
Recoding latent sentence representations -- Dynamic gradient-based
activation modification in RNNs
|
In Recurrent Neural Networks (RNNs), encoding information in a suboptimal or
erroneous way can impact the quality of representations based on later elements
in the sequence and subsequently lead to wrong predictions and a worse model
performance. For humans, challenging cases like garden path sentences (an
instance of this being the infamous "The horse raced past the barn fell") can
lead their language understanding astray. However, they are still able to
correct their representation accordingly and recover when new information is
encountered. Inspired by this, I propose an augmentation to standard RNNs in
the form of a gradient-based correction mechanism: this way I hope to enable such
models to dynamically adapt their inner representation of a sentence, adding a
way to correct deviations as soon as they occur. This could therefore lead to
more robust models using more flexible representations, even during inference
time.
I conduct different experiments in the context of language modeling, where
the impact of using such a mechanism is examined in detail. To this end, I look
at modifications based on different kinds of time-dependent error signals and
how they influence the model performance. Furthermore, this work contains a
study of the model's confidence in its predictions during training and for
challenging test samples and the effect of the manipulation thereof. Lastly, I
also study the difference in behavior of these novel models compared to a
standard LSTM baseline and investigate error cases in detail to identify points
of future research. I show that while the proposed approach comes with
promising theoretical guarantees and an appealing intuition, it is only able to
produce minor improvements over the baseline due to challenges in its practical
application and the efficacy of the tested model variants.
| 2,021 |
Computation and Language
|
Coreference Resolution: Are the eliminated spans totally worthless?
|
Various neural-based methods have been proposed so far for joint mention
detection and coreference resolution. However, existing works on coreference
resolution are mainly dependent on filtered mention representation, while other
spans are largely neglected. In this paper, we aim at increasing the
utilization rate of data and investigating whether those eliminated spans are
totally useless, or to what extent they can improve the performance of
coreference resolution. To achieve this, we propose a mention representation
refining strategy where spans highly related to mentions are well leveraged
using a pointer network for representation enhancing. Notably, we utilize an
additional loss term in this work to encourage the diversity between entity
clusters. Experimental results on the document-level CoNLL-2012 Shared Task
English dataset show that the eliminated spans are indeed effective and that
our approach achieves competitive results compared with the previous state of
the art in coreference resolution.
| 2,021 |
Computation and Language
|
Benchmarking Knowledge-Enhanced Commonsense Question Answering via
Knowledge-to-Text Transformation
|
A fundamental ability of humans is to utilize commonsense knowledge in
language understanding and question answering. In recent years, many
knowledge-enhanced Commonsense Question Answering (CQA) approaches have been
proposed. However, it remains unclear: (1) How far can we get by exploiting
external knowledge for CQA? (2) How much potential of knowledge has been
exploited in current CQA models? (3) Which are the most promising directions
for future CQA? To answer these questions, we benchmark knowledge-enhanced CQA
by conducting extensive experiments on multiple standard CQA datasets using a
simple and effective knowledge-to-text transformation framework. Experiments
show that: (1) Our knowledge-to-text framework is effective and achieves
state-of-the-art performance on CommonsenseQA dataset, providing a simple and
strong knowledge-enhanced baseline for CQA; (2) The potential of knowledge is
still far from being fully exploited in CQA -- there is a significant
performance gap from current models to our models with golden knowledge; and
(3) Context-sensitive knowledge selection, heterogeneous knowledge
exploitation, and commonsense-rich language models are promising CQA
directions.
| 2,021 |
Computation and Language
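As a rough illustration of what a knowledge-to-text transformation might look like (a hypothetical template-based variant, not necessarily the paper's exact framework), external triples can be verbalized and prepended to the question before it is fed to a standard model:

```python
# Hypothetical template-based knowledge-to-text transformation: external
# commonsense triples are verbalized into plain sentences and prepended to the
# question so that an off-the-shelf QA model can consume them.
TEMPLATES = {
    "AtLocation": "{head} is typically found at {tail}.",
    "UsedFor": "{head} is used for {tail}.",
    "CapableOf": "{head} can {tail}.",
}

def knowledge_to_text(triples):
    """Render (head, relation, tail) triples as natural-language sentences."""
    sentences = []
    for head, relation, tail in triples:
        template = TEMPLATES.get(relation, "{head} is related to {tail}.")
        sentences.append(template.format(head=head, tail=tail))
    return " ".join(sentences)

def build_model_input(question, choices, triples):
    """Concatenate verbalized knowledge, the question and one candidate answer."""
    knowledge = knowledge_to_text(triples)
    return [f"{knowledge} Question: {question} Answer: {c}" for c in choices]

triples = [("crab", "AtLocation", "salt water"), ("crab", "CapableOf", "pinch")]
print(build_model_input("Where would you find a crab?", ["salt water", "desert"], triples))
```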
|
A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis
|
Aspect based sentiment analysis (ABSA) involves three fundamental subtasks:
aspect term extraction, opinion term extraction, and aspect-level sentiment
classification. Early works only focused on solving one of these subtasks
individually. Some recent work focused on solving a combination of two
subtasks, e.g., extracting aspect terms along with sentiment polarities or
extracting the aspect and opinion terms pair-wisely. More recently, the triple
extraction task has been proposed, i.e., extracting the (aspect term, opinion
term, sentiment polarity) triples from a sentence. However, previous approaches
fail to solve all subtasks in a unified end-to-end framework. In this paper, we
propose a complete solution for ABSA. We construct two machine reading
comprehension (MRC) problems and solve all subtasks by jointly training two
BERT-MRC models with parameter sharing. We conduct experiments on these
subtasks, and results on several benchmark datasets demonstrate the
effectiveness of our proposed framework, which significantly outperforms
existing state-of-the-art methods.
| 2,021 |
Computation and Language
|
Outline to Story: Fine-grained Controllable Story Generation from
Cascaded Events
|
Large-scale pretrained language models have shown thrilling generation
capabilities, especially when they generate consistent long text in thousands
of words with ease. However, users of these models can only control the prefix
of sentences or certain global aspects of generated text. It is challenging to
simultaneously achieve fine-grained controllability and preserve the
state-of-the-art unconditional text generation capability. In this paper, we
first propose a new task named "Outline to Story" (O2S) as a test bed for
fine-grained controllable generation of long text, which generates a
multi-paragraph story from cascaded events, i.e. a sequence of outline events
that guide subsequent paragraph generation. We then create dedicated datasets
for future benchmarks, built with state-of-the-art keyword extraction techniques.
Finally, we propose an extremely simple yet strong baseline method for the O2S
task, which fine-tunes pre-trained language models on augmented sequences of
outline-story pairs with a simple language modeling objective. Our method does
not introduce any new parameters or perform any architecture modification,
except several special tokens as delimiters to build augmented sequences.
Extensive experiments on various datasets demonstrate state-of-the-art
conditional story generation performance with our model, achieving better
fine-grained controllability and user flexibility. To our knowledge, our paper
is among the first to propose a model and to create datasets for the task of
"outline to story". Our work also instantiates research interest in
fine-grained controllable generation of open-domain long text, where
controlling inputs are represented by short text.
| 2,021 |
Computation and Language
|
Transformer-based Conditional Variational Autoencoder for Controllable
Story Generation
|
We investigate large-scale latent variable models (LVMs) for neural story
generation -- an under-explored application for open-domain long text -- with
objectives in two threads: generation effectiveness and controllability. LVMs,
especially the variational autoencoder (VAE), have achieved both effective and
controllable generation through exploiting flexible distributional latent
representations. Recently, the Transformer and its variants have achieved
remarkable effectiveness without explicit latent representation learning, and
thus lack satisfactory controllability in generation. In this paper, we
advocate reviving latent variable modeling, essentially the power of representation
learning, in the era of Transformers to enhance controllability without hurting
state-of-the-art generation effectiveness. Specifically, we integrate latent
representation vectors with a Transformer-based pre-trained architecture to
build conditional variational autoencoder (CVAE). Model components such as
encoder, decoder and the variational posterior are all built on top of
pre-trained language models -- GPT2 specifically in this paper. Experiments
demonstrate state-of-the-art conditional generation ability of our model, as
well as its excellent representation learning capability and controllability.
| 2,021 |
Computation and Language
|
How to Train Your Agent to Read and Write
|
Reading and writing research papers is one of the most essential abilities
that a qualified researcher should master. However, it is difficult for new
researchers (e.g., students) to fully grasp this ability. It would be
fascinating if we could train an intelligent agent to help people read and
summarize papers, and perhaps even discover and exploit the potential knowledge
clues to write novel papers. Although there have been existing works focusing
on summarizing (i.e., reading) the knowledge in a given text or
generating (i.e., writing) a text based on the given knowledge, the
ability of simultaneously reading and writing is still under development.
Typically, this requires an agent to fully understand the knowledge from the
given text materials and generate correct and fluent novel paragraphs, which is
very challenging in practice. In this paper, we propose a Deep ReAder-Writer
(DRAW) network, which consists of a \textit{Reader} that can extract knowledge
graphs (KGs) from input paragraphs and discover potential knowledge, a
graph-to-text \textit{Writer} that generates a novel paragraph, and a
\textit{Reviewer} that reviews the generated paragraph from three different
aspects. Extensive experiments show that our DRAW network outperforms
considered baselines and several state-of-the-art methods on AGENDA and
M-AGENDA datasets. Our code and supplementary are released at
https://github.com/menggehe/DRAW.
| 2,021 |
Computation and Language
|
CRSLab: An Open-Source Toolkit for Building Conversational Recommender
System
|
In recent years, conversational recommender system (CRS) has received much
attention in the research community. However, existing studies on CRS vary in
scenarios, goals and techniques, lacking a unified, standardized implementation
or comparison. To tackle this challenge, we propose an open-source CRS toolkit,
CRSLab, which provides a unified and extensible framework with highly-decoupled
modules to develop CRSs. Based on this framework, we collect 6 commonly-used
human-annotated CRS datasets and implement 18 models that include recent
techniques such as graph neural networks and pre-trained models. In addition, our
toolkit provides a series of automatic evaluation protocols and a human-machine
interaction interface to test and compare different CRS methods. The project
and documents are released at https://github.com/RUCAIBox/CRSLab.
| 2,021 |
Computation and Language
|
Advanced Machine Learning Techniques for Fake News (Online
Disinformation) Detection: A Systematic Mapping Study
|
Fake news has grown into a major problem for societies and also a significant
challenge for people fighting disinformation. This phenomenon plagues
democratic elections, reputations of individual persons or organizations, and
has negatively impacted citizens (e.g., during the COVID-19 pandemic in the US
or Brazil). Hence, developing effective tools to fight this phenomenon by
employing advanced Machine Learning (ML) methods poses a significant challenge.
The following paper surveys the present body of knowledge on the application
of such intelligent tools in the fight against disinformation. It starts by
showing the historical perspective and the current role of fake news in the
information war. Proposed solutions based solely on the work of experts are
analysed and the most important directions of the application of intelligent
systems in the detection of misinformation sources are pointed out.
Additionally, the paper presents some useful resources (mainly datasets useful
when assessing ML solutions for fake news detection) and provides a short
overview of the most important R&D projects related to this subject. The main
purpose of this work is to analyse the current state of knowledge in detecting
fake news; on the one hand to show possible solutions, and on the other hand to
identify the main challenges and methodological gaps to motivate future
research.
| 2,021 |
Computation and Language
|
Improving Portuguese Semantic Role Labeling with Transformers and
Transfer Learning
|
The Natural Language Processing task of determining "Who did what to whom" is
called Semantic Role Labeling. For English, recent methods based on Transformer
models have allowed for major improvements in this task over the previous state
of the art. However, for low resource languages, like Portuguese, currently
available semantic role labeling models are hindered by scarce training data.
In this paper, we explore a model architecture with only a pre-trained
Transformer-based model, a linear layer, softmax and Viterbi decoding. We
substantially improve the state-of-the-art performance in Portuguese by over 15
F1. Additionally, we improve semantic role labeling results in Portuguese
corpora by exploiting cross-lingual transfer learning using multilingual
pre-trained models, and transfer learning from dependency parsing in
Portuguese, evaluating the various proposed approaches empirically.
| 2,021 |
Computation and Language
|
Reddit Entity Linking Dataset
|
We introduce and make publicly available an entity linking dataset from
Reddit that contains 17,316 linked entities, each annotated by three human
annotators and then grouped into Gold, Silver, and Bronze to indicate
inter-annotator agreement. We analyze the different errors and disagreements
made by annotators and suggest three types of corrections to the raw data.
Finally, we tested existing entity linking models that are trained and tuned on
text from non-social media datasets. We find that, although these existing
entity linking models perform very well on their original datasets, they
perform poorly on this social media dataset. We also show that the majority of
these errors can be attributed to poor performance on the mention detection
subtask. These results indicate the need for better entity linking models that
can be applied to the enormous amount of social media text.
| 2,021 |
Computation and Language
|
I-BERT: Integer-only BERT Quantization
|
Transformer based models, like BERT and RoBERTa, have achieved
state-of-the-art results in many Natural Language Processing tasks. However,
their memory footprint, inference latency, and power consumption are
prohibitive for efficient inference at the edge, and even at the data center. While
quantization can be a viable solution for this, previous work on quantizing
Transformer-based models uses floating-point arithmetic during inference, which
cannot efficiently utilize integer-only logical units such as the recent Turing
Tensor Cores, or traditional integer-only ARM processors. In this work, we
propose I-BERT, a novel quantization scheme for Transformer based models that
quantizes the entire inference with integer-only arithmetic. Based on
lightweight integer-only approximation methods for nonlinear operations, e.g.,
GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end
integer-only BERT inference without any floating point calculation. We evaluate
our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that
for both cases, I-BERT achieves similar (and slightly higher) accuracy as
compared to the full-precision baseline. Furthermore, our preliminary
implementation of I-BERT shows a speedup of 2.4-4.0x for INT8 inference on a T4
GPU system as compared to FP32 inference. The framework has been developed in
PyTorch and has been open-sourced.
| 2,021 |
Computation and Language
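A minimal numpy sketch of the general idea behind integer-only inference (symmetric 8-bit quantization, int32 accumulation, then requantization); this is not I-BERT's kernel code and it omits the paper's integer approximations of GELU, Softmax and Layer Normalization:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization: real x is approximated by scale * q."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def int_matmul_requant(qa, sa, qb, sb, out_scale, num_bits=8):
    """Integer-only matmul sketch: accumulate in int32, then requantize the
    result. The rescale is shown with a plain float multiplier for brevity;
    real integer-only kernels would use a fixed-point multiplier and shift."""
    acc = qa @ qb                        # int32 accumulation
    multiplier = (sa * sb) / out_scale   # would be a fixed-point constant on device
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(acc * multiplier), -qmax - 1, qmax).astype(np.int32)

a = np.random.randn(4, 16).astype(np.float32)
w = np.random.randn(16, 8).astype(np.float32)
qa, sa = quantize(a)
qw, sw = quantize(w)
ref = a @ w
_, s_out = quantize(ref)                 # pick an output scale from the reference
approx = int_matmul_requant(qa, sa, qw, sw, s_out) * s_out
print("max abs error:", np.max(np.abs(ref - approx)))
```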
|
Evaluating Empathetic Chatbots in Customer Service Settings
|
Customer service is a setting that calls for empathy in live human agent
responses. Recent advances have demonstrated how open-domain chatbots can be
trained to demonstrate empathy when responding to live human utterances. We
show that a blended skills chatbot model that responds to customer queries is
more likely to resemble actual human agent responses if it is trained to
recognize emotion and exhibit appropriate empathy, than a model without such
training. For our analysis, we leverage a Twitter customer service dataset
containing several million customer<->agent dialog examples in customer service
contexts from 20 well-known brands.
| 2,021 |
Computation and Language
|
Integration of Domain Knowledge using Medical Knowledge Graph Deep
Learning for Cancer Phenotyping
|
A key component of deep learning (DL) for natural language processing (NLP)
is word embeddings. Word embeddings that effectively capture the meaning and
context of the word that they represent can significantly improve the
performance of downstream DL models for various NLP tasks. Many existing word
embedding techniques capture the context of words based on word co-occurrence
in documents and text; however, they often cannot capture broader
domain-specific relationships between concepts that may be crucial for the NLP
task at hand. In this paper, we propose a method to integrate external
knowledge from medical terminology ontologies into the context captured by word
embeddings. Specifically, we use a medical knowledge graph, such as the unified
medical language system (UMLS), to find connections between clinical terms in
cancer pathology reports. This approach aims to minimize the distance between
connected clinical concepts. We evaluate the proposed approach using a
Multitask Convolutional Neural Network (MT-CNN) to extract six cancer
characteristics -- site, subsite, laterality, behavior, histology, and grade --
from a dataset of ~900K cancer pathology reports. The results show that the
MT-CNN model which uses our domain informed embeddings outperforms the same
MT-CNN using standard word2vec embeddings across all tasks, with an improvement
in the overall micro- and macro-F1 scores by 4.97% and 22.5%, respectively.
| 2,021 |
Computation and Language
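One simple way to "minimize the distance between connected clinical concepts" is a retrofitting-style post-processing step that pulls each embedding toward its knowledge-graph neighbours; the sketch below illustrates that idea under these assumptions and is not the paper's exact method:

```python
import numpy as np

def retrofit(embeddings, edges, alpha=1.0, beta=1.0, iters=10):
    """Retrofitting-style sketch: nudge each term vector toward the average of
    its knowledge-graph neighbours while staying close to its original vector.
    `embeddings` maps term -> np.array; `edges` maps term -> list of neighbours."""
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for word, neighbours in edges.items():
            neighbours = [n for n in neighbours if n in new]
            if word not in new or not neighbours:
                continue
            neighbour_sum = np.sum([new[n] for n in neighbours], axis=0)
            # Closed-form update of the retrofitting objective for this node.
            new[word] = (alpha * embeddings[word] + beta * neighbour_sum) / (
                alpha + beta * len(neighbours)
            )
    return new

# Toy embeddings and a toy UMLS-like edge list (illustrative names only).
emb = {w: np.random.randn(50) for w in ["adenocarcinoma", "carcinoma", "fracture"]}
umls_edges = {"adenocarcinoma": ["carcinoma"], "carcinoma": ["adenocarcinoma"]}
adjusted = retrofit(emb, umls_edges)
```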
|
Reinforcement Learning based Collective Entity Alignment with Adaptive
Features
|
Entity alignment (EA) is the task of identifying the entities that refer to
the same real-world object but are located in different knowledge graphs (KGs).
For entities to be aligned, existing EA solutions treat them separately and
generate alignment results as ranked lists of entities on the other side.
Nevertheless, this decision-making paradigm fails to take into account the
interdependence among entities. Although some recent efforts mitigate this
issue by imposing the 1-to-1 constraint on the alignment process, they still
cannot adequately model the underlying interdependence and the results tend to
be sub-optimal. To fill this gap, in this work we delve into the dynamics
of the decision-making process, and offer a reinforcement learning (RL) based
model to align entities collectively. Under the RL framework, we devise the
coherence and exclusiveness constraints to characterize the interdependence and
restrict collective alignment. Additionally, to generate more precise inputs to
the RL framework, we employ representative features to capture different
aspects of the similarity between entities in heterogeneous KGs, which are
integrated by an adaptive feature fusion strategy. Our proposal is evaluated on
both cross-lingual and mono-lingual EA benchmarks and compared against
state-of-the-art solutions. The empirical results verify its effectiveness and
superiority.
| 2,021 |
Computation and Language
|
Political Depolarization of News Articles Using Attribute-aware Word
Embeddings
|
Political polarization in the US is on the rise. This polarization negatively
affects the public sphere by contributing to the creation of ideological echo
chambers. In this paper, we focus on addressing one of the factors that
contributes to this polarity: polarized media. We introduce a framework for
depolarizing news articles. Given an article on a certain topic with a
particular ideological slant (e.g., liberal or conservative), the framework
first detects polar language in the article and then generates a new article
with the polar language replaced with neutral expressions. To detect polar
words, we train a multi-attribute-aware word embedding model that is aware of
ideology and topics on 360k full-length media articles. Then, for text
generation, we propose a new algorithm called Text Annealing Depolarization
Algorithm (TADA). TADA retrieves neutral expressions from the word embedding
model that not only decrease ideological polarity but also preserve the
original argument of the text, while maintaining grammatical correctness. We
evaluate our framework by comparing the depolarized output of our model in two
modes, fully-automatic and semi-automatic, on 99 stories spanning 11 topics.
Based on feedback from 161 human testers, our framework successfully
depolarized 90.1% of paragraphs in semi-automatic mode and 78.3% of paragraphs
in fully-automatic mode. Furthermore, 81.2% of the testers agree that the
non-polar content information is well-preserved and 79% agree that
depolarization does not harm semantic correctness when they compare the
original text and the depolarized text. Our work shows that data-driven methods
can help to locate political polarity and aid in the depolarization of
articles.
| 2,021 |
Computation and Language
|
PhoNLP: A joint multi-task learning model for Vietnamese part-of-speech
tagging, named entity recognition and dependency parsing
|
We present the first multi-task learning model -- named PhoNLP -- for joint
Vietnamese part-of-speech (POS) tagging, named entity recognition (NER) and
dependency parsing. Experiments on Vietnamese benchmark datasets show that
PhoNLP produces state-of-the-art results, outperforming a single-task learning
approach that fine-tunes the pre-trained Vietnamese language model PhoBERT
(Nguyen and Nguyen, 2020) for each task independently. We publicly release
PhoNLP as an open-source toolkit under the Apache License 2.0. Although we
specify PhoNLP for Vietnamese, our PhoNLP training and evaluation command
scripts can in fact work directly for other languages that have a pre-trained
BERT-based language model and gold annotated corpora available for the three
tasks of POS tagging, NER and dependency parsing. We hope that PhoNLP can serve
as a strong baseline and useful toolkit for future NLP research and
applications not only to Vietnamese but also to other languages. Our PhoNLP is
available at: https://github.com/VinAIResearch/PhoNLP
| 2,021 |
Computation and Language
|
Local Translation Services for Neglected Languages
|
Taking advantage of computationally lightweight but high-quality translators
prompts consideration of new applications that address neglected languages.
Locally run translators for less popular languages may assist data projects
with protected or personal data that may require specific compliance checks
before posting to a public translation API, but which could render reasonable,
cost-effective solutions if done with an army of local, small-scale pair
translators. Like handling a specialist's dialect, this research illustrates
translating two historically interesting, but obfuscated languages: 1)
hacker-speak ("l33t") and 2) reverse (or "mirror") writing as practiced by
Leonardo da Vinci. The work generalizes a deep learning architecture to
translatable variants of hacker-speak with lite, medium, and hard vocabularies.
The original contribution highlights a fluent translator of hacker-speak in
under 50 megabytes and demonstrates a generator for augmenting future datasets
with greater than a million bilingual sentence pairs. The long short-term
memory, recurrent neural network (LSTM-RNN) extends previous work demonstrating
an English-to-foreign translation service built from as little as 10,000
bilingual sentence pairs. This work further solves the equivalent translation
problem in twenty-six additional (non-obfuscated) languages and rank orders
those models and their proficiency quantitatively with Italian as the most
successful and Mandarin Chinese as the most challenging. For neglected
languages, the method prototypes novel services for smaller niche translations
such as Kabyle (an Algerian dialect), which has between 5 and 7 million speakers
but which, for most enterprise translators, has not yet reached development. One
anticipates the extension of this approach to other important dialects, such as
translating technical (medical or legal) jargon and processing health records.
| 2,021 |
Computation and Language
|
On the interaction of automatic evaluation and task framing in headline
style transfer
|
An ongoing debate in the NLG community concerns the best way to evaluate
systems, with human evaluation often being considered the most reliable method,
compared to corpus-based metrics. However, tasks involving subtle textual
differences, such as style transfer, tend to be hard for humans to perform. In
this paper, we propose an evaluation method for this task based on
purposely-trained classifiers, showing that it better reflects system
differences than traditional metrics such as BLEU and ROUGE.
| 2,021 |
Computation and Language
|
Dynamic Hybrid Relation Network for Cross-Domain Context-Dependent
Semantic Parsing
|
Semantic parsing has long been a fundamental problem in natural language
processing. Recently, cross-domain context-dependent semantic parsing has
become a new focus of research. Central to the problem is the challenge of
leveraging contextual information of both natural language utterance and
database schemas in the interaction history. In this paper, we present a
dynamic graph framework that is capable of effectively modelling contextual
utterances, tokens, database schemas, and their complicated interaction as the
conversation proceeds. The framework employs a dynamic memory decay mechanism
that incorporates inductive bias to integrate enriched contextual relation
representation, which is further enhanced with a powerful reranking model. At
the time of writing, we demonstrate that the proposed framework outperforms all
existing models by large margins, achieving new state-of-the-art performance on
two large-scale benchmarks, the SParC and CoSQL datasets. Specifically, the
model attains a 55.8% question-match and 30.8% interaction-match accuracy on
SParC, and a 46.8% question-match and 17.0% interaction-match accuracy on
CoSQL.
| 2,021 |
Computation and Language
|
Personalized Food Recommendation as Constrained Question Answering over
a Large-scale Food Knowledge Graph
|
Food recommendation has become an important means to help guide users to
adopt healthy dietary habits. Previous works on food recommendation either i)
fail to consider users' explicit requirements, ii) ignore crucial health
factors (e.g., allergies and nutrition needs), or iii) do not utilize the rich
food knowledge for recommending healthy recipes. To address these limitations,
we propose a novel problem formulation for food recommendation, modeling this
task as constrained question answering over a large-scale food knowledge
base/graph (KBQA). Besides the requirements from the user query, personalized
requirements from the user's dietary preferences and health guidelines are
handled in a unified way as additional constraints to the QA system. To
validate this idea, we create a QA style dataset for personalized food
recommendation based on a large-scale food knowledge graph and health
guidelines. Furthermore, we propose a KBQA-based personalized food
recommendation framework which is equipped with novel techniques for handling
negations and numerical comparisons in the queries. Experimental results on the
benchmark show that our approach significantly outperforms non-personalized
counterparts (average 59.7% absolute improvement across various evaluation
metrics), and is able to recommend more relevant and healthier recipes.
| 2,021 |
Computation and Language
|
ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic
|
Pre-trained language models (LMs) are currently integral to many natural
language processing systems. Although multilingual LMs were also introduced to
serve many languages, these have limitations such as high inference cost and
the limited size and diversity of the non-English data involved in their
pre-training. We remedy these issues for a collection of diverse Arabic
varieties by introducing two powerful deep bidirectional transformer-based
models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a
new benchmark for multi-dialectal Arabic language understanding evaluation.
ARLUE is built using 42 datasets targeting six different task clusters,
allowing us to offer a series of standardized experiments under rich
conditions. When fine-tuned on ARLUE, our models collectively achieve new
state-of-the-art results across the majority of tasks (37 out of 48
classification tasks, on the 42 datasets). Our best model acquires the highest
ARLUE score (77.40) across all six task clusters, outperforming all other
models including XLM-R Large (~ 3.4 x larger size). Our models are publicly
available at https://github.com/UBC-NLP/marbert and ARLUE will be released
through the same repository.
| 2,021 |
Computation and Language
|
Taxonomy Completion via Triplet Matching Network
|
Automatically constructing taxonomies finds many applications in e-commerce and
web search. One critical challenge is that, as data and business scope grow in
real applications, new concepts emerge and need to be added to the existing
taxonomy. Previous approaches focus on taxonomy expansion, i.e. finding an
appropriate hypernym concept from the taxonomy for a new query concept. In this
paper, we formulate a new task, "taxonomy completion", by discovering both the
hypernym and hyponym concepts for a query. We propose Triplet Matching Network
(TMN), to find the appropriate <hypernym, hyponym> pairs for a given query
concept. TMN consists of one primal scorer and multiple auxiliary scorers.
These auxiliary scorers capture various fine-grained signals (e.g., query to
hypernym or query to hyponym semantics), and the primal scorer makes a holistic
prediction on <query, hypernym, hyponym> triplet based on the internal feature
representations of all auxiliary scorers. Also, an innovative channel-wise
gating mechanism that retains task-specific information in concept
representations is introduced to further boost model performance. Experiments
on four real-world large-scale datasets show that TMN achieves the best
performance on both taxonomy completion task and the previous taxonomy
expansion task, outperforming existing methods.
| 2,021 |
Computation and Language
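A compact PyTorch sketch of the triplet-scoring idea (auxiliary scorers for query-hypernym and query-hyponym signals, channel-wise gating, and a primal scorer over the auxiliary scorers' internal features); the dimensions and layer choices are illustrative assumptions, not the released TMN architecture:

```python
import torch
import torch.nn as nn

class TripletMatcher(nn.Module):
    """Sketch of a triplet matching scorer: auxiliary scorers judge
    <query, hypernym> and <query, hyponym> separately, a channel-wise gate
    modulates the query representation, and a primal scorer combines the
    auxiliary scorers' internal features into one triplet score."""
    def __init__(self, dim=128, hidden=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.aux_hyper = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU())
        self.aux_hypo = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU())
        self.primal = nn.Linear(2 * hidden, 1)

    def forward(self, query, hypernym, hyponym):
        gated_query = query * self.gate(query)          # channel-wise gating
        f_hyper = self.aux_hyper(torch.cat([gated_query, hypernym], dim=-1))
        f_hypo = self.aux_hypo(torch.cat([gated_query, hyponym], dim=-1))
        return self.primal(torch.cat([f_hyper, f_hypo], dim=-1)).squeeze(-1)

model = TripletMatcher()
q, hyper, hypo = (torch.randn(8, 128) for _ in range(3))
scores = model(q, hyper, hypo)   # one score per <query, hypernym, hyponym> triplet
```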
|
Deep Neural Network Based Relation Extraction: An Overview
|
Knowledge is a formal way of understanding the world, providing human-level
cognition and intelligence for next-generation artificial intelligence (AI).
One representation of knowledge is semantic relations between entities.
Relation Extraction (RE), a sub-task of information extraction, is an effective
way to acquire this important knowledge automatically and plays a vital role in
Natural Language Processing (NLP). Its purpose is to identify semantic
relations between entities in natural language text. To date, several studies
of RE have documented that techniques based on Deep Neural Networks (DNNs) have
become the prevailing approach in this research area. In particular, supervised
and distantly supervised methods based on DNNs are the most popular and
reliable solutions for RE. This article
1) introduces some general concepts, and further 2) gives a comprehensive
overview of DNNs in RE from two points of view: supervised RE, which attempts
to improve the standard RE systems, and distant supervision RE, which adopts
DNNs to design sentence encoder and de-noise method. We further 3) cover some
novel methods and recent trends as well as discuss possible future research
directions for this task.
| 2,021 |
Computation and Language
|
SF-QA: Simple and Fair Evaluation Library for Open-domain Question
Answering
|
Although open-domain question answering (QA) has drawn great attention in recent
years, it requires large amounts of resources for building the full system and
is often difficult to reproduce previous results due to complex configurations.
In this paper, we introduce SF-QA: a simple and fair evaluation framework for
open-domain QA. The SF-QA framework modularizes the open-domain QA pipeline,
which makes the task itself easily accessible and reproducible for research
groups without enough computing resources. The proposed evaluation framework is
publicly available and anyone can contribute to the code and evaluations.
| 2,021 |
Computation and Language
|
Curriculum-Meta Learning for Order-Robust Continual Relation Extraction
|
Continual relation extraction is an important task that focuses on extracting
new facts incrementally from unstructured text. Given the sequential arrival
order of the relations, this task is prone to two serious challenges, namely
catastrophic forgetting and order-sensitivity. We propose a novel
curriculum-meta learning method to tackle the above two challenges in continual
relation extraction. We combine meta learning and curriculum learning to
quickly adapt model parameters to a new task and to reduce interference of
previously seen tasks on the current task. We design a novel relation
representation learning method through the distribution of domain and range
types of relations. Such representations are utilized to quantify the
difficulty of tasks for the construction of curricula. Moreover, we also
present novel difficulty-based metrics to quantitatively measure the extent of
order-sensitivity of a given model, suggesting new ways to evaluate model
robustness. Our comprehensive experiments on three benchmark datasets show that
our proposed method outperforms the state-of-the-art techniques. The code is
available at the anonymous GitHub repository:
https://github.com/wutong8023/AAAI_CML.
| 2,021 |
Computation and Language
|
EfficientQA : a RoBERTa Based Phrase-Indexed Question-Answering System
|
State-of-the-art extractive question answering models achieve superhuman
performance on the SQuAD benchmark. Yet, they are unreasonably heavy and need
expensive GPU computing to answer questions in a reasonable time. Thus, they
cannot be used for real-world queries on hundreds of thousands of documents in
the open-domain question answering paradigm. In this paper, we explore the
possibility of transferring the natural language understanding of language models
into dense vectors representing questions and answer candidates, in order to
make the task of question-answering compatible with a simple nearest neighbor
search task. This new model, which we call EfficientQA, takes advantage of the
sequence-pair input format of BERT-based models to build meaningful dense
representations of candidate answers. The latter are extracted from the
context in a question-agnostic fashion. Our model achieves state-of-the-art
results in Phrase-Indexed Question Answering (PIQA) beating the previous
state of the art by 1.3 points in exact-match and 1.4 points in F1-score. These
results show that dense vectors are able to embed very rich semantic
representations of sequences, even though they were built from language models
not originally trained for this use case. Thus, in order to build more
resource-efficient NLP systems in the future, one possibility is to train
language models that are better adapted to building dense representations of
phrases.
| 2,021 |
Computation and Language
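The phrase-indexed setup described above reduces answering to nearest-neighbor search, which the following numpy sketch illustrates; `encode` is a random stand-in for the question-agnostic dense encoder, not a real model:

```python
import numpy as np

def encode(text, dim=256, seed=0):
    """Stand-in encoder: in practice this would be a BERT/RoBERTa-based model
    producing dense vectors for questions and candidate answer spans."""
    rng = np.random.default_rng(abs(hash((text, seed))) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Offline: encode every candidate answer span once, independently of any question.
candidate_spans = ["Barack Obama", "Honolulu", "1961", "the United States"]
index = np.stack([encode(s) for s in candidate_spans])

# Online: answering reduces to a single maximum-inner-product search.
question_vec = encode("Where was Barack Obama born?")
best = int(np.argmax(index @ question_vec))
print("predicted answer span:", candidate_spans[best])
```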
|
Order Embeddings from Merged Ontologies using Sketching
|
We give a simple, low resource method to produce order embeddings from
ontologies. Such embeddings map words to vectors so that order relations on the
words, such as hypernymy/hyponymy, are represented in a direct way. Our method
uses sketching techniques, in particular countsketch, for dimensionality
reduction. We also study methods to merge ontologies, in particular those in
medical domains, so that order relations are preserved. We give computational
results for medical ontologies and for WordNet, showing that our merging
techniques are effective and our embedding yields an accurate representation in
both generic and specialised domains.
| 2,021 |
Computation and Language
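A minimal sketch of the two ingredients named above, countsketch for dimensionality reduction plus an order relation read off from overlapping ancestor sets; this is an illustration of the idea under toy assumptions, not the paper's exact construction:

```python
import numpy as np

class CountSketch:
    """Minimal countsketch projector: each input coordinate is hashed to one of
    `width` buckets with a random sign; inner products are preserved in
    expectation, so overlaps between ancestor sets survive compression."""
    def __init__(self, input_dim, width, seed=0):
        rng = np.random.default_rng(seed)
        self.bucket = rng.integers(0, width, size=input_dim)
        self.sign = rng.choice([-1.0, 1.0], size=input_dim)
        self.width = width

    def project(self, x):
        out = np.zeros(self.width)
        np.add.at(out, self.bucket, self.sign * x)
        return out

# Toy ontology: represent each concept by the indicator vector of its ancestors.
concepts = {"entity": [], "animal": ["entity"], "dog": ["entity", "animal"]}
vocab = {c: i for i, c in enumerate(concepts)}
sketch = CountSketch(input_dim=len(vocab), width=8, seed=1)

def embed(concept):
    v = np.zeros(len(vocab))
    for a in concepts[concept] + [concept]:
        v[vocab[a]] = 1.0
    return sketch.project(v)

# Hypernymy shows up as higher overlap between (sketched) ancestor sets.
print(embed("dog") @ embed("animal"), "vs", embed("dog") @ embed("fracture") if "fracture" in concepts else embed("dog") @ embed("entity"))
```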
|
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit
Reasoning Strategies
|
A key limitation in current datasets for multi-hop reasoning is that the
required steps for answering the question are mentioned in it explicitly. In
this work, we introduce StrategyQA, a question answering (QA) benchmark where
the required reasoning steps are implicit in the question, and should be
inferred using a strategy. A fundamental challenge in this setup is how to
elicit such creative questions from crowdsourcing workers, while covering a
broad range of potential strategies. We propose a data collection procedure
that combines term-based priming to inspire annotators, careful control over
the annotator population, and adversarial filtering for eliminating reasoning
shortcuts. Moreover, we annotate each question with (1) a decomposition into
reasoning steps for answering it, and (2) Wikipedia paragraphs that contain the
answers to each step. Overall, StrategyQA includes 2,780 examples, each
consisting of a strategy question, its decomposition, and evidence paragraphs.
Analysis shows that questions in StrategyQA are short, topic-diverse, and cover
a wide range of strategies. Empirically, we show that humans perform well (87%)
on this task, while our best baseline reaches an accuracy of $\sim$66%.
| 2,021 |
Computation and Language
|
Can RNNs learn Recursive Nested Subject-Verb Agreements?
|
One of the fundamental principles of contemporary linguistics states that
language processing requires the ability to extract recursively nested tree
structures. However, it remains unclear whether and how this code could be
implemented in neural circuits. Recent advances in Recurrent Neural Networks
(RNNs), which achieve near-human performance in some language tasks, provide a
compelling model to address such questions. Here, we present a new framework to
study recursive processing in RNNs, using subject-verb agreement as a probe
into the representations of the neural network. We trained six distinct types
of RNNs on a simplified probabilistic context-free grammar designed to
independently manipulate the length of a sentence and the depth of its
syntactic tree. All RNNs generalized to subject-verb dependencies longer than
those seen during training. However, none systematically generalized to deeper
tree structures, even those with a structural bias towards learning nested trees
(i.e., stack-RNNs). In addition, our analyses revealed primacy and recency
effects in the generalization patterns of LSTM-based models, showing that these
models tend to perform well on the outer- and innermost parts of a
center-embedded tree structure, but poorly on its middle levels. Finally,
probing the internal states of the model during the processing of sentences
with nested tree structures, we found a complex encoding of grammatical
agreement information (e.g. grammatical number), in which all the information
for multiple nouns was carried by a single unit. Taken together, these
results indicate how neural networks may extract bounded nested tree
structures, without learning a systematic recursive rule.
| 2,021 |
Computation and Language
|
Multitask Learning for Emotion and Personality Detection
|
In recent years, deep learning-based automated personality trait detection
has received a lot of attention, especially now, due to the massive digital
footprints of individuals. Moreover, many researchers have demonstrated that
there is a strong link between personality traits and emotions. In this paper,
we build on the known correlation between personality traits and emotional
behaviors, and propose a novel multitask learning framework, SoGMTL, that
simultaneously predicts both of them. We also empirically evaluate and discuss
different information-sharing mechanisms between the two tasks. To ensure the
high quality of the learning process, we adopt a MAML-like framework for model
optimization. Our more computationally efficient CNN-based multitask model
achieves state-of-the-art performance across multiple well-known personality
and emotion datasets, even outperforming language-model-based approaches.
| 2,021 |
Computation and Language
|
Applying Transfer Learning for Improving Domain-Specific Search
Experience Using Query to Question Similarity
|
Search is one of the most common platforms used to seek information. However,
users mostly get overloaded with results whenever they use such a platform to
resolve their queries. Nowadays, direct answers to queries are being provided
as a part of the search experience. The question-answer (QA) retrieval process
plays a significant role in enriching the search experience. Most off-the-shelf
Semantic Textual Similarity models work fine for well-formed search queries,
but their performance degrades when applied to a domain-specific setting with
incomplete or grammatically ill-formed search queries in prevalence. In this
paper, we discuss a framework for calculating similarities between a given
input query and a set of predefined questions to retrieve the question that
best matches it. We have used it for the financial domain, but the
framework is generalized for any domain-specific search engine and can be used
in other domains as well. We use a Siamese network [6] over Long Short-Term
Memory (LSTM) [3] models to train a classifier which generates unnormalized and
normalized similarity scores for a given pair of questions. Moreover, for each
of these question pairs, we calculate three other similarity scores: cosine
similarity between their average word2vec embeddings [15], cosine similarity
between their sentence embeddings [7] generated using RoBERTa [17] and their
customized fuzzy-match score. Finally, we develop a metaclassifier using
Support Vector Machines [19] for combining these five scores to detect if a
given pair of questions is similar. We benchmark our model's performance
against existing State Of The Art (SOTA) models on Quora Question Pairs (QQP)
dataset as well as a dataset specific to the financial domain.
| 2,021 |
Computation and Language
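The final metaclassification step can be sketched as follows: the five similarity scores per question pair (Siamese-LSTM unnormalized and normalized scores, word2vec cosine, RoBERTa sentence-embedding cosine, and the fuzzy-match score) form a feature vector for an SVM. The feature values below are synthetic placeholders, not real model outputs.

```python
import numpy as np
from sklearn.svm import SVC

# Each row holds the five similarity scores for one question pair
# (synthetic placeholder values for illustration).
X_train = np.array([
    [0.9, 0.8, 0.85, 0.90, 0.7],   # similar pair
    [0.2, 0.1, 0.30, 0.20, 0.1],   # dissimilar pair
    [0.8, 0.7, 0.75, 0.80, 0.6],
    [0.3, 0.2, 0.25, 0.30, 0.2],
])
y_train = np.array([1, 0, 1, 0])

# SVM metaclassifier that combines the five scores into a similar/not-similar decision.
meta_clf = SVC(kernel="rbf", probability=True)
meta_clf.fit(X_train, y_train)

new_pair_scores = np.array([[0.85, 0.75, 0.80, 0.82, 0.65]])
print("similar" if meta_clf.predict(new_pair_scores)[0] == 1 else "not similar")
```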
|
Exploring Text-transformers in AAAI 2021 Shared Task: COVID-19 Fake News
Detection in English
|
In this paper, we describe our system for the AAAI 2021 shared task of
COVID-19 Fake News Detection in English, where we achieved the 3rd position
with the weighted F1 score of 0.9859 on the test set. Specifically, we proposed
an ensemble method of different pre-trained language models such as BERT,
Roberta, Ernie, etc. with various training strategies including
warm-up, learning rate scheduling and k-fold cross-validation. We also conduct an
extensive analysis of the samples that are not correctly classified. The code
is available at:
https://github.com/archersama/3rd-solution-COVID19-Fake-News-Detection-in-English.
| 2,021 |
Computation and Language
|
Read, Retrospect, Select: An MRC Framework to Short Text Entity Linking
|
Entity linking (EL) for the rapidly growing short text (e.g. search queries
and news titles) is critical to industrial applications. Most existing
approaches relying on adequate context for long text EL are not effective for
the concise and sparse short text. In this paper, we propose a novel framework
called Multi-turn Multiple-choice Machine reading comprehension (M3) to solve
the short text EL from a new perspective: a query is generated for each
ambiguous mention exploiting its surrounding context, and an option selection
module is employed to identify the golden entity from candidates using the
query. In this way, M3 framework sufficiently interacts limited context with
candidate entities during the encoding process, as well as implicitly considers
the dissimilarities inside the candidate bunch in the selection stage. In
addition, we design a two-stage verifier incorporated into M3 to address the
commonly encountered unlinkable-mention problem in short text. To further consider the
topical coherence and interdependence among referred entities, M3 leverages a
multi-turn fashion to deal with mentions in a sequential manner by retrospecting
historical cues. Evaluation shows that our M3 framework achieves the
state-of-the-art performance on five Chinese and English datasets for the
real-world short text EL.
| 2,021 |
Computation and Language
|
Homonym Identification using BERT -- Using a Clustering Approach
|
Homonym identification is important for WSD tasks that require coarse-grained
partitions of senses. The goal of this project is to determine whether
contextual information is sufficient for identifying a homonymous word. To
capture the context, BERT embeddings are used as opposed to Word2Vec, which
conflates senses into one vector. SemCor is leveraged to retrieve the
embeddings. Various clustering algorithms are applied to the embeddings.
Finally, the embeddings are visualized in a lower-dimensional space to
understand the feasibility of the clustering process.
| 2,021 |
Computation and Language
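The pipeline above (contextual BERT embeddings of a target word, followed by clustering) can be sketched with standard libraries; the sentences below are toy examples rather than SemCor, and the model choice is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence, word):
    """Average the contextual embeddings of the subword pieces of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]        # (seq_len, 768)
    word_ids = tokenizer.encode(word, add_special_tokens=False)
    ids = inputs["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):             # locate the word
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0).numpy()
    raise ValueError(f"{word!r} not found in sentence")

sentences = [
    "He sat on the bank of the river.",
    "The river bank was muddy after the rain.",
    "She deposited the check at the bank.",
    "The bank approved the loan yesterday.",
]
X = [word_embedding(s, "bank") for s in sentences]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)   # sentences grouped by which sense of "bank" they use
```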
|
Towards a Smart Data Processing and Storage Model
|
In several domains it is crucial to store and manipulate data whose origin
needs to be completely traceable to guarantee the consistency, trustworthiness
and reliability of the data itself, typically for ethical and legal reasons. It
is also important to guarantee that such properties are also carried further
when such data is composed and processed into new data. In this article we
present the main requirements and theoretical problems that arise in the design
of a system supporting data with such capabilities. We present an architecture
for implementing such a system as well as a prototype developed in
Pharo.
| 2,020 |
Computation and Language
|
Ask2Transformers: Zero-Shot Domain labelling with Pre-trained Language
Models
|
In this paper we present a system that exploits different pre-trained
Language Models for assigning domain labels to WordNet synsets without any kind
of supervision. Furthermore, the system is not restricted to use a particular
set of domain labels. We exploit the knowledge encoded within different
off-the-shelf pre-trained Language Models and task formulations to infer the
domain label of a particular WordNet definition. The proposed zero-shot system
achieves a new state-of-the-art on the English dataset used in the evaluation.
| 2,021 |
Computation and Language
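One plausible, minimal instantiation of zero-shot domain labelling uses an off-the-shelf NLI-based zero-shot classification pipeline over a gloss; the model name, candidate labels and gloss below are illustrative assumptions, not the paper's exact setup:

```python
from transformers import pipeline

# Zero-shot domain labelling sketch: an NLI-based classifier assigns a domain
# label to a WordNet-style definition without any supervision.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

gloss = "a round fruit with firm, white flesh and a green, red, or yellow skin"
domains = ["food", "sport", "music", "medicine", "transport"]

result = classifier(gloss, candidate_labels=domains)
print(result["labels"][0], result["scores"][0])   # most probable domain label
```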
|
A Novel Word Sense Disambiguation Approach Using WordNet Knowledge Graph
|
Various applications in computational linguistics and artificial intelligence
rely on high-performing word sense disambiguation techniques to solve
challenging tasks such as information retrieval, machine translation, question
answering, and document clustering. While text comprehension is intuitive for
humans, machines face tremendous challenges in processing and interpreting a
human's natural language. This paper presents a novel knowledge-based word
sense disambiguation algorithm, namely Sequential Contextual Similarity Matrix
Multiplication (SCSMM). The SCSMM algorithm combines semantic similarity,
heuristic knowledge, and document context to respectively exploit the merits of
local context between consecutive terms, human knowledge about terms, and a
document's main topic in disambiguating terms. Unlike other algorithms, the
SCSMM algorithm guarantees the capture of the maximum sentence context while
maintaining the terms' order within the sentence. The proposed algorithm
outperformed all other algorithms when disambiguating nouns on the combined
gold standard datasets, while demonstrating comparable results to current
state-of-the-art word sense disambiguation systems when dealing with each
dataset separately. Furthermore, the paper discusses the impact of granularity
level, ambiguity rate, sentence size, and part of speech distribution on the
performance of the proposed algorithm.
| 2,021 |
Computation and Language
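The sequential matrix-multiplication idea can be sketched numerically: similarity matrices between the senses of consecutive terms are chained, and a best sense sequence is read off with a simple max-product decode. This sketch omits the heuristic-knowledge and document-context components of SCSMM and is not the authors' implementation:

```python
import numpy as np

def disambiguate(similarity_matrices):
    """Sketch of sequential contextual similarity matrix chaining:
    `similarity_matrices[k]` holds the semantic similarity between every sense
    of term k and every sense of term k+1. Chaining the matrices propagates
    context along the sentence; a backward pass reads off one sense per term
    (a simple max-product / Viterbi-style decoding)."""
    best = [np.ones(similarity_matrices[0].shape[0])]
    back = []
    for M in similarity_matrices:
        scores = best[-1][:, None] * M        # combine context so far with M
        back.append(np.argmax(scores, axis=0))
        best.append(np.max(scores, axis=0))
    senses = [int(np.argmax(best[-1]))]
    for b in reversed(back):
        senses.append(int(b[senses[-1]]))
    return list(reversed(senses))

# Two consecutive term pairs, each with a small sense-by-sense similarity matrix.
M1 = np.array([[0.9, 0.1], [0.2, 0.3]])      # term0 senses x term1 senses
M2 = np.array([[0.8, 0.2], [0.1, 0.7]])      # term1 senses x term2 senses
print(disambiguate([M1, M2]))                 # -> [0, 0, 0]
```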
|
Effect of Word Embedding Variable Parameters on Arabic Sentiment
Analysis Performance
|
Social media platforms such as Twitter and Facebook have led to a growing
number of comments that contain users' opinions. Sentiment analysis research
deals with these comments to extract opinions that are positive or negative.
Arabic is a morphologically rich language; thus, classical techniques of
English sentiment analysis cannot be used for Arabic. Word embeddings can be
considered one of the successful methods for bridging the morphological gap in
Arabic. Many works on Arabic sentiment analysis are based on word embeddings,
but no study has focused on the variable parameters. This study discusses three
parameters (window size, vector dimension and number of negative samples) for
Arabic sentiment analysis using the DBOW and DMPV architectures. A large corpus
built from previous works is used to learn word representations and extract
features. Four binary classifiers (Logistic Regression, Decision Tree, Support
Vector Machine and Naive Bayes) are used to detect sentiment. The performance
of the classifiers is evaluated based on precision, recall and F1-score.
| 2,021 |
Computation and Language
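A gensim-based sketch of the parameter sweep described above (DBOW vs. DMPV with varying window size, vector dimension and number of negative samples); the two comments are toy placeholders for the actual corpus, and the parameter grid is illustrative:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus standing in for the Arabic comment corpus described above.
comments = [
    "الخدمة ممتازة وسريعة",          # positive
    "المنتج سيء جدا ولا انصح به",    # negative
]
docs = [TaggedDocument(words=c.split(), tags=[i]) for i, c in enumerate(comments)]

# The three variable parameters studied above: window size, vector dimension,
# and number of negative samples; dm=0 gives DBOW, dm=1 gives the DMPV architecture.
for window in (3, 5, 10):
    for vector_size in (100, 300):
        for negative in (5, 15):
            dbow = Doc2Vec(docs, dm=0, window=window, vector_size=vector_size,
                           negative=negative, min_count=1, epochs=20)
            dmpv = Doc2Vec(docs, dm=1, window=window, vector_size=vector_size,
                           negative=negative, min_count=1, epochs=20)
            # dbow.dv[0] / dmpv.dv[0] would then feed the four classifiers
            # (LR, DT, SVM, NB) to compare precision, recall and F1-score.
```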
|
LiteMuL: A Lightweight On-Device Sequence Tagger using Multi-task
Learning
|
Named entity detection and Parts-of-speech tagging are the key tasks for many
NLP applications. Although current state-of-the-art methods achieve near
perfection for long, formal, structured text, there are hindrances in deploying
these models on memory-constrained devices such as mobile phones. Furthermore,
the performance of these models is degraded when they encounter short,
informal, and casual conversations. To overcome these difficulties, we present
LiteMuL - a lightweight on-device sequence tagger that can efficiently process
the user conversations using a Multi-Task Learning (MTL) approach. To the best
of our knowledge, the proposed model is the first on-device MTL neural model
for sequence tagging. Our LiteMuL model is about 2.39 MB in size and achieved
an accuracy of 0.9433 (for NER), 0.9090 (for POS) on the CoNLL 2003 dataset.
The proposed LiteMuL not only outperforms the current state-of-the-art results
but also surpasses the results of our proposed on-device task-specific models,
with accuracy gains of up to 11% and model-size reduction by 50%-56%. Our model
is competitive with other MTL approaches for NER and POS tasks while outshining
them with a low memory footprint. We also evaluated our model on custom-curated
user conversations and observed impressive results.
| 2,021 |
Computation and Language
|
EmpLite: A Lightweight Sequence Labeling Model for Emphasis Selection of
Short Texts
|
Word emphasis in textual content aims at conveying the desired intention by
changing the size, color, typeface, style (bold, italic, etc.), and other
typographical features. The emphasized words are extremely helpful in drawing
the readers' attention to specific information that the authors wish to
emphasize. However, performing such emphasis using a soft keyboard for social
media interactions is time-consuming and has an associated learning curve. In
this paper, we propose a novel approach to automate the emphasis word detection
on short written texts. To the best of our knowledge, this work presents the
first lightweight deep learning approach for smartphone deployment of emphasis
selection. Experimental results show that our approach achieves comparable
accuracy at a much lower model size than existing models. Our best lightweight
model has a memory footprint of 2.82 MB with a matching score of 0.716 on
SemEval-2020 public benchmark dataset.
| 2,020 |
Computation and Language
|
Scalable Cross-lingual Document Similarity through Language-specific
Concept Hierarchies
|
With the ongoing growth in the number of digital articles in a wider set of
languages and the expanding use of different languages, we need annotation
methods that enable browsing multi-lingual corpora. Multilingual probabilistic
topic models have recently emerged as a group of semi-supervised machine
learning models that can be used to perform thematic explorations on
collections of texts in multiple languages. However, these approaches require
theme-aligned training data to create a language-independent space. This
constraint limits the range of scenarios that this technique can offer
solutions for and makes it difficult to scale up to situations where a
huge collection of multi-lingual documents is required during the training
phase. This paper presents an unsupervised document similarity algorithm that
does not require parallel or comparable corpora, or any other type of
translation resource. The algorithm annotates topics automatically created from
documents in a single language with cross-lingual labels and describes
documents by hierarchies of multi-lingual concepts from independently-trained
models. Experiments performed on the English, Spanish and French editions of
JCR-Acquis corpora reveal promising results on classifying and sorting
documents by similar content.
| 2,020 |
Computation and Language
|
User-friendly automatic transcription of low-resource languages:
Plugging ESPnet into Elpis
|
This paper reports on progress integrating the speech recognition toolkit
ESPnet into Elpis, a web front-end originally designed to provide access to the
Kaldi automatic speech recognition toolkit. The goal of this work is to make
end-to-end speech recognition models available to language workers via a
user-friendly graphical interface. Encouraging results are reported on (i)
development of an ESPnet recipe for use in Elpis, with preliminary results on
data sets previously used for training acoustic models with the Persephone
toolkit along with a new data set that had not previously been used in speech
recognition, and (ii) incorporating ESPnet into Elpis along with UI
enhancements and a CUDA-supported Dockerfile.
| 2,021 |
Computation and Language
|
MeisterMorxrc at SemEval-2020 Task 9: Fine-Tune Bert and Multitask
Learning for Sentiment Analysis of Code-Mixed Tweets
|
Natural language processing (NLP) has been applied to various fields
including text classification and sentiment analysis. In the shared task of
sentiment analysis of code-mixed tweets, which is a part of the SemEval-2020
competition~\cite{patwa2020sentimix}, we preprocess the datasets by replacing
emoji, deleting uncommon characters and so on, and then fine-tune a
Bidirectional Encoder Representation from Transformers (BERT) model to perform
best. After exhausting our top-3 submissions, our team MeisterMorxrc achieves an
averaged F1 score of 0.730 in this task, and our CodaLab username is
MeisterMorxrc.
| 2,021 |
Computation and Language
|
"Let's Eat Grandma": Does Punctuation Matter in Sentence Representation?
|
Neural network-based embeddings have been the mainstream approach for
creating a vector representation of the text to capture lexical and semantic
similarities and dissimilarities. In general, existing encoding methods dismiss
punctuation as insignificant information; consequently, punctuation marks are
routinely treated as predefined tokens/words or eliminated in the pre-processing phase.
However, punctuation could play a significant role in the semantics of the
sentences, as in "Let's eat\hl{,} grandma" and "Let's eat grandma". We
hypothesize that a punctuation-aware representation model would affect the
performance of the downstream tasks. Thereby, we propose a model-agnostic
method that incorporates both syntactic and contextual information to improve
the performance of the sentiment classification task. We corroborate our
findings by conducting experiments on publicly available datasets and provide
case studies that our model generates representations with respect to the
punctuation in the sentence.
| 2,022 |
Computation and Language
|
Misspelling Correction with Pre-trained Contextual Language Model
|
Spelling irregularities, now known as spelling mistakes, have existed for
several centuries. As humans, we are able to understand most of the misspelled
words based on their location in the sentence, perceived pronunciation, and
context. Unlike humans, computer systems do not possess the convenient auto
complete functionality of which human brains are capable. While many programs
provide spelling correction functionality, many systems do not take context
into account. Moreover, Artificial Intelligence systems function in the way
they are trained on. With many current Natural Language Processing (NLP)
systems trained on grammatically correct text data, many are vulnerable against
adversarial examples, yet correctly spelled text processing is crucial for
learning. In this paper, we investigate how spelling errors can be corrected in
context, with a pre-trained language model BERT. We present two experiments,
based on BERT and the edit distance algorithm, for ranking and selecting
candidate corrections. The results of our experiments demonstrated that when
combined properly, contextual word embeddings of BERT and edit distance are
capable of effectively correcting spelling errors.
| 2,021 |
Computation and Language
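A minimal sketch of the general recipe described above is given below: mask the suspect word, take the masked language model's top candidates, and re-rank them by edit distance to the original spelling. The checkpoint, candidate count, and plain Levenshtein re-ranking are assumptions rather than the paper's exact setup.

```python
# Sketch: correct a misspelled word with a masked LM plus edit distance.
# Checkpoint, top-k, and score combination are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")    # assumed checkpoint
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

sentence = "I would like to recieve the package tomorrow"
misspelled = "recieve"
masked = sentence.replace(misspelled, tokenizer.mask_token, 1)

inputs = tokenizer(masked, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

top_ids = logits.topk(50).indices.tolist()                        # assumed k=50
candidates = [tokenizer.decode([i]).strip() for i in top_ids]
# Re-rank the LM candidates by closeness to the original spelling.
best = min(candidates, key=lambda w: edit_distance(w.lower(), misspelled))
print("correction:", best)
```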
|
Leveraging Multilingual Transformers for Hate Speech Detection
|
Detecting and classifying instances of hate in social media text has been a
problem of interest in Natural Language Processing in recent years. Our
work leverages state-of-the-art Transformer language models to identify hate
speech in a multilingual setting. Capturing the intent of a post or a comment
on social media involves careful evaluation of the language style, semantic
content and additional pointers such as hashtags and emojis. In this paper, we
look at the problem of identifying whether a Twitter post is hateful and
offensive or not. We further discriminate the detected toxic content into one
of the following three classes: (a) Hate Speech (HATE), (b) Offensive (OFFN)
and (c) Profane (PRFN). With a pre-trained multilingual Transformer-based text
encoder at the base, we are able to successfully identify and classify hate
speech from multiple languages. On the provided testing corpora, we achieve
Macro F1 scores of 90.29, 81.87 and 75.40 for English, German and Hindi
respectively while performing hate speech detection and of 60.70, 53.28 and
49.74 during fine-grained classification. In our experiments, we show the
efficacy of Perspective API features for hate speech classification and the
effects of exploiting a multilingual training scheme. A feature selection study
is provided to illustrate the impact of specific features on the architecture's
classification head.
| 2,021 |
Computation and Language
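The abstract mentions a pre-trained multilingual encoder with a classification head plus Perspective API features. The sketch below fuses a multilingual encoder's first-token representation with a small vector of precomputed toxicity-style scores in a three-way classifier over HATE, OFFN and PRFN; the checkpoint, feature dimension, and fusion by concatenation are assumptions, and no Perspective API call is made here.

```python
# Sketch: multilingual encoder + extra feature vector -> 3-way classifier
# (HATE / OFFN / PRFN). Checkpoint, feature size, and fusion are assumed.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class HateSpeechClassifier(nn.Module):
    def __init__(self, encoder_name: str = "xlm-roberta-base", n_extra: int = 4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden + n_extra, 3)   # HATE, OFFN, PRFN

    def forward(self, input_ids, attention_mask, extra_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]            # first-token representation
        return self.head(torch.cat([cls, extra_features], dim=-1))

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = HateSpeechClassifier()
model.eval()

batch = tokenizer(["example tweet", "another tweet"],
                  padding=True, truncation=True, return_tensors="pt")
# Precomputed toxicity-style scores (placeholder values, one row per tweet).
extra = torch.tensor([[0.9, 0.1, 0.8, 0.2], [0.1, 0.0, 0.2, 0.1]])
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"], extra)
print(logits.argmax(dim=-1))
```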
|
Graph-of-Tweets: A Graph Merging Approach to Sub-event Identification
|
Graph structures are powerful tools for modeling the relationships between
textual elements. Graph-of-Words (GoW) has been adopted in many Natural
Language tasks to encode the association between terms. However, GoW provides
few document-level relationships in cases where the connections between
documents are also essential. For identifying sub-events on social media such
as Twitter, features at both the word and document level can be useful, as they
supply different information about the event. We propose a hybrid Graph-of-Tweets
(GoT) model which combines word- and document-level structures for modeling
tweets. To compress the large amount of raw data, we propose a graph merging method
present a novel method to construct GoT with the reduced GoW and a Mutual
Information (MI) measure. Finally, we identify maximal cliques to extract
popular sub-events. Our model showed promising results on condensing
lexical-level information and capturing keywords of sub-events.
| 2,021 |
Computation and Language
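The core graph operations, merging near-duplicate word nodes by embedding similarity and then extracting maximal cliques, can be sketched with networkx as below. The toy graph, toy vectors, similarity threshold, and plain cosine check stand in for FastText embeddings and the mutual-information weighting described in the abstract; all of them are assumptions for illustration.

```python
# Sketch: merge word nodes with very similar embeddings, then list maximal
# cliques as candidate sub-events. Toy vectors and threshold are assumed.
import numpy as np
import networkx as nx

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy word co-occurrence graph with toy embedding vectors.
vectors = {
    "vaccine": np.array([1.0, 0.1]), "vaccines": np.array([0.98, 0.12]),
    "rollout": np.array([0.2, 1.0]), "delay":    np.array([0.1, 0.9]),
}
G = nx.Graph()
G.add_edges_from([("vaccine", "rollout"), ("vaccines", "delay"),
                  ("rollout", "delay"), ("vaccine", "delay")])

# Merge node pairs whose embedding similarity exceeds a threshold (assumed 0.95).
for a, b in [(a, b) for a in list(G) for b in list(G) if a < b]:
    if a in G and b in G and cosine(vectors[a], vectors[b]) > 0.95:
        G = nx.contracted_nodes(G, a, b, self_loops=False)

# Maximal cliques in the reduced graph serve as candidate sub-events.
print(list(nx.find_cliques(G)))
```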
|
Breaking Writer's Block: Low-cost Fine-tuning of Natural Language
Generation Models
|
It is standard procedure these days to solve Information Extraction tasks by
fine-tuning large pre-trained language models. This is not the case for
generation tasks, which rely on a variety of techniques for controlled
language generation. In this paper, we describe a system that fine-tunes a
natural language generation model for the problem of solving Writer's Block.
The fine-tuning changes the conditioning to also include the right context in
addition to the left context, as well as an optional list of entities, the
size, the genre and a summary of the paragraph that the human author wishes to
generate. Our proposed fine-tuning obtains excellent results, even with a small
number of epochs and a total cost of USD 150. The system can be accessed as a
web-service, and all the code is released. A video showcasing the interface and
the model is also available.
| 2,021 |
Computation and Language
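The conditioning described above (left context, right context, optional entities, size, genre, and a paragraph summary) can be serialized into a single training string for a causal language model. The sketch below shows one possible serialization and tokenization step; the separator tags, field names, and GPT-2 checkpoint are assumptions, not the system's actual format.

```python
# Sketch: serialize the conditioning fields into one training string for a
# causal LM. Separator tags, field names, and checkpoint are assumed.
from transformers import AutoTokenizer

def build_example(left, right, target, entities=None, size="medium",
                  genre="fantasy", summary=""):
    """Concatenate conditioning fields and the target paragraph with tags."""
    parts = [
        f"<LEFT> {left}",
        f"<RIGHT> {right}",
        f"<ENTITIES> {', '.join(entities or [])}",
        f"<SIZE> {size}",
        f"<GENRE> {genre}",
        f"<SUMMARY> {summary}",
        f"<PARAGRAPH> {target}",   # the span the model learns to generate
    ]
    return "\n".join(parts)

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # assumed base model
text = build_example(
    left="The gates of the city creaked open at dawn.",
    right="By nightfall, nothing of the old quarter remained.",
    target="Soldiers poured through the streets, torching every archive they found.",
    entities=["soldiers", "archives"],
    summary="The army sacks the old quarter.",
)
ids = tokenizer(text, return_tensors="pt").input_ids
print(ids.shape)   # ready to feed into a causal-LM fine-tuning loop
```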
|
Domain-aware Neural Language Models for Speech Recognition
|
As voice assistants become more ubiquitous, they are increasingly expected to
support and perform well on a wide variety of use-cases across different
domains. We present a domain-aware rescoring framework suitable for achieving
domain-adaptation during second-pass rescoring in production settings. In our
framework, we fine-tune a domain-general neural language model on several
domains, and use an LSTM-based domain classification model to select the
appropriate domain-adapted model to use for second-pass rescoring. This
domain-aware rescoring improves the word error rate by up to 2.4% and slot word
error rate by up to 4.1% on three individual domains -- shopping, navigation,
and music -- compared to domain-general rescoring. These improvements are
obtained while maintaining accuracy for the general use case.
| 2,021 |
Computation and Language
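The rescoring flow described above, in which a domain classifier selects a domain-adapted language model whose score is combined with the first-pass score for each hypothesis, can be sketched with plain Python callables standing in for the neural components. The interpolation weight and the toy scoring functions are assumptions for illustration only.

```python
# Sketch: second-pass rescoring with a domain classifier selecting a
# domain-adapted LM. Toy callables and interpolation weight are assumed.
from typing import Callable, Dict, List, Tuple

Hypothesis = Tuple[str, float]  # (text, first-pass score; higher is better)

def rescore(nbest: List[Hypothesis],
            classify_domain: Callable[[str], str],
            domain_lms: Dict[str, Callable[[str], float]],
            weight: float = 0.5) -> List[Hypothesis]:
    """Re-rank an n-best list using the LM picked by the domain classifier."""
    domain = classify_domain(nbest[0][0])             # classify the top hypothesis
    lm_score = domain_lms[domain]
    rescored = [(text, score + weight * lm_score(text)) for text, score in nbest]
    return sorted(rescored, key=lambda h: -h[1])

# Toy stand-ins for the LSTM domain classifier and the domain-adapted LMs.
classify = lambda text: "music" if "play" in text else "navigation"
lms = {
    "music": lambda t: 1.0 if "song" in t else 0.0,
    "navigation": lambda t: 1.0 if "route" in t else 0.0,
}
nbest = [("play the next son", 2.0), ("play the next song", 1.9)]
print(rescore(nbest, classify, lms))
```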
|