Titles (string) | Abstracts (string) | Years (int64) | Categories (string, 1 class)
---|---|---|---|
A Multilingual Modeling Method for Span-Extraction Reading Comprehension
|
Span-extraction reading comprehension models have made tremendous advances
enabled by the availability of large-scale, high-quality training datasets.
Despite such rapid progress and widespread application, extractive reading
comprehension datasets in languages other than English remain scarce, and
creating a sufficient amount of training data for each language is costly
and sometimes even impossible. An alternative to creating large-scale, high-quality
monolingual span-extraction training datasets is to develop multilingual
modeling approaches and systems which can transfer to the target language
without requiring training data in that language. In this paper, to address
the scarcity of extractive reading comprehension training data in the target
language, we propose a multilingual extractive reading
comprehension approach called XLRC by simultaneously modeling the existing
extractive reading comprehension training data in a multilingual environment
using self-adaptive attention and multilingual attention. Specifically, we
first construct multilingual parallel corpora by translating the existing
extractive reading comprehension datasets (i.e., CMRC 2018) from the target
language (i.e., Chinese) into different language families (i.e., English).
Second, to enhance the final target representation, we adopt self-adaptive
attention (SAA) to combine self-attention and inter-attention to extract the
semantic relations from each pair of the target and source languages.
Furthermore, we propose multilingual attention (MLA) to learn the rich
knowledge from various language families. Experimental results show that our
model outperforms the state-of-the-art baseline (i.e., RoBERTa_Large) on the
CMRC 2018 task, which demonstrates the effectiveness of our proposed
multilingual modeling approach and shows its potential for multilingual NLP
tasks.
| 2021 |
Computation and Language
|
An Exploratory Analysis of the Relation Between Offensive Language and
Mental Health
|
In this paper, we analyze the interplay between the use of offensive language
and mental health. We acquired publicly available datasets created for
offensive language identification and depression detection, and we trained
computational models to compare the use of offensive language in social media
posts written by groups of individuals with and without self-reported
depression diagnosis. We also look at samples written by groups of individuals
whose posts show signs of depression according to recent related studies. Our
analysis indicates that offensive language is more frequently used in the
samples written by individuals with self-reported depression as well as
individuals showing signs of depression. The results discussed here open new
avenues for research on politeness/offensiveness and mental health.
| 2021 |
Computation and Language
|
GWLAN: General Word-Level AutocompletioN for Computer-Aided Translation
|
Computer-aided translation (CAT), the use of software to assist a human
translator in the translation process, has been proven to be useful in
enhancing the productivity of human translators. Autocompletion, which suggests
translation results according to the text pieces provided by human translators,
is a core function of CAT. There are two limitations in previous research in
this line. First, most research works on this topic focus on sentence-level
autocompletion (i.e., generating the whole translation as a sentence based on
human input), but word-level autocompletion is under-explored so far. Second,
almost no public benchmarks are available for the autocompletion task of CAT.
This might be among the reasons why research progress in CAT is much slower
than in automatic MT. In this paper, we propose the task of general
word-level autocompletion (GWLAN) from a real-world CAT scenario, and construct
the first public benchmark to facilitate research on this topic. In addition,
we propose an effective method for GWLAN and compare it with several strong
baselines. Experiments demonstrate that our proposed method can give
significantly more accurate predictions than the baseline methods on our
benchmark datasets.
| 2021 |
Computation and Language
|
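The abstract above leaves the GWLAN model itself unspecified, so the following is only a minimal, assumption-laden sketch of the task interface: rank full target-language words compatible with a typed prefix. The frequency-ranked baseline is illustrative and far simpler than the paper's neural, context-aware method.

```python
from collections import Counter

def build_vocab_counts(target_corpus):
    """Count word frequencies in target-language text."""
    counts = Counter()
    for sentence in target_corpus:
        counts.update(sentence.lower().split())
    return counts

def autocomplete(prefix, vocab_counts, top_k=5):
    """Rank vocabulary words that start with the typed prefix by frequency."""
    candidates = [w for w in vocab_counts if w.startswith(prefix)]
    return sorted(candidates, key=vocab_counts.__getitem__, reverse=True)[:top_k]

corpus = ["the translator completes the word",
          "word level autocompletion helps translators translate"]
counts = build_vocab_counts(corpus)
print(autocomplete("tr", counts))  # ['translator', 'translators', 'translate']
```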
How Lexical Gold Standards Have Effects On The Usefulness Of Text
Analysis Tools For Digital Scholarship
|
This paper describes how the current lexical similarity and analogy gold
standards are built to conform to certain ideas about what the models they are
designed to evaluate are used for. Topical relevance has always been the most
important target notion for information access tools and related language
technologies, and while this has proven a useful starting point for
much of what information technology is used for, it does not always align well
with other uses to which technologies are being put, most notably use cases
from digital scholarship in the humanities or social sciences. This paper
argues for a more systematic formulation of requirements from the digital
humanities and social sciences and more explicit description of the assumptions
underlying model design.
| 2021 |
Computation and Language
|
Document-level Event Extraction via Heterogeneous Graph-based
Interaction Model with a Tracker
|
Document-level event extraction aims to recognize event information from a
whole article. Existing methods are not effective due to two
challenges of this task: a) the target event arguments are scattered across
sentences; b) the correlation among events in a document is non-trivial to
model. In this paper, we propose Heterogeneous Graph-based Interaction Model
with a Tracker (GIT) to solve the aforementioned two challenges. For the first
challenge, GIT constructs a heterogeneous graph interaction network to capture
global interactions among different sentences and entity mentions. For the
second, GIT introduces a Tracker module to track the extracted events and hence
capture the interdependency among the events. Experiments on a large-scale
dataset (Zheng et al., 2019) show that GIT outperforms previous methods by 2.8
F1 points. Further analysis reveals that GIT is effective in extracting multiple correlated
events and event arguments that scatter across the document. Our code is
available at https://github.com/RunxinXu/GIT.
| 2021 |
Computation and Language
|
Do Multilingual Neural Machine Translation Models Contain Language Pair
Specific Attention Heads?
|
Recent studies on the analysis of the multilingual representations focus on
identifying whether there is an emergence of language-independent
representations, or whether a multilingual model partitions its weights among
different languages. While most of such work has been conducted in a
"black-box" manner, this paper aims to analyze individual components of a
multilingual neural translation (NMT) model. In particular, we look at the
encoder self-attention and encoder-decoder attention heads (in a many-to-one
NMT model) that are more specific to the translation of a certain language pair
than others by (1) employing metrics that quantify some aspects of the
attention weights such as "variance" or "confidence", and (2) systematically
ranking the importance of attention heads with respect to translation quality.
Experimental results show that, surprisingly, the set of most important
attention heads is very similar across the language pairs and that it is
possible to remove nearly one-third of the less important heads without hurting
the translation quality greatly.
| 2021 |
Computation and Language
|
Crowdsourcing Learning as Domain Adaptation: A Case Study on Named
Entity Recognition
|
Crowdsourcing is regarded as one prospective solution for effective
supervised learning, aiming to build large-scale annotated training data by
crowd workers. Previous studies focus on reducing the influence of the
noise in crowdsourced annotations on supervised models. We take a
different view in this work, regarding all crowdsourced annotations as
gold-standard with respect to the individual annotators. In this way, we find
that crowdsourcing could be highly similar to domain adaptation, and then the
recent advances of cross-domain methods can be almost directly applied to
crowdsourcing. Here we take named entity recognition (NER) as a study case,
suggesting an annotator-aware representation learning model that is inspired by
domain adaptation methods which attempt to capture effective domain-aware
features. We investigate both unsupervised and supervised crowdsourcing
learning, assuming that no or only small-scale expert annotations are
available. Experimental results on a benchmark crowdsourced NER dataset show
that our method is highly effective, leading to a new state-of-the-art
performance. In addition, under the supervised setting, we can achieve
impressive performance gains with only a very small number of expert
annotations.
| 2021 |
Computation and Language
|
Neural Bi-Lexicalized PCFG Induction
|
Neural lexicalized PCFGs (L-PCFGs) have been shown effective in grammar
induction. However, to reduce computational complexity, they make a strong
independence assumption on the generation of the child word and thus bilexical
dependencies are ignored. In this paper, we propose an approach to parameterize
L-PCFGs without making implausible independence assumptions. Our approach
directly models bilexical dependencies and meanwhile reduces both learning and
representation complexities of L-PCFGs. Experimental results on the English WSJ
dataset confirm the effectiveness of our approach in improving both running
speed and unsupervised parsing performance.
| 2021 |
Computation and Language
|
DiaKG: an Annotated Diabetes Dataset for Medical Knowledge Graph
Construction
|
Knowledge Graph has been proven effective in modeling structured information
and conceptual knowledge, especially in the medical domain. However, the lack
of high-quality annotated corpora remains a crucial problem for advancing the
research and applications on this task. In order to accelerate the research for
domain-specific knowledge graphs in the medical domain, we introduce DiaKG, a
high-quality Chinese dataset for Diabetes knowledge graph, which contains
22,050 entities and 6,890 relations in total. We implement recent typical
methods for Named Entity Recognition and Relation Extraction as a benchmark to
evaluate the proposed dataset thoroughly. Empirical results show that the DiaKG
is challenging for most existing methods and further analysis is conducted to
discuss future research directions for improvement. We hope the release of this
dataset can assist the construction of diabetes knowledge graphs and facilitate
AI-based applications.
| 2021 |
Computation and Language
|
Factorising Meaning and Form for Intent-Preserving Paraphrasing
|
We propose a method for generating paraphrases of English questions that
retain the original intent but use a different surface form. Our model combines
a careful choice of training objective with a principled information
bottleneck, to induce a latent encoding space that disentangles meaning and
form. We train an encoder-decoder model to reconstruct a question from a
paraphrase with the same meaning and an exemplar with the same surface form,
leading to separated encoding spaces. We use a Vector-Quantized Variational
Autoencoder to represent the surface form as a set of discrete latent
variables, allowing us to use a classifier to select a different surface form
at test time. Crucially, our method does not require access to an external
source of target exemplars. Extensive experiments and a human evaluation show
that we are able to generate paraphrases with a better tradeoff between
semantic preservation and syntactic novelty compared to previous methods.
| 2021 |
Computation and Language
|
Telling Stories through Multi-User Dialogue by Modeling Character
Relations
|
This paper explores character-driven story continuation, in which the story
emerges through characters' first- and second-person narration as well as
dialogue -- requiring models to select language that is consistent with a
character's persona and their relationships with other characters while
following and advancing the story. We hypothesize that a multi-task model that
trains on character dialogue plus character relationship information improves
transformer-based story continuation. To this end, we extend the Critical Role
Dungeons and Dragons Dataset (Rameshkumar and Bailey, 2020) -- consisting of
dialogue transcripts of people collaboratively telling a story while playing
the role-playing game Dungeons and Dragons -- with automatically extracted
relationships between each pair of interacting characters as well as their
personas. A series of ablations lend evidence to our hypothesis, showing that
our multi-task model using character relationships improves story continuation
accuracy over strong baselines.
| 2021 |
Computation and Language
|
Adapting High-resource NMT Models to Translate Low-resource Related
Languages without Parallel Data
|
The scarcity of parallel data is a major obstacle for training high-quality
machine translation systems for low-resource languages. Fortunately, some
low-resource languages are linguistically related or similar to high-resource
languages; these related languages may share many lexical or syntactic
structures. In this work, we exploit this linguistic overlap to facilitate
translating to and from a low-resource language with only monolingual data, in
addition to any parallel data in the related high-resource language. Our
method, NMT-Adapt, combines denoising autoencoding, back-translation and
adversarial objectives to utilize monolingual data for low-resource adaptation.
We experiment on 7 languages from three different language families and show
that our technique significantly improves translation into the low-resource
language compared to other translation baselines.
| 2021 |
Computation and Language
|
SA2SL: From Aspect-Based Sentiment Analysis to Social Listening System
for Business Intelligence
|
In this paper, we present a process of building a social listening system
based on aspect-based sentiment analysis in Vietnamese from creating a dataset
to building a real application. Firstly, we create UIT-ViSFD, a Vietnamese
Smartphone Feedback Dataset, as a new benchmark corpus built on a strict
annotation scheme for evaluating aspect-based sentiment analysis, consisting
of 11,122 human-annotated comments for mobile e-commerce, which is freely
available for research purposes. We also present a proposed approach based on
the Bi-LSTM architecture with the fastText word embeddings for the Vietnamese
aspect-based sentiment task. Our experiments show that our approach achieves
the best performance, with F1-scores of 84.48% for the aspect task and
63.06% for the sentiment task, outperforming several conventional machine
learning and deep learning systems. Last but not least, we build SA2SL, a
social listening system based on the best performance model on our dataset,
which will inspire more social listening systems in the future.
| 2021 |
Computation and Language
|
Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences
on Neural Machine Translation
|
While it has been shown that Neural Machine Translation (NMT) is highly
sensitive to noisy parallel training samples, prior work treats all types of
mismatches between source and target as noise. As a result, it remains unclear
how samples that are mostly equivalent but contain a small number of
semantically divergent tokens impact NMT training. To close this gap, we
analyze the impact of different types of fine-grained semantic divergences on
Transformer models. We show that models trained on synthetic divergences output
degenerate text more frequently and are less confident in their predictions.
Based on these findings, we introduce a divergent-aware NMT framework that uses
factors to help NMT recover from the degradation caused by naturally occurring
divergences, improving both translation quality and model calibration on EN-FR
tasks.
| 2021 |
Computation and Language
|
Learning from Perturbations: Diverse and Informative Dialogue Generation
with Inverse Adversarial Training
|
In this paper, we propose the Inverse Adversarial Training (IAT) algorithm for
training neural dialogue systems to avoid generic responses and model dialogue
history better. In contrast to standard adversarial training algorithms, IAT
encourages the model to be sensitive to the perturbation in the dialogue
history and therefore learn from perturbations. By giving higher rewards for
responses whose output probability reduces more significantly when dialogue
history is perturbed, the model is encouraged to generate more diverse and
consistent responses. By penalizing the model when generating the same response
given perturbed dialogue history, the model is forced to better capture
dialogue history and generate more informative responses. Experimental results
on two benchmark datasets show that our approach can better model dialogue
history and generate more diverse and consistent responses. In addition, we
point out a problem of the widely used maximum mutual information (MMI) based
methods for improving the diversity of dialogue response generation models and
demonstrate it empirically.
| 2021 |
Computation and Language
|
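The inverse-adversarial reward described above can be sketched in a few lines, assuming any trained dialogue model exposed as a `log_prob(history, response)` scorer. The turn-dropping perturbation and the toy scorer below are illustrative assumptions, not the paper's exact choices.

```python
import random

def perturb_history(history):
    """Drop one random turn from the dialogue history."""
    if len(history) <= 1:
        return history
    drop = random.randrange(len(history))
    return [turn for i, turn in enumerate(history) if i != drop]

def inverse_adversarial_reward(log_prob, history, response):
    """Reward = log p(response | history) - log p(response | perturbed history).

    Positive when the response is sensitive to the real history, which
    penalizes generic responses that score the same under any context.
    """
    return log_prob(history, response) - log_prob(perturb_history(history), response)

# Toy scorer: pretends responses that echo history words are more likely.
def toy_log_prob(history, response):
    context = set(" ".join(history).split())
    return float(len(context & set(response.split())))

hist = ["where did you travel", "i went to japan"]
print(inverse_adversarial_reward(toy_log_prob, hist, "japan sounds great"))  # often positive
print(inverse_adversarial_reward(toy_log_prob, hist, "that is nice"))        # always 0.0
```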
Reinforced Generative Adversarial Network for Abstractive Text
Summarization
|
Sequence-to-sequence models provide a viable new approach to generative
summarization, allowing models that are no longer limited to simply selecting
and recombining sentences from the original text. However, these models have
three drawbacks: their grasp of the details of the original text is often
inaccurate, the text they generate often contains repetitions, and they
struggle with out-of-vocabulary words. In this paper,
we propose a new architecture that combines reinforcement learning and
generative adversarial networks to enhance the sequence-to-sequence attention
model. First, we use a hybrid pointer-generator network that copies words
directly from the source text, contributing to accurate reproduction of
information without sacrificing the ability of generators to generate new
words. Second, we use both intra-temporal and intra-decoder attention to
penalize summarized content and thus discourage repetition. We apply our model
to our own proposed COVID-19 paper title summarization task and achieve results
close to the current model on ROUGE, while offering better
readability.
| 2021 |
Computation and Language
|
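The hybrid pointer-generator mechanism mentioned above can be summarized in one decoding step. The sketch below follows the standard formulation (final distribution = p_gen · vocabulary distribution + (1 − p_gen) · copy attention over source tokens); all tensor sizes are illustrative.

```python
import torch

def pointer_generator_step(p_vocab, attn, src_ids, p_gen, extended_vocab_size):
    """Combine generation and copying into one distribution.

    p_vocab: (batch, vocab) softmax over the fixed vocabulary
    attn:    (batch, src_len) attention over source positions
    src_ids: (batch, src_len) source token ids in the extended vocabulary
    p_gen:   (batch, 1) probability of generating vs copying
    """
    batch, vocab = p_vocab.shape
    p_final = torch.zeros(batch, extended_vocab_size)
    p_final[:, :vocab] = p_gen * p_vocab
    # Scatter copy probabilities onto the source tokens' ids.
    p_final.scatter_add_(1, src_ids, (1.0 - p_gen) * attn)
    return p_final

p_vocab = torch.softmax(torch.randn(2, 10), dim=-1)
attn = torch.softmax(torch.randn(2, 4), dim=-1)
src_ids = torch.randint(0, 12, (2, 4))   # ids 10-11 are copy-only OOV slots
p_gen = torch.sigmoid(torch.randn(2, 1))
out = pointer_generator_step(p_vocab, attn, src_ids, p_gen, extended_vocab_size=12)
print(out.sum(dim=1))  # each row sums to 1
```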
How transfer learning impacts linguistic knowledge in deep NLP models?
|
Transfer learning from pre-trained neural language models towards downstream
tasks has been a predominant theme in NLP recently. Several researchers have
shown that deep NLP models learn a non-trivial amount of linguistic knowledge,
captured at different layers of the model. We investigate how fine-tuning
towards downstream NLP tasks impacts the learned linguistic knowledge. We carry
out a study across popular pre-trained models BERT, RoBERTa and XLNet using
layer and neuron-level diagnostic classifiers. We found that for some GLUE
tasks, the network relies on core linguistic information and preserves it
deeper in the network, while for others it forgets it. Linguistic information is
distributed in the pre-trained language models but becomes localized to the
lower layers post fine-tuning, reserving higher layers for the task specific
knowledge. The pattern varies across architectures, with BERT retaining
linguistic information relatively deeper in the network compared to RoBERTa and
XLNet, where it is predominantly delegated to the lower layers.
| 2021 |
Computation and Language
|
More than just Frequency? Demasking Unsupervised Hypernymy Prediction
Methods
|
This paper presents a comparison of unsupervised methods of hypernymy
prediction (i.e., to predict which word in a pair of words such as fish-cod is
the hypernym and which the hyponym). Most importantly, we demonstrate across
datasets for English and for German that the predictions of three methods
(WeedsPrec, invCL, SLQS Row) strongly overlap and are highly correlated with
frequency-based predictions. In contrast, the second-order method SLQS shows an
overall lower accuracy but makes correct predictions where the others go wrong.
Our study once more confirms the general need to check the frequency bias of a
computational method in order to identify frequency-(un)related effects.
| 2021 |
Computation and Language
|
Language Model Evaluation Beyond Perplexity
|
We propose an alternate approach to quantifying how well language models
learn natural language: we ask how well they match the statistical tendencies
of natural language. To answer this question, we analyze whether text generated
from language models exhibits the statistical tendencies present in the
human-generated text on which they were trained. We provide a framework--paired
with significance tests--for evaluating the fit of language models to these
trends. We find that neural language models appear to learn only a subset of
the tendencies considered, but align much more closely with empirical trends
than proposed theoretical distributions (when present). Further, the fit to
different distributions is highly dependent on both model architecture and
generation strategy. As concrete examples, text generated under the nucleus
sampling scheme adheres more closely to the type--token relationship of natural
language than text produced using standard ancestral sampling; text from LSTMs
reflects the natural language distributions over length, stopwords, and symbols
surprisingly well.
| 2021 |
Computation and Language
|
Text Summarization with Latent Queries
|
The availability of large-scale datasets has driven the development of neural
models that create summaries from single documents, for generic purposes. When
using a summarization system, users often have specific intents with various
language realizations, which, depending on the information need, can range from
a single keyword to a long narrative composed of multiple questions. Existing
summarization systems, however, often either fail to support or act robustly on
this query-focused summarization task. We introduce LaQSum, the first unified
text summarization system that learns Latent Queries from documents for
abstractive summarization with any existing query forms. Under a deep
generative framework, our system jointly optimizes a latent query model and a
conditional language model, allowing users to plug-and-play queries of any type
at test time. Despite learning from only generic summarization data and
requiring no further optimization for downstream summarization tasks, our
system robustly outperforms strong comparison systems across summarization
benchmarks with different query types, document settings, and target domains.
| 2021 |
Computation and Language
|
Bringing Structure into Summaries: a Faceted Summarization Dataset for
Long Scientific Documents
|
Faceted summarization provides briefings of a document from different
perspectives. Readers can quickly comprehend the main points of a long document
with the help of a structured outline. However, little research has been
conducted on this subject, partially due to the lack of large-scale faceted
summarization datasets. In this study, we present FacetSum, a faceted
summarization benchmark built on Emerald journal articles, covering a diverse
range of domains. Different from traditional document-summary pairs, FacetSum
provides multiple summaries, each targeted at specific sections of a long
document, including the purpose, method, findings, and value. Analyses and
empirical results on our dataset reveal the importance of bringing structure
into summaries. We believe FacetSum will spur further advances in summarization
research and foster the development of NLP systems that can leverage the
structured information in both long texts and summaries.
| 2021 |
Computation and Language
|
Training ELECTRA Augmented with Multi-word Selection
|
Pre-trained text encoders such as BERT and its variants have recently
achieved state-of-the-art performance on many NLP tasks. While being
effective, these pre-training methods typically demand massive computation
resources. To accelerate pre-training, ELECTRA trains a discriminator that
predicts whether each input token is replaced by a generator. However, this new
task, as a binary classification, is less semantically informative. In this
study, we present a new text encoder pre-training method that improves ELECTRA
based on multi-task learning. Specifically, we train the discriminator to
simultaneously detect replaced tokens and select original tokens from candidate
sets. We further develop two techniques to effectively combine all pre-training
tasks: (1) using attention-based networks for task-specific heads, and (2)
sharing bottom layers of the generator and the discriminator. Extensive
experiments on GLUE and SQuAD datasets demonstrate both the effectiveness and
the efficiency of our proposed method.
| 2022 |
Computation and Language
|
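A minimal PyTorch sketch of the two discriminator objectives described above: per-token replaced-token detection (as in ELECTRA) plus selecting the original token from a candidate set. The tiny encoder, dummy labels, and dot-product selection head are assumptions for illustration; the paper's attention-based task heads and generator layer sharing are omitted.

```python
import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2)
        self.rtd_head = nn.Linear(hidden, 1)  # replaced-token detection head

    def forward(self, token_ids, candidate_ids):
        h = self.encoder(self.embed(token_ids))                  # (B, T, H)
        rtd_logits = self.rtd_head(h).squeeze(-1)                # (B, T)
        # Selection: score each candidate embedding against the context vector.
        cand_emb = self.embed(candidate_ids)                     # (B, T, K, H)
        sel_logits = torch.einsum("bth,btkh->btk", h, cand_emb)  # (B, T, K)
        return rtd_logits, sel_logits

model = MultiTaskDiscriminator()
tokens = torch.randint(0, 1000, (2, 8))
cands = torch.randint(0, 1000, (2, 8, 5))   # 5 candidates per position
rtd, sel = model(tokens, cands)
rtd_loss = nn.functional.binary_cross_entropy_with_logits(
    rtd, torch.zeros_like(rtd))              # dummy labels for illustration
sel_loss = nn.functional.cross_entropy(
    sel.reshape(-1, 5), torch.zeros(2 * 8, dtype=torch.long))
(rtd_loss + sel_loss).backward()
```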
An Exploratory Analysis of Multilingual Word-Level Quality Estimation
with Cross-Lingual Transformers
|
Most studies on word-level Quality Estimation (QE) of machine translation
focus on language-specific models. The obvious disadvantages of these
approaches are the need for labelled data for each language pair and the high
cost required to maintain several language-specific models. To overcome these
problems, we explore different approaches to multilingual, word-level QE. We
show that these QE models perform on par with the current language-specific
models. In the cases of zero-shot and few-shot QE, we demonstrate that it is
possible to accurately predict word-level quality for any given new language
pair from models trained on other language pairs. Our findings suggest that the
word-level QE models based on powerful pre-trained transformers that we propose
in this paper generalise well across languages, making them more useful in
real-world scenarios.
| 2021 |
Computation and Language
|
Corpus-Based Paraphrase Detection Experiments and Review
|
Paraphrase detection is important for a number of applications, including
plagiarism detection, authorship attribution, question answering, text
summarization, text mining in general, etc. In this paper, we give a
performance overview of various types of corpus-based models, especially deep
learning (DL) models, with the task of paraphrase detection. We report the
results of eight models (LSI, TF-IDF, Word2Vec, Doc2Vec, GloVe, FastText, ELMO,
and USE) evaluated on three different publicly available corpora: Microsoft
Research Paraphrase Corpus, Clough and Stevenson and Webis Crowd Paraphrase
Corpus 2011. Through a large number of experiments, we determined the most
appropriate approaches for text pre-processing: hyper-parameters, sub-model
selection-where they exist (e.g., Skipgram vs. CBOW), distance measures, and
semantic similarity/paraphrase detection threshold. Our findings and those of
other researchers who have used deep learning models show that DL models are
very competitive with traditional state-of-the-art approaches and have
potential that should be further developed.
| 2020 |
Computation and Language
|
HiddenCut: Simple Data Augmentation for Natural Language Understanding
with Better Generalization
|
Fine-tuning large pre-trained models with task-specific data has achieved
great success in NLP. However, it has been demonstrated that the majority of
information within the self-attention networks is redundant and not utilized
effectively during the fine-tuning stage. This leads to inferior results when
generalizing the obtained models to out-of-domain distributions. To this end,
we propose a simple yet effective data augmentation technique, HiddenCut, to
better regularize the model and encourage it to learn more generalizable
features. Specifically, contiguous spans within the hidden space are
dynamically and strategically dropped during training. Experiments show that
our HiddenCut method outperforms the state-of-the-art augmentation methods on
the GLUE benchmark, and consistently exhibits superior generalization
performances on out-of-distribution and challenging counterexamples. We have
publicly released our code at https://github.com/GT-SALT/HiddenCut.
| 2021 |
Computation and Language
|
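The core HiddenCut operation, dropping a contiguous span in the hidden space during training, is simple enough to sketch. Random span placement below stands in for the paper's strategic selection; shapes and the cut ratio are illustrative.

```python
import torch

def hidden_cut(hidden, cut_ratio=0.2, training=True):
    """hidden: (batch, seq_len, dim). Zeroes one contiguous span per example."""
    if not training:
        return hidden
    batch, seq_len, _ = hidden.shape
    span = max(1, int(seq_len * cut_ratio))
    out = hidden.clone()
    for b in range(batch):
        start = torch.randint(0, seq_len - span + 1, (1,)).item()
        out[b, start:start + span, :] = 0.0
    return out

h = torch.randn(4, 10, 16)
print(hidden_cut(h).abs().sum(dim=-1))  # one zeroed span per row
```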
HERALD: An Annotation Efficient Method to Detect User Disengagement in
Social Conversations
|
Open-domain dialog systems have a user-centric goal: to provide humans with
an engaging conversation experience. User engagement is one of the most
important metrics for evaluating open-domain dialog systems, and could also be
used as real-time feedback to benefit dialog policy learning. Existing work on
detecting user disengagement typically requires hand-labeling many dialog
samples. We propose HERALD, an efficient annotation framework that reframes the
training data annotation process as a denoising problem. Specifically, instead
of manually labeling training samples, we first use a set of labeling
heuristics to label training samples automatically. We then denoise the weakly
labeled data using the Shapley algorithm. Finally, we use the denoised data to
train a user engagement detector. Our experiments show that HERALD improves
annotation efficiency significantly and achieves 86% user disengagement
detection accuracy in two dialog corpora.
| 2021 |
Computation and Language
|
Gender Bias Amplification During Speed-Quality Optimization in Neural
Machine Translation
|
Is bias amplified when neural machine translation (NMT) models are optimized
for speed and evaluated on generic test sets using BLEU? We investigate
architectures and techniques commonly used to speed up decoding in
Transformer-based models, such as greedy search, quantization, average
attention networks (AANs) and shallow decoder models and show their effect on
gendered noun translation. We construct a new gender bias test set, SimpleGEN,
based on gendered noun phrases in which there is a single, unambiguous, correct
answer. While we find minimal overall BLEU degradation as we apply speed
optimizations, we observe that gendered noun translation performance degrades
at a much faster rate.
| 2021 |
Computation and Language
|
Gender Bias Hidden Behind Chinese Word Embeddings: The Case of Chinese
Adjectives
|
Gender bias in word embeddings has gradually become an active research field in
recent years. Most studies in this field aim at measurement and debiasing
methods with English as the target language. This paper investigates gender
bias in static word embeddings from a unique perspective, Chinese adjectives.
By training word representations with different models, the gender bias behind
the vectors of adjectives is assessed. Through a comparison between the
produced results and a human-scored data set, we demonstrate how gender bias
encoded in word embeddings differs from people's attitudes.
| 2021 |
Computation and Language
|
PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D
World
|
We propose PIGLeT: a model that learns physical commonsense knowledge through
interaction, and then uses this knowledge to ground language. We factorize
PIGLeT into a physical dynamics model, and a separate language model. Our
dynamics model learns not just what objects are but also what they do: glass
cups break when thrown, plastic ones don't. We then use it as the interface to
our language model, giving us a unified model of linguistic form and grounded
meaning. PIGLeT can read a sentence, simulate neurally what might happen next,
and then communicate that result through a literal symbolic representation, or
natural language.
Experimental results show that our model effectively learns world dynamics,
along with how to communicate them. It is able to correctly forecast "what
happens next" given an English sentence over 80% of the time, outperforming a
100x larger, text-to-text approach by over 10%. Likewise, its natural language
summaries of physical interactions are also judged by humans as more accurate
than LM alternatives. We present comprehensive analysis showing room for future
work.
| 2022 |
Computation and Language
|
Multilingual Speech Translation with Unified Transformer: Huawei Noah's
Ark Lab at IWSLT 2021
|
This paper describes the system submitted to the IWSLT 2021 Multilingual
Speech Translation (MultiST) task from Huawei Noah's Ark Lab. We use a unified
transformer architecture for our MultiST model, so that the data from different
modalities (i.e., speech and text) and different tasks (i.e., Speech
Recognition, Machine Translation, and Speech Translation) can be exploited to
enhance the model's ability. Specifically, speech and text inputs are first
fed to different feature extractors to extract acoustic and textual features,
respectively. Then, these features are processed by a shared encoder--decoder
architecture. We apply several training techniques to improve the performance,
including multi-task learning, task-level curriculum learning, data
augmentation, etc. Our final system achieves significantly better results than
bilingual baselines on supervised language pairs and yields reasonable results
on zero-shot language pairs.
| 2021 |
Computation and Language
|
Iterative Hierarchical Attention for Answering Complex Questions over
Long Documents
|
We propose a new model, DocHopper, that iteratively attends to different
parts of long, hierarchically structured documents to answer complex questions.
Similar to multi-hop question-answering (QA) systems, at each step, DocHopper
uses a query $q$ to attend to information from a document, and combines this
``retrieved'' information with $q$ to produce the next query. However, in
contrast to most previous multi-hop QA systems, DocHopper is able to
``retrieve'' either short passages or long sections of the document, thus
emulating a multi-step process of ``navigating'' through a long document to
answer a question. To enable this novel behavior, DocHopper does not combine
document information with $q$ by concatenating text to the text of $q$, but by
combining a compact neural representation of $q$ with a compact neural
representation of a hierarchical part of the document, which can potentially be
quite large. We experiment with DocHopper on four different QA tasks that
require reading long and complex documents to answer multi-hop questions, and
show that DocHopper achieves state-of-the-art results on three of the datasets.
Additionally, DocHopper is efficient at inference time, being 3--10 times
faster than the baselines.
| 2021 |
Computation and Language
|
Improving Formality Style Transfer with Context-Aware Rule Injection
|
Models pre-trained on large-scale regular text corpora often do not work well
for user-generated data where the language styles differ significantly from the
mainstream text. Here we present Context-Aware Rule Injection (CARI), an
innovative method for formality style transfer (FST). CARI injects multiple
rules into an end-to-end BERT-based encoder and decoder model. It learns to
select optimal rules based on context. The intrinsic evaluation showed that
CARI achieved a new state-of-the-art performance on the FST benchmark dataset. Our
extrinsic evaluation showed that CARI can greatly improve the regular
pre-trained models' performance on several tweet sentiment analysis tasks.
| 2021 |
Computation and Language
|
Discontinuous Named Entity Recognition as Maximal Clique Discovery
|
Named entity recognition (NER) remains challenging when entity mentions can
be discontinuous. Existing methods break the recognition process into several
sequential steps. In training, they predict conditioned on gold
intermediate results, while at inference they rely on the model's output from
previous steps, which introduces exposure bias. To solve this problem, we first
construct a segment graph for each sentence, in which each node denotes a
segment (a continuous entity on its own, or a part of discontinuous entities),
and an edge links two nodes that belong to the same entity. The nodes and edges
can be generated respectively in one stage with a grid tagging scheme and
learned jointly using a novel architecture named Mac. Then discontinuous NER
can be reformulated as a non-parametric process of discovering maximal cliques
in the graph and concatenating the spans in each clique. Experiments on three
benchmarks show that our method outperforms the state-of-the-art (SOTA)
results, with up to 3.5 percentage points improvement on F1, and achieves 5x
speedup over the SOTA model.
| 2021 |
Computation and Language
|
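The decoding step described above, recovering entities as maximal cliques over predicted segment nodes and same-entity edges, can be sketched directly with networkx. The toy graph below encodes one discontinuous and one continuous entity; the tagging model (Mac) that would produce the nodes and edges is not shown.

```python
import networkx as nx

# Segments as (start, end) character spans; edges link segments of one entity.
segments = [(0, 2), (3, 5), (8, 10)]
edges = [((0, 2), (8, 10))]   # a discontinuous entity: spans (0,2) + (8,10)

graph = nx.Graph()
graph.add_nodes_from(segments)
graph.add_edges_from(edges)

for clique in nx.find_cliques(graph):
    # Each maximal clique is one mention; concatenate its spans in order.
    # An isolated node, e.g. (3, 5), is a continuous entity on its own.
    print("entity spans:", sorted(clique))
```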
Question-aware Transformer Models for Consumer Health Question
Summarization
|
Searching for health information online is becoming customary for more and
more consumers every day, which makes the need for efficient and reliable
question answering systems more pressing. An important contributor to the
success rates of these systems is their ability to fully understand the
consumers' questions. However, these questions are frequently longer than
needed and mention peripheral information that is not useful in finding
relevant answers. Question summarization is one of the potential solutions to
simplifying long and complex consumer questions before attempting to find an
answer. In this paper, we study the task of abstractive summarization for
real-world consumer health questions. We develop an abstractive question
summarization model that leverages the semantic interpretation of a question
via recognition of medical entities, which enables the generation of
informative summaries. Towards this, we propose multiple Cloze tasks (i.e., the
task of filling in missing words in a given context) to identify the key medical
entities, which forces the model to have better coverage in question-focus
recognition. Additionally, we infuse the decoder inputs with question-type
information to generate question-type driven summaries. When evaluated on the
MeQSum benchmark corpus, our framework outperformed the state-of-the-art method
by 10.2 ROUGE-L points. We also conducted a manual evaluation to assess the
correctness of the generated summaries.
| 2021 |
Computation and Language
|
Improving Automatic Hate Speech Detection with Multiword Expression
Features
|
The task of automatically detecting hate speech in social media is gaining
more and more attention. Given the enormous volume of content posted daily,
human monitoring of hate speech is unfeasible. In this work, we propose new
word-level features for automatic hate speech detection (HSD): multiword
expressions (MWEs). MWEs are lexical units larger than a single word that can have
idiomatic or compositional meanings. We propose to integrate MWE features in a
deep neural network-based HSD framework. Our baseline HSD system relies on
Universal Sentence Encoder (USE). To incorporate MWE features, we create a
three-branch deep neural network: one branch for USE, one for MWE categories,
and one for MWE embeddings. We conduct experiments on two hate speech tweet
corpora with different MWE categories and with two types of MWE embeddings,
word2vec and BERT. Our experiments demonstrate that the proposed HSD system
with MWE features significantly outperforms the baseline system in terms of
macro-F1.
| 2021 |
Computation and Language
|
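A minimal sketch of the three-branch fusion described above, with one branch per feature type (sentence embedding, MWE categories, MWE embeddings) concatenated before classification. All dimensions and the mean-pooling over MWEs are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ThreeBranchHSD(nn.Module):
    def __init__(self, sent_dim=512, n_mwe_cats=10, mwe_emb_dim=300, n_classes=2):
        super().__init__()
        self.sent = nn.Linear(sent_dim, 128)   # USE sentence embedding branch
        self.cats = nn.Linear(n_mwe_cats, 32)  # MWE category feature branch
        self.mwes = nn.Linear(mwe_emb_dim, 64) # MWE embedding branch
        self.out = nn.Linear(128 + 32 + 64, n_classes)

    def forward(self, sent_emb, mwe_cat_counts, mwe_embs):
        # mwe_embs: (batch, n_mwes, dim) -> mean-pool over MWEs in the tweet
        fused = torch.cat([torch.relu(self.sent(sent_emb)),
                           torch.relu(self.cats(mwe_cat_counts)),
                           torch.relu(self.mwes(mwe_embs.mean(dim=1)))], dim=-1)
        return self.out(fused)

model = ThreeBranchHSD()
logits = model(torch.randn(4, 512), torch.randn(4, 10), torch.randn(4, 3, 300))
print(logits.shape)  # (4, 2)
```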
Volta at SemEval-2021 Task 6: Towards Detecting Persuasive Texts and
Images using Textual and Multimodal Ensemble
|
Memes are one of the most popular types of content used to spread information
online. They can influence a large number of people through rhetorical and
psychological techniques. The task, Detection of Persuasion Techniques in Texts
and Images, is to detect these persuasive techniques in memes. It consists of
three subtasks: (A) Multi-label classification using textual content, (B)
Multi-label classification and span identification using textual content, and
(C) Multi-label classification using visual and textual content. In this paper,
we propose a transfer learning approach to fine-tune BERT-based models in
different modalities. We also explore the effectiveness of ensembles of models
trained in different modalities. We achieve an F1-score of 57.0, 48.2, and 52.1
in the corresponding subtasks.
| 2021 |
Computation and Language
|
Reinforced Iterative Knowledge Distillation for Cross-Lingual Named
Entity Recognition
|
Named entity recognition (NER) is a fundamental component in many
applications, such as Web Search and Voice Assistants. Although deep neural
networks greatly improve the performance of NER, due to the requirement of
large amounts of training data, deep neural networks can hardly scale out to
many languages in an industry setting. To tackle this challenge, cross-lingual
NER transfers knowledge from a rich-resource language to languages with low
resources through pre-trained multilingual language models. Instead of using
training data in target languages, cross-lingual NER has to rely on only
training data in source languages, and optionally adds the translated training
data derived from source languages. However, the existing cross-lingual NER
methods do not make good use of rich unlabeled data in target languages, which
is relatively easy to collect in industry applications. To address the
opportunities and challenges, in this paper we describe our novel practice in
Microsoft to leverage such large amounts of unlabeled data in target languages
in real production settings. To effectively extract weak supervision signals
from the unlabeled data, we develop a novel approach based on the ideas of
semi-supervised learning and reinforcement learning. The empirical study on
three benchmark data sets verifies that our approach establishes new
state-of-the-art performance by a clear margin. Now, the NER techniques reported
in this paper are on their way to becoming a fundamental component for Web
ranking, Entity Pane, Answers Triggering, and Question Answering in the
Microsoft Bing search engine. Moreover, our techniques will also serve as part
of the Spoken Language Understanding module for a commercial voice assistant.
We plan to open source the code of the prototype framework after deployment.
| 2021 |
Computation and Language
|
Volta at SemEval-2021 Task 9: Statement Verification and Evidence
Finding with Tables using TAPAS and Transfer Learning
|
Tables are widely used in various kinds of documents to present information
concisely. Understanding tables is a challenging problem that requires an
understanding of language and table structure, along with numerical and logical
reasoning. In this paper, we present our systems to solve Task 9 of
SemEval-2021: Statement Verification and Evidence Finding with Tables
(SEM-TAB-FACTS). The task consists of two subtasks: (A) Given a table and a
statement, predicting whether the table supports the statement and (B)
Predicting which cells in the table provide evidence for/against the statement.
We fine-tune TAPAS (a model which extends BERT's architecture to capture
tabular structure) for both the subtasks as it has shown state-of-the-art
performance in various table understanding tasks. In subtask A, we evaluate how
transfer learning and standardizing tables to have a single header row improves
TAPAS' performance. In subtask B, we evaluate how different fine-tuning
strategies can improve TAPAS' performance. Our systems achieve an F1 score of
67.34 in subtask A three-way classification, 72.89 in subtask A two-way
classification, and 62.95 in subtask B.
| 2021 |
Computation and Language
|
ViTA: Visual-Linguistic Translation by Aligning Object Tags
|
Multimodal Machine Translation (MMT) enriches the source text with visual
information for translation. It has gained popularity in recent years, and
several pipelines have been proposed in the same direction. Yet, the task lacks
quality datasets to illustrate the contribution of visual modality in the
translation systems. In this paper, we propose our system under the team name
Volta for the Multimodal Translation Task of WAT 2021 from English to Hindi. We
also participate in the textual-only subtask of the same language pair for
which we use mBART, a pretrained multilingual sequence-to-sequence model. For
multimodal translation, we propose to enhance the textual input by bringing the
visual information to a textual domain by extracting object tags from the
image. We also explore the robustness of our system by systematically degrading
the source text. Finally, we achieve BLEU scores of 44.6 and 51.6 on the test
set and challenge set of the multimodal task, respectively.
| 2021 |
Computation and Language
|
A Coarse to Fine Question Answering System based on Reinforcement
Learning
|
In this paper, we present a coarse to fine question answering (CFQA) system
based on reinforcement learning which can efficiently process documents with
different lengths by choosing appropriate actions. The system is designed using
an actor-critic based deep reinforcement learning model to achieve multi-step
question answering. Compared to previous QA models targeting datasets mainly
containing either short or long documents, our multi-step coarse to fine model
takes the merits from multiple system modules, which can handle both short and
long documents. The system hence obtains much better accuracy and faster
training speed compared to the current state-of-the-art models. We test our
model on four QA datasets, WIKEREADING, WIKIREADING LONG, CNN and SQuAD, and
demonstrate 1.3$\%$-1.7$\%$ accuracy improvements with 1.5x-3.4x training
speed-ups in comparison to the baselines using state-of-the-art models.
| 2021 |
Computation and Language
|
Exploring Dynamic Selection of Branch Expansion Orders for Code
Generation
|
Due to the great potential in facilitating software development, code
generation has attracted increasing attention recently. Generally, dominant
models are Seq2Tree models, which convert the input natural language
description into a sequence of tree-construction actions corresponding to the
pre-order traversal of an Abstract Syntax Tree (AST). However, such a traversal
order may not be suitable for handling all multi-branch nodes. In this paper,
we propose to equip the Seq2Tree model with a context-based Branch Selector,
which is able to dynamically determine optimal expansion orders of branches for
multi-branch nodes. Particularly, since the selection of expansion orders is a
non-differentiable multi-step operation, we optimize the selector through
reinforcement learning, and formulate the reward function as the difference of
model losses obtained through different expansion orders. Experimental results
and in-depth analysis on several commonly-used datasets demonstrate the
effectiveness and generality of our approach. We have released our code at
https://github.com/DeepLearnXMU/CG-RL.
| 2021 |
Computation and Language
|
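The REINFORCE-style training of the branch selector, with the reward formulated as a loss difference between expansion orders, can be sketched as follows. The toy loss table stands in for the real Seq2Tree model loss, and the bare logit vector stands in for the paper's context-based selector network.

```python
import torch
from torch.distributions import Categorical

selector_logits = torch.zeros(3, requires_grad=True)   # 3 candidate orders
optimizer = torch.optim.SGD([selector_logits], lr=0.1)

def toy_model_loss(order_idx):
    # Stand-in for the Seq2Tree loss under a given expansion order.
    return [1.0, 0.4, 0.7][order_idx]

for _ in range(100):
    dist = Categorical(logits=selector_logits)
    order = dist.sample()
    # Reward: loss difference vs. the default pre-order expansion (order 0).
    reward = toy_model_loss(0) - toy_model_loss(order.item())
    loss = -dist.log_prob(order) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(selector_logits.softmax(dim=0))  # mass shifts to the lowest-loss order
```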
Preview, Attend and Review: Schema-Aware Curriculum Learning for
Multi-Domain Dialog State Tracking
|
Existing dialog state tracking (DST) models are trained with dialog data in a
random order, neglecting rich structural information in a dataset. In this
paper, we propose to use curriculum learning (CL) to better leverage both the
curriculum structure and schema structure for task-oriented dialogs.
Specifically, we propose a model-agnostic framework called Schema-aware
Curriculum Learning for Dialog State Tracking (SaCLog), which consists of a
preview module that pre-trains a DST model with schema information, a
curriculum module that optimizes the model with CL, and a review module that
augments mispredicted data to reinforce the CL training. We show that our
proposed approach improves DST performance over both a transformer-based and
RNN-based DST model (TripPy and TRADE) and achieves new state-of-the-art
results on WOZ2.0 and MultiWOZ2.1.
| 2,021 |
Computation and Language
|
LenAtten: An Effective Length Controlling Unit For Text Summarization
|
Fixed length summarization aims at generating summaries with a preset number
of words or characters. Most recent studies incorporate length information
with word embeddings as the input to the recurrent decoding unit, causing a
compromise between length controllability and summary quality. In this work, we
present an effective length controlling unit Length Attention (LenAtten) to
break this trade-off. Experimental results show that LenAtten not only brings
improvements in length controllability and ROUGE scores but also has great
generalization ability. In the task of generating a summary with the target
length, our model is 732 times better than the best-performing length
controllable summarizer in length controllability on the CNN/Daily Mail
dataset.
| 2021 |
Computation and Language
|
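The abstract does not detail LenAtten's internals, so the sketch below only illustrates the general idea of conditioning each decoding step on an explicit remaining-length signal, here via a learned length embedding concatenated to the decoder state. This is an assumption-level simplification, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LengthAwareDecoderStep(nn.Module):
    def __init__(self, hidden=64, vocab=1000, max_len=128):
        super().__init__()
        self.len_embed = nn.Embedding(max_len + 1, hidden)
        self.proj = nn.Linear(2 * hidden, vocab)

    def forward(self, dec_state, remaining_len):
        # dec_state: (batch, hidden); remaining_len: (batch,) word budget left
        len_vec = self.len_embed(remaining_len.clamp(min=0))
        return self.proj(torch.cat([dec_state, len_vec], dim=-1))

step = LengthAwareDecoderStep()
logits = step(torch.randn(2, 64), torch.tensor([30, 5]))
print(logits.shape)  # (2, 1000): next-word logits, length-conditioned
```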
Distribution Matching for Rationalization
|
The task of rationalization aims to extract pieces of input text as
rationales to justify neural network predictions on text classification tasks.
By definition, rationales represent key text pieces used for prediction and
thus should have similar classification feature distribution compared to the
original input text. However, previous methods mainly focused on maximizing the
mutual information between rationales and labels while neglecting the
relationship between rationales and input text. To address this issue, we
propose a novel rationalization method that matches the distributions of
rationales and input text in both the feature space and output space.
Empirically, the proposed distribution matching approach consistently
outperforms previous methods by a large margin. Our data and code are
available.
| 2021 |
Computation and Language
|
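A minimal sketch of the matching objective described above: penalize the distance between feature statistics of the selected rationale and of the full input text. Matching only batch means is a deliberate first-moment simplification; the paper's actual distance in feature and output space may differ.

```python
import torch

def distribution_match_loss(rationale_feats, text_feats):
    """Both inputs: (batch, dim) encoder features. L2 between batch means."""
    return (rationale_feats.mean(dim=0) - text_feats.mean(dim=0)).pow(2).sum()

r = torch.randn(32, 128)   # features of extracted rationales
t = torch.randn(32, 128)   # features of the original input text
print(distribution_match_loss(r, t))
```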
An In-depth Study on Internal Structure of Chinese Words
|
Unlike English letters, Chinese characters have rich and specific meanings.
Usually, the meaning of a word can be derived from its constituent characters
in some way. Several previous works on syntactic parsing propose to annotate
shallow word-internal structures for better utilizing character-level
information. This work proposes to model the deep internal structures of
Chinese words as dependency trees with 11 labels for distinguishing syntactic
relationships. First, based on newly compiled annotation guidelines, we
manually annotate a word-internal structure treebank (WIST) consisting of over
30K multi-char words from Chinese Penn Treebank. To guarantee quality, each
word is independently annotated by two annotators and inconsistencies are
handled by a third senior annotator. Second, we present detailed and
interesting analysis on WIST to reveal insights on Chinese word formation.
Third, we propose word-internal structure parsing as a new task, and conduct
benchmark experiments using a competitive dependency parser. Finally, we
present two simple ways to encode word-internal structures, leading to
promising gains on the sentence-level syntactic parsing task.
| 2021 |
Computation and Language
|
Replicating and Extending "Because Their Treebanks Leak": Graph
Isomorphism, Covariants, and Parser Performance
|
S{\o}gaard (2020) obtained results suggesting that the fraction of trees occurring
in the test data that are isomorphic to trees in the training set accounts for a
non-trivial variation in parser performance. Similar to other statistical
analyses in NLP, the results were based on evaluating linear regressions.
However, the study had methodological issues and was undertaken using a small
sample size leading to unreliable results. We present a replication study in
which we also bin sentences by length and find that only a small subset of
sentences varies in performance with respect to graph isomorphism. Further, the
correlation observed between parser performance and graph isomorphism in the
wild disappears when controlling for covariants. However, in a controlled
experiment, where covariants are kept fixed, we do observe a strong
correlation. We suggest that conclusions drawn from statistical analyses like
this need to be tempered and that controlled experiments can complement them by
more readily teasing factors apart.
| 2021 |
Computation and Language
|
Sub-Character Tokenization for Chinese Pretrained Language Models
|
Tokenization is fundamental to pretrained language models (PLMs). Existing
tokenization methods for Chinese PLMs typically treat each character as an
indivisible token. However, they ignore the unique feature of the Chinese
writing system where additional linguistic information exists below the
character level, i.e., at the sub-character level. To utilize such information,
we propose sub-character (SubChar for short) tokenization. Specifically, we
first encode the input text by converting each Chinese character into a short
sequence based on its glyph or pronunciation, and then construct the vocabulary
based on the encoded text with sub-word segmentation. Experimental results show
that SubChar tokenizers have two main advantages over existing tokenizers: 1)
They can tokenize inputs into much shorter sequences, thus improving the
computational efficiency. 2) Pronunciation-based SubChar tokenizers can encode
Chinese homophones into the same transliteration sequences and produce the same
tokenization output, hence being robust to homophone typos. At the same time,
models trained with SubChar tokenizers perform competitively on downstream
tasks. We release our code and models at
https://github.com/thunlp/SubCharTokenization to facilitate future work.
| 2023 |
Computation and Language
|
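The pronunciation-based encoding step is easy to demonstrate with the pypinyin package (an assumption; the paper may use a different transliteration tool). Two homophonous words map to the same encoded sequence, which is what makes the tokenizer robust to homophone typos; the subsequent sub-word segmentation over the encoded text is omitted here.

```python
# Requires: pip install pypinyin
from pypinyin import lazy_pinyin

def encode_pronunciation(text):
    """Map each Chinese character to its pinyin; homophones share one encoding."""
    return " ".join(lazy_pinyin(text))

print(encode_pronunciation("知识"))   # "zhi shi"
print(encode_pronunciation("只是"))   # also "zhi shi" -> same tokenization
```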
Nora: The Well-Being Coach
|
The current pandemic has forced people globally to remain in isolation and
practice social distancing, which creates the need for a system to combat the
resulting loneliness and negative emotions. In this paper we propose Nora, a
virtual coaching platform designed to utilize natural language understanding in
its dialogue system and suggest other recommendations based on user
interactions. It is intended to provide assistance and companionship to people
undergoing self-quarantine or work-from-home routines. Nora helps users gauge
their well-being by detecting and recording the user's emotion, sentiment, and
stress. Nora also recommends various workout, meditation, or yoga exercises to
users in support of developing a healthy daily routine. In addition, we provide
a social community inside Nora, where users can connect and share their
experiences with others undergoing a similar isolation procedure. Nora can be
accessed from anywhere via a web link and has support for both English and
Mandarin.
| 2021 |
Computation and Language
|
Dialogue-oriented Pre-training
|
Pre-trained language models (PrLMs) have been shown to be powerful in enhancing a
broad range of downstream tasks including various dialogue related ones.
However, PrLMs are usually trained on general plain text with common language
model (LM) training objectives, which cannot sufficiently capture dialogue
exclusive features due to the limitations of such a training setting, so that
there is an immediate need to fill the gap between a specific dialogue task and
the LM task. As it is impractical to collect huge amounts of dialogue data for
dialogue-oriented pre-training, in this paper, we propose three strategies to
simulate the conversation features on general plain text. Our proposed method
differs from existing post-training methods in that it may yield a general-purpose
PrLM and does not specialize to any particular task, while keeping the
capability of learning dialogue related features including speaker awareness,
continuity and consistency. The resulting Dialog-PrLM is fine-tuned on three
public multi-turn dialogue datasets and helps achieve significant and
consistent improvement over the plain PrLMs.
| 2021 |
Computation and Language
|
KGPool: Dynamic Knowledge Graph Context Selection for Relation
Extraction
|
We present a novel method for relation extraction (RE) from a single
sentence, mapping the sentence and two given entities to a canonical fact in a
knowledge graph (KG). Especially in this presumed sentential RE setting, the
context of a single sentence is often sparse. This paper introduces the KGPool
method to address this sparsity, dynamically expanding the context with
additional facts from the KG. It learns the representation of these facts
(entity alias, entity descriptions, etc.) using neural methods, supplementing
the sentential context. Unlike existing methods that statically use all
expanded facts, KGPool conditions this expansion on the sentence. We study the
efficacy of KGPool by evaluating it with different neural models and KGs
(Wikidata and NYT Freebase). Our experimental evaluation on standard datasets
shows that by feeding the KGPool representation into a Graph Neural Network,
the overall method is significantly more accurate than state-of-the-art
methods.
| 2,021 |
Computation and Language
|
SemEval-2021 Task 1: Lexical Complexity Prediction
|
This paper presents the results and main findings of SemEval-2021 Task 1 -
Lexical Complexity Prediction. We provided participants with an augmented
version of the CompLex Corpus (Shardlow et al., 2020). CompLex is an English
multi-domain corpus in which words and multi-word expressions (MWEs) were
annotated with respect to their complexity using a five-point Likert scale.
SemEval-2021 Task 1 featured two Sub-tasks: Sub-task 1 focused on single words
and Sub-task 2 focused on MWEs. The competition attracted 198 teams in total,
of which 54 teams submitted official runs on the test data to Sub-task 1 and 37
to Sub-task 2.
| 2,021 |
Computation and Language
|
DoT: An efficient Double Transformer for NLP tasks with tables
|
Transformer-based approaches have been successfully used to obtain
state-of-the-art accuracy on natural language processing (NLP) tasks with
semi-structured tables. These model architectures are typically deep, resulting
in slow training and inference, especially for long inputs. To improve
efficiency while maintaining high accuracy, we propose a new architecture,
DoT, a double transformer model that decomposes the problem into two
sub-tasks: a shallow pruning transformer that selects the top-K tokens,
followed by a deep task-specific transformer that takes as input those K
tokens. Additionally, we modify the task-specific attention to incorporate the
pruning scores. The two transformers are jointly trained by optimizing the
task-specific loss. We run experiments on three benchmarks, including
entailment and question answering. We show that, for a small drop in accuracy,
DoT improves training and inference time by at least 50%. We also show that the
pruning transformer effectively selects relevant tokens enabling the end-to-end
model to maintain similar accuracy as slower baseline models. Finally, we
analyse the pruning and give some insight into its impact on the task model.
| 2,021 |
Computation and Language
|
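A hedged PyTorch sketch of the two-stage design described in the DoT record above. Class and parameter names (DoTSketch, dim, k) are illustrative, not the paper's, and the paper's modification of the task-specific attention by pruning scores is only noted in a comment.

```python
# Speculative sketch, not the paper's implementation: a shallow "pruning"
# encoder scores tokens, and only the top-K survivors reach a deep
# task-specific encoder, shortening the expensive deep pass.
import torch
import torch.nn as nn

class DoTSketch(nn.Module):
    def __init__(self, vocab=30522, dim=256, k=128):
        super().__init__()
        self.k = k
        self.embed = nn.Embedding(vocab, dim)
        def enc(layers):
            return nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
                num_layers=layers)
        self.pruner = enc(2)         # shallow: cheaply scores every token
        self.task_encoder = enc(12)  # deep: sees only the K kept tokens
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)                        # (B, L, D)
        scores = self.score(self.pruner(x)).squeeze(-1)  # (B, L)
        idx = scores.topk(self.k, dim=1).indices.sort(dim=1).values
        kept = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        # The paper also feeds the pruning scores into the task attention;
        # this sketch simply encodes the surviving tokens.
        return self.task_encoder(kept)

out = DoTSketch()(torch.randint(0, 30522, (2, 512)))
print(out.shape)  # torch.Size([2, 128, 256])
```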
Towards Quantifiable Dialogue Coherence Evaluation
|
Automatic dialogue coherence evaluation has attracted increasing attention
and is crucial for developing promising dialogue systems. However, existing
metrics have two major limitations: (a) they are mostly trained in a simplified
two-level setting (coherent vs. incoherent), while humans give Likert-type
multi-level coherence scores, dubbed "quantifiable"; (b) their predicted
coherence scores cannot align with the actual human rating standards due to the
absence of human guidance during training. To address these limitations, we
propose Quantifiable Dialogue Coherence Evaluation (QuantiDCE), a novel
framework aiming to train a quantifiable dialogue coherence metric that can
reflect the actual human rating standards. Specifically, QuantiDCE includes two
training stages, Multi-Level Ranking (MLR) pre-training and Knowledge
Distillation (KD) fine-tuning. During MLR pre-training, a new MLR loss is
proposed for enabling the model to learn the coarse judgement of coherence
degrees. Then, during KD fine-tuning, the pre-trained model is further
fine-tuned to learn the actual human rating standards with only a small amount
of human-annotated data. To preserve generalizability even with limited
fine-tuning data, a novel KD regularization is introduced to retain the
knowledge learned at the
pre-training stage. Experimental results show that the model trained by
QuantiDCE presents stronger correlations with human judgements than the other
state-of-the-art metrics.
| 2,021 |
Computation and Language
|
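The multi-level ranking idea in the QuantiDCE record above can be sketched as a margin loss over ordinal coherence levels. This is a speculative reading for illustration only; the paper's actual MLR objective contains further terms.

```python
# Speculative sketch of a multi-level ranking objective: an example with
# a higher ordinal coherence level should outscore a lower-level one by
# at least a margin. Not the paper's exact loss.
import torch

def multi_level_ranking_loss(scores, levels, margin=0.3):
    """scores: (N,) predicted coherence; levels: (N,) ordinal labels."""
    loss = scores.new_zeros(())
    pairs = 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if levels[i] > levels[j]:
                loss = loss + torch.relu(margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

scores = torch.tensor([0.9, 0.2, 0.5], requires_grad=True)
levels = torch.tensor([2, 0, 1])
print(multi_level_ranking_loss(scores, levels))
```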
CIDER: Commonsense Inference for Dialogue Explanation and Reasoning
|
Commonsense inference to understand and explain human language is a
fundamental research problem in natural language processing. Explaining human
conversations poses a great challenge as it requires contextual understanding,
planning, inference, and several aspects of reasoning including causal,
temporal, and commonsense reasoning. In this work, we introduce CIDER -- a
manually curated dataset that contains dyadic dialogue explanations in the form
of implicit and explicit knowledge triplets inferred using contextual
commonsense inference. Extracting such rich explanations from conversations can
be conducive to improving several downstream applications. The annotated
triplets are categorized by the type of commonsense knowledge present (e.g.,
causal, conditional, temporal). We set up three different tasks conditioned on
the annotated dataset: Dialogue-level Natural Language Inference, Span
Extraction, and Multi-choice Span Selection. Baseline results obtained with
transformer-based models reveal that the tasks are difficult, paving the way
for promising future research. The dataset and the baseline implementations are
publicly available at https://cider-task.github.io/cider/.
| 2,021 |
Computation and Language
|
NewsEmbed: Modeling News through Pre-trained Document Representations
|
Effectively modeling text-rich fresh content such as news articles at
document-level is a challenging problem. To ensure a content-based model
generalizes well to a broad range of applications, it is critical to have a
training dataset that is large beyond the scale of human labels while achieving
the desired quality. In this work, we address those two challenges by proposing
a novel approach to mine semantically-relevant fresh documents, and their topic
labels, with little human supervision. Meanwhile, we design a multitask model
called NewsEmbed that alternates between a contrastive learning objective and a
multi-label classification objective to derive a universal document encoder. We
show that the proposed approach can provide billions of high-quality organic
training examples and can be naturally extended to a multilingual setting where
texts in different languages are encoded in the same semantic space. We
experimentally
demonstrate NewsEmbed's competitive performance across multiple natural
language understanding tasks, both supervised and unsupervised.
| 2,021 |
Computation and Language
|
SpanNER: Named Entity Re-/Recognition as Span Prediction
|
Recent years have seen the paradigm shift of Named Entity Recognition (NER)
systems from sequence labeling to span prediction. Despite its preliminary
effectiveness, the span prediction model's architectural bias has not been
fully understood. In this paper, we first investigate the strengths and
weaknesses when the span prediction model is used for named entity recognition
compared with the sequence labeling framework, and how to further improve it,
which motivates us to combine the complementary advantages of systems based on
different paradigms. We then reveal that span prediction can simultaneously
serve as a system combiner to re-recognize named entities from different
systems' outputs. We experimentally implement 154 systems on 11 datasets
covering three languages; comprehensive results show the effectiveness of span
prediction models serving both as base NER systems and as system combiners. We
make all code and datasets available: \url{https://github.com/neulab/spanner},
as well as an online system demo: \url{http://spanner.sh}. Our model also has
been deployed into the ExplainaBoard platform, which allows users to flexibly
perform a system combination of top-scoring systems in an interactive way:
\url{http://explainaboard.nlpedia.ai/leaderboard/task-ner/}.
| 2,021 |
Computation and Language
|
Validating GAN-BioBERT: A Methodology For Assessing Reporting Trends In
Clinical Trials
|
In the past decade, there has been much discussion about the issue of biased
reporting in clinical research. Despite this attention, there have been limited
tools developed for the systematic assessment of qualitative statements made in
clinical research, with most studies assessing qualitative statements relying
on the use of manual expert raters, which limits their size. Also, previous
attempts to develop larger scale tools, such as those using natural language
processing, were limited by both their accuracy and the number of categories
used for the classification of their findings. With these limitations in mind,
this study's goal was to develop a classification algorithm that was both
suitably accurate and finely grained to be applied on a large scale for
assessing the qualitative sentiment expressed in clinical trial abstracts.
Additionally, this study seeks to compare the performance of the proposed
algorithm, GAN-BioBERT, to previous studies as well as to expert manual rating
of clinical trial abstracts. This study develops a three-class sentiment
classification algorithm for clinical trial abstracts using a semi-supervised
natural language processing model based on the Bidirectional Encoder
Representations from Transformers (BERT) model, trained on a series of clinical
trial abstracts annotated by a group of experts in academic medicine. The
algorithm was found to have a classification accuracy of 91.3%,
with a macro F1-Score of 0.92, which is a significant improvement in accuracy
when compared to previous methods and expert ratings, while also making the
sentiment classification finer grained than previous studies. The proposed
algorithm, GAN-BioBERT, is a suitable classification model for the large-scale
assessment of qualitative statements in clinical trial literature, providing an
accurate, reproducible tool for the large-scale study of clinical publication
trends.
| 2,021 |
Computation and Language
|
VILA: Improving Structured Content Extraction from Scientific PDFs Using
Visual Layout Groups
|
Accurately extracting structured content from PDFs is a critical first step
for NLP over scientific papers. Recent work has improved extraction accuracy by
incorporating elementary layout information, e.g., each token's 2D position on
the page, into language model pretraining. We introduce new methods that
explicitly model VIsual LAyout (VILA) groups, i.e., text lines or text blocks,
to further improve performance. In our I-VILA approach, we show that simply
inserting special tokens denoting layout group boundaries into model inputs can
lead to a 1.9% Macro F1 improvement in token classification. In the H-VILA
approach, we show that hierarchical encoding of layout groups can result in
up to 47% inference-time reduction with less than 0.8% Macro F1 loss. Unlike
prior layout-aware approaches, our methods do not require expensive additional
pretraining, only fine-tuning, which we show can reduce training cost by up to
95%. Experiments are conducted on a newly curated evaluation suite, S2-VLUE,
that unifies existing automatically-labeled datasets and includes a new dataset
of manual annotations covering diverse papers from 19 scientific disciplines.
Pre-trained weights, benchmark datasets, and source code are available at
https://github.com/allenai/VILA.
| 2,022 |
Computation and Language
|
Implicit Representations of Meaning in Neural Language Models
|
Does the effectiveness of neural language models derive entirely from
accurate modeling of surface word co-occurrence statistics, or do these models
represent and reason about the world they describe? In BART and T5 transformer
language models, we identify contextual word representations that function as
models of entities and situations as they evolve throughout a discourse. These
neural representations have functional similarities to linguistic models of
dynamic semantics: they support a linear readout of each entity's current
properties and relations, and can be manipulated with predictable effects on
language generation. Our results indicate that prediction in pretrained neural
language models is supported, at least in part, by dynamic representations of
meaning and implicit simulation of entity state, and that this behavior can be
learned with only text as training data. Code and data are available at
https://github.com/belindal/state-probes .
| 2,021 |
Computation and Language
|
A systematic review of Hate Speech automatic detection using Natural
Language Processing
|
With the multiplication of social media platforms, which offer anonymity,
easy access, online community formation, and online debate, the issue of
hate speech detection and tracking becomes a growing challenge to society,
individuals, policy-makers, and researchers. Despite efforts to leverage
automatic techniques for detection and monitoring, their performance is still
far from satisfactory, which constantly calls for future research on
the issue. This paper provides a systematic review of the literature in this
field, with a focus on natural language processing and deep learning
technologies, highlighting the terminology, processing pipeline, and core
methods employed, with a focal point on deep learning architectures. From a
methodological perspective, we adopt the PRISMA guideline for a systematic
review of the last 10 years of literature from the ACM Digital Library and
Google Scholar. We then extensively discuss existing surveys, limitations, and
future research directions.
| 2,021 |
Computation and Language
|
Part of Speech and Universal Dependency effects on English Arabic
Machine Translation
|
In this paper, I elaborate on a method to evaluate machine translation
models based on their performance on underlying syntactic phenomena between
English and Arabic. This method is especially important because neural and
machine learning models are hard to fine-tune and change; finding a way to
evaluate them easily and on diverse phenomena would greatly help the task of
improving them.
| 2,021 |
Computation and Language
|
Higher-order Derivatives of Weighted Finite-state Machines
|
Weighted finite-state machines are a fundamental building block of NLP
systems. They have withstood the test of time -- from their early use in noisy
channel models in the 1990s up to modern-day neurally parameterized conditional
random fields. This work examines the computation of higher-order derivatives
with respect to the normalization constant for weighted finite-state machines.
We provide a general algorithm for evaluating derivatives of all orders, which
has not been previously described in the literature. In the case of
second-order derivatives, our scheme runs in the optimal $\mathcal{O}(A^2 N^4)$
time where $A$ is the alphabet size and $N$ is the number of states. Our
algorithm is significantly faster than prior algorithms. Additionally, our
approach leads to a significantly faster algorithm for computing second-order
expectations, such as covariance matrices and gradients of first-order
expectations.
| 2,023 |
Computation and Language
|
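For concreteness, the quantity studied in the record above, derivatives of a weighted FSA's normalization constant, can be probed with generic automatic differentiation on a toy machine. This is emphatically not the paper's dedicated O(A^2 N^4) algorithm, only an autodiff baseline under toy assumptions (paths truncated at length L, random weights).

```python
# Hedged illustration via autodiff, not the paper's algorithm: compute a
# (truncated) normalization constant Z of a weighted FSA, then take
# first- and second-order derivatives with respect to the weights.
import torch

A, N, L = 2, 3, 4  # alphabet size, number of states, max path length (toy)
W = torch.rand(A, N, N, requires_grad=True)  # per-symbol transition weights
alpha = torch.zeros(N); alpha[0] = 1.0       # start in state 0
omega = torch.zeros(N); omega[-1] = 1.0      # accept in the last state

T = W.sum(0)                 # total transition weight between states
Z = torch.zeros(())
v = alpha
for _ in range(L):           # sum path weights up to length L
    v = v @ T
    Z = Z + v @ omega

g = torch.autograd.grad(Z, W, create_graph=True)[0]  # first order
# one slice of the Hessian: second-order derivatives w.r.t. W[0, 0, 0]
h = torch.autograd.grad(g[0, 0, 0], W)[0]
print(Z.item(), h.shape)  # scalar Z; derivative tensor of shape (A, N, N)
```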
On Finding the $K$-best Non-projective Dependency Trees
|
The connection between the maximum spanning tree in a directed graph and the
best dependency tree of a sentence has been exploited by the NLP community.
However, for many dependency parsing schemes, an important detail of this
approach is that the spanning tree must have exactly one edge emanating from
the root. While work has been done to efficiently solve this problem for
finding the one-best dependency tree, no research has attempted to extend this
solution to finding the $K$-best dependency trees. This is arguably a more
important extension as a larger proportion of decoded trees will not be subject
to the root constraint of dependency trees. Indeed, we show that the rate of
root constraint violations increases by an average of $13$ times when decoding
with $K\!=\!50$ as opposed to $K\!=\!1$. In this paper, we provide a
simplification of the $K$-best spanning tree algorithm of Camerini et al.
(1980). Our simplification allows us to obtain a constant time speed-up over
the original algorithm. Furthermore, we present a novel extension of the
algorithm for decoding the $K$-best dependency trees of a graph which are
subject to a root constraint.
| 2,021 |
Computation and Language
|
DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text
Generation
|
We study the task of long-form opinion text generation, which faces at least
two distinct challenges. First, existing neural generation models fall short of
coherence, thus requiring efficient content planning. Second, diverse types of
information are needed to guide the generator to cover both subjective and
objective content. To this end, we propose DYPLOC, a generation framework that
conducts dynamic planning of content while generating the output based on a
novel design of mixed language models. To enrich the generation with diverse
content, we further propose to use large pre-trained models to predict relevant
concepts and to generate claims. We experiment with two challenging tasks on
newly collected datasets: (1) argument generation with Reddit ChangeMyView, and
(2) writing articles using New York Times' Opinion section. Automatic
evaluation shows that our model significantly outperforms competitive
comparisons. Human judges further confirm that our generations are more
coherent with richer content.
| 2,021 |
Computation and Language
|
CoRI: Collective Relation Integration with Data Augmentation for Open
Information Extraction
|
Integrating knowledge extracted from the Web into knowledge graphs (KGs) can
facilitate tasks like question answering. We study relation integration, which
aims to align free-text relations in subject-relation-object extractions to
relations in a target KG. To address the challenge that free-text relations are
ambiguous, previous methods exploit neighbor entities and relations for
additional context. However, the predictions are made independently, which can
be mutually inconsistent. We propose a two-stage Collective Relation
Integration (CoRI) model, where the first stage independently makes candidate
predictions, and the second stage employs a collective model that accesses all
candidate predictions to make globally coherent predictions. We further improve
the collective model with augmented data from the portion of the target KG that
is otherwise unused. Experiment results on two datasets show that CoRI can
significantly outperform the baselines, improving AUC from .677 to .748 and
from .716 to .780, respectively.
| 2,021 |
Computation and Language
|
What Ingredients Make for an Effective Crowdsourcing Protocol for
Difficult NLU Data Collection Tasks?
|
Crowdsourcing is widely used to create data for common natural language
understanding tasks. Despite the importance of these datasets for measuring and
refining model understanding of language, there has been little focus on the
crowdsourcing methods used for collecting the datasets. In this paper, we
compare the efficacy of interventions that have been proposed in prior work as
ways of improving data quality. We use multiple-choice question answering as a
testbed and run a randomized trial by assigning crowdworkers to write questions
under one of four different data collection protocols. We find that asking
workers to write explanations for their examples is an ineffective stand-alone
strategy for boosting NLU example difficulty. However, we find that training
crowdworkers, and then using an iterative process of collecting data, sending
feedback, and qualifying workers based on expert judgments is an effective
means of collecting challenging data. But using crowdsourced judgments, instead
of expert judgments, to qualify workers and send feedback does not prove to be
effective.
We observe that the data from the iterative protocol with expert assessments is
more challenging by several measures. Notably, the human--model gap on the
unanimous agreement portion of this data is, on average, twice as large as the
gap for the baseline protocol data.
| 2,021 |
Computation and Language
|
ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive
Summarization with Argument Mining
|
While online conversations can cover a vast amount of information in many
different formats, abstractive text summarization has focused primarily on
modeling news articles. This research gap is due, in part, to the lack
of standardized datasets for summarizing online discussions. To address this
gap, we design annotation protocols motivated by an
issues--viewpoints--assertions framework to crowdsource four new datasets on
diverse online conversation forms of news comments, discussion forums,
community question answering forums, and email threads. We benchmark
state-of-the-art models on our datasets and analyze characteristics associated
with the data. To create a comprehensive benchmark, we also evaluate these
models on widely-used conversation summarization datasets to establish strong
baselines in this domain. Furthermore, we incorporate argument mining through
graph construction to directly model the issues, viewpoints, and assertions
present in a conversation and filter noisy input, showing comparable or
improved results according to automatic and human evaluations.
| 2,021 |
Computation and Language
|
Comparing Test Sets with Item Response Theory
|
Recent years have seen numerous NLP datasets introduced to evaluate the
performance of fine-tuned models on natural language understanding tasks.
Recent results from large pretrained models, though, show that many of these
datasets are largely saturated and unlikely to be able to detect further
progress. What kind of datasets are still effective at discriminating among
strong models, and what kind of datasets should we expect to be able to detect
future improvements? To measure this uniformly across datasets, we draw on Item
Response Theory and evaluate 29 datasets using predictions from 18 pretrained
Transformer models on individual test examples. We find that Quoref, HellaSwag,
and MC-TACO are best suited for distinguishing among state-of-the-art models,
while SNLI, MNLI, and CommitmentBank seem to be saturated for current strong
models. We also observe that the span selection task format, which is used for
QA datasets like QAMR or SQuAD2.0, is effective in differentiating between
strong and weak models.
| 2,021 |
Computation and Language
|
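The Item Response Theory machinery behind the analysis above can be sketched with a two-parameter-logistic (2PL) model fit by gradient descent. The paper's exact IRT variant and fitting procedure may differ, and the response matrix below is random stand-in data rather than real model predictions.

```python
# Hedged 2PL IRT sketch: P(model m answers item i correctly) =
# sigmoid(a_i * (theta_m - b_i)), with item discrimination a_i, item
# difficulty b_i, and model ability theta_m, fit by maximum likelihood.
import torch

def fit_2pl(R, steps=2000, lr=0.05):
    """R: (models, items) 0/1 response matrix."""
    M, I = R.shape
    theta = torch.zeros(M, requires_grad=True)  # model abilities
    a = torch.ones(I, requires_grad=True)       # item discriminations
    b = torch.zeros(I, requires_grad=True)      # item difficulties
    opt = torch.optim.Adam([theta, a, b], lr=lr)
    for _ in range(steps):
        p = torch.sigmoid(a * (theta[:, None] - b))
        loss = torch.nn.functional.binary_cross_entropy(p, R)
        opt.zero_grad(); loss.backward(); opt.step()
    return theta, a, b

R = (torch.rand(18, 100) > 0.5).float()  # toy stand-in for real responses
theta, a, b = fit_2pl(R)
# items with large positive a discriminate best between strong and weak models
```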
Parameter-Efficient Neural Question Answering Models via Graph-Enriched
Document Representations
|
As the computational footprint of modern NLP systems grows, it becomes
increasingly important to arrive at more efficient models. We show that by
employing graph convolutional document representation, we can arrive at a
question answering system that performs comparably to, and in some cases
exceeds, the SOTA solutions, while using less than 5\% of their resources in
terms of trainable parameters. As it currently stands, a major issue in
applying GCNs to NLP is document representation. In this paper, we show that a
GCN-enriched document representation greatly improves the results seen on
HotPotQA, even when using a trivial topology. Our model (gQA) performs
admirably when compared to the current SOTA and requires little to no
preprocessing. In Shao et al. (2020), the authors suggest that graph networks
are not necessary for good performance in multi-hop QA. In this paper, we
suggest that large language models are not necessary for good performance by
showing that a na\"{i}ve implementation of a GCN performs comparably to SOTA
models based on pretrained language models.
| 2,021 |
Computation and Language
|
Claim Matching Beyond English to Scale Global Fact-Checking
|
Manual fact-checking does not scale well to serve the needs of the internet.
This issue is further compounded in non-English contexts. In this paper, we
discuss claim matching as a possible solution to scale fact-checking. We define
claim matching as the task of identifying pairs of textual messages containing
claims that can be served with one fact-check. We construct a novel dataset of
WhatsApp tipline and public group messages alongside fact-checked claims that
are first annotated for containing "claim-like statements" and then matched
with potentially similar items and annotated for claim matching. Our dataset
contains content in high-resource (English, Hindi) and lower-resource (Bengali,
Malayalam, Tamil) languages. We train our own embedding model using knowledge
distillation and a high-quality "teacher" model in order to address the
imbalance in embedding quality between the low- and high-resource languages in
our dataset. We provide evaluations on the performance of our solution and
compare with baselines and existing state-of-the-art multilingual embedding
models, namely LASER and LaBSE. We demonstrate that our performance exceeds
LASER and LaBSE in all settings. We release our annotated datasets, codebooks,
and trained embedding model to allow for further research.
| 2,021 |
Computation and Language
|
On the Efficacy of Adversarial Data Collection for Question Answering:
Results from a Large-Scale Randomized Study
|
In adversarial data collection (ADC), a human workforce interacts with a
model in real time, attempting to produce examples that elicit incorrect
predictions. Researchers hope that models trained on these more challenging
datasets will rely less on superficial patterns, and thus be less brittle.
However, despite ADC's intuitive appeal, it remains unclear when training on
adversarial datasets produces more robust models. In this paper, we conduct a
large-scale controlled study focused on question answering, assigning workers
at random to compose questions either (i) adversarially (with a model in the
loop); or (ii) in the standard fashion (without a model). Across a variety of
models and datasets, we find that models trained on adversarial data usually
perform better on other adversarial datasets but worse on a diverse collection
of out-of-domain evaluation sets. Finally, we provide a qualitative analysis of
adversarial (vs standard) data, identifying key differences and offering
guidance for future research.
| 2,021 |
Computation and Language
|
Conversational Question Answering: A Survey
|
Question answering (QA) systems provide a way of querying the information
available in various formats including, but not limited to, unstructured and
structured data in natural languages. It constitutes a considerable part of
conversational artificial intelligence (AI) which has led to the introduction
of a special research topic on Conversational Question Answering (CQA), wherein
a system is required to understand the given context and then engages in
multi-turn QA to satisfy the user's information needs. Whilst most existing
research work focuses on single-turn QA, the field of multi-turn QA has
recently gained attention and prominence owing to the availability of
large-scale, multi-turn QA datasets and the development of pre-trained language
models. With a good number of models and research papers added to the
literature every year, there is a dire need to arrange and present the related
work in a unified manner to streamline future research. This survey, therefore,
is an effort to present a comprehensive
review of the state-of-the-art research trends of CQA primarily based on
reviewed papers from 2016-2021. Our findings show that there has been a trend
shift from single-turn to multi-turn QA which empowers the field of
Conversational AI from different perspectives. This survey is intended to
provide a compendium for the research community, with the hope of laying a
strong foundation for the field of CQA.
| 2,021 |
Computation and Language
|
Evaluating Word Embeddings with Categorical Modularity
|
We introduce categorical modularity, a novel low-resource intrinsic metric to
evaluate word embedding quality. Categorical modularity is a graph modularity
metric based on the $k$-nearest neighbor graph constructed with embedding
vectors of words from a fixed set of semantic categories, in which the goal is
to measure the proportion of words that have nearest neighbors within the same
categories. We use a core set of 500 words belonging to 59 neurobiologically
motivated semantic categories in 29 languages and analyze three word embedding
models per language (FastText, MUSE, and subs2vec). We find moderate to strong
positive correlations between categorical modularity and performance on the
monolingual tasks of sentiment analysis and word similarity calculation and on
the cross-lingual task of bilingual lexicon induction both to and from English.
Overall, we suggest that categorical modularity provides non-trivial predictive
information about downstream task performance, with breakdowns of correlations
by model suggesting some meta-predictive properties about semantic information
loss as well.
| 2,021 |
Computation and Language
|
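The core ingredient of categorical modularity from the record above, a k-nearest-neighbor graph over category-labeled word vectors, can be sketched as follows. The full metric is a graph modularity score; the same-category proportion computed here is a simplification, and the vectors and labels are random stand-ins.

```python
# Hedged sketch of the k-NN computation behind categorical modularity:
# build a cosine k-NN graph over word vectors and measure how often
# nearest neighbors share a semantic category.
import numpy as np

def same_category_knn_rate(vectors, categories, k=3):
    """vectors: (n, d) embeddings; categories: length-n labels."""
    X = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)          # exclude self-neighbors
    knn = np.argsort(-sims, axis=1)[:, :k]   # indices of k nearest words
    cats = np.asarray(categories)
    return float((cats[knn] == cats[:, None]).mean())

vecs = np.random.randn(500, 300)           # toy stand-in for FastText vectors
cats = np.random.randint(0, 59, size=500)  # 59 semantic categories
print(same_category_knn_rate(vecs, cats, k=3))
```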
Efficient Passage Retrieval with Hashing for Open-domain Question
Answering
|
Most state-of-the-art open-domain question answering systems use a neural
retrieval model to encode passages into continuous vectors and extract them
from a knowledge source. However, such retrieval models often require large
memory to run because of the massive size of their passage index. In this
paper, we introduce Binary Passage Retriever (BPR), a memory-efficient neural
retrieval model that integrates a learning-to-hash technique into the
state-of-the-art Dense Passage Retriever (DPR) to represent the passage index
using compact binary codes rather than continuous vectors. BPR is trained with
a multi-task objective over two tasks: efficient candidate generation based on
binary codes and accurate reranking based on continuous vectors. Compared with
DPR, BPR substantially reduces the memory cost from 65GB to 2GB without a loss
of accuracy on two standard open-domain question answering benchmarks: Natural
Questions and TriviaQA. Our code and trained models are available at
https://github.com/studio-ousia/bpr.
| 2,021 |
Computation and Language
|
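A hedged sketch of the candidate-generation-plus-reranking scheme described in the BPR record above. BPR learns its hash end-to-end and packs codes into bits; the sign-based hash and unpacked uint8 codes here are illustrative simplifications.

```python
# Speculative two-stage retrieval sketch in the spirit of BPR: binary
# codes give cheap Hamming-distance candidate generation; the original
# continuous vectors rerank only the shortlisted candidates.
import numpy as np

def to_binary(V):
    # 1 bit of information per dimension (bit-packing omitted for clarity)
    return (V > 0).astype(np.uint8)

def search(q_vec, P, P_bin, n_candidates=100, top_k=10):
    q_bin = to_binary(q_vec)
    hamming = (P_bin != q_bin).sum(axis=1)   # distance on compact codes
    cand = np.argsort(hamming)[:n_candidates]
    scores = P[cand] @ q_vec                 # exact rerank on candidates only
    return cand[np.argsort(-scores)[:top_k]]

P = np.random.randn(10_000, 768).astype(np.float32)  # toy passage index
P_bin = to_binary(P)                                 # far smaller than float32
q = np.random.randn(768).astype(np.float32)
print(search(q, P, P_bin)[:5])
```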
Exploiting Global Contextual Information for Document-level Named Entity
Recognition
|
Most existing named entity recognition (NER) approaches are based on sequence
labeling models, which focus on capturing the local context dependencies.
However, the way of taking one sentence as input prevents the modeling of
non-sequential global context, which is useful especially when local context
information is limited or ambiguous. To this end, we propose a model called
Global Context enhanced Document-level NER (GCDoc) to leverage global
contextual information from two levels, i.e., both word and sentence. At the
word level, a document graph is constructed to model a wider range of
dependencies between words, from which an enriched contextual representation
for each word is obtained via graph neural networks (GNNs). To avoid
interference from noisy information, we further propose two strategies. First,
we apply epistemic uncertainty theory to identify tokens whose representations
are less reliable, thereby helping prune the document graph. Then, a selective
auxiliary classifier is proposed to effectively learn the weight of edges in
the document graph and reduce the importance of noisy neighbour nodes. At the
sentence level, to appropriately model wider context beyond a single sentence,
we employ a cross-sentence module which encodes adjacent sentences and fuses
them with the current sentence representation via attention and gating
mechanisms. Extensive
experiments on two benchmark NER datasets (CoNLL 2003 and Ontonotes 5.0 English
dataset) demonstrate the effectiveness of our proposed model. Our model reaches
F1 score of 92.22 (93.40 with BERT) on CoNLL 2003 dataset and 88.32 (90.49 with
BERT) on Ontonotes 5.0 dataset, achieving new state-of-the-art performance.
| 2,021 |
Computation and Language
|
High-Quality Diversification for Task-Oriented Dialogue Systems
|
Many task-oriented dialogue systems use deep reinforcement learning (DRL) to
learn policies that respond to the user appropriately and complete the tasks
successfully. Training DRL agents with diverse dialogue trajectories prepare
them well for rare user requests and unseen situations. One effective
diversification method is to let the agent interact with a diverse set of
learned user models. However, trajectories created by these artificial user
models may contain generation errors, which can quickly propagate into the
agent's policy. It is thus important to control the quality of the
diversification and resist the noise. In this paper, we propose a novel
dialogue diversification method for task-oriented dialogue systems trained in
simulators. Our method, Intermittent Short Extension Ensemble (I-SEE),
constrains the intensity of interaction with an ensemble of diverse user models
and effectively controls the quality of the diversification. Evaluations on the
Multiwoz dataset show that I-SEE successfully boosts the performance of several
state-of-the-art DRL dialogue agents.
| 2,021 |
Computation and Language
|
Solving Arithmetic Word Problems with Transformers and Preprocessing of
Problem Text
|
This paper outlines the use of Transformer networks trained to translate math
word problems to equivalent arithmetic expressions in infix, prefix, and
postfix notations. We compare results produced by many neural configurations
and find that most configurations outperform previously reported approaches on
three of four datasets with significant increases in accuracy of over 20
percentage points. The best neural approaches boost accuracy by 30% when
compared to the previous state-of-the-art on some datasets.
| 2,021 |
Computation and Language
|
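Since the record above targets infix, prefix, and postfix equation notations, a standard shunting-yard conversion (general background, not taken from the paper) makes the target formats concrete:

```python
# Classic shunting-yard algorithm: convert a tokenized infix arithmetic
# expression to postfix (reverse Polish) notation.
def infix_to_postfix(tokens):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, ops = [], []
    for t in tokens:
        if t == '(':
            ops.append(t)
        elif t == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()                        # discard the '('
        elif t in prec:
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[t]:
                out.append(ops.pop())
            ops.append(t)
        else:
            out.append(t)                    # operand
    while ops:
        out.append(ops.pop())
    return out

print(infix_to_postfix('( 3 + 4 ) * 2'.split()))  # ['3', '4', '+', '2', '*']
```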
Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in
Non-Autoregressive Translation
|
Knowledge distillation (KD) is commonly used to construct synthetic data for
training non-autoregressive translation (NAT) models. However, there exists a
discrepancy in low-frequency words between the distilled and the original data,
leading to more errors in predicting low-frequency words. To alleviate the
problem, we directly expose the raw data into NAT by leveraging pretraining. By
analyzing directed alignments, we found that KD makes low-frequency source
words aligned with targets more deterministically but fails to align sufficient
low-frequency words from target to source. Accordingly, we propose reverse KD
to rejuvenate more alignments for low-frequency target words. To make the most
of authentic and synthetic data, we combine these complementary approaches as a
new training strategy for further boosting NAT performance. We conduct
experiments on five translation benchmarks over two advanced architectures.
Results demonstrate that the proposed approach can significantly and
universally improve translation quality by reducing translation errors on
low-frequency words. Encouragingly, our approach achieves 28.2 and 33.9 BLEU
points on the WMT14 English-German and WMT16 Romanian-English datasets,
respectively. Our code, data, and trained models are available at
\url{https://github.com/alphadl/RLFW-NAT}.
| 2,022 |
Computation and Language
|
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into
Negotiation Dialogues
|
To successfully negotiate a deal, it is not enough to communicate fluently:
pragmatic planning of persuasive negotiation strategies is essential. While
modern dialogue agents excel at generating fluent sentences, they still lack
pragmatic grounding and cannot reason strategically. We present DialoGraph, a
negotiation system that incorporates pragmatic strategies in a negotiation
dialogue using graph neural networks. DialoGraph explicitly incorporates
dependencies between sequences of strategies to enable improved and
interpretable prediction of next optimal strategies, given the dialogue
context. Our graph-based method outperforms prior state-of-the-art negotiation
models both in the accuracy of strategy/dialogue act prediction and in the
quality of downstream dialogue response generation. We qualitatively show
further benefits of learned strategy-graphs in providing explicit associations
between effective negotiation strategies over the course of the dialogue,
leading to interpretable and strategic dialogues.
| 2,021 |
Computation and Language
|
OntoGUM: Evaluating Contextualized SOTA Coreference Resolution on 12
More Genres
|
SOTA coreference resolution produces increasingly impressive scores on the
OntoNotes benchmark. However, a lack of comparable data following the same
scheme for more genres makes it difficult to evaluate generalizability to
open-domain data. This paper provides a dataset and comprehensive evaluation
showing that the latest neural LM-based end-to-end systems degrade very
substantially out of domain. We make an OntoNotes-like coreference dataset
called OntoGUM publicly available, converted from GUM, an English corpus
covering 12 genres, using deterministic rules, and evaluate it. Thanks to the
rich syntactic and
discourse annotations in GUM, we are able to create the largest human-annotated
coreference corpus following the OntoNotes guidelines, and the first to be
evaluated for consistency with the OntoNotes scheme. Out-of-domain evaluation
across 12 genres shows nearly 15-20% degradation for both deterministic and
deep learning systems, indicating a lack of generalizability or covert
overfitting in existing coreference resolution models.
| 2,021 |
Computation and Language
|
Discrete Cosine Transform as Universal Sentence Encoder
|
Modern sentence encoders are used to generate dense vector representations
that capture the underlying linguistic characteristics for a sequence of words,
including phrases, sentences, or paragraphs. These kinds of representations are
ideal for training a classifier for an end task such as sentiment analysis,
question answering and text classification. Different models have been proposed
to efficiently generate general purpose sentence representations to be used in
pretraining protocols. While averaging is the most commonly used efficient
sentence encoder, Discrete Cosine Transform (DCT) was recently proposed as an
alternative that captures the underlying syntactic characteristics of a given
text without compromising practical efficiency compared to averaging. However,
as with most other sentence encoders, the DCT sentence encoder was only
evaluated in English. To this end, we utilize the DCT encoder to generate
universal sentence representations for different languages such as German,
French, Spanish, and Russian. The experimental results clearly show the
superior effectiveness
of DCT encoding in which consistent performance improvements are achieved over
strong baselines on multiple standardized datasets.
| 2,021 |
Computation and Language
|
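A minimal sketch of DCT-based sentence encoding as described in the record above, using scipy; the coefficient count c is an illustrative choice, and the vectors are random stand-ins for word embeddings.

```python
# Hedged sketch: apply a discrete cosine transform along the word axis of
# a sentence's embedding matrix and keep the first c coefficients per
# dimension, giving a fixed-size sentence vector (c=1 is proportional to
# plain averaging).
import numpy as np
from scipy.fft import dct

def dct_sentence_embedding(word_vectors, c=4):
    """word_vectors: (num_words, dim) -> (c * dim,) sentence vector."""
    coeffs = dct(word_vectors, norm='ortho', axis=0)  # transform over words
    return coeffs[:c].reshape(-1)                     # keep low-order terms

sent = np.random.randn(12, 300)            # toy stand-in for word embeddings
print(dct_sentence_embedding(sent).shape)  # (1200,)
```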
Self-Training Sampling with Monolingual Data Uncertainty for Neural
Machine Translation
|
Self-training has proven effective for improving NMT performance by
augmenting model training with synthetic parallel data. The common practice is
to construct synthetic data based on a randomly sampled subset of large-scale
monolingual data, which we empirically show is sub-optimal. In this work, we
propose to improve the sampling procedure by selecting the most informative
monolingual sentences to complement the parallel data. To this end, we compute
the uncertainty of monolingual sentences using the bilingual dictionary
extracted from the parallel data. Intuitively, monolingual sentences with lower
uncertainty generally correspond to easy-to-translate patterns which may not
provide additional gains. Accordingly, we design an uncertainty-based sampling
strategy to efficiently exploit the monolingual data for self-training, in
which monolingual sentences with higher uncertainty would be sampled with
higher probability. Experimental results on large-scale WMT
English$\Rightarrow$German and English$\Rightarrow$Chinese datasets demonstrate
the effectiveness of the proposed approach. Extensive analyses suggest that
emphasizing the learning on uncertain monolingual sentences by our approach
does improve the translation quality of high-uncertainty sentences and also
benefits the prediction of low-frequency words at the target side.
| 2,021 |
Computation and Language
|
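A hedged sketch of the uncertainty-based sampling described in the record above. The translation-probability table is a toy stand-in for a bilingual dictionary extracted from parallel data, and the paper's exact uncertainty definition may differ.

```python
# Speculative sketch: score each monolingual sentence by the average
# translation entropy of its words, then sample high-uncertainty
# sentences with higher probability for self-training.
import math, random

# toy P(target word | source word), e.g. estimated from word alignments
trans_probs = {
    "bank":  {"Bank": 0.5, "Ufer": 0.5},     # ambiguous -> high entropy
    "hello": {"hallo": 0.95, "moin": 0.05},  # easy -> low entropy
}

def word_entropy(w):
    dist = trans_probs.get(w)
    if not dist:
        return 0.0
    return -sum(p * math.log(p) for p in dist.values())

def sentence_uncertainty(sent):
    words = sent.split()
    return sum(word_entropy(w) for w in words) / max(len(words), 1)

def sample(sentences, n):
    weights = [sentence_uncertainty(s) + 1e-6 for s in sentences]
    return random.choices(sentences, weights=weights, k=n)

mono = ["hello there", "the bank is far", "hello hello"]
print(sample(mono, 2))  # ambiguous sentences are drawn more often
```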
Unsupervised Out-of-Domain Detection via Pre-trained Transformers
|
Deployed real-world machine learning applications are often subject to
uncontrolled and even potentially malicious inputs. Those out-of-domain inputs
can lead to unpredictable outputs and sometimes catastrophic safety issues.
Prior studies on out-of-domain detection require in-domain task labels and are
limited to supervised classification scenarios. Our work tackles the problem of
detecting out-of-domain samples with only unsupervised in-domain data. We
utilize the latent representations of pre-trained transformers and propose a
simple yet effective method to transform features across all layers to
construct out-of-domain detectors efficiently. Two domain-specific fine-tuning
approaches are further proposed to boost detection accuracy. Our empirical
evaluations of related methods on two datasets validate that our method greatly
improves out-of-domain detection ability in a more general scenario.
| 2,022 |
Computation and Language
|
A Multi-Level Attention Model for Evidence-Based Fact Checking
|
Evidence-based fact checking aims to verify the truthfulness of a claim
against evidence extracted from textual sources. Learning a representation that
effectively captures relations between a claim and evidence can be challenging.
Recent state-of-the-art approaches have developed increasingly sophisticated
models based on graph structures. We present a simple model that can be trained
on sequence structures. Our model enables inter-sentence attentions at
different levels and can benefit from joint training. Results on a large-scale
dataset for Fact Extraction and VERification (FEVER) show that our model
outperforms the graph-based approaches and yields 1.09% and 1.42% improvements
in label accuracy and FEVER score, respectively, over the best published model.
| 2,021 |
Computation and Language
|
When and Why does a Model Fail? A Human-in-the-loop Error Detection
Framework for Sentiment Analysis
|
Although deep neural networks have been widely employed and proven effective
in sentiment analysis tasks, it remains challenging for model developers to
assess their models for erroneous predictions that might exist prior to
deployment. Once deployed, emergent errors can be hard to identify in
prediction run-time and impossible to trace back to their sources. To address
such gaps, in this paper we propose an error detection framework for sentiment
analysis based on explainable features. We perform global-level feature
validation with human-in-the-loop assessment, followed by an integration of
global and local-level feature contribution analysis. Experimental results show
that, given limited human-in-the-loop intervention, our method is able to
identify erroneous model predictions on unseen data with high precision.
| 2,021 |
Computation and Language
|
Answer Generation for Retrieval-based Question Answering Systems
|
Recent advancements in transformer-based models have greatly improved the
ability of Question Answering (QA) systems to provide correct answers; in
particular, answer sentence selection (AS2) models, core components of
retrieval-based systems, have achieved impressive results. While generally
effective, these models fail to provide a satisfying answer when all retrieved
candidates are of poor quality, even if they contain correct information. In
AS2, models are trained to select the best answer sentence among a set of
candidates retrieved for a given question. In this work, we propose to generate
answers from a set of AS2 top candidates. Rather than selecting the best
candidate, we train a sequence-to-sequence transformer model to generate an
answer from a candidate set. Our tests on three English AS2 datasets show an
improvement of up to 32 absolute points in accuracy over the state of the art.
| 2,021 |
Computation and Language
|
RevCore: Review-augmented Conversational Recommendation
|
Existing conversational recommendation (CR) systems usually suffer from
insufficient item information when conducted on short dialogue history and
unfamiliar items. Incorporating external information (e.g., reviews) is a
potential solution to alleviate this problem. Given that reviews often provide
a rich and detailed user experience across different interests, they are
potentially ideal resources for providing high-quality recommendations within
an
informative conversation. In this paper, we design a novel end-to-end
framework, namely, Review-augmented Conversational Recommender (RevCore), where
reviews are seamlessly incorporated to enrich item information and assist in
generating both coherent and informative responses. In detail, we extract
sentiment-consistent reviews, perform review-enriched and entity-based
recommendations for item suggestions, as well as use a review-attentive
encoder-decoder for response generation. Experimental results demonstrate the
superiority of our approach in yielding better performance on both
recommendation and conversation responding.
| 2,021 |
Computation and Language
|
COM2SENSE: A Commonsense Reasoning Benchmark with Complementary
Sentences
|
Commonsense reasoning is intuitive for humans but has been a long-term
challenge for artificial intelligence (AI). Recent advancements in pretrained
language models have shown promising results on several commonsense benchmark
datasets. However, the reliability and comprehensiveness of these benchmarks
for assessing models' commonsense reasoning ability remain unclear. To
this end, we introduce a new commonsense reasoning benchmark dataset comprising
natural language true/false statements, with each sample paired with its
complementary counterpart, resulting in 4k sentence pairs. We propose a
pairwise accuracy metric to reliably measure an agent's ability to perform
commonsense reasoning over a given situation. The dataset is crowdsourced and
enhanced with an adversarial model-in-the-loop setup to incentivize challenging
samples. To facilitate a systematic analysis of commonsense capabilities, we
design our dataset along the dimensions of knowledge domains, reasoning
scenarios and numeracy. Experimental results demonstrate that our strongest
baseline (UnifiedQA-3B), after fine-tuning, achieves ~71% standard accuracy and
~51% pairwise accuracy, well below human performance (~95% for both metrics).
The dataset is available at https://github.com/PlusLabNLP/Com2Sense.
| 2,021 |
Computation and Language
|
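The pairwise accuracy metric proposed in the record above reduces to a simple computation: a complementary pair counts as correct only if both of its members are answered correctly. A minimal sketch, with hypothetical field names:

```python
# Minimal sketch of pairwise accuracy: credit is given per pair, and only
# when every member of the pair is predicted correctly.
def pairwise_accuracy(preds, golds, pair_ids):
    """preds/golds: per-sample labels; pair_ids[i] groups sample i with
    its complementary counterpart."""
    pairs = {}
    for p, g, pid in zip(preds, golds, pair_ids):
        pairs.setdefault(pid, []).append(p == g)
    return sum(all(v) for v in pairs.values()) / len(pairs)

preds    = [True, False, True, True]
golds    = [True, True,  True, True]
pair_ids = [0, 0, 1, 1]
print(pairwise_accuracy(preds, golds, pair_ids))  # 0.5: only pair 1 is fully correct
```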
Exploring Discourse Structures for Argument Impact Classification
|
Discourse relations among arguments reveal logical structures of a debate
conversation. However, no prior work has explicitly studied how the sequence of
discourse relations influences a claim's impact. This paper empirically shows
that the discourse relations between two arguments along the context path are
essential factors for identifying the persuasive power of an argument. We
further propose DisCOC to inject and fuse the sentence-level structural
discourse information with contextualized features derived from large-scale
language models. Experimental results and extensive analysis show that the
attention and gate mechanisms that explicitly model contexts and texts can
indeed help the argument impact classification task defined by Durmus et al.
(2019), and that discourse structures along the context path of the claim being
classified can further boost performance.
| 2,021 |
Computation and Language
|
Few-Shot Partial-Label Learning
|
Partial-label learning (PLL) generally focuses on inducing a noise-tolerant
multi-class classifier by training on overly-annotated samples, each of which
is annotated with a set of labels, but only one is the valid label. A basic
premise of existing PLL solutions is that there are sufficient partial-label
(PL) samples for training. However, it is more common than not to have just a
few PL samples at hand when dealing with new tasks. Furthermore, existing
few-shot
learning algorithms assume precise labels of the support set; as such,
irrelevant labels may seriously mislead the meta-learner and thus lead to a
compromised performance. How to enable PLL under a few-shot learning setting is
an important problem, but not yet well studied. In this paper, we introduce an
approach called FsPLL (Few-shot PLL). FsPLL first performs adaptive distance
metric learning by an embedding network and rectifying prototypes on the tasks
previously encountered. Next, it calculates the prototype of each class of a
new task in the embedding network. An unseen example can then be classified via
its distance to each prototype. Experimental results on widely-used few-shot
datasets (Omniglot and miniImageNet) demonstrate that our FsPLL achieves
superior performance to state-of-the-art methods across different settings, and
it needs fewer samples to quickly adapt to new tasks.
| 2,021 |
Computation and Language
|
SocAoG: Incremental Graph Parsing for Social Relation Inference in
Dialogues
|
Inferring social relations from dialogues is vital for building emotionally
intelligent robots to interpret human language better and act accordingly. We
model the social network as an And-or Graph, named SocAoG, to capture the
consistency of relations among a group and to leverage attributes as inference
cues.
Moreover, we formulate a sequential structure prediction task, and propose an
$\alpha$-$\beta$-$\gamma$ strategy to incrementally parse SocAoG for the
dynamic inference upon any incoming utterance: (i) an $\alpha$ process
predicting attributes and relations conditioned on the semantics of dialogues,
(ii) a $\beta$ process updating the social relations based on related
attributes, and (iii) a $\gamma$ process updating individuals' attributes based
on interpersonal social relations. Empirical results on DialogRE and MovieGraph
show that our model infers social relations more accurately than the
state-of-the-art methods. Moreover, the ablation study shows the three
processes complement each other, and the case study demonstrates the dynamic
relational inference.
| 2,022 |
Computation and Language
|
One Teacher is Enough? Pre-trained Language Model Distillation from
Multiple Teachers
|
Pre-trained language models (PLMs) achieve great success in NLP. However,
their huge model sizes hinder their applications in many practical systems.
Knowledge distillation is a popular technique to compress PLMs, which learns a
small student model from a large teacher PLM. However, the knowledge learned
from a single teacher may be limited and even biased, resulting in a
low-quality student model. In this paper, we propose a multi-teacher knowledge
distillation framework named MT-BERT for pre-trained language model
compression, which can train a high-quality student model from multiple teacher
PLMs. In MT-BERT, we
design a multi-teacher co-finetuning method to jointly finetune multiple
teacher PLMs in downstream tasks with shared pooling and prediction layers to
align their output space for better collaborative teaching. In addition, we
propose a multi-teacher hidden loss and a multi-teacher distillation loss to
transfer the useful knowledge in both hidden states and soft labels from
multiple teacher PLMs to the student model. Experiments on three benchmark
datasets validate the effectiveness of MT-BERT in compressing PLMs.
| 2,021 |
Computation and Language
|
Why Machine Reading Comprehension Models Learn Shortcuts?
|
Recent studies report that many machine reading comprehension (MRC) models
can perform closely to or even better than humans on benchmark datasets.
However, existing works indicate that many MRC models may learn shortcuts to
outwit these benchmarks, but the performance is unsatisfactory in real-world
applications. In this work, we attempt to explore, instead of the expected
comprehension skills, why these models learn the shortcuts. Based on the
observation that a large portion of questions in current datasets have shortcut
solutions, we argue that a larger proportion of shortcut questions in training
data makes models rely on shortcut tricks excessively. To investigate this
hypothesis, we carefully design two synthetic datasets with annotations that
indicate whether a question can be answered using shortcut solutions. We
further propose two new methods to quantitatively analyze the learning
difficulty of shortcut and challenging questions, and to reveal the inherent
learning mechanism behind the differing performance on the two
kinds of questions. A thorough empirical analysis shows that MRC models tend to
learn shortcut questions earlier than challenging questions, and the high
proportions of shortcut questions in training sets hinder models from exploring
the sophisticated reasoning skills in the later stage of training.
| 2,021 |
Computation and Language
|
Who Blames or Endorses Whom? Entity-to-Entity Directed Sentiment
Extraction in News Text
|
Understanding who blames or supports whom in news text is a critical research
question in computational social science. Traditional methods and datasets for
sentiment analysis are, however, not suitable for the domain of political text
as they do not consider the direction of sentiments expressed between entities.
In this paper, we propose a novel NLP task of identifying directed sentiment
relationship between political entities from a given news document, which we
call directed sentiment extraction. From a million-scale news corpus, we
construct a dataset of news sentences where sentiment relations of political
entities are manually annotated. We present a simple but effective approach for
utilizing a pretrained transformer, which infers the target class by predicting
multiple question-answering tasks and combining the outcomes. We demonstrate
the utility of our proposed method for social science research questions by
analyzing positive and negative opinions between political entities in two
major events: the 2016 U.S. presidential election and COVID-19. The newly
proposed
problem, data, and method will facilitate future studies on interdisciplinary
NLP methods and applications.
| 2,021 |
Computation and Language
|
Hi-Transformer: Hierarchical Interactive Transformer for Efficient and
Effective Long Document Modeling
|
The Transformer is important for text modeling. However, it has difficulty
handling long documents due to its quadratic complexity in input text length.
In order to handle this problem, we propose a hierarchical interactive
Transformer (Hi-Transformer) for efficient and effective long document
modeling. Hi-Transformer models documents in a hierarchical way, i.e., first
learns sentence representations and then learns document representations. It
can effectively reduce the complexity and meanwhile capture global document
context in the modeling of each sentence. More specifically, we first use a
sentence Transformer to learn the representations of each sentence. Then we use
a document Transformer to model the global document context from these sentence
representations. Next, we use another sentence Transformer to enhance sentence
modeling using the global document context. Finally, we use a hierarchical
pooling method to obtain the document embedding. Extensive experiments on three
benchmark datasets validate the efficiency and effectiveness of Hi-Transformer
in long document modeling.
| 2,021 |
Computation and Language
|
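A hedged PyTorch sketch of the hierarchical scheme described in the record above; the sizes, layer counts, and mean-pooling choices are illustrative, not the paper's configuration.

```python
# Speculative sketch: a sentence Transformer encodes each sentence, a
# document Transformer contextualizes the pooled sentence vectors, and a
# second sentence-level pass injects that global context back into token
# modeling; hierarchical pooling yields the document embedding.
import torch
import torch.nn as nn

D = 128
def enc():
    return nn.TransformerEncoder(
        nn.TransformerEncoderLayer(D, nhead=4, batch_first=True), num_layers=1)
sent_enc1, doc_enc, sent_enc2 = enc(), enc(), enc()

x = torch.randn(8, 32, D)          # one document: 8 sentences x 32 tokens
tok = sent_enc1(x)                 # sentence-level token encoding
sent = tok.mean(dim=1)             # pooled sentence vectors, (8, D)
sent_ctx = doc_enc(sent.unsqueeze(0)).squeeze(0)  # global document context
# prepend each sentence's global-context vector and re-encode its tokens
tok = sent_enc2(torch.cat([sent_ctx.unsqueeze(1), tok], dim=1))
doc = tok[:, 0].mean(dim=0)        # hierarchical pooling -> (D,) embedding
print(doc.shape)  # torch.Size([128])
```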
Examining the Inductive Bias of Neural Language Models with Artificial
Languages
|
Since language models are used to model a wide variety of languages, it is
natural to ask whether the neural architectures used for the task have
inductive biases towards modeling particular types of languages. Investigation
of these biases has proved complicated due to the many variables that appear in
the experimental setup. Languages vary in many typological dimensions, and it
is difficult to single out one or two to investigate without the others acting
as confounders. We propose a novel method for investigating the inductive
biases of language models using artificial languages. These languages are
constructed to allow us to create parallel corpora across languages that differ
only in the typological feature being investigated, such as word order. We then
use them to train and test language models. This constitutes a fully controlled
causal framework, and demonstrates how grammar engineering can serve as a
useful tool for analyzing neural models. Using this method, we find that
commonly used neural architectures exhibit different inductive biases: LSTMs
display little preference with respect to word ordering, while transformers
display a clear preference for some orderings over others. Further, we find
that neither the inductive bias of the LSTM nor that of the transformer appears
to reflect any tendencies that we see in attested natural languages.
| 2,021 |
Computation and Language
|
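A toy illustration of the controlled setup described above: a single abstract "grammar" generates parallel corpora that differ only in constituent order (here SVO vs. SOV), so that training identical language models on each corpus isolates the architecture's ordering preference. The vocabulary and grammar are invented for illustration.

```python
import random

SUBJECTS = ["dog", "cat", "bird"]
VERBS = ["sees", "chases", "likes"]
OBJECTS = ["ball", "mouse", "worm"]

def sample_structure(rng):
    """Sample the order-independent content of a sentence."""
    return rng.choice(SUBJECTS), rng.choice(VERBS), rng.choice(OBJECTS)

def linearize(structure, order):
    """Realize the same structure under a given constituent order."""
    s, v, o = structure
    slots = {"S": s, "V": v, "O": o}
    return " ".join(slots[c] for c in order)

rng = random.Random(0)
structures = [sample_structure(rng) for _ in range(5)]
for order in ("SVO", "SOV"):
    corpus = [linearize(st, order) for st in structures]
    print(order, corpus)
# Sentence i is identical across the two corpora except for word order, so any
# perplexity gap between LMs trained on them reflects inductive bias alone.
```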
Cascade versus Direct Speech Translation: Do the Differences Still Make
a Difference?
|
Five years after the first published proofs of concept, direct approaches to
speech translation (ST) are now competing with traditional cascade solutions.
In light of this steady progress, can we claim that the performance gap between
the two is closed? Starting from this question, we present a systematic
comparison between state-of-the-art systems representative of the two
paradigms. Focusing on three language directions
(English-German/Italian/Spanish), we conduct automatic and manual evaluations,
exploiting high-quality professional post-edits and annotations. Our
multi-faceted analysis on one of the few publicly available ST benchmarks
attests for the first time that: i) the gap between the two paradigms is now
closed, and ii) the subtle differences observed in their behavior are not
sufficient for humans either to distinguish the two systems or to prefer one
over the other.
| 2,021 |
Computation and Language
|
Minimax and Neyman-Pearson Meta-Learning for Outlier Languages
|
Model-agnostic meta-learning (MAML) has been recently put forth as a strategy
to learn resource-poor languages in a sample-efficient fashion. Nevertheless,
the properties of these languages are often not well represented by those
available during training. Hence, we argue that the i.i.d. assumption ingrained
in MAML makes it ill-suited for cross-lingual NLP. In fact, under a
decision-theoretic framework, MAML can be interpreted as minimising the
expected risk across training languages (with a uniform prior), which is known
as the Bayes criterion. To increase its robustness to outlier languages, we create
two variants of MAML based on alternative criteria: Minimax MAML reduces the
maximum risk across languages, while Neyman-Pearson MAML constrains the risk in
each language to a maximum threshold. Both criteria constitute fully
differentiable two-player games. In light of this, we propose a new adaptive
optimiser solving for a local approximation to their Nash equilibrium. We
evaluate both model variants on two popular NLP tasks, part-of-speech tagging
and question answering. We report gains for their average and minimum
performance across low-resource languages in zero- and few-shot settings,
compared to joint multi-source transfer and vanilla MAML.
| 2,021 |
Computation and Language
|
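A schematic sketch of the minimax variant as a two-player game: the model minimizes a weighted risk while an adversary shifts weight toward the worst-off language. The exponentiated-gradient update, the learning rates, and the toy quadratic `language_risk` are assumptions standing in for the MAML inner loop, not the paper's optimiser.

```python
import math

def language_risk(theta, lang):
    """Stub for the per-language meta-loss; a toy quadratic risk."""
    return (theta - lang) ** 2

langs = [0.0, 1.0, 4.0]                  # one "outlier" language at 4.0
theta = 0.0                              # shared model parameter
weights = [1 / len(langs)] * len(langs)  # adversary's distribution over languages
lr_theta, lr_w = 0.1, 0.5

for step in range(200):
    risks = [language_risk(theta, l) for l in langs]
    # Adversary: exponentiated-gradient ascent pushes weight to high-risk languages.
    weights = [w * math.exp(lr_w * r) for w, r in zip(weights, risks)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Model: gradient descent on the weighted risk, d/dtheta of (theta - l)^2.
    grad = sum(w * 2 * (theta - l) for w, l in zip(weights, langs))
    theta -= lr_theta * grad

# theta should settle near 2.0, equalizing the risk of the two extreme languages.
print(round(theta, 3), [round(w, 3) for w in weights])
```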
John praised Mary because he? Implicit Causality Bias and Its
Interaction with Explicit Cues in LMs
|
Some interpersonal verbs can implicitly attribute causality to either their
subject or their object and are therefore said to carry an implicit causality
(IC) bias. Through this bias, causal links can be inferred from a narrative,
aiding language comprehension. We investigate whether pre-trained language
models (PLMs) encode IC bias and use it at inference time. We find that to be
the case, albeit to different degrees, for three distinct PLM architectures.
However, causes do not always need to be implicit -- when a cause is explicitly
stated in a subordinate clause, an incongruent IC bias associated with the verb
in the main clause leads to a delay in human processing. We hypothesize that
the temporary challenge humans face in integrating the two contradicting
signals, one from the lexical semantics of the verb, one from the
sentence-level semantics, would be reflected in higher error rates for models
on tasks dependent on causal links. The results of our study lend support to
this hypothesis, suggesting that PLMs tend to prioritize lexical patterns over
higher-order signals.
| 2,021 |
Computation and Language
|
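A minimal probe in the spirit of the study above: compare a masked LM's preference for "he" versus "she" as the cause after an IC verb. The model choice, the single fixed continuation, and the two-pronoun comparison are simplifying assumptions rather than the paper's exact protocol; the sketch requires the `transformers` library.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def pronoun_scores(verb: str) -> dict:
    """Score 'he' vs 'she' as the cause in '<subj> <verb> <obj> because ...'."""
    # The fixed continuation is an arbitrary simplification.
    text = f"john {verb} mary because {fill.tokenizer.mask_token} did a great job."
    return {r["token_str"].strip(): round(r["score"], 4)
            for r in fill(text, targets=["he", "she"])}

# 'praised' is typically object-biased (she); 'apologized to' subject-biased (he).
for verb in ("praised", "apologized to"):
    print(verb, pronoun_scores(verb))
```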
Generating Informative Conclusions for Argumentative Texts
|
The purpose of an argumentative text is to support a certain conclusion. Yet,
conclusions are often omitted, and readers are expected to infer them. While
appropriate when reading an individual text, this rhetorical device limits
accessibility when browsing many texts (e.g., on a search engine or on social
media). In these scenarios, an explicit conclusion makes for a good candidate
summary of an argumentative text. This is especially true if the conclusion is
informative, emphasizing specific concepts from the text. With this paper we
introduce the task of generating informative conclusions: First,
Webis-ConcluGen-21 is compiled, a large-scale corpus of 136,996 samples of
argumentative texts and their conclusions. Second, two paradigms for conclusion
generation are investigated; one extractive, the other abstractive in nature.
The latter exploits argumentative knowledge to augment the data via control
codes and fine-tunes the BART model on several subsets of the corpus. Third,
insights are provided into the suitability of our corpus for the task, the
differences between the two generation paradigms, the trade-off between
informativeness and conciseness, and the impact of encoding argumentative
knowledge. The corpus, code, and the trained models are publicly available.
| 2,021 |
Computation and Language
|
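A hedged sketch of the abstractive paradigm described above: a control code prepended to the argumentative text steers generation, and BART is fine-tuned on (text, conclusion) pairs. The `<claim>` code, the example pair, and the single training step are illustrative; the paper's actual control codes and corpus subsets may differ.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical control code marking the desired conclusion type.
text = "<claim> School uniforms reduce peer pressure, and they are cheaper ..."
conclusion = "Schools should adopt uniforms."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
labels = tokenizer(conclusion, return_tensors="pt").input_ids

# One illustrative fine-tuning step on a (text, conclusion) pair.
loss = model(**inputs, labels=labels).loss
loss.backward()

# After fine-tuning, generation is conditioned on the control code.
generated = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```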