Titles | Abstracts | Years | Categories
---|---|---|---|
Balancing Training for Multilingual Neural Machine Translation | When training multilingual machine translation (MT) models that can translate
to/from multiple languages, we are faced with imbalanced training sets: some
languages have much more training data than others. Standard practice is to
up-sample less resourced languages to increase representation, and the degree
of up-sampling has a large effect on the overall performance. In this paper, we
propose a method that instead automatically learns how to weight training data
through a data scorer that is optimized to maximize performance on all test
languages. Experiments on two sets of languages under both one-to-many and
many-to-one MT settings show that our method not only consistently outperforms
heuristic baselines in terms of average performance, but also offers flexible
control over which languages' performance is prioritized.
| 2,020 | Computation and Language |
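The heuristic baseline referenced above is commonly implemented as temperature-based up-sampling over per-language data sizes. A minimal sketch of that heuristic (not the learned data scorer proposed in the paper; the temperature value is an illustrative assumption):

```python
# Temperature-based up-sampling of training languages: a common heuristic
# baseline, not the learned data scorer proposed in the paper above.
def sampling_probs(sizes, temperature=5.0):
    """sizes: dict mapping language -> number of training sentences."""
    # Raise the empirical data distribution to the power 1/T, then renormalize.
    total = sum(sizes.values())
    scaled = {lang: (n / total) ** (1.0 / temperature) for lang, n in sizes.items()}
    norm = sum(scaled.values())
    return {lang: p / norm for lang, p in scaled.items()}

# Example: one high-resource and one low-resource language.
print(sampling_probs({"fr": 1_000_000, "gl": 10_000}))
# With T=5 the low-resource language is sampled far more often than its
# raw share (~1%) of the data would suggest.
```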
A Simple Yet Strong Pipeline for HotpotQA | State-of-the-art models for multi-hop question answering typically augment
large-scale language models like BERT with additional, intuitively useful
capabilities such as named entity recognition, graph-based reasoning, and
question decomposition. However, does their strong performance on popular
multi-hop datasets really justify this added design complexity? Our results
suggest that the answer may be no, because even our simple pipeline based on
BERT, named Quark, performs surprisingly well. Specifically, on HotpotQA, Quark
outperforms these models on both question answering and support identification
(and achieves performance very close to a RoBERTa model). Our pipeline has
three steps: 1) use BERT to identify potentially relevant sentences
independently of each other; 2) feed the set of selected sentences as context
into a standard BERT span prediction model to choose an answer; and 3) use the
sentence selection model, now with the chosen answer, to produce supporting
sentences. The strong performance of Quark resurfaces the importance of
carefully exploring simple model designs before using popular benchmarks to
justify the value of complex techniques.
| 2,020 | Computation and Language |
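The three-step pipeline described above lends itself to a thin orchestration layer. A rough sketch, assuming three already fine-tuned models exposed as the hypothetical callables `score_sentence`, `predict_span`, and `score_support` (names are illustrative placeholders, not the authors' code):

```python
# Illustrative orchestration of the three-step pipeline described above.
# score_sentence, predict_span and score_support stand in for fine-tuned
# BERT models; they are hypothetical placeholders.

def quark_pipeline(question, sentences, score_sentence, predict_span,
                   score_support, k=5, support_threshold=0.5):
    # Step 1: score each sentence independently for relevance to the question.
    ranked = sorted(sentences, key=lambda s: score_sentence(question, s),
                    reverse=True)
    context = ranked[:k]

    # Step 2: run span prediction over the selected sentences as the context.
    answer = predict_span(question, " ".join(context))

    # Step 3: re-run sentence selection, now conditioned on the chosen answer,
    # to produce the supporting sentences.
    support = [s for s in sentences
               if score_support(question, answer, s) > support_threshold]
    return answer, support
```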
Mining Coronavirus (COVID-19) Posts in Social Media | The World Health Organization (WHO) characterized the novel coronavirus
(COVID-19) as a global pandemic on March 11th, 2020. Well before this, on
January 27th, while the majority of infection cases were still being reported
in China and on a few cruise ships, we began crawling
social media user postings using the Twitter search API. Our goal was to
leverage machine learning and linguistic tools to better understand the impact
of the outbreak in China. Unlike our initial expectation to monitor a local
outbreak, COVID-19 rapidly spread across the globe. In this short article we
report the preliminary results of our study on automatically detecting the
positive reports of COVID-19 from social media user postings using
state-of-the-art machine learning models.
| 2,020 | Computation and Language |
A Human Evaluation of AMR-to-English Generation Systems | Most current state-of-the-art systems for generating English text from
Abstract Meaning Representation (AMR) have been evaluated only using automated
metrics, such as BLEU, which are known to be problematic for natural language
generation. In this work, we present the results of a new human evaluation
which collects fluency and adequacy scores, as well as categorization of error
types, for several recent AMR generation systems. We discuss the relative
quality of these systems and how our results compare to those of automatic
metrics, finding that while the metrics are mostly successful in ranking
systems overall, collecting human judgments allows for more nuanced
comparisons. We also analyze common errors made by these systems.
| 2,020 | Computation and Language |
On the Linguistic Capacity of Real-Time Counter Automata | Counter machines have achieved a newfound relevance to the field of natural
language processing (NLP): recent work suggests some strong-performing
recurrent neural networks utilize their memory as counters. Thus, one potential
way to understand the success of these networks is to revisit the theory of
counter computation. Therefore, we study the abilities of real-time counter
machines as formal grammars, focusing on formal properties that are relevant
for NLP models. We first show that several variants of the counter machine
converge to express the same class of formal languages. We also prove that
counter languages are closed under complement, union, intersection, and many
other common set operations. Next, we show that counter machines cannot
evaluate boolean expressions, even though they can weakly validate their
syntax. This has implications for the interpretability and evaluation of neural
network systems: successfully matching syntactic patterns does not guarantee
that counter memory accurately encodes compositional semantics. Finally, we
consider whether counter languages are semilinear. This work makes general
contributions to the theory of formal languages that are of potential interest
for understanding recurrent neural networks.
| 2,021 | Computation and Language |
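For intuition, a single counter already suffices to recognize the canonical non-regular language a^n b^n in real time. A minimal sketch (assuming one counter and a strict a-block followed by a b-block):

```python
# A one-counter machine recognizing a^n b^n in real time: increment on 'a',
# decrement on 'b', reject on any out-of-order symbol or negative counter.
def accepts_anbn(s: str) -> bool:
    count, seen_b = 0, False
    for ch in s:
        if ch == 'a':
            if seen_b:          # an 'a' after a 'b' breaks the pattern
                return False
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0

assert accepts_anbn("aaabbb") and not accepts_anbn("aabbb")
```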
Coreferential Reasoning Learning for Language Representation | Language representation models such as BERT can effectively capture
contextual semantic information from plain text, and have been shown to
achieve promising results on many downstream NLP tasks with appropriate
fine-tuning. However, most existing language representation models cannot
explicitly handle coreference, which is essential to the coherent understanding
of the whole discourse. To address this issue, we present CorefBERT, a novel
language representation model that can capture the coreferential relations in
context. The experimental results show that, compared with existing baseline
models, CorefBERT can achieve significant improvements consistently on various
downstream NLP tasks that require coreferential reasoning, while maintaining
comparable performance to previous models on other common NLP tasks. The source
code and experiment details of this paper can be obtained from
https://github.com/thunlp/CorefBERT.
| 2,020 | Computation and Language |
TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented
Dialogue | The underlying difference in linguistic patterns between general text and
task-oriented dialogue makes existing pre-trained language models less useful
in practice. In this work, we unify nine human-human and multi-turn
task-oriented dialogue datasets for language modeling. To better model dialogue
behavior during pre-training, we incorporate user and system tokens into the
masked language modeling. We propose a contrastive objective function to
simulate the response selection task. Our pre-trained task-oriented dialogue
BERT (TOD-BERT) outperforms strong baselines like BERT on four downstream
task-oriented dialogue applications, including intention recognition, dialogue
state tracking, dialogue act prediction, and response selection. We also show
that TOD-BERT has a stronger few-shot ability that can mitigate the data
scarcity problem for task-oriented dialogue.
| 2,020 | Computation and Language |
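The pre-training setup described above boils down to serializing each dialogue with speaker tokens before masked language modeling. A minimal sketch of that serialization step (the exact special-token strings used by the released model may differ; these are assumptions based on the description above):

```python
# Flatten a multi-turn dialogue into a single sequence with speaker tokens
# ahead of masked language modeling. Special-token strings are illustrative.
def serialize_dialogue(turns):
    """turns: list of (speaker, utterance) with speaker in {'user', 'system'}."""
    pieces = []
    for speaker, utterance in turns:
        tag = "[USR]" if speaker == "user" else "[SYS]"
        pieces.append(f"{tag} {utterance}")
    return " ".join(pieces)

print(serialize_dialogue([
    ("user", "I need a cheap hotel in the centre."),
    ("system", "There are two options. Do you need parking?"),
    ("user", "Yes, please book it for two nights."),
]))
```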
Framing COVID-19: How we conceptualize and discuss the pandemic on
Twitter | In these weeks, doctors and nurses are busy in the trenches, fighting against
a new invisible enemy: Covid-19. Cities are locked down and civilians are
besieged in their own homes to prevent the spread of the virus. War-related
terminology is commonly used to frame the discourse around epidemics and
diseases. Arguably, the discourse around the current epidemic will make use of
war-related metaphors too, not only in public discourse and the media, but also
in tweets written by non-experts in mass communication. We hereby present
an analysis of the discourse around #Covid-19, based on a corpus of 200k tweets
posted on Twitter during March and April 2020. Using topic modelling we first
analyze the topics around which the discourse can be classified. Then, we show
that the WAR framing is used to talk about specific topics, such as the virus
treatment, but not others, such as the effects of social distancing on the
population. We then measure and compare the popularity of the WAR frame to
three alternative figurative frames (MONSTER, STORM and TSUNAMI) and a literal
frame used as control (FAMILY). The results show that while the FAMILY literal
frame covers a wider portion of the corpus, among the figurative framings WAR
is the most frequently used, and thus arguably the most conventional one.
However, this frame is not apt for elaborating the discourse around
many aspects of the current situation. We therefore conclude, in line
with previous suggestions, that a plethora of framing options, or a metaphor menu,
may facilitate the communication of various aspects involved in the
Covid-19-related discourse on the social media, and thus support civilians in
the expression of their feelings, opinions and ideas during the current
pandemic.
| 2,020 | Computation and Language |
Exploring Probabilistic Soft Logic as a framework for integrating
top-down and bottom-up processing of language in a task context | This technical report describes a new prototype architecture designed to
integrate top-down and bottom-up analysis of non-standard linguistic input,
where a semantic model of the context of an utterance is used to guide the
analysis of the non-standard surface forms, including their automated
normalization in context. While the architecture is generally applicable, as a
concrete use case of the architecture we target the generation of
semantically-informed target hypotheses for answers written by German learners
in response to reading comprehension questions, where the reading context and
possible target answers are given.
The architecture integrates existing NLP components to produce candidate
analyses on eight levels of linguistic modeling, all of which are broken down
into atomic statements and connected into a large graphical model using
Probabilistic Soft Logic (PSL) as a framework. Maximum a posteriori inference
on the resulting graphical model then assigns a belief distribution to
candidate target hypotheses. The current version of the architecture builds on
Universal Dependencies (UD) as its representation formalism on the form level
and on Abstract Meaning Representations (AMRs) to represent semantic analyses
of learner answers and the context information provided by the target answers.
These general choices will make it comparatively straightforward to apply the
architecture to other tasks and other languages.
| 2,020 | Computation and Language |
Gestalt: a Stacking Ensemble for SQuAD2.0 | We propose a deep-learning system -- for the SQuAD2.0 task -- that finds, or
indicates the lack of, a correct answer to a question in a context paragraph.
Our goal is to learn an ensemble of heterogeneous SQuAD2.0 models that, when
blended properly, outperforms the best model in the ensemble per se. We created
a stacking ensemble that combines top-N predictions from two models, based on
ALBERT and RoBERTa, into a multiclass classification task to pick the best
answer out of their predictions. We explored various ensemble configurations,
input representations, and model architectures. For evaluation, we examined
test-set EM and F1 scores; our best-performing ensemble incorporated a
CNN-based meta-model and scored 87.117 and 90.306, respectively -- a relative
improvement of 0.55% for EM and 0.61% for F1 scores, compared to the baseline
performance of the best model in the ensemble, an ALBERT-based model, at 86.644
for EM and 89.760 for F1.
| 2,020 | Computation and Language |
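The stacking setup described above amounts to turning the top-N candidate scores from each base reader into a feature vector for a meta-classifier. A toy sketch with scikit-learn, using a logistic regression as a stand-in for the CNN meta-model mentioned above (feature layout and random data are illustrative assumptions):

```python
# Toy stacking ensemble: each base reader contributes its top-N candidate
# scores, and a meta-classifier picks which candidate (or "no answer") wins.
# A logistic regression stands in for the CNN meta-model mentioned above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_questions, n_candidates = 200, 6   # e.g. top-3 from ALBERT + top-3 from RoBERTa

# Features: one confidence score per candidate from the two base models.
X = rng.random((n_questions, n_candidates))
# Label: index of the correct candidate, or n_candidates for "unanswerable".
y = rng.integers(0, n_candidates + 1, size=n_questions)

meta = LogisticRegression(max_iter=1000)
meta.fit(X, y)
print("picked candidate for first question:", meta.predict(X[:1])[0])
```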
Analyzing analytical methods: The case of phonology in neural models of
spoken language | Despite the fast development of analysis techniques for NLP and speech
processing systems, few systematic studies have been conducted to compare the
strengths and weaknesses of each method. As a step in this direction we study
the case of representations of phonology in neural network models of spoken
language. We use two commonly applied analytical techniques, diagnostic
classifiers and representational similarity analysis, to quantify to what
extent neural activation patterns encode phonemes and phoneme sequences. We
manipulate two factors that can affect the outcome of analysis. First, we
investigate the role of learning by comparing neural activations extracted from
trained versus randomly-initialized models. Second, we examine the temporal
scope of the activations by probing both local activations corresponding to a
few milliseconds of the speech signal, and global activations pooled over the
whole utterance. We conclude that reporting analysis results with randomly
initialized models is crucial, and that global-scope methods tend to yield more
consistent results; we recommend their use as a complement to local-scope
diagnostic methods.
| 2,023 | Computation and Language |
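A diagnostic classifier in the sense used above is simply a supervised probe fitted on frozen activations. A minimal sketch with scikit-learn on synthetic data (the activation matrix and phoneme labels are placeholders, not the paper's data):

```python
# Diagnostic classifier: fit a simple probe on frozen network activations to
# test whether they encode phoneme identity. Synthetic data stands in for
# activations from trained vs. randomly initialized models discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_frames, dim, n_phonemes = 2000, 64, 40
activations = rng.normal(size=(n_frames, dim))     # placeholder activations
phonemes = rng.integers(0, n_phonemes, size=n_frames)

probe = LogisticRegression(max_iter=2000)
acc = cross_val_score(probe, activations, phonemes, cv=5).mean()
# Compare this accuracy between a trained model and a random-weight control;
# only the gap over the control is evidence that training encoded phonemes.
print(f"probe accuracy: {acc:.3f}")
```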
Bayesian Hierarchical Words Representation Learning | This paper presents the Bayesian Hierarchical Words Representation (BHWR)
learning algorithm. BHWR facilitates Variational Bayes word representation
learning combined with semantic taxonomy modeling via hierarchical priors. By
propagating relevant information between related words, BHWR utilizes the
taxonomy to improve the quality of such representations. Evaluation on several
linguistic datasets demonstrates the advantages of BHWR over suitable
alternatives that facilitate Bayesian modeling with or without semantic priors.
Finally, we further show that BHWR produces better representations for rare
words.
| 2,020 | Computation and Language |
PALM: Pre-training an Autoencoding&Autoregressive Language Model for
Context-conditioned Generation | Self-supervised pre-training, such as BERT, MASS and BART, has emerged as a
powerful technique for natural language understanding and generation. Existing
pre-training techniques employ autoencoding and/or autoregressive objectives to
train Transformer-based models by recovering original word tokens from
corrupted text with some masked tokens. The training goals of existing
techniques are often inconsistent with the goals of many language generation
tasks, such as generative question answering and conversational response
generation, which produce new text given context.
This work presents PALM with a novel scheme that jointly pre-trains an
autoencoding and autoregressive language model on a large unlabeled corpus,
specifically designed for generating new text conditioned on context. The new
scheme alleviates the mismatch introduced by the existing denoising scheme
between pre-training and fine-tuning where generation is more than
reconstructing the original text. An extensive set of experiments shows that PALM
achieves new state-of-the-art results on a variety of language generation
benchmarks covering generative question answering (Rank 1 on the official MARCO
leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword,
question generation on SQuAD, and conversational response generation on Cornell
Movie Dialogues.
| 2,020 | Computation and Language |
SPECTER: Document-level Representation Learning using Citation-informed
Transformers | Representation learning is a critical ingredient for natural language
processing systems. Recent Transformer language models like BERT learn powerful
textual representations, but these models are targeted towards token- and
sentence-level training objectives and do not leverage information on
inter-document relatedness, which limits their document-level representation
power. For applications on scientific documents, such as classification and
recommendation, document-level embeddings power strong performance on end tasks. We
propose SPECTER, a new method to generate document-level embedding of
scientific documents based on pretraining a Transformer language model on a
powerful signal of document-level relatedness: the citation graph. Unlike
existing pretrained language models, SPECTER can be easily applied to
downstream applications without task-specific fine-tuning. Additionally, to
encourage further research on document-level models, we introduce SciDocs, a
new evaluation benchmark consisting of seven document-level tasks ranging from
citation prediction, to document classification and recommendation. We show
that SPECTER outperforms a variety of competitive baselines on the benchmark.
| 2,020 | Computation and Language |
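One natural way to turn the citation graph into a document-level training signal is a triplet margin loss that pulls a paper's embedding toward a paper it cites and away from an uncited one. The sketch below illustrates that idea; it is not necessarily the paper's exact objective:

```python
# Illustrative citation-based triplet margin loss over document embeddings.
# Not necessarily the exact objective used by SPECTER.
import torch
import torch.nn.functional as F

def citation_triplet_loss(query_emb, cited_emb, uncited_emb, margin=1.0):
    # Distance to a cited (positive) document should be smaller, by at least
    # the margin, than the distance to an uncited (negative) document.
    d_pos = torch.norm(query_emb - cited_emb, dim=-1)
    d_neg = torch.norm(query_emb - uncited_emb, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

q, p, n = torch.randn(8, 768), torch.randn(8, 768), torch.randn(8, 768)
print(citation_triplet_loss(q, p, n))
```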
Entities as Experts: Sparse Memory Access with Entity Supervision | We focus on the problem of capturing declarative knowledge about entities in
the learned parameters of a language model. We introduce a new model - Entities
as Experts (EAE) - that can access distinct memories of the entities mentioned
in a piece of text. Unlike previous efforts to integrate entity knowledge into
sequence models, EAE's entity representations are learned directly from text.
We show that EAE's learned representations capture sufficient knowledge to
answer TriviaQA questions such as "Which Dr. Who villain has been played by
Roger Delgado, Anthony Ainley, Eric Roberts?", outperforming an
encoder-generator Transformer model with 10x the parameters. According to the
LAMA knowledge probes, EAE contains more factual knowledge than a similarly
sized BERT, as well as previous approaches that integrate external sources of
entity knowledge. Because EAE associates parameters with specific entities, it
only needs to access a fraction of its parameters at inference time, and we
show that the correct identification and representation of entities is
essential to EAE's performance.
| 2,020 | Computation and Language |
Learning Structured Embeddings of Knowledge Graphs with Adversarial
Learning Framework | Many large-scale knowledge graphs are now available and ready to provide
semantically structured information that is regarded as an important resource
for question answering and decision support tasks. However, they are built on
rigid symbolic frameworks, which makes them hard to use in other intelligent
systems. We present a learning method using generative adversarial architecture
designed to embed the entities and relations of the knowledge graphs into a
continuous vector space. A generative network (GN) takes two elements of a
(subject, predicate, object) triple as input and generates the vector
representation of the missing element. A discriminative network (DN) scores a
triple to distinguish a positive triple from those generated by GN. The
training goal for the GN is to deceive the DN into making wrong classifications. At
convergence, the GN recovers the training data and can be used for
knowledge graph completion, while the DN is trained to be a good triple classifier.
Unlike the few previous studies based on generative adversarial architectures,
which use the GN only to choose better negative samples (drawn from existing
triples) for the DN, our GN is able to generate unseen instances. Experiments
demonstrate that our method can improve classical relational learning models
(e.g., TransE) by a significant margin on both the link prediction and triple
classification tasks.
| 2,020 | Computation and Language |
Building a Multi-domain Neural Machine Translation Model using Knowledge
Distillation | Lack of specialized data makes building a multi-domain neural machine
translation tool challenging. Although emerging literature dealing with low
resource languages starts to show promising results, most state-of-the-art
models used millions of sentences. Today, the majority of multi-domain
adaptation techniques are based on complex and sophisticated architectures that
are not adapted for real-world applications. So far, no scalable method is
performing better than the simple yet effective mixed-finetuning, i.e.,
finetuning a generic model with a mix of all specialized data and generic data.
In this paper, we propose a new training pipeline where knowledge distillation
and multiple specialized teachers allow us to efficiently finetune a model
without adding new costs at inference time. Our experiments demonstrate that
our training pipeline improves the performance of multi-domain
translation over finetuning in configurations with 2, 3, and 4 domains, by up to
2 BLEU points.
| 2,020 | Computation and Language |
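Word-level knowledge distillation typically means matching the student's output distribution to a teacher's softened distribution. A generic PyTorch sketch (the temperature and mixing weight are illustrative choices; the paper's exact recipe with multiple specialized teachers may differ):

```python
# Generic word-level knowledge-distillation loss: mix the usual cross-entropy
# on gold tokens with a KL term toward a (specialized) teacher's distribution.
# Temperature T and mixing weight alpha are illustrative choices.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold_ids, T=2.0, alpha=0.5):
    # student_logits, teacher_logits: (batch, seq, vocab); gold_ids: (batch, seq)
    ce = F.cross_entropy(student_logits.transpose(1, 2), gold_ids)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kl

s, t = torch.randn(2, 7, 100), torch.randn(2, 7, 100)
gold = torch.randint(0, 100, (2, 7))
print(distillation_loss(s, t, gold))
```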
HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and
Textual Data | Existing question answering datasets focus on dealing with homogeneous
information, based only on text or only on KB/Table information. However,
as human knowledge is distributed over heterogeneous forms, using homogeneous
information alone might lead to severe coverage problems. To fill in the gap,
we present HybridQA https://github.com/wenhuchen/HybridQA, a new large-scale
question-answering dataset that requires reasoning on heterogeneous
information. Each question is aligned with a Wikipedia table and multiple
free-form corpora linked with the entities in the table. The questions are
designed to aggregate both tabular information and text information, i.e., lack
of either form would render the question unanswerable. We test with three
different models: 1) a table-only model; 2) a text-only model; and 3) a hybrid model
that combines heterogeneous information to find the answer. The experimental
results show that the EM scores obtained by two baselines are below 20\%, while
the hybrid model can achieve an EM over 40\%. This gap suggests the necessity
to aggregate heterogeneous information in HybridQA. However, the hybrid model's
score is still far behind human performance. Hence, HybridQA can serve as a
challenging benchmark to study question answering with heterogeneous
information.
| 2,021 | Computation and Language |
Neural Data-to-Text Generation with Dynamic Content Planning | Neural data-to-text generation models have achieved significant advancement
in recent years. However, these models have two shortcomings: the generated
texts tend to miss some vital information, and they often generate descriptions
that are not consistent with the structured input data. To alleviate these
problems, we propose a Neural data-to-text generation model with Dynamic
content Planning, abbreviated NDP. The NDP can utilize the
previously generated text to dynamically select the appropriate entry from the
given structured data. We further design a reconstruction mechanism with a
novel objective function that can reconstruct the whole entry of the used data
sequentially from the hidden states of the decoder, which aids the accuracy of
the generated text. Empirical results show that the NDP achieves superior
performance over the state-of-the-art on ROTOWIRE dataset, in terms of relation
generation (RG), content selection (CS), content ordering (CO) and BLEU
metrics. The human evaluation shows that the texts generated by the
proposed NDP are better than the corresponding ones generated by NCP most of
the time. With the proposed reconstruction mechanism, the fidelity of the
generated text can be further improved significantly.
| 2,020 | Computation and Language |
Non-Autoregressive Machine Translation with Latent Alignments | This paper presents two strong methods, CTC and Imputer, for
non-autoregressive machine translation that model latent alignments with
dynamic programming. We revisit CTC for machine translation and demonstrate
that a simple CTC model can achieve state-of-the-art for single-step
non-autoregressive machine translation, contrary to what prior work indicates.
In addition, we adapt the Imputer model for non-autoregressive machine
translation and demonstrate that Imputer with just 4 generation steps can match
the performance of an autoregressive Transformer baseline. Our latent alignment
models are simpler than many existing non-autoregressive translation baselines;
for example, we do not require target length prediction or re-scoring with an
autoregressive model. On the competitive WMT'14 En$\rightarrow$De task, our CTC
model achieves 25.7 BLEU with a single generation step, while Imputer achieves
27.5 BLEU with 2 generation steps, and 28.0 BLEU with 4 generation steps. This
compares favourably to the autoregressive Transformer baseline at 27.8 BLEU.
| 2,020 | Computation and Language |
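The CTC ingredient mentioned above is available off the shelf. A minimal sketch of computing a CTC loss over latent alignments with PyTorch (shapes and the blank index follow the usual torch.nn.CTCLoss conventions, not details from the paper):

```python
# Minimal CTC loss over latent alignments with PyTorch. The model emits a
# distribution per output position; CTC marginalizes over all monotonic
# alignments to the (shorter) target. Shapes follow torch.nn.CTCLoss.
import torch
import torch.nn as nn

T, N, C = 12, 2, 50          # output positions, batch size, vocab (blank at 0)
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 7))                 # target ids (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 7, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss)
```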
The Right Tool for the Job: Matching Model and Instance Complexities | As NLP models become larger, executing a trained model requires significant
computational resources, incurring monetary and environmental costs. To better
respect a given inference budget, we propose a modification to contextual
representation fine-tuning which, during inference, allows for an early (and
fast) "exit" from neural network calculations for simple instances, and late
(and accurate) exit for hard instances. To achieve this, we add classifiers to
different layers of BERT and use their calibrated confidence scores to make
early exit decisions. We test our proposed modification on five different
datasets in two tasks: three text classification datasets and two natural
language inference benchmarks. Our method presents a favorable speed/accuracy
tradeoff in almost all cases, producing models which are up to five times
faster than the state of the art, while preserving their accuracy. Our method
also requires almost no additional training resources (in either time or
parameters) compared to the baseline BERT model. Finally, our method alleviates
the need for costly retraining of multiple models at different levels of
efficiency; we allow users to control the inference speed/accuracy tradeoff
using a single trained model, by setting a single variable at inference time.
We publicly release our code.
| 2,020 | Computation and Language |
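The early-exit idea above amounts to checking a calibrated confidence after each layer's classifier and stopping once it clears a threshold. A schematic sketch (linear layers stand in for BERT layers, and the confidence is uncalibrated; the actual method calibrates scores on held-out data):

```python
# Schematic early exit: after each layer, a small classifier emits a
# confidence; inference stops at the first layer that clears the threshold.
# Linear layers and raw softmax confidence are simplifying placeholders.
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    def __init__(self, dim=64, n_layers=6, n_classes=3):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])
        self.exits = nn.ModuleList([nn.Linear(dim, n_classes) for _ in range(n_layers)])

    def forward(self, x, threshold=0.9):
        for i, (layer, exit_head) in enumerate(zip(self.layers, self.exits)):
            x = torch.relu(layer(x))
            probs = exit_head(x).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:       # confident enough: exit early
                return pred, i
        return pred, len(self.layers) - 1      # fall through to the last layer

model = EarlyExitEncoder()
pred, exit_layer = model(torch.randn(1, 64))
print(f"prediction {pred.item()} taken at layer {exit_layer}")
```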
Paraphrase Augmented Task-Oriented Dialog Generation | Neural generative models have achieved promising performance on dialog
generation tasks if given a huge data set. However, the lack of high-quality
dialog data and the expensive data annotation process greatly limit their
application in real-world settings. We propose a paraphrase augmented response
generation (PARG) framework that jointly trains a paraphrase model and a
response generation model to improve the dialog generation performance. We also
design a method to automatically construct a paraphrase training data set based
on dialog state and dialog act labels. PARG is applicable to various dialog
generation models, such as TSCP (Lei et al., 2018) and DAMD (Zhang et al.,
2019). Experimental results show that the proposed framework improves these
state-of-the-art dialog models further on CamRest676 and MultiWOZ. PARG also
significantly outperforms other data augmentation methods in dialog generation
tasks, especially under low resource settings.
| 2,020 | Computation and Language |
TriggerNER: Learning with Entity Triggers as Explanations for Named
Entity Recognition | Training neural models for named entity recognition (NER) in a new domain
often requires additional human annotations (e.g., tens of thousands of labeled
instances) that are usually expensive and time-consuming to collect. Thus, a
crucial research question is how to obtain supervision in a cost-effective way.
In this paper, we introduce "entity triggers," an effective proxy of human
explanations for facilitating label-efficient learning of NER models. An entity
trigger is defined as a group of words in a sentence that helps to explain why
humans would recognize an entity in the sentence.
We crowd-sourced 14k entity triggers for two well-studied NER datasets. Our
proposed model, the Trigger Matching Network, jointly learns trigger
representations and a soft matching module with self-attention, so that it can
easily generalize to unseen sentences for tagging. Our framework is
significantly more cost-effective than the traditional neural NER frameworks.
Experiments show that using only 20% of the trigger-annotated sentences results
in performance comparable to using 70% of conventionally annotated sentences.
| 2,020 | Computation and Language |
LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from
Explanation | Successfully training a deep neural network demands a huge corpus of labeled
data. However, each label only provides limited information to learn from and
collecting the requisite number of labels involves massive human effort. In
this work, we introduce LEAN-LIFE, a web-based, Label-Efficient AnnotatioN
framework for sequence labeling and classification tasks, with an easy-to-use
UI that not only allows an annotator to provide the needed labels for a task,
but also enables LearnIng From Explanations for each labeling decision. Such
explanations enable us to generate useful additional labeled data from
unlabeled instances, bolstering the pool of available training data. On three
popular NLP tasks (named entity recognition, relation extraction, sentiment
analysis), we find that using this enhanced supervision allows our models to
surpass competitive baseline F1 scores by more than 5-10 percentage points,
while using 2x fewer labeled instances. Our framework is the first to
utilize this enhanced supervision technique and does so for three important
tasks -- thus providing improved annotation recommendations to users and an
ability to build datasets of (data, label, explanation) triples instead of the
regular (data, label) pair.
| 2,020 | Computation and Language |
Suicidal Ideation and Mental Disorder Detection with Attentive Relation
Networks | Mental health is a critical issue in modern society, and mental disorders
can sometimes escalate into suicidal ideation without effective treatment. Early
detection of mental disorders and suicidal ideation from social content
provides a potential way for effective social intervention. However,
classifying suicidal ideation and other mental disorders is challenging as they
share similar patterns in language usage and sentimental polarity. This paper
enhances text representation with lexicon-based sentiment scores and latent
topics and proposes using relation networks to detect suicidal ideation and
mental disorders with related risk indicators. The relation module is further
equipped with the attention mechanism to prioritize more critical relational
features. Through experiments on three real-world datasets, our model
outperforms most of its counterparts.
| 2,021 | Computation and Language |
Recognizing Long Grammatical Sequences Using Recurrent Networks
Augmented With An External Differentiable Stack | Recurrent neural networks (RNNs) are a widely used deep architecture for
sequence modeling, generation, and prediction. Despite success in applications
such as machine translation and voice recognition, these stateful models have
several critical shortcomings. Specifically, RNNs generalize poorly over very
long sequences, which limits their applicability to many important temporal
processing and time series forecasting problems. For example, RNNs struggle in
recognizing complex context free languages (CFLs), never reaching 100% accuracy
on training. One way to address these shortcomings is to couple an RNN with an
external, differentiable memory structure, such as a stack. However,
differentiable memories in prior work have neither been extensively studied on
CFLs nor tested on sequences longer than those seen in training. The few
efforts that have studied them have shown that continuous differentiable memory
structures yield poor generalization for complex CFLs, making the RNN less
interpretable. In this paper, we improve the memory-augmented RNN with
important architectural and state updating mechanisms that ensure that the
model learns to properly balance the use of its latent states with external
memory. Our improved RNN models exhibit better generalization performance and
are able to classify long strings generated by complex hierarchical context
free grammars (CFGs). We evaluate our models on CFGs, including the Dyck
languages, as well as on the Penn Treebank language modelling task, and achieve
stable, robust performance across these benchmarks. Furthermore, we show that
only our memory-augmented networks are capable of retaining memory over longer
durations, up to strings of length 160.
| 2,020 | Computation and Language |
Towards Instance-Level Parser Selection for Cross-Lingual Transfer of
Dependency Parsers | Current methods of cross-lingual parser transfer focus on predicting the best
parser for a low-resource target language globally, that is, "at treebank
level". In this work, we propose and argue for a novel cross-lingual transfer
paradigm: instance-level parser selection (ILPS), and present a
proof-of-concept study focused on instance-level selection in the framework of
delexicalized parser transfer. We start from an empirical observation that
different source parsers are the best choice for different Universal POS
sequences in the target language. We then propose to predict the best parser at
the instance level. To this end, we train a supervised regression model, based
on the Transformer architecture, to predict parser accuracies for individual
POS-sequences. We compare ILPS against two strong single-best parser selection
baselines (SBPS): (1) a model that compares POS n-gram distributions between
the source and target languages (KL) and (2) a model that selects the source
based on the similarity between manually created language vectors encoding
syntactic properties of languages (L2V). The results from our extensive
evaluation, coupling 42 source parsers and 20 diverse low-resource test
languages, show that ILPS outperforms KL and L2V on 13/20 and 14/20 test
languages, respectively. Further, we show that by predicting the best parser
"at the treebank level" (SBPS), using the aggregation of predictions from our
instance-level model, we outperform the same baselines on 17/20 and 16/20 test
languages.
| 2,020 | Computation and Language |
Null It Out: Guarding Protected Attributes by Iterative Nullspace
Projection | The ability to control for the kinds of information encoded in neural
representations has a variety of use cases, especially in light of the challenge
of interpreting these models. We present Iterative Null-space Projection
(INLP), a novel method for removing information from neural representations.
Our method is based on repeated training of linear classifiers that predict a
certain property we aim to remove, followed by projection of the
representations on their null-space. By doing so, the classifiers become
oblivious to that target property, making it hard to linearly separate the data
according to it. While applicable for multiple uses, we evaluate our method on
bias and fairness use-cases, and show that our method is able to mitigate bias
in word embeddings, as well as to increase fairness in a setting of multi-class
classification.
| 2,020 | Computation and Language |
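The core INLP step has a compact linear-algebra form: fit a linear classifier for the protected attribute, then project the representations onto the nullspace of its weight matrix, and repeat. A one-iteration sketch with NumPy and scikit-learn (a simplified rendering of the procedure, not the authors' released implementation):

```python
# One simplified iteration of iterative nullspace projection: train a linear
# classifier for the protected attribute, then project representations onto
# the nullspace of its weight matrix so that direction is no longer usable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                      # representations
z = (X[:, 0] + 0.1 * rng.normal(size=500) > 0)      # protected attribute leaking via dim 0

clf = LogisticRegression().fit(X, z)
W = clf.coef_                                        # (1, 20) direction predicting z
# Projection onto the nullspace of W:  P = I - W^T (W W^T)^{-1} W
P = np.eye(X.shape[1]) - W.T @ np.linalg.inv(W @ W.T) @ W
X_clean = X @ P

print("attribute accuracy before:", clf.score(X, z))
print("attribute accuracy after :",
      LogisticRegression().fit(X_clean, z).score(X_clean, z))
# The full method repeats this train-and-project loop until the attribute can
# no longer be predicted linearly.
```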
Generate, Delete and Rewrite: A Three-Stage Framework for Improving
Persona Consistency of Dialogue Generation | Maintaining a consistent personality in conversations is quite natural for
human beings, but is still a non-trivial task for machines. The persona-based
dialogue generation task is thus introduced to tackle the
personality-inconsistent problem by incorporating explicit persona text into
dialogue generation models. Despite the success of existing persona-based
models on generating human-like responses, their one-stage decoding framework
can hardly avoid the generation of inconsistent persona words. In this work, we
introduce a three-stage framework that employs a generate-delete-rewrite
mechanism to delete inconsistent words from a generated response prototype and
further rewrite it to a personality-consistent one. We carry out evaluations by
both human and automatic metrics. Experiments on the Persona-Chat dataset show
that our approach achieves good performance.
| 2,020 | Computation and Language |
Do sequence-to-sequence VAEs learn global features of sentences? | Autoregressive language models are powerful and relatively easy to train.
However, these models are usually trained without explicit conditioning labels
and do not offer easy ways to control global aspects such as sentiment or topic
during generation. Bowman et al. (2016) adapted the Variational Autoencoder
(VAE) for natural language with the sequence-to-sequence architecture and
claimed that the latent vector was able to capture such global features in an
unsupervised manner. We question this claim. We measure which words benefit
most from the latent information by decomposing the reconstruction loss per
position in the sentence. Using this method, we find that VAEs are prone to
memorizing the first words and the sentence length, producing local features of
limited usefulness. To alleviate this, we investigate alternative architectures
based on bag-of-words assumptions and language model pretraining. These
variants learn latent variables that are more global, i.e., more predictive of
topic or sentiment labels. Moreover, using reconstructions, we observe that
they decrease memorization: the first word and the sentence length are not
recovered as accurately as with the baselines, consequently yielding more
diverse reconstructions.
| 2,021 | Computation and Language |
Cross-lingual Contextualized Topic Models with Zero-shot Learning | Many data sets (e.g., reviews, forums, news, etc.) exist in parallel in
multiple languages. They all cover the same content, but the linguistic
differences make it impossible to use traditional, bag-of-words-based topic
models. Models have to be either single-language or suffer from a huge, but
extremely sparse vocabulary. Both issues can be addressed by transfer learning.
In this paper, we introduce a zero-shot cross-lingual topic model. Our model
learns topics on one language (here, English), and predicts them for unseen
documents in different languages (here, Italian, French, German, and
Portuguese). We evaluate the quality of the topic predictions for the same
document in different languages. Our results show that the transferred topics
are coherent and stable across languages, which suggests exciting future
research directions.
| 2,021 | Computation and Language |
Kvistur 2.0: a BiLSTM Compound Splitter for Icelandic | In this paper, we present a character-based BiLSTM model for splitting
Icelandic compound words, and show how varying amounts of training data affect
the performance of the model. Compounding is highly productive in Icelandic,
and new compounds are constantly being created. This results in a large number
of out-of-vocabulary (OOV) words, negatively impacting the performance of many
NLP tools. Our model is trained on a dataset of 2.9 million unique word forms
and their constituent structures from the Database of Icelandic Morphology. The
model learns how to split compound words into two parts and can be used to
derive the constituent structure of any word form. Knowing the constituent
structure of a word form makes it possible to generate the optimal split for a
given task, e.g., a full split for subword tokenization, or, in the case of
part-of-speech tagging, splitting an OOV word until the largest known
morphological head is found. The model outperforms other previously published
methods when evaluated on a corpus of manually split word forms. This method
has been integrated into Kvistur, an Icelandic compound word analyzer.
| 2,020 | Computation and Language |
Classification Benchmarks for Under-resourced Bengali Language based on
Multichannel Convolutional-LSTM Network | The exponential growth of social media and micro-blogging sites not only provides
platforms for empowering freedom of expression and individual voices but also
enables people to express anti-social behaviour like online harassment,
cyberbullying, and hate speech. Numerous works have been proposed to utilize
these data for social and anti-social behaviours analysis, document
characterization, and sentiment analysis by predicting the contexts mostly for
highly resourced languages such as English. However, there are languages that
are under-resourced, e.g., South Asian languages like Bengali, Tamil, Assamese,
and Telugu, that lack computational resources for NLP tasks. In this paper,
we provide several classification benchmarks for Bengali, an under-resourced
language. We prepared three datasets of expressing hate, commonly used topics,
and opinions for hate speech detection, document classification, and sentiment
analysis, respectively. We built the largest Bengali word embedding models to
date based on 250 million articles, which we call BengFastText. We perform
three different experiments, covering document classification, sentiment
analysis, and hate speech detection. We incorporate word embeddings into a
Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different
types of hate speech, document classification, and sentiment analysis.
Experiments demonstrate that BengFastText can capture the semantics of words
from respective contexts correctly. Evaluations against several baseline
embedding models, e.g., Word2Vec and GloVe yield up to 92.30%, 82.25%, and
90.45% F1-scores in case of document classification, sentiment analysis, and
hate speech detection, respectively during 5-fold cross-validation tests.
| 2,020 | Computation and Language |
Bridging Anaphora Resolution as Question Answering | Most previous studies on bridging anaphora resolution (Poesio et al., 2004;
Hou et al., 2013b; Hou, 2018a) use the pairwise model to tackle the problem and
assume that the gold mention information is given. In this paper, we cast
bridging anaphora resolution as question answering based on context. This
allows us to find the antecedent for a given anaphor without knowing any gold
mention information (except the anaphor itself). We present a question
answering framework (BARQA) for this task, which leverages the power of
transfer learning. Furthermore, we propose a novel method to generate a large
amount of "quasi-bridging" training data. We show that our model pre-trained on
this dataset and fine-tuned on a small amount of in-domain dataset achieves new
state-of-the-art results for bridging anaphora resolution on two bridging
corpora (ISNotes (Markert et al., 2012) and BASHI (Roesiger, 2018)).
| 2,020 | Computation and Language |
How recurrent networks implement contextual processing in sentiment
analysis | Neural networks have a remarkable capacity for contextual processing--using
recent or nearby inputs to modify processing of current input. For example, in
natural language, contextual processing is necessary to correctly interpret
negation (e.g. phrases such as "not bad"). However, our ability to understand
how networks process context is limited. Here, we propose general methods for
reverse engineering recurrent neural networks (RNNs) to identify and elucidate
contextual processing. We apply these methods to understand RNNs trained on
sentiment classification. This analysis reveals inputs that induce contextual
effects, quantifies the strength and timescale of these effects, and identifies
sets of these inputs with similar properties. Additionally, we analyze
contextual effects related to differential processing of the beginning and end
of documents. Using the insights learned from the RNNs we improve baseline
Bag-of-Words models with simple extensions that incorporate contextual
modification, recovering greater than 90% of the RNN's performance increase
over the baseline. This work yields a new understanding of how RNNs process
contextual information, and provides tools that should provide similar insight
more broadly.
| 2,020 | Computation and Language |
SongNet: Rigid Formats Controlled Text Generation | Neural text generation has made tremendous progress in various tasks. One
common characteristic of most of the tasks is that the texts are not restricted
to some rigid formats when generating. However, we may confront some special
text paradigms such as Lyrics (assume the music score is given), Sonnet, SongCi
(classical Chinese poetry of the Song dynasty), etc. The typical
characteristics of these texts are threefold: (1) They must comply fully
with the rigid predefined formats. (2) They must obey some rhyming schemes. (3)
Although they are restricted to some formats, the sentence integrity must be
guaranteed. To the best of our knowledge, text generation based on the
predefined rigid formats has not been well investigated. Therefore, we propose
a simple and elegant framework named SongNet to tackle this problem. The
backbone of the framework is a Transformer-based auto-regressive language
model. Sets of symbols are tailor-designed to improve the modeling performance
especially on format, rhyme, and sentence integrity. We improve the attention
mechanism to impel the model to capture some future information on the format.
A pre-training and fine-tuning framework is designed to further improve the
generation quality. Extensive experiments conducted on two collected corpora
demonstrate that our proposed framework generates significantly better results
in terms of both automatic metrics and the human evaluation.
| 2,021 | Computation and Language |
AlloVera: A Multilingual Allophone Database | We introduce a new resource, AlloVera, which provides mappings from 218
allophones to phonemes for 14 languages. Phonemes are contrastive phonological
units, and allophones are their various concrete realizations, which are
predictable from phonological context. While phonemic representations are
language specific, phonetic representations (stated in terms of (allo)phones)
are much closer to a universal (language-independent) transcription. AlloVera
allows the training of speech recognition models that output phonetic
transcriptions in the International Phonetic Alphabet (IPA), regardless of the
input language. We show that a "universal" allophone model, Allosaurus, built
with AlloVera, outperforms "universal" phonemic models and language-specific
models on a speech-transcription task. We explore the implications of this
technology (and related technologies) for the documentation of endangered and
minority languages. We further explore other applications for which AlloVera
will be suitable as it grows, including phonological typology.
| 2,020 | Computation and Language |
Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete
Space | Active learning for sentence understanding aims at discovering informative
unlabeled data for annotation and therefore reducing the demand for labeled
data. We argue that the typical uncertainty sampling method for active learning
is time-consuming and can hardly work in real-time, which may lead to
ineffective sample selection. We propose adversarial uncertainty sampling in
discrete space (AUSDS) to retrieve informative unlabeled samples more
efficiently. AUSDS maps sentences into the latent space of popular
pre-trained language models, and discovers informative unlabeled text samples
for annotation via adversarial attacks. The proposed approach is extremely
efficient compared with traditional uncertainty sampling with more than 10x
speedup. Experimental results on five datasets show that AUSDS outperforms
strong baselines on effectiveness.
| 2,020 | Computation and Language |
Enriching the Transformer with Linguistic Factors for Low-Resource
Machine Translation | Introducing factors, that is to say, word features such as linguistic
information referring to the source tokens, is known to improve the results of
neural machine translation systems in certain settings, typically in recurrent
architectures. This study proposes enhancing the current state-of-the-art
neural machine translation architecture, the Transformer, so that it can
incorporate external knowledge. In particular, our proposed modification, the
Factored Transformer, uses linguistic factors that insert additional knowledge
into the machine translation system. Apart from using different kinds of
features, we study the effect of different architectural configurations.
Specifically, we analyze the performance of combining words and features at the
embedding level or at the encoder level, and we experiment with two different
combination strategies. With the best-found configuration, we show improvements
of 0.8 BLEU over the baseline Transformer in the IWSLT German-to-English task.
Moreover, we experiment with the more challenging FLoRes English-to-Nepali
benchmark, which includes both extremely low-resourced and very distant
languages, and obtain an improvement of 1.2 BLEU.
| 2,020 | Computation and Language |
Dialogue-Based Relation Extraction | We present the first human-annotated dialogue-based relation extraction (RE)
dataset DialogRE, aiming to support the prediction of relation(s) between two
arguments that appear in a dialogue. We further offer DialogRE as a platform
for studying cross-sentence RE as most facts span multiple sentences. We argue
that speaker-related information plays a critical role in the proposed task,
based on an analysis of similarities and differences between dialogue-based and
traditional RE tasks. Considering the timeliness of communication in a
dialogue, we design a new metric to evaluate the performance of RE methods in a
conversational setting and investigate the performance of several
representative RE methods on DialogRE. Experimental results demonstrate that a
speaker-aware extension on the best-performing model leads to gains in both the
standard and conversational evaluation settings. DialogRE is available at
https://dataset.org/dialogre/.
| 2,020 | Computation and Language |
Neural Approaches for Data Driven Dependency Parsing in Sanskrit | Data-driven approaches for dependency parsing have been of great interest in
Natural Language Processing for the past couple of decades. However, Sanskrit
still lacks a robust purely data-driven dependency parser, probably with an
exception to Krishna (2019). This can primarily be attributed to the lack of
availability of task-specific labelled data and the morphologically rich nature
of the language. In this work, we evaluate four different data-driven machine
learning models, originally proposed for different languages, and compare their
performances on Sanskrit data. We experiment with two graph-based and two
transition-based parsers. We compare the performance of each of the models in a
low-resource setting, with 1,500 sentences for training. Further, since our
focus is on the learning power of each of the models, we do not incorporate any
Sanskrit specific features explicitly into the models, and rather use the
default settings in each paper for obtaining the feature functions. In
this work, we analyse the performance of the parsers using both an in-domain
and an out-of-domain test dataset. We also investigate the impact of word
ordering in which the sentences are provided as input to these systems, by
parsing verses and their corresponding prose order (anvaya) sentences.
| 2,020 | Computation and Language |
Fast and Accurate Deep Bidirectional Language Representations for
Unsupervised Learning | Even though BERT achieves successful performance improvements in various
supervised learning tasks, applying BERT to unsupervised tasks still has the
limitation that it requires repetitive inference for computing contextual
language representations. To resolve the limitation, we propose a novel deep
bidirectional language model called Transformer-based Text Autoencoder (T-TA).
The T-TA computes contextual language representations without repetition and
has benefits of the deep bidirectional architecture like BERT. In run-time
experiments on CPU environments, the proposed T-TA performs over six times
faster than the BERT-based model in the reranking task and twelve times faster
in the semantic similarity task. Furthermore, the T-TA shows competitive or
even better accuracies than those of BERT on the above tasks.
| 2,020 | Computation and Language |
Show Us the Way: Learning to Manage Dialog from Demonstrations | We present our submission to the End-to-End Multi-Domain Dialog Challenge
Track of the Eighth Dialog System Technology Challenge. Our proposed dialog
system adopts a pipeline architecture, with distinct components for Natural
Language Understanding, Dialog State Tracking, Dialog Management and Natural
Language Generation. At the core of our system is a reinforcement learning
algorithm which uses Deep Q-learning from Demonstrations to learn a dialog
policy with the help of expert examples. We find that demonstrations are
essential to training an accurate dialog policy where both state and action
spaces are large. Evaluation of our Dialog Management component shows that our
approach is effective, beating supervised and reinforcement learning
baselines.
| 2,020 | Computation and Language |
Batch Clustering for Multilingual News Streaming | Nowadays, digital news articles are widely available, published by various
editors and often written in different languages. This large volume of diverse
and unorganized information makes human reading very difficult or almost
impossible. This leads to a need for algorithms able to arrange a high volume of
multilingual news into stories. To this purpose, we extend previous work on
Topic Detection and Tracking, and propose a new system inspired from newsLens.
We process articles per batch, looking for monolingual local topics which are
then linked across time and languages. Here, we introduce a novel "replaying"
strategy to link monolingual local topics into stories. In addition, we propose new
fine-tuned multilingual embeddings using SBERT to create crosslingual stories.
Our system gives monolingual state-of-the-art results on datasets of Spanish and
German news, and crosslingual state-of-the-art results on English, Spanish and
German news.
| 2,020 | Computation and Language |
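A stripped-down version of the per-batch monolingual step can be expressed with sentence-transformers and a standard clustering algorithm. A sketch under assumptions: the model name and the clustering choice are illustrative, whereas the system above uses its own fine-tuned multilingual SBERT and a newsLens-style linking stage:

```python
# Per-batch clustering of news articles into local topics using sentence
# embeddings, as a stripped-down stand-in for the system described above.
# Model name and clustering algorithm are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

articles = [
    "Central bank raises interest rates again.",
    "Zinsen steigen: Zentralbank erhoeht Leitzins erneut.",
    "Local team wins the championship final.",
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(articles, normalize_embeddings=True)

clusters = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.6, metric="cosine", linkage="average"
).fit_predict(embeddings)
print(list(zip(clusters, articles)))
# Cross-lingual stories would then be formed by linking these local topics
# across batches and languages, as outlined above.
```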
Probing Linguistic Features of Sentence-Level Representations in Neural
Relation Extraction | Despite the recent progress, little is known about the features captured by
state-of-the-art neural relation extraction (RE) models. Common methods encode
the source sentence, conditioned on the entity mentions, before classifying the
relation. However, the complexity of the task makes it difficult to understand
how encoder architecture and supporting linguistic knowledge affect the
features learned by the encoder. We introduce 14 probing tasks targeting
linguistic properties relevant to RE, and we use them to study representations
learned by more than 40 different encoder architecture and linguistic feature
combinations trained on two datasets, TACRED and SemEval 2010 Task 8. We find
that the bias induced by the architecture and the inclusion of linguistic
features are clearly expressed in the probing task performance. For example,
adding contextualized word representations greatly increases performance on
probing tasks with a focus on named entity and part-of-speech information, and
yields better results in RE. In contrast, entity masking improves RE, but
considerably lowers performance on entity type related probing tasks.
| 2,020 | Computation and Language |
Too Many Claims to Fact-Check: Prioritizing Political Claims Based on
Check-Worthiness | The massive amount of misinformation spreading on the Internet on a daily
basis has enormous negative impacts on societies. Therefore, we need automated
systems helping fact-checkers in the combat against misinformation. In this
paper, we propose a model prioritizing the claims based on their
check-worthiness. We use a BERT model with additional features including
domain-specific controversial topics, word embeddings, and others. In our
experiments, we show that our proposed model outperforms all state-of-the-art
models in both test collections of CLEF Check That! Lab in 2018 and 2019. We
also conduct a qualitative analysis to shed light on detecting check-worthy
claims. We suggest that rationales behind judgments are needed to
understand the subjective nature of the task and its problematic labels.
| 2,021 | Computation and Language |
Highway Transformer: Self-Gating Enhanced Self-Attentive Networks | Self-attention mechanisms have made striking state-of-the-art (SOTA) progress
in various sequence learning tasks, standing on the multi-headed dot product
attention by attending to all the global contexts at different locations.
Through a pseudo information highway, we introduce a gated component,
self-dependency units (SDU), that incorporates LSTM-styled gating units to
replenish internal semantic importance within the multi-dimensional latent
space of individual representations. The subsidiary content-based SDU gates
allow for the information flow of modulated latent embeddings through skipped
connections, leading to a clear margin of convergence speed with gradient
descent algorithms. We may unveil the role of gating mechanism to aid in the
context-based Transformer modules, with hypothesizing that SDU gates,
especially on shallow layers, could push it faster to step towards suboptimal
points during the optimization process.
| 2,020 | Computation and Language |
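As a rough illustration of the gating idea, the following is a minimal PyTorch sketch of an LSTM-style self-gating unit with a skip connection; the exact Highway Transformer formulation and its placement within the block may differ:

```python
# Hedged sketch of a content-based self-gating unit, loosely following the SDU idea.
import torch
import torch.nn as nn

class SelfDependencyUnit(nn.Module):
    """Sigmoid gate that modulates a hidden state and re-injects it via a skip connection."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_model)
        self.value_proj = nn.Linear(d_model, d_model)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.gate_proj(hidden))            # LSTM-style sigmoid gate
        modulated = gate * torch.tanh(self.value_proj(hidden))
        return hidden + modulated                               # skipped (residual) connection

x = torch.randn(2, 16, 512)                 # (batch, sequence length, d_model)
print(SelfDependencyUnit(512)(x).shape)     # torch.Size([2, 16, 512])
```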
Women worry about family, men about the economy: Gender differences in
emotional responses to COVID-19 | Among the critical challenges around the COVID-19 pandemic is dealing with
the potentially detrimental effects on people's mental health. Designing
appropriate interventions and identifying the concerns of those most at risk
requires methods that can extract worries, concerns and emotional responses
from text data. We examine gender differences and the effect of document length
on worries about the ongoing COVID-19 situation. Our findings suggest that i)
short texts do not offer insights into psychological processes as adequately as
longer texts. We further find ii) marked gender differences in topics
concerning emotional responses. Women worried more about their loved ones and
severe health concerns while men were more occupied with effects on the economy
and society. This paper adds to the understanding of general gender differences
in language found elsewhere, and shows that the current unique circumstances
likely amplified these effects. We close this paper with a call for more
high-quality datasets due to the limitations of Tweet-sized data.
| 2,020 | Computation and Language |
DSTC8-AVSD: Multimodal Semantic Transformer Network with Retrieval Style
Word Generator | Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response
for a question with a given scene, video, audio, and the history of previous
turns in the dialog. Existing systems for this task employ transformer- or
recurrent neural network-based architectures with the encoder-decoder framework.
Even though these techniques show superior performance for this task, they have
significant limitations: the model easily overfits, merely memorizing
grammatical patterns, and it follows the prior distribution of the
vocabulary in the dataset. To alleviate these problems, we propose a Multimodal
Semantic Transformer Network. It employs a transformer-based architecture with
an attention-based word embedding layer that generates words by querying word
embeddings. With this design, our model keeps considering the meaning of the
words at the generation stage. The empirical results demonstrate the
superiority of our proposed model that outperforms most of the previous works
for the AVSD task.
| 2,020 | Computation and Language |
Belief Propagation for Maximum Coverage on Weighted Bipartite Graph and
Application to Text Summarization | We study text summarization from the viewpoint of the maximum coverage problem.
In graph-theoretic terms, the task of text summarization is regarded as a maximum
coverage problem on a bipartite graph with weighted nodes. In a recent study, a
belief-propagation-based algorithm for maximum coverage on unweighted graphs was
proposed using ideas from statistical mechanics. We generalize it to weighted
graphs for text summarization (the formulation and a greedy baseline are sketched
after this entry). We then apply our algorithm to weighted biregular random
graphs to verify its maximum coverage performance. We also apply it to bipartite
graphs representing real documents from an open text dataset and check the text
summarization performance. As a result, our algorithm exhibits better performance
than a greedy-type algorithm in some settings of text summarization.
| 2,020 | Computation and Language |
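To make the weighted maximum-coverage formulation concrete, here is a minimal sketch in which sentences cover weighted word nodes; it implements the greedy-type baseline mentioned above, not the belief-propagation algorithm itself:

```python
# Hedged sketch: summarization as weighted maximum coverage, solved with the greedy baseline.
def greedy_weighted_coverage(sentences, word_weights, budget):
    """sentences: list of sets of words; word_weights: word -> weight;
    budget: number of sentences to select for the summary."""
    covered, selected = set(), []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for i, words in enumerate(sentences):
            if i in selected:
                continue
            gain = sum(word_weights.get(w, 0.0) for w in words - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:                       # no remaining sentence adds coverage
            break
        selected.append(best)
        covered |= sentences[best]
    return selected

sents = [{"graph", "coverage"}, {"coverage", "summary"}, {"summary"}]
weights = {"graph": 1.0, "coverage": 2.0, "summary": 1.5}
print(greedy_weighted_coverage(sents, weights, budget=2))   # [1, 0]
```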
Natural Language Processing with Deep Learning for Medical Adverse Event
Detection from Free-Text Medical Narratives: A Case Study of Detecting Total
Hip Replacement Dislocation | Accurate and timely detection of medical adverse events (AEs) from free-text
medical narratives is challenging. Natural language processing (NLP) with deep
learning has already shown great potential for analyzing free-text data, but
its application for medical AE detection has been limited. In this study we
proposed deep learning based NLP (DL-NLP) models for efficient and accurate hip
dislocation AE detection following total hip replacement from standard
(radiology notes) and non-standard (follow-up telephone notes) free-text
medical narratives. We benchmarked these proposed models with a wide variety of
traditional machine learning based NLP (ML-NLP) models, and also assessed the
accuracy of International Classification of Diseases (ICD) and Current
Procedural Terminology (CPT) codes in capturing these hip dislocation AEs in a
multi-center orthopaedic registry. All DL-NLP models outperformed all of the
ML-NLP models, with a convolutional neural network (CNN) model achieving the
best overall performance (Kappa = 0.97 for radiology notes, and Kappa = 1.00
for follow-up telephone notes). On the other hand, the ICD/CPT codes of the
patients who sustained a hip dislocation AE were only 75.24% accurate, showing
the potential of the proposed model to be used in large-scale orthopaedic
registries for accurate and efficient hip dislocation AE detection to improve
the quality of care and patient outcomes.
| 2,020 | Computation and Language |
Towards an Interoperable Ecosystem of AI and LT Platforms: A Roadmap for
the Implementation of Different Levels of Interoperability | With regard to the wider area of AI/LT platform interoperability, we
concentrate on two core aspects: (1) cross-platform search and discovery of
resources and services; (2) composition of cross-platform service workflows. We
devise five different levels (of increasing complexity) of platform
interoperability that we suggest to implement in a wider federation of AI/LT
platforms. We illustrate the approach using the five emerging AI/LT platforms
AI4EU, ELG, Lynx, QURATOR and SPEAKER.
| 2,020 | Computation and Language |
Unsupervised Discovery of Implicit Gender Bias | Despite their prevalence in society, social biases are difficult to identify,
primarily because human judgements in this domain can be unreliable. We take an
unsupervised approach to identifying gender bias against women at a comment
level and present a model that can surface text likely to contain bias. Our
main challenge is forcing the model to focus on signs of implicit bias, rather
than other artifacts in the data. Thus, our methodology involves reducing the
influence of confounds through propensity matching and adversarial learning.
Our analysis shows how biased comments directed towards female politicians
contain mixed criticisms, while comments directed towards other female public
figures focus on appearance and sexualization. Ultimately, our work offers a
way to capture subtle biases in various domains without relying on subjective
human judgements.
| 2,020 | Computation and Language |
Exploring the Combination of Contextual Word Embeddings and Knowledge
Graph Embeddings | "Classical" word embeddings, such as Word2Vec, have been shown to capture
the semantics of words based on their distributional properties. However, their
ability to represent the different meanings that a word may have is limited.
Such approaches also do not explicitly encode relations between entities, as
denoted by words. Embeddings of knowledge bases (KB) capture the explicit
relations between entities denoted by words, but are not able to directly
capture the syntagmatic properties of these words. To our knowledge, recent
research has focused on representation learning that augments the strengths of
one with the other. In this work, we begin exploring another approach that uses
contextual and KB embeddings jointly at the same level and propose two tasks --
an entity typing and a relation typing task -- that evaluate the performance of
contextual and KB embeddings. We also evaluate a concatenated model of
contextual and KB embeddings on these two tasks and obtain conclusive
results on the first task. We hope our work may serve as a basis for
models and datasets developed in the direction of this approach.
| 2,020 | Computation and Language |
Can You Put it All Together: Evaluating Conversational Agents' Ability
to Blend Skills | Being engaging, knowledgeable, and empathetic are all desirable general
qualities in a conversational agent. Previous work has introduced tasks and
datasets that aim to help agents to learn those qualities in isolation and
gauge how well they can express them. But rather than being specialized in one
single quality, a good open-domain conversational agent should be able to
seamlessly blend them all into one cohesive conversational flow. In this work,
we investigate several ways to combine models trained towards isolated
capabilities, ranging from simple model aggregation schemes that require
minimal additional training, to various forms of multi-task training that
encompass several skills at all training stages. We further propose a new
dataset, BlendedSkillTalk, to analyze how these capabilities would mesh
together in a natural conversation, and compare the performance of different
architectures and training schemes. Our experiments show that multi-tasking
over several tasks that focus on particular capabilities results in better
blended conversation performance compared to models trained on a single skill,
and that both unified and two-stage approaches perform well if they are
constructed to avoid unwanted bias in skill selection or are fine-tuned on our
new task.
| 2,020 | Computation and Language |
A Formal Hierarchy of RNN Architectures | We develop a formal hierarchy of the expressive capacity of RNN
architectures. The hierarchy is based on two formal properties: space
complexity, which measures the RNN's memory, and rational recurrence, defined
as whether the recurrent update can be described by a weighted finite-state
machine. We place several RNN variants within this hierarchy. For example, we
prove the LSTM is not rational, which formally separates it from the related
QRNN (Bradbury et al., 2016). We also show how these models' expressive
capacity is expanded by stacking multiple layers or composing them with
different pooling functions. Our results build on the theory of "saturated"
RNNs (Merrill, 2019). While formally extending these findings to unsaturated
RNNs is left to future work, we hypothesize that the practical learnable
capacity of unsaturated RNNs obeys a similar hierarchy. Experimental findings
from training unsaturated networks on formal languages support this conjecture.
| 2,020 | Computation and Language |
Exclusive Hierarchical Decoding for Deep Keyphrase Generation | Keyphrase generation (KG) aims to summarize the main ideas of a document into
a set of keyphrases. A new setting is recently introduced into this problem, in
which, given a document, the model needs to predict a set of keyphrases and
simultaneously determine the appropriate number of keyphrases to produce.
Previous work in this setting employs a sequential decoding process to generate
keyphrases. However, such a decoding method ignores the intrinsic hierarchical
compositionality existing in the keyphrase set of a document. Moreover,
previous work tends to generate duplicated keyphrases, which wastes time and
computing resources. To overcome these limitations, we propose an exclusive
hierarchical decoding framework that includes a hierarchical decoding process
and either a soft or a hard exclusion mechanism. The hierarchical decoding
process is to explicitly model the hierarchical compositionality of a keyphrase
set. Both the soft and the hard exclusion mechanisms keep track of
previously-predicted keyphrases within a window size to enhance the diversity
of the generated keyphrases. Extensive experiments on multiple KG benchmark
datasets demonstrate the effectiveness of our method to generate less
duplicated and more accurate keyphrases.
| 2,020 | Computation and Language |
A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep
Contextual Word Embeddings and Hierarchical Attention | The Web has become the main platform where people express their opinions
about entities of interest and their associated aspects. Aspect-Based Sentiment
Analysis (ABSA) aims to automatically compute the sentiment towards these
aspects from opinionated text. In this paper we extend the state-of-the-art
Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) method in two
directions. First, we replace the non-contextual word embeddings with deep
contextual word embeddings in order to better cope with the word semantics in a
given text. Second, we use hierarchical attention by adding an extra attention
layer to the HAABSA high-level representations in order to increase the method
flexibility in modeling the input data. Using two standard datasets (SemEval
2015 and SemEval 2016) we show that the proposed extensions improve the
accuracy of the built model for ABSA.
| 2,020 | Computation and Language |
Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation | Question Generation (QG) is fundamentally a simple syntactic transformation;
however, many aspects of semantics influence what questions are good to form.
We implement this observation by developing SynQG, a set of transparent
syntactic rules leveraging universal dependencies, shallow semantic parsing,
lexical resources, and custom rules which transform declarative sentences into
question-answer pairs. We utilize PropBank argument descriptions and VerbNet
state predicates to incorporate shallow semantic content, which helps generate
questions of a descriptive nature and produce inferential and semantically
richer questions than existing systems. In order to improve syntactic fluency
and eliminate grammatically incorrect questions, we employ back-translation
over the output of these syntactic rules. A set of crowd-sourced evaluations
shows that our system can generate a larger number of highly grammatical and
relevant questions than previous QG systems and that back-translation
drastically improves grammaticality at a slight cost of generating irrelevant
questions.
| 2,022 | Computation and Language |
SimAlign: High Quality Word Alignments without Parallel Training Data
using Static and Contextualized Embeddings | Word alignments are useful for tasks like statistical and neural machine
translation (NMT) and cross-lingual annotation projection. Statistical word
aligners perform well, as do methods that extract alignments jointly with
translations in NMT. However, most approaches require parallel training data,
and quality decreases as less training data is available. We propose word
alignment methods that require no parallel data. The key idea is to leverage
multilingual word embeddings, both static and contextualized, for word
alignment (a minimal similarity-alignment sketch follows this entry). Our
multilingual embeddings are created from monolingual data only
without relying on any parallel data or dictionaries. We find that alignments
created from embeddings are superior for four and comparable for two language
pairs compared to those produced by traditional statistical aligners, even with
abundant parallel data; e.g., contextualized embeddings achieve a word
alignment F1 for English-German that is 5 percentage points higher than
eflomal, a high-quality statistical aligner, trained on 100k parallel
sentences.
| 2,021 | Computation and Language |
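The core similarity-based alignment idea can be sketched as follows. Given contextualized (or static) word vectors for a source and a target sentence from any multilingual encoder, word pairs that are mutual argmaxes of the cosine similarity matrix are kept; this mirrors the general approach, not the paper's exact method:

```python
# Hedged sketch of embedding-similarity word alignment via mutual argmax.
import numpy as np

def mutual_argmax_alignment(src_vecs, tgt_vecs):
    """src_vecs: (m, d), tgt_vecs: (n, d) word embeddings; returns (src, tgt) index pairs."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = src @ tgt.T                       # (m, n) cosine similarity matrix
    fwd = sim.argmax(axis=1)                # best target word for each source word
    bwd = sim.argmax(axis=0)                # best source word for each target word
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]

src = np.random.randn(4, 768)   # e.g. English word vectors from a multilingual encoder
tgt = np.random.randn(5, 768)   # e.g. German word vectors
print(mutual_argmax_alignment(src, tgt))
```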
Enhancing Pharmacovigilance with Drug Reviews and Social Media | This paper explores whether the use of drug reviews and social media could be
leveraged as potential alternative sources for pharmacovigilance of adverse
drug reactions (ADRs). We examine the performance of BERT alongside two
variants trained on biomedical papers (BioBERT) and clinical notes
(Clinical BERT). Eight different BERT models were fine-tuned and
compared across three different tasks in order to evaluate their relative
performance on the ADR tasks. The tasks include sentiment
classification of drug reviews, detection of ADRs in Twitter postings, and named
entity recognition of ADRs in Twitter postings. BERT demonstrates its
flexibility with high performance across all three pharmacovigilance-related
tasks.
| 2,020 | Computation and Language |
BanFakeNews: A Dataset for Detecting Fake News in Bangla | Observing the damages that can be done by the rapid propagation of fake news
in various sectors like politics and finance, automatic identification of fake
news using linguistic analysis has drawn the attention of the research
community. However, such methods are largely being developed for English, while
low-resource languages remain out of focus. The risks spawned by fake
and manipulative news, however, are not confined to any single language. In this work, we propose
an annotated dataset of ~50K news that can be used for building automated fake
news detection systems for a low resource language like Bangla. Additionally,
we provide an analysis of the dataset and develop a benchmark system with state
of the art NLP techniques to identify Bangla fake news. To create this system,
we explore traditional linguistic features and neural network based methods. We
expect this dataset will be a valuable resource for building technologies to
prevent the spread of fake news and contribute to research on low-resource
languages.
| 2,020 | Computation and Language |
Pattern Learning for Detecting Defect Reports and Improvement Requests
in App Reviews | Online reviews are an important source of feedback for understanding
customers. In this study, we follow novel approaches that target the absence
of actionable insights in raw reviews by classifying reviews as defect reports and requests
for improvement. Unlike traditional classification methods based on expert
rules, we reduce the manual labour by employing a supervised system that is
capable of learning lexico-semantic patterns through genetic programming.
Additionally, we experiment with a distantly-supervised SVM that makes use of
noisy labels generated by patterns. Using a real-world dataset of app reviews,
we show that the automatically learned patterns outperform the manually created
ones while requiring far less manual effort to be generated. The distantly
supervised SVM models are also not far behind the pattern-based solutions,
showing the usefulness of this approach when the amount of annotated data is limited.
| 2,020 | Computation and Language |
Extractive Summarization as Text Matching | This paper creates a paradigm shift with regard to the way we build neural
extractive summarization systems. Instead of following the commonly used
framework of extracting sentences individually and modeling the relationship
between sentences, we formulate the extractive summarization task as a semantic
text matching problem, in which a source document and candidate summaries
(extracted from the original text) will be matched in a semantic space (a minimal
matching sketch follows this entry). Notably,
this paradigm shift to semantic matching framework is well-grounded in our
comprehensive analysis of the inherent gap between sentence-level and
summary-level extractors based on the property of the dataset.
Besides, even instantiating the framework with a simple form of a matching
model, we have driven the state-of-the-art extractive result on CNN/DailyMail
to a new level (44.41 in ROUGE-1). Experiments on the other five datasets also
show the effectiveness of the matching framework. We believe the power of this
matching-based summarization framework has not been fully exploited. To
encourage more instantiations in the future, we have released our codes,
processed dataset, as well as generated summaries in
https://github.com/maszhongming/MatchSum.
| 2,020 | Computation and Language |
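A minimal sketch of the matching formulation: each candidate summary (a small subset of document sentences) is scored by its embedding similarity to the whole document, and the best-scoring candidate is returned. The `encode` function is a hypothetical stand-in for the paper's learned encoder, and candidate pruning is omitted:

```python
# Hedged sketch of extractive summarization as semantic text matching.
import numpy as np
from itertools import combinations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_candidate(sentences, encode, max_len=2):
    """Return the sentence-index tuple whose concatenation best matches the document."""
    doc_vec = encode(" ".join(sentences))
    best, best_score = None, -1.0
    for k in range(1, max_len + 1):
        for idx in combinations(range(len(sentences)), k):
            cand_vec = encode(" ".join(sentences[i] for i in idx))
            score = cosine(cand_vec, doc_vec)
            if score > best_score:
                best, best_score = idx, score
    return best, best_score

def toy_encode(text):
    """Hypothetical stand-in encoder: bag-of-letter counts, only to make this runnable."""
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec

print(best_candidate(["Cats sleep a lot.", "Dogs bark.", "Cats purr."], toy_encode))
```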
Knowledge-graph based Proactive Dialogue Generation with Improved
Meta-Learning | Knowledge graph-based dialogue systems can narrow down knowledge candidates
for generating informative and diverse responses with the use of prior
information, e.g., triple attributes or graph paths. However, most current
knowledge graphs (KGs) cover only incomplete domain-specific knowledge. To overcome
this drawback, we propose a knowledge graph based proactive dialogue generation
model (KgDg) with three components: an improved model-agnostic meta-learning
(MAML) algorithm, knowledge selection over knowledge triplet embeddings, and a
knowledge-aware proactive response generator. We formulate knowledge triplet
embedding and selection as a sentence embedding problem to better
capture semantic information. Our improved MAML algorithm is capable of
learning general features from a limited number of knowledge graphs, which can
also quickly adapt to dialogue generation with unseen knowledge triplets.
Extensive experiments are conducted on a knowledge aware dialogue dataset
(DuConv). The results show that KgDg adapts both fast and well to knowledge
graph-based dialogue generation and outperforms state-of-the-art baselines.
| 2,020 | Computation and Language |
A Chinese Corpus for Fine-grained Entity Typing | Fine-grained entity typing is a challenging task with wide applications.
However, most existing datasets for this task are in English. In this paper, we
introduce a corpus for Chinese fine-grained entity typing that contains 4,800
mentions manually labeled through crowdsourcing. Each mention is annotated with
free-form entity types. To make our dataset useful in more possible scenarios,
we also categorize all the fine-grained types into 10 general types. Finally,
we conduct experiments with some neural models whose structures are typical in
fine-grained entity typing and show how well they perform on our dataset. We
also show the possibility of improving Chinese fine-grained entity typing
through cross-lingual transfer learning.
| 2,020 | Computation and Language |
Dynamic Knowledge Graph-based Dialogue Generation with Improved
Adversarial Meta-Learning | Knowledge graph-based dialogue systems are capable of generating more
informative responses and can implement sophisticated reasoning mechanisms.
However, these models do not take into account the sparseness and
incompleteness of knowledge graphs (KGs), and current dialogue models cannot be
applied to dynamic KGs. This paper proposes a dynamic knowledge graph-based
dialogue generation method with improved adversarial Meta-Learning (KDAD). KDAD
formulates dynamic knowledge triples as a problem of adversarial attack and
incorporates the objective of quickly adapting to dynamic knowledge-aware
dialogue generation. We train a knowledge graph-based dialog model with
improved ADML using minimal training samples. The model can initialize its
parameters and adapt to previously unseen knowledge so that training can be
quickly completed based on only a few knowledge triples. We show that our model
significantly outperforms other baselines. We evaluate and demonstrate that our
method adapts extremely fast and well to dynamic knowledge graph-based dialogue
generation.
| 2,020 | Computation and Language |
The Cost of Training NLP Models: A Concise Overview | We review the cost of training large-scale language models, and the drivers
of these costs. The intended audience includes engineers and scientists
budgeting their model-training experiments, as well as non-practitioners trying
to make sense of the economics of modern-day Natural Language Processing (NLP).
| 2,020 | Computation and Language |
Adversarial Training for Large Neural Language Models | Generalization and robustness are both key desiderata for designing machine
learning methods. Adversarial training can enhance robustness, but past work
often finds it hurts generalization. In natural language processing (NLP),
pre-training large neural language models such as BERT has demonstrated
impressive gains in generalization for a variety of tasks, with further
improvement from adversarial fine-tuning. However, these models are still
vulnerable to adversarial attacks. In this paper, we show that adversarial
pre-training can improve both generalization and robustness. We propose a
general algorithm ALUM (Adversarial training for large neural LangUage Models),
which regularizes the training objective by applying perturbations in the
embedding space that maximize the adversarial loss (a minimal sketch of this
regularizer follows this entry). We present the first
comprehensive study of adversarial training in all stages, including
pre-training from scratch, continual pre-training on a well-trained model, and
task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide
range of NLP tasks, in both regular and adversarial scenarios. Even for models
that have been well trained on extremely large text corpora, such as RoBERTa,
ALUM can still produce significant gains from continual pre-training, whereas
conventional non-adversarial methods cannot. ALUM can be further combined with
task-specific fine-tuning to attain additional gains. The ALUM code is publicly
available at https://github.com/namisan/mt-dnn.
| 2,020 | Computation and Language |
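A minimal sketch of the adversarial regularizer, in the spirit of ALUM but not its exact recipe: perturb the input embeddings to maximize the divergence between predictions on clean and perturbed inputs, then add that divergence to the training loss. `model` is a hypothetical module mapping input embeddings to logits:

```python
# Hedged sketch of one-step adversarial perturbation in embedding space.
import torch
import torch.nn.functional as F

def adversarial_regularizer(model, inputs_embeds, eps=1e-3, step_size=1e-3):
    """KL divergence between predictions on clean and adversarially perturbed embeddings."""
    with torch.no_grad():
        clean_logp = F.log_softmax(model(inputs_embeds), dim=-1)
    delta = torch.zeros_like(inputs_embeds).uniform_(-eps, eps).requires_grad_(True)
    adv_logp = F.log_softmax(model(inputs_embeds + delta), dim=-1)
    kl = F.kl_div(adv_logp, clean_logp, log_target=True, reduction="batchmean")
    grad, = torch.autograd.grad(kl, delta)                  # ascend on the perturbation
    delta = (delta + step_size * grad.sign()).detach()
    adv_logp = F.log_softmax(model(inputs_embeds + delta), dim=-1)
    return F.kl_div(adv_logp, clean_logp, log_target=True, reduction="batchmean")

# total_loss = task_loss + alpha * adversarial_regularizer(model, embeddings)
```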
Incorporating External Knowledge through Pre-training for Natural
Language to Code Generation | Open-domain code generation aims to generate code in a general-purpose
programming language (such as Python) from natural language (NL) intents.
Motivated by the intuition that developers usually retrieve resources on the
web when writing code, we explore the effectiveness of incorporating two
varieties of external knowledge into NL-to-code generation: automatically mined
NL-code pairs from the online programming QA forum StackOverflow and
programming language API documentation. Our evaluations show that combining the
two sources with data augmentation and retrieval-based data re-sampling
improves the current state-of-the-art by up to 2.2% absolute BLEU score on the
code generation testbed CoNaLa. The code and resources are available at
https://github.com/neulab/external-knowledge-codegen.
| 2,020 | Computation and Language |
Gated Convolutional Bidirectional Attention-based Model for Off-topic
Spoken Response Detection | Off-topic spoken response detection, the task aiming at predicting whether a
response is off-topic for the corresponding prompt, is important for an
automated speaking assessment system. In many real-world educational
applications, off-topic spoken response detectors are required to achieve high
recall for off-topic responses not only on seen prompts but also on prompts
that are unseen during training. In this paper, we propose a novel approach for
off-topic spoken response detection with high off-topic recall on both seen and
unseen prompts. We introduce a new model, Gated Convolutional Bidirectional
Attention-based Model (GCBiA), which applies a bi-attention mechanism and
convolutions to extract topic words of prompts and key-phrases of responses,
and introduces gated units and residual connections between major layers to
better represent the relevance of responses and prompts. Moreover, a new
negative sampling method is proposed to augment training data. Experiment
results demonstrate that our novel approach can achieve significant
improvements in detecting off-topic responses with extremely high on-topic
recall, for both seen and unseen prompts.
| 2,020 | Computation and Language |
Taming the Expressiveness and Programmability of Graph Analytical
Queries | Graph database has enjoyed a boom in the last decade, and graph queries
accordingly gain a lot of attention from both academia and industry. We
focus on analytical queries in this paper. While analyzing existing
domain-specific languages (DSLs) for analytical queries regarding the
perspectives of completeness, expressiveness and programmability, we find
that none of the existing work has achieved satisfactory coverage of these
perspectives. Motivated by this, we propose the \flash DSL, which is named
after the three primitive operators Filter, LocAl and PuSH. We prove that
\flash is Turing complete (completeness), and show that it achieves both good
expressiveness and programmability for analytical queries. We provide an
implementation of \flash based on code generation, and compare it with native
C++ code and existing DSLs using representative queries. The experimental results
demonstrate \flash's expressiveness, and its capability of programming complex
algorithms that achieve satisfactory runtime.
| 2,020 | Computation and Language |
Adaptation of a Lexical Organization for Social Engineering Detection
and Response Generation | We present a paradigm for extensible lexicon development based on Lexical
Conceptual Structure to support social engineering detection and response
generation. We leverage the central notions of ask (elicitation of behaviors
such as providing access to money) and framing (risk/reward implied by the
ask). We demonstrate improvements in ask/framing detection through refinements
to our lexical organization and show that response generation qualitatively
improves as ask/framing detection performance improves. The paradigm presents a
systematic and efficient approach to resource adaptation for improved
task-specific performance.
| 2,020 | Computation and Language |
The State and Fate of Linguistic Diversity and Inclusion in the NLP
World | Language technologies contribute to promoting multilingualism and linguistic
diversity around the world. However, only a very small number of the over 7000
languages of the world are represented in the rapidly evolving language
technologies and applications. In this paper we look at the relation between
the types of languages, resources, and their representation in NLP conferences
to understand the trajectory that different languages have followed over time.
Our quantitative investigation underlines the disparity between languages,
especially in terms of their resources, and calls into question the "language
agnostic" status of current models and systems. Through this paper, we attempt
to convince the ACL community to prioritise the resolution of the predicaments
highlighted here, so that no language is left behind.
| 2,021 | Computation and Language |
Compositionality and Generalization in Emergent Languages | Natural language allows us to refer to novel composite concepts by combining
expressions denoting their parts according to systematic rules, a property
known as \emph{compositionality}. In this paper, we study whether the language
emerging in deep multi-agent simulations possesses a similar ability to refer
to novel primitive combinations, and whether it accomplishes this feat by
strategies akin to human-language compositionality. Equipped with new ways to
measure compositionality in emergent languages inspired by disentanglement in
representation learning, we establish three main results. First, given
sufficiently large input spaces, the emergent language will naturally develop
the ability to refer to novel composite concepts. Second, there is no
correlation between the degree of compositionality of an emergent language and
its ability to generalize. Third, while compositionality is not necessary for
generalization, it provides an advantage in terms of language transmission: The
more compositional a language is, the more easily it will be picked up by new
learners, even when the latter differ in architecture from the original agents.
We conclude that compositionality does not arise from simple generalization
pressure, but if an emergent language does chance upon it, it will be more
likely to survive and thrive.
| 2,020 | Computation and Language |
Variational Inference for Learning Representations of Natural Language
Edits | Document editing has become a pervasive component of the production of
information, with version control systems enabling edits to be efficiently
stored and applied. In light of this, the task of learning distributed
representations of edits has been recently proposed. With this in mind, we
propose a novel approach that employs variational inference to learn a
continuous latent space of vector representations to capture the underlying
semantic information with regard to the document editing process. We achieve
this by introducing a latent variable to explicitly model the aforementioned
features. This latent variable is then combined with a document representation
to guide the generation of an edited version of this document. Additionally, to
facilitate standardized automatic evaluation of edit representations, which has
heavily relied on direct human input thus far, we also propose a suite of
downstream tasks, PEER, specifically designed to measure the quality of edit
representations in the context of natural language processing.
| 2,021 | Computation and Language |
CheXbert: Combining Automatic Labelers and Expert Annotations for
Accurate Radiology Report Labeling Using BERT | The extraction of labels from radiology text reports enables large-scale
training of medical imaging models. Existing approaches to report labeling
typically rely either on sophisticated feature engineering based on medical
domain knowledge or manual annotations by experts. In this work, we introduce a
BERT-based approach to medical image report labeling that exploits both the
scale of available rule-based systems and the quality of expert annotations. We
demonstrate superior performance of a biomedically pretrained BERT model first
trained on annotations of a rule-based labeler and then finetuned on a small
set of expert annotations augmented with automated backtranslation. We find
that our final model, CheXbert, is able to outperform the previous best
rules-based labeler with statistical significance, setting a new SOTA for
report labeling on one of the largest datasets of chest x-rays.
| 2,020 | Computation and Language |
On the Encoder-Decoder Incompatibility in Variational Text Modeling and
Beyond | Variational autoencoders (VAEs) combine latent variables with amortized
variational inference, whose optimization usually converges into a trivial
local optimum termed posterior collapse, especially in text modeling. By
tracking the optimization dynamics, we observe the encoder-decoder
incompatibility that leads to poor parameterizations of the data manifold. We
argue that the trivial local optimum may be avoided by improving the encoder
and decoder parameterizations since the posterior network is part of a
transition map between them. To this end, we propose Coupled-VAE, which couples
a VAE model with a deterministic autoencoder with the same structure and
improves the encoder and decoder parameterizations via encoder weight sharing
and decoder signal matching. We apply the proposed Coupled-VAE approach to
various VAE models with different regularization, posterior family, decoder
structure, and optimization strategy. Experiments on benchmark datasets (i.e.,
PTB, Yelp, and Yahoo) show consistently improved results in terms of
probability estimation and richness of the latent space. We also generalize our
method to conditional language modeling and propose Coupled-CVAE, which largely
improves the diversity of dialogue generation on the Switchboard dataset.
| 2,020 | Computation and Language |
A Study of Cross-Lingual Ability and Language-specific Information in
Multilingual BERT | Recently, multilingual BERT works remarkably well on cross-lingual transfer
tasks, superior to static non-contextualized word embeddings. In this work, we
provide an in-depth experimental study to supplement the existing literature of
cross-lingual ability. We compare the cross-lingual ability of
non-contextualized and contextualized representation models trained on the same
data. We find that data size and context window size are crucial factors for
transferability. We also observe language-specific information in
multilingual BERT. By manipulating the latent representations, we can control
the output languages of multilingual BERT, and achieve unsupervised token
translation. We further show that based on the observation, there is a
computationally cheap but effective approach to improve the cross-lingual
ability of multilingual BERT.
| 2,020 | Computation and Language |
Learning Geometric Word Meta-Embeddings | We propose a geometric framework for learning meta-embeddings of words from
different embedding sources. Our framework transforms the embeddings into a
common latent space, where, for example, simple averaging of different
embeddings (of a given word) is more amenable. The proposed latent space arises
from two particular geometric transformations - the orthogonal rotations and
the Mahalanobis metric scaling (a minimal alignment sketch follows this entry).
Empirical results on several word similarity
and word analogy benchmarks illustrate the efficacy of the proposed framework.
| 2,020 | Computation and Language |
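A minimal sketch of the geometric idea: align one embedding source to another with an orthogonal (Procrustes) rotation before averaging vectors of shared words. The Mahalanobis metric scaling used in the paper is omitted for brevity:

```python
# Hedged sketch: orthogonal alignment followed by averaging of word embeddings.
import numpy as np

def orthogonal_procrustes(X, Y):
    """Rotation W minimizing ||XW - Y||_F for row-aligned matrices X and Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def meta_embed(emb_a, emb_b):
    """emb_a, emb_b: dicts word -> vector from two embedding sources."""
    shared = sorted(set(emb_a) & set(emb_b))
    A = np.stack([emb_a[w] for w in shared])
    B = np.stack([emb_b[w] for w in shared])
    W = orthogonal_procrustes(A, B)            # rotate source A into source B's space
    return {w: (emb_a[w] @ W + emb_b[w]) / 2 for w in shared}

rng = np.random.default_rng(0)
src_a = {w: rng.normal(size=50) for w in ["cat", "dog", "car"]}
src_b = {w: rng.normal(size=50) for w in ["cat", "dog", "car"]}
print(meta_embed(src_a, src_b)["cat"].shape)   # (50,)
```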
MPNet: Masked and Permuted Pre-training for Language Understanding | BERT adopts masked language modeling (MLM) for pre-training and is one of the
most successful pre-training models. Since BERT neglects dependency among
predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full
position information of a sentence and thus suffers from position discrepancy
between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids
their limitations. MPNet leverages the dependency among predicted tokens
through permuted language modeling (vs. MLM in BERT), and takes auxiliary
position information as input to make the model see a full sentence, thus
reducing the position discrepancy (vs. PLM in XLNet). We pre-train MPNet on a
large-scale dataset (over 160GB of text corpora) and fine-tune on a variety of
downstream tasks (GLUE, SQuAD, etc.). Experimental results show that MPNet
outperforms MLM and PLM by a large margin, and achieves better results on these
tasks compared with previous state-of-the-art pre-trained methods (e.g., BERT,
XLNet, RoBERTa) under the same model setting. The code and the pre-trained
models are available at: https://github.com/microsoft/MPNet.
| 2,020 | Computation and Language |
PHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine
Translation | Code-mixing is the phenomenon of using more than one language in a sentence.
It is a very frequently observed pattern of communication on social media
platforms. Flexibility to use multiple languages in one text message might help
to communicate efficiently with the target audience. But, it adds to the
challenge of processing and understanding natural language to a much larger
extent. This paper presents a parallel corpus of the 13,738 code-mixed
English-Hindi sentences and their corresponding translation in English. The
translations of sentences are done manually by the annotators. We are releasing
the parallel corpus to facilitate future research opportunities in code-mixed
machine translation. The annotated corpus is available at
https://doi.org/10.5281/zenodo.3605597.
| 2,020 | Computation and Language |
StereoSet: Measuring stereotypical bias in pretrained language models | A stereotype is an over-generalized belief about a particular group of
people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs
(biases) are known to hurt target groups. Since pretrained language models are
trained on large real world data, they are known to capture stereotypical
biases. In order to assess the adverse effects of these models, it is important
to quantify the bias captured in them. Existing literature on quantifying bias
evaluates pretrained language models on a small set of artificially constructed
bias-assessing sentences. We present StereoSet, a large-scale natural dataset
in English to measure stereotypical biases in four domains: gender, profession,
race, and religion. We evaluate popular models like BERT, GPT-2, RoBERTa, and
XLNet on our dataset and show that these models exhibit strong stereotypical
biases. We also present a leaderboard with a hidden test set to track the bias
of future language models at https://stereoset.mit.edu
| 2,020 | Computation and Language |
Grounding Conversations with Improvised Dialogues | Effective dialogue involves grounding, the process of establishing mutual
knowledge that is essential for communication between people. Modern dialogue
systems are not explicitly trained to build common ground, and therefore
overlook this important aspect of communication. Improvisational theater
(improv) intrinsically contains a high proportion of dialogue focused on
building common ground, and makes use of the yes-and principle, a strong
grounding speech act, to establish coherence and an actionable objective
reality. We collect a corpus of more than 26,000 yes-and turns, transcribing
them from improv dialogues and extracting them from larger, but more sparsely
populated movie script dialogue corpora, via a bootstrapped classifier. We
fine-tune chit-chat dialogue systems with our corpus to encourage more
grounded, relevant conversation and confirm these findings with human
evaluations.
| 2,020 | Computation and Language |
An Automated Pipeline for Character and Relationship Extraction from
Readers' Literary Book Reviews on Goodreads.com | Reader reviews of literary fiction on social media, especially those in
persistent, dedicated forums, create and are in turn driven by underlying
narrative frameworks. In their comments about a novel, readers generally
include only a subset of characters and their relationships, thus offering a
limited perspective on that work. Yet in aggregate, these reviews capture an
underlying narrative framework comprised of different actants (people, places,
things), their roles, and interactions that we label the "consensus narrative
framework". We represent this framework in the form of an actant-relationship
story graph. Extracting this graph is a challenging computational problem,
which we pose as a latent graphical model estimation problem. Posts and reviews
are viewed as samples of subgraphs/networks of the hidden narrative framework.
Inspired by the qualitative narrative theory of Greimas, we formulate a
graphical generative Machine Learning (ML) model where nodes represent actants,
and multi-edges and self-loops among nodes capture context-specific
relationships. We develop a pipeline of interlocking automated methods to
extract key actants and their relationships, and apply it to thousands of
reviews and comments posted on Goodreads.com. We manually derive the ground
truth narrative framework from SparkNotes, and then use word embedding tools to
compare relationships in ground truth networks with our extracted networks. We
find that our automated methodology generates highly accurate consensus
narrative frameworks: for our four target novels, with approximately 2900
reviews per novel, we report an average coverage/recall of important relationships
of >80% and an average edge detection rate of >89%. These extracted narrative
frameworks can generate insight into how people (or classes of people) read and
how they recount what they have read to others.
| 2,020 | Computation and Language |
The Panacea Threat Intelligence and Active Defense Platform | We describe Panacea, a system that supports natural language processing (NLP)
components for active defenses against social engineering attacks. We deploy a
pipeline of human language technology, including Ask and Framing Detection,
Named Entity Recognition, Dialogue Engineering, and Stylometry. Panacea
processes modern message formats through a plug-in architecture to accommodate
innovative approaches for message analysis, knowledge representation and
dialogue generation. The novelty of the Panacea system is that it uses NLP for
cyber defense and engages the attacker using bots to elicit evidence for
attributing the attack to the attacker and to waste the attacker's time and resources.
| 2,020 | Computation and Language |
Word Embedding-based Text Processing for Comprehensive Summarization and
Distinct Information Extraction | In this paper, we propose two automated text processing frameworks
specifically designed to analyze online reviews. The objective of the first
framework is to summarize the reviews dataset by extracting essential sentences.
This is performed by converting sentences into numerical vectors and clustering
them using a community detection algorithm based on their similarity levels.
Afterwards, a correlation score is measured for each sentence to determine its
importance level in each cluster and assign it as a tag for that community. The
second framework is based on a question-answering neural network model trained
to extract answers to multiple different questions. The collected answers are
effectively clustered to find multiple distinct answers to a single question
that might be asked by a customer. The proposed frameworks are shown to be more
comprehensive than existing reviews processing solutions.
| 2,020 | Computation and Language |
Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness | Most existing approaches for goal-oriented dialogue policy learning use
reinforcement learning, which focuses on the target agent policy and simply
treats the opposite agent policy as part of the environment. However, in real-world
scenarios, the behavior of an opposite agent often exhibits certain patterns or
underlies hidden policies, which can be inferred and utilized by the target
agent to facilitate its own decision making. This strategy is common in human
mental simulation: first imagining a specific action and its probable results
before actually acting. We therefore propose an opposite-behavior-aware
framework for policy learning in goal-oriented dialogues. We estimate the
opposite agent's policy from its behavior and use this estimation to improve
the target agent by regarding it as part of the target policy. We evaluate our
model on both cooperative and competitive dialogue tasks, showing superior
performance over state-of-the-art baselines.
| 2,020 | Computation and Language |
Train No Evil: Selective Masking for Task-Guided Pre-Training | Recently, pre-trained language models mostly follow the
pre-train-then-fine-tuning paradigm and have achieved great performance on
various downstream tasks. However, since the pre-training stage is typically
task-agnostic and the fine-tuning stage usually suffers from insufficient
supervised data, the models cannot always well capture the domain-specific and
task-specific patterns. In this paper, we propose a three-stage framework by
adding a task-guided pre-training stage with selective masking between general
pre-training and fine-tuning. In this stage, the model is trained by masked
language modeling on in-domain unsupervised data to learn domain-specific
patterns and we propose a novel selective masking strategy to learn
task-specific patterns. Specifically, we design a method to measure the
importance of each token in a sequence and selectively mask the important
tokens (a minimal sketch follows this entry). Experimental results on two
sentiment analysis tasks show that our
method can achieve comparable or even better performance with less than 50% of
computation cost, which indicates our method is both effective and efficient.
The source code of this paper can be obtained from
https://github.com/thunlp/SelectiveMasking.
| 2,020 | Computation and Language |
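A minimal sketch of task-guided selective masking, with a hypothetical occlusion-based importance score standing in for the paper's measure: each token is scored by how much removing it changes a task model's prediction, and the highest-scoring tokens are masked for further in-domain pre-training:

```python
# Hedged sketch of selective masking; `predict_proba` is a hypothetical task model.
def token_importance(tokens, predict_proba):
    """predict_proba(tokens) -> probability of the task label of interest."""
    base = predict_proba(tokens)
    return [base - predict_proba(tokens[:i] + tokens[i + 1:]) for i in range(len(tokens))]

def selective_mask(tokens, scores, ratio=0.15, mask_token="[MASK]"):
    """Mask the top `ratio` fraction of tokens by importance score."""
    k = max(1, int(len(tokens) * ratio))
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [mask_token if i in top else t for i, t in enumerate(tokens)]

toy_sentiment = lambda toks: 0.9 if "great" in toks else 0.4   # toy task model
toks = ["the", "movie", "was", "great"]
print(selective_mask(toks, token_importance(toks, toy_sentiment)))
# ['the', 'movie', 'was', '[MASK]']
```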
Neural Abstractive Summarization with Structural Attention | Attentional, RNN-based encoder-decoder architectures have achieved impressive
performance on abstractive summarization of news articles. However, these
methods fail to account for long term dependencies within the sentences of a
document. This problem is exacerbated in multi-document summarization tasks
such as summarizing the popular opinion in threads present in community
question answering (CQA) websites such as Yahoo! Answers and Quora. These
threads contain answers which often overlap or contradict each other. In this
work, we present a hierarchical encoder based on structural attention to model
such inter-sentence and inter-document dependencies. We set the popular
pointer-generator architecture and some of the architectures derived from it as
our baselines and show that they fail to generate good summaries in a
multi-document setting. We further illustrate that our proposed model achieves
significant improvement over the baselines in both single and multi-document
summarization settings -- in the former setting, it beats the best baseline by
1.31 and 7.8 ROUGE-1 points on CNN and CQA datasets, respectively; in the
latter setting, the performance is further improved by 1.6 ROUGE-1 points on
the CQA dataset.
| 2,020 | Computation and Language |
Keyphrase Generation with Cross-Document Attention | Keyphrase generation aims to produce a set of phrases summarizing the
essentials of a given document. Conventional methods normally apply an
encoder-decoder architecture to generate the output keyphrases for an input
document, where they are designed to focus on each current document so they
inevitably omit crucial corpus-level information carried by other similar
documents, i.e., the cross-document dependency and latent topics. In this
paper, we propose CDKGen, a Transformer-based keyphrase generator, which
expands the Transformer to global attention with cross-document attention
networks to incorporate available documents as references so as to generate
better keyphrases with the guidance of topic information. On top of the
proposed Transformer + cross-document attention architecture, we also adopt a
copy mechanism to enhance our model via selecting appropriate words from
documents to deal with out-of-vocabulary words in keyphrases. Experiment
results on five benchmark datasets illustrate the validity and effectiveness of
our model, which achieves the state-of-the-art performance on all datasets.
Further analyses confirm that the proposed model is able to generate keyphrases
consistent with references while keeping sufficient diversity. The code of
CDKGen is available at https://github.com/SVAIGBA/CDKGen.
| 2,022 | Computation and Language |
Making Monolingual Sentence Embeddings Multilingual using Knowledge
Distillation | We present an easy and efficient method to extend existing sentence embedding
models to new languages. This allows to create multilingual versions from
previously monolingual models. The training is based on the idea that a
translated sentence should be mapped to the same location in the vector space
as the original sentence. We use the original (monolingual) model to generate
sentence embeddings for the source language and then train a new system on
translated sentences to mimic the original model. Compared to other methods for
training multilingual sentence embeddings, this approach has several
advantages: It is easy to extend existing models with relatively few samples to
new languages, it is easier to ensure desired properties for the vector space,
and the hardware requirements for training are lower (a minimal sketch of the
distillation objective follows this entry). We demonstrate the
effectiveness of our approach for 50+ languages from various language families.
Code to extend sentence embeddings models to more than 400 languages is
publicly available.
| 2,020 | Computation and Language |
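The distillation objective can be sketched in a few lines, assuming a frozen monolingual `teacher` encoder and a trainable multilingual `student` encoder (both hypothetical callables returning sentence-embedding tensors); the sentence-transformers library ships a full implementation:

```python
# Hedged sketch of the multilingual knowledge-distillation objective.
import torch
import torch.nn.functional as F

def distillation_loss(teacher, student, src_sentences, tgt_sentences):
    """Pull student(source) and student(translation) towards the frozen teacher(source)."""
    with torch.no_grad():
        target = teacher(src_sentences)                     # teacher embeddings of source sentences
    return (F.mse_loss(student(src_sentences), target) +
            F.mse_loss(student(tgt_sentences), target))     # translation mapped to the same point
```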
Knowledge-Driven Distractor Generation for Cloze-style Multiple Choice
Questions | In this paper, we propose a novel configurable framework to automatically
generate distractive choices for open-domain cloze-style multiple-choice
questions, which incorporates a general-purpose knowledge base to effectively
create a small distractor candidate set, and a feature-rich learning-to-rank
model to select distractors that are both plausible and reliable. Experimental
results on datasets across four domains show that our framework yields
distractors that are more plausible and reliable than previous methods. This
dataset can also be used as a benchmark for distractor generation in the
future.
| 2,020 | Computation and Language |
Considering Likelihood in NLP Classification Explanations with Occlusion
and Language Modeling | Recently, state-of-the-art NLP models gained an increasing syntactic and
semantic understanding of language, and explanation methods are crucial to
understand their decisions. Occlusion is a well established method that
provides explanations on discrete language data, e.g. by removing a language
unit from an input and measuring the impact on a model's decision. We argue
that current occlusion-based methods often produce invalid or syntactically
incorrect language data, neglecting the improved abilities of recent NLP
models. Furthermore, gradient-based explanation methods disregard the discrete
distribution of data in NLP. Thus, we propose OLM: a novel explanation method
that combines occlusion and language models to sample valid and syntactically
correct replacements with high likelihood, given the context of the original
input (a minimal sketch follows this entry). We lay out a theoretical foundation
that alleviates these weaknesses of
other explanation methods in NLP and provide results that underline the
importance of considering data likelihood in occlusion-based explanation.
| 2,020 | Computation and Language |
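A minimal sketch of the relevance computation: instead of deleting a token, plausible in-context replacements are sampled from a language model and the average change in the classifier's prediction is taken as the token's relevance. `sample_replacement` and `classifier_proba` are hypothetical stand-ins for the MLM sampler and the model under explanation:

```python
# Hedged sketch of occlusion-with-language-modeling relevance.
def olm_relevance(tokens, position, classifier_proba, sample_replacement, n_samples=20):
    """Average prediction change when `position` is resampled from a language model."""
    base = classifier_proba(tokens)
    diffs = []
    for _ in range(n_samples):
        repl = sample_replacement(tokens, position)          # token drawn from an LM given context
        perturbed = tokens[:position] + [repl] + tokens[position + 1:]
        diffs.append(base - classifier_proba(perturbed))
    return sum(diffs) / len(diffs)
```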
Contextual Neural Machine Translation Improves Translation of Cataphoric
Pronouns | The advent of context-aware NMT has resulted in promising improvements in the
overall translation quality and specifically in the translation of discourse
phenomena such as pronouns. Previous works have mainly focused on the use of
past sentences as context with a focus on anaphora translation. In this work,
we investigate the effect of future sentences as context by comparing the
performance of a contextual NMT model trained with the future context to the
one trained with the past context. Our experiments and evaluation, using
generic and pronoun-focused automatic metrics, show that the use of future
context not only achieves significant improvements over the context-agnostic
Transformer, but also demonstrates comparable and in some cases improved
performance over its counterpart trained on past context. We also perform an
evaluation on a targeted cataphora test suite and report significant gains over
the context-agnostic Transformer in terms of BLEU.
| 2,020 | Computation and Language |
Relabel the Noise: Joint Extraction of Entities and Relations via
Cooperative Multiagents | Distant supervision based methods for entity and relation extraction have
received increasing popularity due to the fact that these methods require light
human annotation efforts. In this paper, we consider the problem of
\textit{shifted label distribution}, which is caused by the inconsistency
between the noisy-labeled training set subject to external knowledge graph and
the human-annotated test set, and exacerbated by the pipelined
entity-then-relation extraction manner with noise propagation. We propose a
joint extraction approach to address this problem by re-labeling noisy
instances with a group of cooperative multiagents. To handle noisy instances in
a fine-grained manner, each agent in the cooperative group evaluates the
instance by calculating a continuous confidence score from its own perspective;
To leverage the correlations between these two extraction tasks, a confidence
consensus module is designed to gather the wisdom of all agents and
re-distribute the noisy training set with confidence-scored labels. Further,
the confidences are used to adjust the training losses of extractors.
Experimental results on two real-world datasets verify the benefits of
re-labeling noisy instance, and show that the proposed model significantly
outperforms the state-of-the-art entity and relation extraction methods.
| 2,020 | Computation and Language |
DIET: Lightweight Language Understanding for Dialogue Systems | Large-scale pre-trained language models have shown impressive results on
language understanding benchmarks like GLUE and SuperGLUE, improving
considerably over other pre-training methods like distributed representations
(GloVe) and purely supervised approaches. We introduce the Dual Intent and
Entity Transformer (DIET) architecture, and study the effectiveness of
different pre-trained representations on intent and entity prediction, two
common dialogue language understanding tasks. DIET advances the state of the
art on a complex multi-domain NLU dataset and achieves similarly high
performance on other simpler datasets. Surprisingly, we show that there is no
clear benefit to using large pre-trained models for this task, and in fact DIET
improves upon the current state of the art even in a purely supervised setup
without any pre-trained embeddings. Our best performing model outperforms
fine-tuning BERT and is about six times faster to train.
| 2,020 | Computation and Language |
Learning to Encode Evolutionary Knowledge for Automatic Commenting Long
Novels | Static knowledge graphs have been incorporated extensively into
sequence-to-sequence frameworks for text generation. While they effectively
represent structured context, static knowledge graphs fail to capture
knowledge evolution, which is required for modeling dynamic events. In this
paper, an automatic commenting task is proposed for long novels, which
involves understanding a context of tens of thousands of words. To model the
dynamic storyline, especially the transitions of the characters and their
relations, an Evolutionary Knowledge Graph (EKG) is proposed and learned
within a multi-task framework. Given a specific passage to comment on,
sequential modeling is used to incorporate historical and future embeddings
for context representation. Further, a graph-to-sequence model is designed to
utilize the
EKG for comment generation. Extensive experimental results show that our
EKG-based method is superior to several strong baselines on both automatic and
human evaluations.
| 2,020 | Computation and Language |
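As one way to picture the "historical and future embedding" idea above, the sketch below encodes a sequence of per-chapter graph embeddings with a bidirectional GRU so that each position mixes past and future context. The graph embeddings and all dimensions are hypothetical; this is not the paper's EKG or its graph-to-sequence generator.

```python
# Illustrative sketch: sequential modeling over graph snapshots so that each chapter's
# state reflects both earlier and later chapters (via a bidirectional GRU).
import torch
import torch.nn as nn

class StorylineEncoder(nn.Module):
    def __init__(self, graph_dim=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(graph_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, chapter_graph_embs):   # (batch, n_chapters, graph_dim)
        states, _ = self.rnn(chapter_graph_embs)
        return states                        # (batch, n_chapters, 2 * hidden)

if __name__ == "__main__":
    graphs = torch.randn(1, 10, 32)          # 10 chapter-level graph embeddings
    print(StorylineEncoder()(graphs).shape)  # torch.Size([1, 10, 128])
```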
BERT-ATTACK: Adversarial Attack Against BERT Using BERT | Adversarial attacks on discrete data (such as text) have proved
significantly more challenging than attacks on continuous data (such as
images), since it is difficult to generate adversarial samples with
gradient-based methods. Current successful attack methods for text usually
adopt heuristic replacement strategies at the character or word level, which
makes it challenging to find an optimal solution in the massive space of
possible replacement combinations while preserving semantic consistency and
language fluency. In
this paper, we propose \textbf{BERT-Attack}, a high-quality and effective
method to generate adversarial samples using pre-trained masked language models
exemplified by BERT. We turn BERT against its fine-tuned models and other deep
neural models in downstream tasks so that we can successfully mislead the
target models to predict incorrectly. Our method outperforms state-of-the-art
attack strategies in both success rate and perturb percentage, while the
generated adversarial samples are fluent and semantically preserved. Moreover,
the computational cost is low, making large-scale generation feasible. The code
is available at https://github.com/LinyangLee/BERT-Attack.
| 2,020 | Computation and Language |
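The core ingredient described above is using a masked language model to propose fluent word substitutions. The snippet below shows only that generic step with the Hugging Face fill-mask pipeline; candidate filtering, word-importance ranking, and querying the victim model are omitted, so refer to the linked repository for the actual BERT-Attack implementation.

```python
# Illustrative sketch: ask a masked LM for fluent replacement candidates at one position;
# an attacker would then test each candidate against the target (fine-tuned) model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The movie was surprisingly good."
target_word = "good"
masked = sentence.replace(target_word, fill.tokenizer.mask_token, 1)

# Top masked-LM candidates for the masked slot, ranked by probability.
for cand in fill(masked, top_k=5):
    print(f"{cand['token_str']:>12s}  {cand['score']:.3f}")
```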
Adaptive Interaction Fusion Networks for Fake News Detection | Most existing methods for fake news detection focus on
learning and fusing various features. However, the learning of
various features is independent, which leads to a lack of cross-interaction
fusion between features on social media, especially between posts and comments.
Generally, in fake news, there are emotional associations and semantic
conflicts between posts and comments. How to represent and fuse the
cross-interaction between both is a key challenge. In this paper, we propose
Adaptive Interaction Fusion Networks (AIFN) to fulfill cross-interaction fusion
among features for fake news detection. In AIFN, to discover semantic
conflicts, we design gated adaptive interaction networks (GAIN) to adaptively
capture similar and conflicting semantics between posts and
comments. To establish feature associations, we devise semantic-level fusion
self-attention networks (SFSN) to enhance semantic correlations and fusion
among features. Extensive experiments on two real-world datasets, i.e.,
RumourEval and PHEME, demonstrate that AIFN achieves state-of-the-art
performance, boosting accuracy by more than 2.05% and 1.90%, respectively.
| 2,020 | Computation and Language |
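A minimal sketch of a gated interaction between post and comment representations, loosely following the GAIN description above; the gating formula and dimensions are assumptions rather than the paper's exact AIFN/SFSN design.

```python
# Illustrative sketch: a learned gate decides how much of the post view and how much of
# the comment view survives in the fused feature.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, post, comment):          # both: (batch, dim)
        g = torch.sigmoid(self.gate(torch.cat([post, comment], dim=-1)))
        return g * post + (1.0 - g) * comment  # (batch, dim)

if __name__ == "__main__":
    post, comment = torch.randn(4, 128), torch.randn(4, 128)
    print(GatedFusion()(post, comment).shape)  # torch.Size([4, 128])
```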