Titles | Abstracts | Years | Categories
---|---|---|---|
LadaBERT: Lightweight Adaptation of BERT through Hybrid Model
Compression | BERT is a cutting-edge language representation model pre-trained on a large
corpus, which achieves superior performance on various natural language
understanding tasks. However, a major blocking issue of applying BERT to online
services is that it is memory-intensive and leads to unsatisfactory latency of
user requests, raising the necessity of model compression. Existing solutions
leverage the knowledge distillation framework to learn a smaller model that
imitates the behaviors of BERT. However, the training procedure of knowledge
distillation is expensive itself as it requires sufficient training data to
imitate the teacher model. In this paper, we address this issue by proposing a
hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid
model compression), which combines the advantages of different model
compression methods, including weight pruning, matrix factorization and
knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various
public datasets while the training overheads can be reduced by an order of
magnitude.
| 2020 | Computation and Language |
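The abstract above names three compression primitives without spelling them out. A minimal PyTorch sketch of each building block (the pruning threshold, rank, and temperature below are illustrative assumptions, not LadaBERT's actual settings):

```python
import torch
import torch.nn.functional as F

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction of a weight matrix."""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Approximate an (out x in) matrix W as A @ B via truncated SVD."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] * S[:rank], Vh[:rank, :]

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """Soft-target KL loss so the compressed student mimics the teacher."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
```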
Putting a Spin on Language: A Quantum Interpretation of Unary
Connectives for Linguistic Applications | Extended versions of the Lambek Calculus currently used in computational
linguistics rely on unary modalities to allow for the controlled application of
structural rules affecting word order and phrase structure. These controlled
structural operations give rise to derivational ambiguities that are missed by
the original Lambek Calculus or its pregroup simplification. Proposals for
compositional interpretation of extended Lambek Calculus in the compact closed
category of FVect and linear maps have been made, but in these proposals the
syntax-semantics mapping ignores the control modalities, effectively
restricting their role to the syntax. Our aim is to turn the modalities into
first-class citizens of the vectorial interpretation. Building on the
directional density matrix semantics, we extend the interpretation of the type
system with an extra spin density matrix space. The interpretation of proofs
then results in ambiguous derivations being tensored with orthogonal spin
states. Our method introduces a way of simultaneously representing co-existing
interpretations of ambiguous utterances, and provides a uniform framework for
the integration of lexical and derivational ambiguity.
| 2021 | Computation and Language |
Generating Counter Narratives against Online Hate Speech: Data and
Strategies | Recently, research has started focusing on avoiding the undesired effects that
come with content moderation, such as censorship and overblocking, when dealing
with hatred online. The core idea is to directly intervene in the discussion
with textual responses that are meant to counter the hate content and prevent
it from further spreading. Accordingly, automation strategies, such as natural
language generation, are beginning to be investigated. Still, they suffer from
a lack of sufficient quality data and tend to produce generic/repetitive
responses. Aware of these limitations, we
present a study on how to collect responses to hate effectively, employing
large scale unsupervised language models such as GPT-2 for the generation of
silver data, and the best annotation strategies/neural architectures that can
be used for data filtering before expert validation/post-editing.
| 2020 | Computation and Language |
Measuring Emotions in the COVID-19 Real World Worry Dataset | The COVID-19 pandemic is having a dramatic impact on societies and economies
around the world. With various measures of lockdowns and social distancing in
place, it becomes important to understand emotional responses on a large scale.
In this paper, we present the first ground truth dataset of emotional responses
to COVID-19. We asked participants to indicate their emotions and express these
in text. This resulted in the Real World Worry Dataset of 5,000 texts (2,500
short + 2,500 long texts). Our analyses suggest that emotional responses
correlated with linguistic measures. Topic modeling further revealed that
people in the UK worry about their family and the economic situation.
Tweet-sized texts functioned as a call for solidarity, while longer texts shed
light on worries and concerns. Using predictive modeling approaches, we were
able to approximate the emotional responses of participants from text within
14% of their actual value. We encourage others to use the dataset and improve
how we can use automated methods to learn about emotional responses and worries
about an urgent problem.
| 2020 | Computation and Language |
Asking and Answering Questions to Evaluate the Factual Consistency of
Summaries | Practical applications of abstractive summarization models are limited by
frequent factual inconsistencies with respect to their input. Existing
automatic evaluation metrics for summarization are largely insensitive to such
errors. We propose an automatic evaluation protocol called QAGS (pronounced
"kags") that is designed to identify factual inconsistencies in a generated
summary. QAGS is based on the intuition that if we ask questions about a
summary and its source, we will receive similar answers if the summary is
factually consistent with the source. To evaluate QAGS, we collect human
judgments of factual consistency on model-generated summaries for the
CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018)
summarization datasets. QAGS has substantially higher correlations with these
judgments than other automatic evaluation metrics. Also, QAGS offers a natural
form of interpretability: The answers and questions generated while computing
QAGS indicate which tokens of a summary are inconsistent and why. We believe
QAGS is a promising tool in automatically generating usable and factually
consistent text.
| 2020 | Computation and Language |
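The QAGS protocol described above is straightforward to sketch. In the hypothetical sketch below, `gen_questions` and `answer` stand in for trained question-generation and QA models (not the paper's actual components); the answer comparison uses SQuAD-style token F1:

```python
def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between two answer strings (SQuAD-style)."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def qags_score(summary, source, gen_questions, answer):
    """Ask questions about the summary; a consistent summary should yield
    the same answers whether we read the summary or the source."""
    questions = gen_questions(summary)
    scores = [token_f1(answer(q, summary), answer(q, source))
              for q in questions]
    return sum(scores) / max(len(scores), 1)
```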
Error correction and extraction in request dialogs | We propose a dialog system utility component that takes a user's last two
utterances and detects whether the last utterance is an error correction of
the second-to-last one. If so, it corrects the second-to-last utterance
according to the correction in the last utterance and outputs the extracted
pairs of reparandum and repair entity. This component offers two advantages:
it learns the concept of corrections, avoiding the need to collect corrections
for every new domain, and it extracts reparandum and repair pairs, which can
serve as a further learning signal.
For error correction, one sequence labeling and two sequence-to-sequence
approaches are presented. For error correction detection, these three
approaches can also be used, and in addition we present a sequence
classification approach. A detection approach and a correction approach can be
combined into a pipeline, or the correction approaches can be trained and used
end-to-end, avoiding a second component. We modified the EPIC-KITCHENS-100
dataset to evaluate the approaches for correcting entity phrases in request
dialogs. For error correction detection and correction, we obtained an
accuracy of 96.40% on synthetic validation data and 77.81% on human-created
real-world test data.
| 2023 | Computation and Language |
The Spotify Podcast Dataset | Podcasts are a relatively new form of audio media. Episodes appear on a
regular cadence, and come in many different formats and levels of formality.
They can be formal news journalism or conversational chat; fiction or
non-fiction. They are rapidly growing in popularity and yet have been
relatively little studied. As an audio format, podcasts are more varied in
style and production types than, say, broadcast news, and contain many more
genres than typically studied in video research. The medium is therefore a rich
domain with many research avenues for the IR and NLP communities. We present
the Spotify Podcast Dataset, a set of approximately 100K podcast episodes
comprised of raw audio files along with accompanying ASR transcripts. This
represents over 47,000 hours of transcribed audio, and is an order of magnitude
larger than previous speech-to-text corpora.
| 2020 | Computation and Language |
Severing the Edge Between Before and After: Neural Architectures for
Temporal Ordering of Events | In this paper, we propose a neural architecture and a set of training methods
for ordering events by predicting temporal relations. Our proposed models
receive a pair of events within a span of text as input and they identify
temporal relations (Before, After, Equal, Vague) between them. Given that a key
challenge with this task is the scarcity of annotated data, our models rely on
pretrained representations (i.e., RoBERTa, BERT, or ELMo), transfer and
multi-task learning (by leveraging complementary datasets), and self-training
techniques. Experiments on the MATRES dataset of English documents establish a
new state-of-the-art on this task.
| 2020 | Computation and Language |
Conversation Learner -- A Machine Teaching Tool for Building Dialog
Managers for Task-Oriented Dialog Systems | Traditionally, industry solutions for building a task-oriented dialog system
have relied on helping dialog authors define rule-based dialog managers,
represented as dialog flows. While dialog flows are intuitively interpretable
and good for simple scenarios, they fall short in terms of the flexibility
needed to handle complex dialogs. On the other hand, purely
machine-learned models can handle complex dialogs, but they are considered to
be black boxes and require large amounts of training data. In this
demonstration, we showcase Conversation Learner, a machine teaching tool for
building dialog managers. It combines the best of both approaches by enabling
dialog authors to create a dialog flow using familiar tools, converting the
dialog flow into a parametric model (e.g., neural networks), and allowing
dialog authors to improve the dialog manager (i.e., the parametric model) over
time by leveraging user-system dialog logs as training data through a machine
teaching interface.
| 2020 | Computation and Language |
Pruning and Sparsemax Methods for Hierarchical Attention Networks | This paper introduces and evaluates two novel Hierarchical Attention Network
models [Yang et al., 2016] - i) Hierarchical Pruned Attention Networks, which
remove the irrelevant words and sentences from the classification process in
order to reduce potential noise in the document classification accuracy and ii)
Hierarchical Sparsemax Attention Networks, which replace the Softmax function
used in the attention mechanism with the Sparsemax [Martins and Astudillo,
2016], capable of better handling importance distributions where a lot of words
or sentences have very low probabilities. Our empirical evaluation on the IMDB
Review dataset for sentiment analysis shows both approaches are able to match
the results obtained by the current state of the art (without, however, any
significant benefits). All our source code is made available at
https://github.com/jmribeiro/dsl-project.
| 2020 | Computation and Language |
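Sparsemax itself (Martins and Astudillo, 2016) is a closed-form projection onto the probability simplex that, unlike softmax, can assign exactly zero weight to irrelevant words or sentences. A minimal NumPy version of the function these networks substitute for softmax:

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Project scores z onto the simplex (Martins and Astudillo, 2016)."""
    z_sorted = np.sort(z)[::-1]                 # descending
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    support = 1 + ks * z_sorted > cumsum        # which coordinates stay active
    k = ks[support][-1]
    tau = (cumsum[k - 1] - 1) / k               # threshold shared by the support
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([3.0, 1.0, 0.5])))     # -> [1. 0. 0.], fully sparse
```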
Calibrating Structured Output Predictors for Natural Language Processing | We address the problem of calibrating prediction confidence for output
entities of interest in natural language processing (NLP) applications. It is
important that NLP applications such as named entity recognition and question
answering produce calibrated confidence scores for their predictions,
especially if the system is to be deployed in a safety-critical domain such as
healthcare. However, the output space of such structured prediction models is
often too large to adapt binary or multi-class calibration methods directly. In
this study, we propose a general calibration scheme for output entities of
interest in neural-network based structured prediction models. Our proposed
method can be used with any binary class calibration scheme and a neural
network model. Additionally, we show that our calibration method can also be
used as an uncertainty-aware, entity-specific decoding step to improve the
performance of the underlying model at no additional training cost or data
requirements. We show that our method outperforms current calibration
techniques for named-entity-recognition, part-of-speech and question answering.
Our decoding step also improves the model's performance across several tasks
and benchmark datasets. Our method improves calibration and model
performance on out-of-domain test scenarios as well.
| 2020 | Computation and Language |
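The scheme above plugs in any binary calibrator. As one concrete example of such a building block (not the paper's exact method), here is Platt scaling over raw entity confidences; the scores and labels below are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Raw model confidences for extracted entities, and whether each
# prediction turned out to be correct on a held-out calibration set.
raw_conf = np.array([[0.91], [0.45], [0.73], [0.98], [0.33], [0.88]])
correct = np.array([1, 0, 1, 1, 0, 0])

# Platt scaling: a one-feature logistic regression mapping raw
# confidences onto calibrated probabilities.
calibrator = LogisticRegression().fit(raw_conf, correct)
calibrated = calibrator.predict_proba(raw_conf)[:, 1]
print(calibrated)
```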
On Optimal Transformer Depth for Low-Resource Language Translation | Transformers have shown great promise as an approach to Neural Machine
Translation (NMT) for low-resource languages. However, at the same time,
transformer models remain difficult to optimize and require careful tuning of
hyper-parameters to be useful in this setting. Many NMT toolkits come with a
set of default hyper-parameters, which researchers and practitioners often
adopt for the sake of convenience and to avoid tuning. These configurations,
however, have been optimized for large-scale machine translation data sets with
several millions of parallel sentences for European languages like English and
French. In this work, we find that the current trend in the field to use very
large models is detrimental for low-resource languages, since it makes training
more difficult and hurts overall performance, confirming previous observations.
We see our work as complementary to the Masakhane project ("Masakhane" means
"We Build Together" in isiZulu). In this spirit, low-resource NMT systems are
now being built by the communities who need them the most. However, many in the
community still have very limited access to the type of computational resources
required for building extremely large models promoted by industrial research.
Therefore, by showing that transformer models perform well (and often best) at
low-to-moderate depth, we hope to convince fellow researchers to devote less
computational resources, as well as time, to exploring overly large models
during the development of these systems.
| 2020 | Computation and Language |
Improving Readability for Automatic Speech Recognition Transcription | Modern Automatic Speech Recognition (ASR) systems can achieve high
performance in terms of recognition accuracy. However, a perfectly accurate
transcript still can be challenging to read due to grammatical errors,
disfluency, and other errata common in spoken communication. Many downstream
tasks and human readers rely on the output of the ASR system; therefore, errors
introduced by the speaker and ASR system alike will be propagated to the next
task in the pipeline. In this work, we propose a novel NLP task called ASR
post-processing for readability (APR) that aims to transform the noisy ASR
output into a readable text for humans and downstream tasks while maintaining
the semantic meaning of the speaker. In addition, we describe a method to
address the lack of task-specific data by synthesizing examples for the APR
task using the datasets collected for Grammatical Error Correction (GEC)
followed by text-to-speech (TTS) and ASR. Furthermore, we propose metrics
borrowed from similar tasks to evaluate performance on the APR task. We compare
fine-tuned models based on several open-sourced and adapted pre-trained models
with the traditional pipeline method. Our results suggest that finetuned models
improve the performance on the APR task significantly, hinting at the potential
benefits of using APR systems. We hope that the read, understand, and rewrite
approach of our work can serve as a basis that many NLP tasks and human readers
can benefit from.
| 2020 | Computation and Language |
PANDORA Talks: Personality and Demographics on Reddit | Personality and demographics are important variables in social sciences,
while in NLP they can aid in interpretability and removal of societal biases.
However, datasets with both personality and demographic labels are scarce. To
address this, we present PANDORA, the first large-scale dataset of Reddit
comments labeled with three personality models (including the well-established
Big 5 model) and demographics (age, gender, and location) for more than 10k
users. We showcase the usefulness of this dataset in three experiments, where
we leverage the more readily available data from other personality models to
predict the Big 5 traits, analyze gender classification biases arising from
psycho-demographic variables, and carry out a confirmatory and exploratory
analysis based on psychological theories. Finally, we present benchmark
prediction models for all personality and demographic variables.
| 2021 | Computation and Language |
A Multilingual Study of Multi-Sentence Compression using Word
Vertex-Labeled Graphs and Integer Linear Programming | Multi-Sentence Compression (MSC) aims to generate a short sentence with the
key information from a cluster of similar sentences. MSC enables summarization
and question-answering systems to generate outputs combining fully formed
sentences from one or several documents. This paper describes an Integer Linear
Programming method for MSC using a vertex-labeled graph to select different
keywords, with the goal of generating more informative sentences while
maintaining their grammaticality. Our system is of good quality and outperforms
the state of the art in evaluations conducted on news datasets in three
languages: French, Portuguese, and Spanish. We conducted both automatic and
manual evaluations to
determine the informativeness and the grammaticality of compressions for each
dataset. In additional tests, which take advantage of the fact that the length
of compressions can be modulated, we still improve ROUGE scores with shorter
output sentences.
| 2020 | Computation and Language |
Recommendation Chart of Domains for Cross-Domain Sentiment
Analysis: Findings of A 20 Domain Study | Cross-domain sentiment analysis (CDSA) helps to address the problem of data
scarcity in scenarios where labelled data for a domain (known as the target
domain) is unavailable or insufficient. However, the decision to choose a
domain (known as the source domain) to leverage from is, at best, intuitive. In
this paper, we investigate text similarity metrics to facilitate source domain
selection for CDSA. We report results on 20 domains (all possible pairs) using
11 similarity metrics. Specifically, we compare CDSA performance with these
metrics for different domain-pairs to enable the selection of a suitable source
domain, given a target domain. These metrics include two novel metrics for
evaluating domain adaptability to help source domain selection of labelled data
and utilize word and sentence-based embeddings as metrics for unlabelled data.
The goal of our experiments is a recommendation chart that gives the K best
source domains for CDSA for a given target domain. We show that the best K
source domains returned by our similarity metrics have a precision of over 50%,
for varying values of K.
| 2020 | Computation and Language |
Injecting Numerical Reasoning Skills into Language Models | Large pre-trained language models (LMs) are known to encode substantial
amounts of linguistic information. However, high-level reasoning skills, such
as numerical reasoning, are difficult to learn from a language-modeling
objective only. Consequently, existing models for numerical reasoning have used
specialized architectures with limited flexibility. In this work, we show that
numerical reasoning is amenable to automatic data generation, and thus one can
inject this skill into pre-trained LMs, by generating large amounts of data,
and training in a multi-task setup. We show that pre-training our model,
GenBERT, on this data, dramatically improves performance on DROP (49.3
$\rightarrow$ 72.3 F1), reaching performance that matches state-of-the-art
models of comparable size, while using a simple and general-purpose
encoder-decoder architecture. Moreover, GenBERT generalizes well to math word
problem datasets, while maintaining high performance on standard RC tasks. Our
approach provides a general recipe for injecting skills into large pre-trained
LMs, whenever the skill is amenable to automatic data augmentation.
| 2020 | Computation and Language |
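Because the skill is "amenable to automatic data generation," synthetic pre-training data can be produced by templated generators at essentially no cost. A toy sketch of that idea (the templates and ranges are illustrative, not the paper's actual generators):

```python
import random

def make_numeric_example(rng: random.Random) -> dict:
    """Generate one synthetic numerical-reasoning QA pair from a template."""
    a, b = rng.randint(10, 999), rng.randint(10, 999)
    op = rng.choice(["plus", "minus"])
    answer = a + b if op == "plus" else a - b
    return {"question": f"What is {a} {op} {b}?", "answer": str(answer)}

rng = random.Random(0)
synthetic_data = [make_numeric_example(rng) for _ in range(100_000)]
print(synthetic_data[0])
```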
MuTual: A Dataset for Multi-Turn Dialogue Reasoning | Non-task oriented dialogue systems have achieved great success in recent
years due to largely accessible conversation data and the development of deep
learning techniques. Given a context, current systems are able to yield a
relevant and fluent response, but sometimes make logical mistakes because of
weak reasoning capabilities. To facilitate the conversation reasoning research,
we introduce MuTual, a novel dataset for Multi-Turn dialogue Reasoning,
consisting of 8,860 manually annotated dialogues based on Chinese student
English listening comprehension exams. Compared to previous benchmarks for
non-task oriented dialogue systems, MuTual is much more challenging since it
requires a model that can handle various reasoning problems. Empirical results
show that state-of-the-art methods only reach 71%, which is far behind the
human performance of 94%, indicating that there is ample room for improving
reasoning ability. MuTual is available at https://github.com/Nealcly/MuTual.
| 2020 | Computation and Language |
Reducing Gender Bias in Neural Machine Translation as a Domain
Adaptation Problem | Training data for NLP tasks often exhibits gender bias in that fewer
sentences refer to women than to men. In Neural Machine Translation (NMT)
gender bias has been shown to reduce translation quality, particularly when the
target language has grammatical gender. The recent WinoMT challenge set allows
us to measure this effect directly (Stanovsky et al, 2019).
Ideally we would reduce system bias by simply debiasing all data prior to
training, but achieving this effectively is itself a challenge. Rather than
attempt to create a `balanced' dataset, we use transfer learning on a small set
of trusted, gender-balanced examples. This approach gives strong and consistent
improvements in gender debiasing with much less computational cost than
training from scratch.
A known pitfall of transfer learning on new domains is `catastrophic
forgetting', which we address both in adaptation and in inference. During
adaptation we show that Elastic Weight Consolidation allows a performance
trade-off between general translation quality and bias reduction. During
inference we propose a lattice-rescoring scheme which outperforms all systems
evaluated in Stanovsky et al (2019) on WinoMT with no degradation of general
test set BLEU, and we show this scheme can be applied to remove gender bias in
the output of `black box' online commercial MT systems. We demonstrate our
approach translating from English into three languages with varied linguistic
properties and data availability.
| 2020 | Computation and Language |
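Elastic Weight Consolidation, used above to control catastrophic forgetting during adaptation, adds a quadratic penalty anchoring parameters that matter for general translation quality. A minimal PyTorch sketch, assuming `fisher` (diagonal Fisher-information estimates) and `old_params` were computed beforehand on general-domain data; the weighting `lam` is an illustrative hyperparameter:

```python
import torch

def ewc_penalty(model, fisher, old_params, lam: float = 1.0):
    """EWC regularizer: penalize movement away from the general-domain
    parameters, weighted per-parameter by Fisher information."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam / 2 * loss

# During adaptation on the gender-balanced set:
# total_loss = nmt_loss + ewc_penalty(model, fisher, old_params, lam=0.1)
```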
Self-Training for Unsupervised Neural Machine Translation in Unbalanced
Training Data Scenarios | Unsupervised neural machine translation (UNMT) that relies solely on massive
monolingual corpora has achieved remarkable results in several translation
tasks. However, in real-world scenarios, massive monolingual corpora do not
exist for some extremely low-resource languages such as Estonian, and UNMT
systems usually perform poorly when there is no adequate training corpus for
one language. In this paper, we first define and analyze the unbalanced
training data scenario for UNMT. Based on this scenario, we propose UNMT
self-training mechanisms to train a robust UNMT system and improve its
performance in this case. Experimental results on several language pairs show
that the proposed methods substantially outperform conventional UNMT systems.
| 2021 | Computation and Language |
Interpretability Analysis for Named Entity Recognition to Understand
System Predictions and How They Can Improve | Named Entity Recognition systems achieve remarkable performance on domains
such as English news. It is natural to ask: What are these models actually
learning to achieve this? Are they merely memorizing the names themselves? Or
are they capable of interpreting the text and inferring the correct entity type
from the linguistic context? We examine these questions by contrasting the
performance of several variants of LSTM-CRF architectures for named entity
recognition, with some provided only representations of the context as
features. We also perform similar experiments for BERT. We find that context
representations do contribute to system performance, but that the main factor
driving high performance is learning the name tokens themselves. We enlist
human annotators to evaluate the feasibility of inferring entity types from the
context alone and find that, for the majority of the errors made by the
context-only system, people are likewise unable to infer the entity type,
though there is some room for improvement. A system should be able to recognize
any name in a
predictive context correctly and our experiments indicate that current systems
may be further improved by such capability.
| 2021 | Computation and Language |
Global Public Health Surveillance using Media Reports: Redesigning GPHIN | Global public health surveillance relies on reporting structures and
transmission of trustworthy health reports. But in practice, these processes
may not always be fast enough, or are hindered by procedural, technical, or
political barriers. GPHIN, the Global Public Health Intelligence Network, was
designed in the late 1990s to scour mainstream news for health events, as news
travels faster and more freely. This paper outlines the next generation of
GPHIN, which went live in 2017, and reports on design decisions underpinning
its new functions and innovations.
| 2020 | Computation and Language |
BLEURT: Learning Robust Metrics for Text Generation | Text generation has made significant advances in the last few years. Yet,
evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU
and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a
learned evaluation metric based on BERT that can model human judgments with a
few thousand possibly biased training examples. A key aspect of our approach is
a novel pre-training scheme that uses millions of synthetic examples to help
the model generalize. BLEURT provides state-of-the-art results on the last
three years of the WMT Metrics shared task and the WebNLG Competition dataset.
In contrast to a vanilla BERT-based approach, it yields superior results even
when the training data is scarce and out-of-distribution.
| 2020 | Computation and Language |
Translation Artifacts in Cross-lingual Transfer Learning | Both human and machine translation play a central role in cross-lingual
transfer learning: many multilingual datasets have been created through
professional translation services, and using machine translation to translate
either the test set or the training set is a widely used transfer technique. In
this paper, we show that such a translation process can introduce subtle
artifacts that have a notable impact on existing cross-lingual models. For
instance, in natural language inference, translating the premise and the
hypothesis independently can reduce the lexical overlap between them, which
current models are highly sensitive to. We show that some previous findings in
cross-lingual transfer learning need to be reconsidered in the light of this
phenomenon. Based on the gained insights, we also improve the state-of-the-art
in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points,
respectively.
| 2021 | Computation and Language |
FST Morphology for the Endangered Skolt Sami Language | We present advances in the development of an FST-based morphological analyzer
and generator for Skolt Sami. Like other minority Uralic languages, Skolt Sami
exhibits a rich morphology, on the one hand, and there is little gold-standard
material for it, on the other. This makes NLP approaches to its study
difficult without a solid morphological analysis. The language is severely
endangered and the work presented in this paper forms a part of a greater whole
in its revitalization efforts. Furthermore, we intersperse our description with
facilitation and description practices not well documented in the
infrastructure. Currently, the analyzer covers over 30,000 Skolt Sami words in
148 inflectional paradigms and over 12 derivational forms.
| 2020 | Computation and Language |
More Bang for Your Buck: Natural Perturbation for Robust Question
Answering | While recent models have achieved human-level scores on many NLP datasets, we
observe that they are considerably sensitive to small changes in input. As an
alternative to the standard approach of addressing this issue by constructing
training sets of completely new examples, we propose doing so via minimal
perturbation of examples. Specifically, our approach involves first collecting
a set of seed examples and then applying human-driven natural perturbations (as
opposed to rule-based machine perturbations), which often change the gold label
as well. Local perturbations have the advantage of being relatively easier (and
hence cheaper) to create than writing out completely new examples. To evaluate
the impact of this phenomenon, we consider a recent question-answering dataset
(BoolQ) and study the benefit of our approach as a function of the perturbation
cost ratio, the relative cost of perturbing an existing question vs. creating a
new one from scratch. We find that when natural perturbations are moderately
cheaper to create, it is more effective to train models using them: such models
exhibit higher robustness and better generalization, while retaining
performance on the original BoolQ dataset.
| 2020 | Computation and Language |
Probing Neural Language Models for Human Tacit Assumptions | Humans carry stereotypic tacit assumptions (STAs) (Prince, 1978), or
propositional beliefs about generic concepts. Such associations are crucial for
understanding natural language. We construct a diagnostic set of word
prediction prompts to evaluate whether recent neural contextualized language
models trained on large text corpora capture STAs. Our prompts are based on
human responses in a psychological study of conceptual associations. We find
models to be profoundly effective at retrieving concepts given associated
properties. Our results provide empirical evidence that stereotypic
conceptual representations are captured in neural models derived from
semi-supervised linguistic exposure.
| 2020 | Computation and Language |
An In-depth Walkthrough on Evolution of Neural Machine Translation | Neural Machine Translation (NMT) methodologies have burgeoned from simple
feed-forward architectures to the state of the art, viz. the BERT model. The
use cases of NMT models have broadened from language translation alone to
conversational agents (chatbots), abstractive text summarization, image
captioning, etc., where they have proved valuable in their respective
applications. This paper aims to study the major trends in Neural Machine
Translation and the state-of-the-art models in the domain, and to provide a
high-level comparison between them.
| 2020 | Computation and Language |
Dense Passage Retrieval for Open-Domain Question Answering | Open-domain question answering relies on efficient passage retrieval to
select candidate contexts, where traditional sparse vector space models, such
as TF-IDF or BM25, are the de facto method. In this work, we show that
retrieval can be practically implemented using dense representations alone,
where embeddings are learned from a small number of questions and passages by a
simple dual-encoder framework. When evaluated on a wide range of open-domain QA
datasets, our dense retriever outperforms a strong Lucene-BM25 system by a
large margin of 9%-19% absolute in terms of top-20 passage retrieval accuracy,
and helps our end-to-end QA system establish new state-of-the-art results on
multiple open-domain QA benchmarks.
| 2020 | Computation and Language |
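Dual encoders of this kind are typically trained with an in-batch negatives objective: each question's gold passage sits at the same batch index, and every other passage in the batch serves as a negative. A minimal PyTorch sketch of that loss (the encoder calls in the comments are placeholders):

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(q_emb: torch.Tensor, p_emb: torch.Tensor):
    """Contrastive loss over a batch: row i of p_emb is the positive
    passage for question i; all other rows act as negatives."""
    sim = q_emb @ p_emb.T                     # (B, B) dot-product scores
    labels = torch.arange(q_emb.size(0))      # the diagonal is correct
    return F.cross_entropy(sim, labels)

# q_emb = question_encoder(questions)   # e.g. BERT [CLS] vectors, (B, d)
# p_emb = passage_encoder(passages)     # (B, d), row i is q_i's positive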
Designing Precise and Robust Dialogue Response Evaluators | Automatic dialogue response evaluators have been proposed as an alternative
to automated metrics and human evaluation. However, existing automatic
evaluators achieve only moderate correlation with human judgement, and they
are not robust.
In this work, we propose to build a reference-free evaluator and exploit the
power of semi-supervised training and pretrained (masked) language models.
Experimental results demonstrate that the proposed evaluator achieves a strong
correlation (> 0.6) with human judgement and generalizes robustly to diverse
responses and corpora. We open-source the code and data in
https://github.com/ZHAOTING/dialog-processing.
| 2020 | Computation and Language |
Scalable Multilingual Frontend for TTS | This paper describes progress towards making a Neural Text-to-Speech (TTS)
Frontend that works for many languages and can be easily extended to new
languages. We take a Machine Translation (MT) inspired approach to constructing
the frontend, and model both text normalization and pronunciation on a sentence
level by building and using sequence-to-sequence (S2S) models. We experimented
with training normalization and pronunciation as separate S2S models and with
training a single S2S model combining both functions.
For our language-independent approach to pronunciation we do not use a
lexicon. Instead all pronunciations, including context-based pronunciations,
are captured in the S2S model. We also present a language-independent chunking
and splicing technique that allows us to process arbitrary-length sentences.
Models for 18 languages were trained and evaluated. Many of the accuracy
measurements are above 99%. We also evaluated the models in the context of
end-to-end synthesis against our current production system.
| 2020 | Computation and Language |
Identifying Distributional Perspective Differences from Colingual Groups | Perspective differences exist among different cultures or languages. A lack
of mutual understanding among different groups about their perspectives on
specific values or events may lead to uninformed decisions or biased opinions.
Automatically understanding the group perspectives can provide essential
background for many downstream applications of natural language processing
techniques. In this paper, we study colingual groups and use language corpora
as a proxy to identify their distributional perspectives. We present a novel
computational approach to learn shared understandings, and benchmark our method
by building culturally-aware models for the English, Chinese, and Japanese
languages. On a held out set of diverse topics including marriage, corruption,
democracy, our model achieves high correlation with human judgements regarding
intra-group values and inter-group differences.
| 2021 | Computation and Language |
Generating Multilingual Voices Using Speaker Space Translation Based on
Bilingual Speaker Data | We present progress towards bilingual Text-to-Speech which is able to
transform a monolingual voice to speak a second language while preserving
speaker voice quality. We demonstrate that a bilingual speaker embedding space
contains a separate distribution for each language and that a simple transform
in speaker space generated by the speaker embedding can be used to control the
degree of accent of a synthetic voice in a language. The same transform can be
applied even to monolingual speakers.
In our experiments speaker data from an English-Spanish (Mexican) bilingual
speaker was used, and the goal was to enable English speakers to speak Spanish
and Spanish speakers to speak English. We found that the simple transform was
sufficient to convert a voice from one language to the other with a high degree
of naturalness. In one case the transformed voice outperformed a native
language voice in listening tests. Experiments further indicated that the
transform preserved many of the characteristics of the original voice. The
degree of accent present can be controlled and naturalness is relatively
consistent across a range of accent values.
| 2020 | Computation and Language |
Negation Detection for Clinical Text Mining in Russian | Developing predictive models in medicine requires additional features from
unstructured clinical texts. For Russian, there are no natural language
processing tools that cope with the problems of medical records. This paper is
devoted to a negation detection module. A corpus-free machine learning method
based on a gradient boosting classifier is used to detect whether a disease is
denied, not mentioned, or present in the text. The detector classifies
negations for five diseases and achieves F-scores ranging from 0.81 to 0.93.
The benefits of negation detection are demonstrated by predicting the presence
of surgery for patients with acute coronary syndrome.
| 2020 | Computation and Language |
Automated Spelling Correction for Clinical Text Mining in Russian | The main goal of this paper is to develop a spell-checker module for clinical
text in Russian. The described approach combines string distance measures with
machine learning embedding techniques. Our overall precision is 0.86, lexical
precision 0.975, and error precision 0.74. We develop the spell checker as
part of a medical text mining tool addressing the problems of misspelling,
negation, experiencer, and temporality detection.
| 2020 | Computation and Language |
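A sketch of the string-distance half of such a module, using Python's standard library for edit-similarity candidate ranking (the vocabulary is a toy stand-in; the described system additionally reranks candidates with embeddings):

```python
import difflib

def suggest(word: str, vocab: list[str], n: int = 3) -> list[str]:
    """Rank in-vocabulary candidates by edit similarity to a misspelling."""
    return difflib.get_close_matches(word, vocab, n=n, cutoff=0.6)

print(suggest("diabetis", ["diabetes", "dialysis", "digitalis"]))
# -> ['diabetes', ...]
```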
Style-transfer and Paraphrase: Looking for a Sensible Semantic
Similarity Metric | The rapid development of such natural language processing tasks as style
transfer, paraphrase, and machine translation often calls for the use of
semantic similarity metrics. In recent years a lot of methods to measure the
semantic similarity of two short texts were developed. This paper provides a
comprehensive analysis of more than a dozen such methods. Using a new
dataset of fourteen thousand sentence pairs human-labeled according to their
semantic similarity, we demonstrate that none of the metrics widely used in the
literature is close enough to human judgment in these tasks. A number of
recently proposed metrics provide comparable results, yet Word Mover's Distance
is shown to be the most reasonable solution to measure semantic similarity in
reformulated texts at the moment.
| 2022 | Computation and Language |
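Word Mover's Distance, which the study identifies as the most reasonable choice for reformulated texts, is available off the shelf. A sketch using gensim with downloadable GloVe vectors (this assumes the POT package is installed for the underlying optimal transport; any word embeddings would work):

```python
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained word vectors

s1 = "the cat sat on the mat".split()
s2 = "a feline rested on the rug".split()
distance = vectors.wmdistance(s1, s2)  # lower distance = more similar
print(distance)
```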
Minimum Latency Training Strategies for Streaming Sequence-to-Sequence
ASR | Recently, a few novel streaming attention-based sequence-to-sequence (S2S)
models have been proposed to perform online speech recognition with linear-time
decoding complexity. However, in these models, the decisions to generate tokens
are delayed compared to the actual acoustic boundaries since their
unidirectional encoders lack future information. This leads to an inevitable
latency during inference. To alleviate this issue and reduce latency, we
propose several strategies during training by leveraging external hard
alignments extracted from the hybrid model. We investigate utilizing the
alignments in both the encoder and the decoder. On the encoder side, (1)
multi-task learning and (2) pre-training with the framewise classification task
are studied. On the decoder side, we (3) remove inappropriate alignment paths
beyond an acceptable latency during the alignment marginalization, and (4)
directly minimize the differentiable expected latency loss. Experiments on the
Cortana voice search task demonstrate that our proposed methods can
significantly reduce the latency, and even improve the recognition accuracy in
certain cases on the decoder side. We also present some analysis to understand
the behaviors of streaming S2S models.
| 2020 | Computation and Language |
A New Dataset for Natural Language Inference from Code-mixed
Conversations | Natural Language Inference (NLI) is the task of inferring the logical
relationship, typically entailment or contradiction, between a premise and
hypothesis. Code-mixing is the use of more than one language in the same
conversation or utterance, and is prevalent in multilingual communities all
over the world. In this paper, we present the first dataset for code-mixed NLI,
in which both the premises and hypotheses are in code-mixed Hindi-English. We
use data from Hindi movies (Bollywood) as premises, and crowd-source hypotheses
from Hindi-English bilinguals. We conduct a pilot annotation study and describe
the final annotation protocol based on observations from the pilot. Currently,
the data collected consists of 400 premises in the form of code-mixed
conversation snippets and 2240 code-mixed hypotheses. We conduct an extensive
analysis to infer the linguistic phenomena commonly observed in the dataset
obtained. We evaluate the dataset using a standard mBERT-based pipeline for NLI
and report results.
| 2020 | Computation and Language |
Overestimation of Syntactic Representation in Neural Language Models | With the advent of powerful neural language models over the last few years,
research attention has increasingly focused on what aspects of language they
represent that make them so successful. Several testing methodologies have been
developed to probe models' syntactic representations. One popular method for
determining a model's ability to induce syntactic structure trains a model on
strings generated according to a template and then tests the model's ability to
distinguish such strings from superficially similar ones with different syntax.
We illustrate a fundamental problem with this approach by reproducing positive
results from a recent paper with two non-syntactic baseline language models: an
n-gram model and an LSTM model trained on scrambled inputs.
| 2020 | Computation and Language |
Molweni: A Challenge Multiparty Dialogues-based Machine Reading
Comprehension Dataset with Discourse Structure | Research into the area of multiparty dialog has grown considerably over
recent years. We present the Molweni dataset, a machine reading comprehension
(MRC) dataset with discourse structure built over multiparty dialog. Molweni's
source samples from the Ubuntu Chat Corpus, including 10,000 dialogs comprising
88,303 utterances. We annotate 30,066 questions on this corpus, including both
answerable and unanswerable questions. Molweni also uniquely contributes
discourse dependency annotations in a modified Segmented Discourse
Representation Theory (SDRT; Asher et al., 2016) style for all of its
multiparty dialogs, contributing large-scale (78,245 annotated discourse
relations) data to bear on the task of multiparty dialog discourse parsing. Our
experiments show that Molweni is a challenging dataset for current MRC models:
BERT-wwm, a current, strong SQuAD 2.0 performer, achieves only 67.7% F1 on
Molweni's questions, a significant drop of more than 20 points compared with
its SQuAD 2.0 performance.
| 2020 | Computation and Language |
Towards Automatic Generation of Questions from Long Answers | Automatic question generation (AQG) has broad applicability in domains such
as tutoring systems, conversational agents, healthcare literacy, and
information retrieval. Existing efforts at AQG have been limited to short
answer lengths of up to two or three sentences. However, several real-world
applications require question generation from answers that span several
sentences. Therefore, we propose a novel evaluation benchmark to assess the
performance of existing AQG systems for long-text answers. We leverage the
large-scale open-source Google Natural Questions dataset to create the
aforementioned long-answer AQG benchmark. We empirically demonstrate that the
performance of existing AQG methods significantly degrades as the length of the
answer increases. Transformer-based methods outperform other existing AQG
methods on long answers in terms of automatic as well as human evaluation.
However, we still observe degradation in the performance of our best performing
models with increasing sentence length, suggesting that long answer QA is a
challenging benchmark task for future research.
| 2020 | Computation and Language |
Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer | Fine-tuning (FT) pre-trained sentence embedding models on small datasets has
been shown to have limitations. In this paper, we show that concatenating the
embeddings from the pre-trained model with those from a simple sentence
embedding model trained only on the target data can improve over the
performance of FT for few-sample tasks. To this end, a linear classifier is
trained on the combined embeddings, either by freezing the embedding model
weights or training the classifier and embedding models end-to-end. We perform
evaluation on seven small datasets from NLP tasks and show that our approach
with end-to-end training outperforms FT with negligible computational overhead.
Further, we also show that sophisticated combination techniques like CCA and
KCCA do not work as well in practice as concatenation. We provide theoretical
analysis to explain this empirical observation.
| 2020 | Computation and Language |
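A minimal sketch of the concatenation recipe in its frozen variant; random arrays stand in for the two models' sentence embeddings, which would come from the pre-trained and the target-only encoders respectively:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

pretrained_emb = np.random.randn(64, 768)  # frozen pre-trained features
small_emb = np.random.randn(64, 100)       # target-only sentence features
labels = np.random.randint(0, 2, size=64)

# Concatenate the two embedding spaces and fit a linear classifier on top.
features = np.concatenate([pretrained_emb, small_emb], axis=1)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```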
Rapidly Deploying a Neural Search Engine for the COVID-19 Open Research
Dataset: Preliminary Thoughts and Lessons Learned | We present the Neural Covidex, a search engine that exploits the latest
neural ranking architectures to provide information access to the COVID-19 Open
Research Dataset curated by the Allen Institute for AI. This web application
exists as part of a suite of tools that we have developed over the past few
weeks to help domain experts tackle the ongoing global pandemic. We hope that
improved information access capabilities to the scientific literature can
inform evidence-based decision making and insight generation. This paper
describes our initial efforts and offers a few thoughts about lessons we have
learned along the way.
| 2020 | Computation and Language |
One Model to Recognize Them All: Marginal Distillation from NER Models
with Different Tag Sets | Named entity recognition (NER) is a fundamental component in the modern
language understanding pipeline. Public NER resources such as annotated data
and model services are available in many domains. However, given a particular
downstream application, there is often no single NER resource that supports all
the desired entity types, so users must leverage multiple resources with
different tag sets. This paper presents a marginal distillation (MARDI)
approach for training a unified NER model from resources with disjoint or
heterogeneous tag sets. In contrast to recent works, MARDI merely requires
access to pre-trained models rather than the original training datasets. This
flexibility makes it easier to work with sensitive domains like healthcare and
finance. Furthermore, our approach is general enough to integrate with
different NER architectures, including local models (e.g., BiLSTM) and global
models (e.g., CRF). Experiments on two benchmark datasets show that MARDI
performs on par with a strong marginal CRF baseline, while being more flexible
in the form of required NER resources. MARDI also sets a new state of the art
on the progressive NER task, significantly outperforming the previous best
model.
| 2020 | Computation and Language |
Longformer: The Long-Document Transformer | Transformer-based models are unable to process long sequences due to their
self-attention operation, which scales quadratically with the sequence length.
To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process
documents of thousands of tokens or longer. Longformer's attention mechanism is
a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work
on long-sequence transformers, we evaluate Longformer on character-level
language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a
variety of downstream tasks. Our pretrained Longformer consistently outperforms
RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop
and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a
Longformer variant for supporting long document generative sequence-to-sequence
tasks, and demonstrate its effectiveness on the arXiv summarization dataset.
| 2020 | Computation and Language |
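The attention pattern just described combines a local sliding window with a handful of globally attending tokens. The dense boolean mask below illustrates the pattern only; a practical linear-memory implementation uses banded matrix kernels rather than materializing the full quadratic mask:

```python
import torch

def sliding_window_mask(seq_len: int, window: int, global_idx=()):
    """Boolean attention mask: each token attends to a local window, and
    designated tokens (e.g. a [CLS] position) attend and are attended
    to globally."""
    i = torch.arange(seq_len)
    mask = (i[:, None] - i[None, :]).abs() <= window // 2
    for g in global_idx:
        mask[g, :] = True   # global token attends everywhere
        mask[:, g] = True   # every token attends to the global token
    return mask

mask = sliding_window_mask(seq_len=4096, window=512, global_idx=(0,))
```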
On the Language Neutrality of Pre-trained Multilingual Representations | Multilingual contextual embeddings, such as multilingual BERT and
XLM-RoBERTa, have proved useful for many multi-lingual tasks. Previous work
probed the cross-linguality of the representations indirectly using zero-shot
transfer learning on morphological and syntactic tasks. We instead investigate
the language-neutrality of multilingual contextual embeddings directly and with
respect to lexical semantics. Our results show that contextual embeddings are
more language-neutral and, in general, more informative than aligned static
word-type embeddings, which are explicitly trained for language neutrality.
Contextual embeddings are still only moderately language-neutral by default, so
we propose two simple methods for achieving stronger language neutrality:
first, by unsupervised centering of the representation for each language and
second, by fitting an explicit projection on small parallel data. In addition, we
show how to reach state-of-the-art accuracy on language identification and
match the performance of statistical methods for word alignment of parallel
sentences without using parallel data.
| 2020 | Computation and Language |
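The first proposed method, unsupervised per-language centering, amounts to subtracting each language's mean vector. A sketch assuming a matrix of contextual embeddings and a parallel list of language tags:

```python
import numpy as np

def center_by_language(emb: np.ndarray, langs: list[str]) -> np.ndarray:
    """Subtract each language's mean vector so representations from
    different languages share a common origin."""
    emb = emb.copy()
    for lang in set(langs):
        idx = [i for i, l in enumerate(langs) if l == lang]
        emb[idx] -= emb[idx].mean(axis=0)
    return emb
```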
Joint translation and unit conversion for end-to-end localization | A variety of natural language tasks require processing of textual data which
contains a mix of natural language and formal languages such as mathematical
expressions. In this paper, we take unit conversions as an example and propose
a data augmentation technique which leads to models learning both translation
and conversion tasks as well as how to adequately switch between them for
end-to-end localization.
| 2020 | Computation and Language |
Improving Disfluency Detection by Self-Training a Self-Attentive Model | Self-attentive neural syntactic parsers using contextualized word embeddings
(e.g. ELMo or BERT) currently produce state-of-the-art results in joint parsing
and disfluency detection in speech transcripts. Since the contextualized word
embeddings are pre-trained on a large amount of unlabeled data, using
additional unlabeled data to train a neural model might seem redundant.
However, we show that self-training - a semi-supervised technique for
incorporating unlabeled data - sets a new state-of-the-art for the
self-attentive parser on disfluency detection, demonstrating that self-training
provides benefits orthogonal to the pre-trained contextualized word
representations. We also show that ensembling self-trained parsers provides
further gains for disfluency detection.
| 2020 | Computation and Language |
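Self-training here follows the usual recipe: parse unlabeled transcripts with the current model, keep high-confidence parses as silver data, and retrain on the union. A schematic sketch in which `parser` is a stand-in object whose `predict_with_score` and `train` methods, and the confidence threshold, are assumptions rather than the paper's actual API:

```python
def self_train(parser, labeled, unlabeled_texts, confidence=0.95):
    """One round of self-training for joint parsing/disfluency detection."""
    pseudo = []
    for text in unlabeled_texts:
        parse, score = parser.predict_with_score(text)
        if score >= confidence:        # keep only confident silver parses
            pseudo.append((text, parse))
    parser.train(labeled + pseudo)     # retrain on gold + silver data
    return parser
```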
DeepSentiPers: Novel Deep Learning Models Trained Over Proposed
Augmented Persian Sentiment Corpus | This paper focuses on how to extract opinions from Persian sentence-level
text. Deep learning models provide a new way to boost output quality, but
these architectures need large amounts of annotated data as well as an
accurate design. To the best of our knowledge, Persian sentiment analysis
lacks not only a well-annotated sentiment corpus but also a model that handles
Persian opinions in both multi-class and binary classification settings. In
this work, we first propose two novel deep learning architectures comprising
bidirectional LSTM and CNN layers; they are part of a carefully designed deep
hierarchy and can classify sentences in both settings. Second, we suggest
three data augmentation techniques for the low-resource Persian sentiment
corpus. Our comprehensive experiments with three baselines and two different
neural word embedding methods show that our data augmentation methods and
proposed models successfully address the aims of the research.
| 2020 | Computation and Language |
You Impress Me: Dialogue Generation via Mutual Persona Perception | Despite the continuing efforts to improve the engagingness and consistency of
chit-chat dialogue systems, the majority of current work simply focuses on
mimicking human-like responses, leaving the modeling of understanding between
interlocutors understudied. Research in cognitive science,
instead, suggests that understanding is an essential signal for a high-quality
chit-chat conversation. Motivated by this, we propose P^2 Bot, a
transmitter-receiver based framework with the aim of explicitly modeling
understanding. Specifically, P^2 Bot incorporates mutual persona perception to
enhance the quality of personalized dialogue generation. Experiments on a large
public dataset, Persona-Chat, demonstrate the effectiveness of our approach,
with a considerable boost over the state-of-the-art baselines across both
automatic metrics and human evaluations.
| 2020 | Computation and Language |
Annotating Social Determinants of Health Using Active Learning, and
Characterizing Determinants Using Neural Event Extraction | Social determinants of health (SDOH) affect health outcomes, and knowledge of
SDOH can inform clinical decision-making. Automatically extracting SDOH
information from clinical text requires data-driven information extraction
models trained on annotated corpora that are heterogeneous and frequently
include critical SDOH. This work presents a new corpus with SDOH annotations, a
novel active learning framework, and the first extraction results on the new
corpus. The Social History Annotation Corpus (SHAC) includes 4,480 social
history sections with detailed annotation for 12 SDOH characterizing the
status, extent, and temporal information of 18K distinct events. We introduce a
novel active learning framework that selects samples for annotation using a
surrogate text classification task as a proxy for a more complex event
extraction task. The active learning framework successfully increases the
frequency of health risk factors and improves automatic extraction of these
events over undirected annotation. An event extraction model trained on SHAC
achieves high extraction performance for substance use status (0.82-0.93 F1),
employment status (0.81-0.86 F1), and living status type (0.81-0.93 F1) on data
from three institutions.
| 2021 | Computation and Language |
End to End Chinese Lexical Fusion Recognition with Sememe Knowledge | In this paper, we present Chinese lexical fusion recognition, a new task
which could be regarded as one kind of coreference recognition. First, we
introduce the task in detail, showing the relationship with coreference
recognition and differences from the existing tasks. Second, we propose an
end-to-end joint model for the task, which exploits state-of-the-art BERT
representations as the encoder and is further enhanced with sememe knowledge
from HowNet by graph attention networks. We manually annotate a benchmark
dataset for the task and then conduct experiments on it. Results demonstrate
that our joint model is effective and competitive for the task. Detailed
analysis is offered for comprehensively understanding the new task and our
proposed model.
| 2020 | Computation and Language |
Classifying Constructive Comments | We introduce the Constructive Comments Corpus (C3), comprised of 12,000
annotated news comments, intended to help build new tools for online
communities to improve the quality of their discussions. We define constructive
comments as high-quality comments that make a contribution to the conversation.
We explain the crowd worker annotation scheme and define a taxonomy of
sub-characteristics of constructiveness. The quality of the annotation scheme
and the resulting dataset is evaluated using measurements of inter-annotator
agreement, expert assessment of a sample, and by the constructiveness
sub-characteristics, which we show provide a proxy for the general
constructiveness concept. We provide models for constructiveness trained on C3
using both feature-based methods and a variety of deep learning approaches, and
demonstrate that these models capture general rather than topic- or
domain-specific characteristics of constructiveness, through domain adaptation
experiments. We examine the role that length plays in our models, as comment
length could be easily gamed if models depend heavily upon this feature. By
examining the errors made by each model and their distribution by length, we
show that the best-performing models are less correlated with comment
length. The constructiveness corpus and our experiments pave the way for a
moderation tool focused on promoting comments that make a contribution, rather
than only filtering out undesirable content.
| 2,020 | Computation and Language |
Unsupervised Commonsense Question Answering with Self-Talk | Natural language understanding involves reading between the lines with
implicit background knowledge. Current systems either rely on pre-trained
language models as the sole implicit source of world knowledge, or resort to
external knowledge bases (KBs) to incorporate additional relevant knowledge. We
propose an unsupervised framework based on self-talk as a novel alternative to
multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning
(Bruner, 1961), our approach prompts language models with a number of
information-seeking questions such as "$\textit{what is the definition of
...}$" to discover additional background knowledge. Empirical results
demonstrate that the self-talk procedure substantially improves the performance
of zero-shot language model baselines on four out of six commonsense
benchmarks, and competes with models that obtain knowledge from external KBs.
While our approach improves performance on several benchmarks, the self-talk
induced knowledge even when leading to correct answers is not always seen as
useful by human judges, raising interesting questions about the inner workings
of pre-trained language models for commonsense reasoning.
| 2,020 | Computation and Language |
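A minimal sketch of the self-talk loop described above, assuming hypothetical `lm_generate` and `lm_score` stand-ins for a pre-trained LM's completion and scoring APIs; the prompt prefixes are illustrative, not the paper's exact inventory.

    QUESTION_PREFIXES = [
        "What is the definition of",
        "What is the purpose of",
        "What happens if",
    ]

    def self_talk_answer(context, question, choices, lm_generate, lm_score):
        # Stage 1: elicit background knowledge by asking the LM
        # information-seeking questions about the context.
        clarifications = []
        for prefix in QUESTION_PREFIXES:
            completion = lm_generate(context + " " + prefix)  # finish the question
            answer = lm_generate(context + " " + prefix + completion)
            clarifications.append(prefix + completion + " " + answer)
        # Stage 2: score each answer choice with and without the elicited
        # knowledge and return the globally best-scoring choice.
        best_choice, best_score = None, float("-inf")
        for choice in choices:
            for clar in [""] + clarifications:
                score = lm_score(" ".join([context, clar, question, choice]))
                if score > best_score:
                    best_choice, best_score = choice, score
        return best_choice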
LAReQA: Language-agnostic answer retrieval from a multilingual pool | We present LAReQA, a challenging new benchmark for language-agnostic answer
retrieval from a multilingual candidate pool. Unlike previous cross-lingual
tasks, LAReQA tests for "strong" cross-lingual alignment, requiring
semantically related cross-language pairs to be closer in representation space
than unrelated same-language pairs. Building on multilingual BERT (mBERT), we
study different strategies for achieving strong alignment. We find that
augmenting training data via machine translation is effective, and improves
significantly over using mBERT out-of-the-box. Interestingly, the embedding
baseline that performs the best on LAReQA falls short of competing baselines on
zero-shot variants of our task that only target "weak" alignment. This finding
underscores our claim that language-agnostic retrieval is a substantively new
kind of cross-lingual evaluation.
| 2,020 | Computation and Language |
When Does Unsupervised Machine Translation Work? | Despite the reported success of unsupervised machine translation (MT), the
field has yet to examine the conditions under which these methods succeed, and
where they fail. We conduct an extensive empirical evaluation of unsupervised
MT using dissimilar language pairs, dissimilar domains, diverse datasets, and
authentic low-resource languages. We find that performance rapidly deteriorates
when source and target corpora are from different domains, and that random word
embedding initialization can dramatically affect downstream translation
performance. We additionally find that unsupervised MT performance declines
when source and target languages use different scripts, and observe very poor
performance on authentic low-resource language pairs. We advocate for extensive
empirical evaluation of unsupervised MT systems to highlight failure points and
encourage continued research on the most promising paradigms.
| 2,020 | Computation and Language |
Pre-training Text Representations as Meta Learning | Pre-training text representations has recently been shown to significantly
improve the state-of-the-art in many natural language processing tasks. The
central goal of pre-training is to learn text representations that are useful
for subsequent tasks. However, existing approaches are optimized by minimizing
a proxy objective, such as the negative log likelihood of language modeling. In
this work, we introduce a learning algorithm which directly optimizes a model's
ability to learn text representations for effective learning of downstream
tasks. We show that there is an intrinsic connection between multi-task
pre-training and model-agnostic meta-learning with a sequence of meta-train
steps. The standard multi-task learning objective adopted in BERT is a special
case of our learning algorithm where the depth of meta-train is zero. We study
the problem in two settings: unsupervised pre-training and supervised
pre-training with different pre-training objectives to verify the generality of
our approach. Experimental results show that our algorithm brings improvements
and learns better initializations for a variety of downstream tasks.
| 2,020 | Computation and Language |
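As a rough illustration of the meta-train step described above, the first-order sketch below takes an inner gradient step on one task and updates the initialization against a second task; `grad` and `loss` are hypothetical autograd-style callables, and the actual algorithm may differentiate through the inner step rather than using this first-order shortcut.

    def meta_train_step(params, task_a, task_b, loss, grad,
                        inner_lr=1e-3, outer_lr=1e-3):
        # inner step: adapt to task_a (a simulated downstream task)
        adapted = params - inner_lr * grad(loss, params, task_a)
        # outer step: evaluate the adapted parameters on task_b, so the
        # pre-trained initialization is optimized for post-adaptation loss
        # (first-order approximation; meta-train depth zero recovers plain
        # multi-task learning, matching the BERT special case noted above)
        return params - outer_lr * grad(loss, adapted, task_b)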
Explaining Question Answering Models through Text Generation | Large pre-trained language models (LMs) have been shown to perform
surprisingly well when fine-tuned on tasks that require commonsense and world
knowledge. However, in end-to-end architectures, it is difficult to explain
what knowledge in the LM allows it to make a correct prediction. In
this work, we propose a model for multi-choice question answering, where an
LM-based generator generates a textual hypothesis that is later used by a
classifier to answer the question. The hypothesis provides a window into the
information used by the fine-tuned LM that can be inspected by humans. A key
challenge in this setup is how to constrain the model to generate hypotheses
that are meaningful to humans. We tackle this by (a) joint training with a
simple similarity classifier that encourages meaningful hypotheses, and (b) by
adding loss functions that encourage natural text without repetitions. We show
on several tasks that our model reaches performance that is comparable to
end-to-end architectures, while producing hypotheses that elucidate the
knowledge used by the LM for answering the question.
| 2,020 | Computation and Language |
AMR Parsing via Graph-Sequence Iterative Inference | We propose a new end-to-end model that treats AMR parsing as a series of dual
decisions on the input sequence and the incrementally constructed graph. At
each time step, our model performs multiple rounds of attention, reasoning, and
composition that aim to answer two critical questions: (1) which part of the
input \textit{sequence} to abstract; and (2) where in the output \textit{graph}
to construct the new concept. We show that the answers to these two questions
are causally dependent on each other. We design a model based on iterative
inference that helps achieve better answers from both perspectives, leading to
greatly improved parsing accuracy. Our experimental results surpass all
previously reported \textsc{Smatch} scores by large margins. Remarkably,
without the help of any large-scale pre-trained language model (e.g., BERT),
our model already surpasses the previous state of the art that uses BERT. With the help
of BERT, we can push the state-of-the-art results to 80.2\% on LDC2017T10 (AMR
2.0) and 75.4\% on LDC2014T12 (AMR 1.0).
| 2,020 | Computation and Language |
XtremeDistil: Multi-stage Distillation for Massive Multilingual Models | Deep and large pre-trained language models are the state-of-the-art for
various natural language processing tasks. However, the huge size of these
models could be a deterrent to using them in practice. Some recent and concurrent
works use knowledge distillation to compress these huge models into shallow
ones. In this work we study knowledge distillation with a focus on
multi-lingual Named Entity Recognition (NER). In particular, we study several
distillation strategies and propose a stage-wise optimization scheme leveraging
teacher internal representations that is agnostic of teacher architecture and
show that it outperforms strategies employed in prior works. Additionally, we
investigate the role of several factors like the amount of unlabeled data,
annotation resources, model architecture and inference latency to name a few.
We show that our approach leads to massive compression of MBERT-like teacher
models by up to 35x in terms of parameters and 51x in terms of latency for batch
inference while retaining 95% of its F1-score for NER over 41 languages.
| 2,020 | Computation and Language |
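The two losses below schematise the stage-wise scheme described above under stated assumptions: the student first matches a linear projection of a teacher layer (keeping the objective agnostic to teacher architecture), then matches softened teacher logits; the staging schedule and hyperparameters are the paper's contribution and are not reproduced here.

    import numpy as np

    def representation_loss(student_hidden, teacher_hidden, W):
        # W (d_teacher x d_student) projects the teacher layer into the
        # student's space, so the loss works for any teacher architecture
        return np.mean((student_hidden - teacher_hidden @ W) ** 2)

    def soft_label_loss(student_logits, teacher_logits, T=2.0):
        def softmax(z):
            e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
            return e / e.sum(axis=-1, keepdims=True)
        p_teacher = softmax(teacher_logits)
        p_student = softmax(student_logits)
        # cross-entropy against temperature-softened teacher predictions
        return -np.mean(np.sum(p_teacher * np.log(p_student + 1e-9), axis=-1))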
VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification | Much progress has been made recently on text classification with methods
based on neural networks. In particular, models using attention mechanisms, such
as BERT, have been shown to capture the contextual
information within a sentence or document. However, their ability to capture
global information about the vocabulary of a language is more limited. The
latter is the strength of Graph Convolutional Networks (GCN). In this paper, we
propose the VGCN-BERT model, which combines the capability of BERT with a Vocabulary
Graph Convolutional Network (VGCN). Local and global information
interact through different layers of BERT, allowing them to influence each other
and jointly build a final representation for classification. In our
experiments on several text classification datasets, our approach outperforms
BERT and GCN alone, and achieves higher effectiveness than that reported in
previous studies.
| 2,020 | Computation and Language |
Integrated Eojeol Embedding for Erroneous Sentence Classification in
Korean Chatbots | This paper attempts to analyze the Korean sentence classification system for
a chatbot. Sentence classification is the task of classifying an input sentence
based on predefined categories. However, spelling or spacing errors in
the input sentence cause problems in morphological analysis and tokenization.
This paper proposes a novel approach of Integrated Eojeol (Korean syntactic
word separated by space) Embedding to reduce the effect that poorly analyzed
morphemes may have on sentence classification. It also proposes two noise
insertion methods that further improve classification performance. Our
evaluation results indicate that the proposed system classifies erroneous
sentences more accurately than the baseline system by 17%p.
| 2,021 | Computation and Language |
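The helpers below illustrate the flavour of noise insertion for training on erroneous input; they simulate spacing and typing errors, but the paper's two methods may differ in detail.

    import random

    def drop_spaces(sentence, p=0.1):
        # simulate spacing errors by randomly deleting inter-Eojeol spaces
        return "".join(ch for ch in sentence
                       if not (ch == " " and random.random() < p))

    def swap_adjacent(sentence, p=0.05):
        # simulate typing errors by randomly swapping adjacent characters
        chars = list(sentence)
        for i in range(len(chars) - 1):
            if random.random() < p:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)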
Aspect and Opinion Aware Abstractive Review Summarization with
Reinforced Hard Typed Decoder | In this paper, we study abstractive review summarization. Observing that
review summaries often consist of aspect words, opinion words and context
words, we propose a two-stage reinforcement learning approach, which first
predicts the output word type from the three types, and then leverages the
predicted word type to generate the final word distribution. Experimental
results on two Amazon product review datasets demonstrate that our method can
consistently outperform several strong baseline approaches based on ROUGE
scores.
| 2,020 | Computation and Language |
Reinforced Curriculum Learning on Pre-trained Neural Machine Translation
Models | The competitive performance of neural machine translation (NMT) critically
relies on large amounts of training data. However, acquiring high-quality
translation pairs requires expert knowledge and is costly. Therefore, how to
best utilize a given dataset of samples with diverse quality and
characteristics becomes an important yet understudied question in NMT.
Curriculum learning methods have been introduced to NMT to optimize a model's
performance by prescribing the data input order, based on heuristics such as
the assessment of noise and difficulty levels. However, existing methods
require training from scratch, while in practice most NMT models are
pre-trained on big data already. Moreover, as heuristics, they do not
generalize well. In this paper, we aim to learn a curriculum for improving a
pre-trained NMT model by re-selecting influential data samples from the
original training set and formulate this task as a reinforcement learning
problem. Specifically, we propose a data selection framework based on
Deterministic Actor-Critic, in which a critic network predicts the expected
change of model performance due to a certain sample, while an actor network
learns to select the best sample out of a random batch of samples presented to
it. Experiments on several translation datasets show that our method can
further improve the performance of NMT when original batch training reaches its
ceiling, without using additional new training data, and significantly
outperforms several strong baseline methods.
| 2,020 | Computation and Language |
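A schematic of the selection loop under stated assumptions: `critic` scores the expected performance change of each sample, the actor's choice is reduced here to an argmax over the critic's scores, and `finetune_and_eval` and `update` are hypothetical stand-ins for one fine-tuning step plus dev-set evaluation and for the critic's regression update.

    import numpy as np

    def select_and_train(nmt_model, pool, critic, batch_size=32, steps=1000):
        rng = np.random.default_rng(0)
        for _ in range(steps):
            # present a random batch of candidate training samples
            batch = rng.choice(len(pool), size=batch_size, replace=False)
            scores = [critic(pool[i]) for i in batch]      # predicted gain
            chosen = pool[batch[int(np.argmax(scores))]]   # actor's pick
            # observed reward: change in dev performance after one update
            reward = nmt_model.finetune_and_eval(chosen)
            critic.update(chosen, reward)                  # regress to reward
        return nmt_model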
Generating Fact Checking Explanations | Most existing work on automated fact checking is concerned with predicting
the veracity of claims based on metadata, social network spread, language used
in claims, and, more recently, evidence supporting or denying claims. A crucial
piece of the puzzle that is still missing is to understand how to automate the
most elaborate part of the process -- generating justifications for verdicts on
claims. This paper provides the first study of how these explanations can be
generated automatically based on available claim context, and how this task can
be modelled jointly with veracity prediction. Our results indicate that
optimising both objectives at the same time, rather than training them
separately, improves the performance of a fact checking system. The results of
a manual evaluation further suggest that the informativeness, coverage and
overall quality of the generated explanations are also improved in the
multi-task model.
| 2,020 | Computation and Language |
ProFormer: Towards On-Device LSH Projection Based Transformers | At the heart of text-based neural models lie word representations, which are
powerful but occupy a lot of memory, making it challenging to deploy them to
memory-constrained devices such as mobile phones, watches and IoT. To surmount
these challenges, we introduce ProFormer -- a projection-based transformer
architecture that is faster and lighter, making it suitable for deployment on
memory-constrained devices while preserving user privacy. We use an LSH projection layer to
dynamically generate word representations on-the-fly without embedding lookup
tables leading to significant memory footprint reduction from O(V.d) to O(T),
where V is the vocabulary size, d is the embedding dimension size and T is the
dimension of the LSH projection representation.
We also propose a local projection attention (LPA) layer, which uses
self-attention to transform the input sequence of N LSH word projections into a
sequence of N/K representations, reducing the computation quadratically by
O(K^2). We evaluate ProFormer on multiple text classification tasks and
observe improvements over prior state-of-the-art on-device approaches for
short text classification and comparable performance for long text
classification tasks. In comparison with a 2-layer BERT model, ProFormer
reduced the embedding memory footprint from 92.16 MB to 1.3 KB and requires 16
times less computation overhead, making it the fastest
and smallest on-device model.
| 2,021 | Computation and Language |
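A minimal sketch of an LSH projection layer in the spirit described above: each token is hashed into a fixed T-dimensional vector on the fly, so no O(V.d) embedding table is stored. The character n-gram features and hash scheme here are illustrative assumptions, not ProFormer's exact projection functions.

    import hashlib

    def lsh_project(token, T=128, ngram=3):
        # collect character n-gram features of the token
        feats = [token[i:i + ngram]
                 for i in range(max(1, len(token) - ngram + 1))]
        votes = [0] * T
        for f in feats:
            h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
            for t in range(T):
                # each feature votes +/-1 per dimension from its hash bits
                votes[t] += 1 if (h >> (t % 128)) & 1 else -1
        # binarize the votes into the final O(T) representation
        return [1.0 if v > 0 else -1.0 for v in votes]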
Unified Multi-Criteria Chinese Word Segmentation with BERT | Multi-Criteria Chinese Word Segmentation (MCCWS) aims at finding word
boundaries in a Chinese sentence composed of continuous characters while
multiple segmentation criteria exist. The unified framework has been widely
used in MCCWS and shows its effectiveness. Besides, the pre-trained BERT
language model has also been introduced into the MCCWS task in a multi-task
learning framework. In this paper, we combine the superiority of the unified
framework and the pre-trained language model, and propose a unified MCCWS model
based on BERT. Moreover, we augment the unified BERT-based MCCWS model with the
bigram features and an auxiliary criterion classification task. Experiments on
eight datasets with diverse criteria demonstrate that our methods could achieve
new state-of-the-art results for MCCWS.
| 2,020 | Computation and Language |
Neural Machine Translation: Challenges, Progress and Future | Machine translation (MT) is a technique that leverages computers to translate
human languages automatically. Nowadays, neural machine translation (NMT) which
models direct mapping between source and target languages with deep neural
networks has achieved a big breakthrough in translation performance and become
the de facto paradigm of MT. This article reviews the NMT framework,
discusses the challenges in NMT, introduces some exciting recent progress, and
finally looks ahead to potential future research trends. In addition, we
maintain the state-of-the-art methods for various NMT tasks at the website
https://github.com/ZNLP/SOTA-MT.
| 2,020 | Computation and Language |
MLR: A Two-stage Conversational Query Rewriting Model with Multi-task
Learning | Conversational context understanding aims to recognize the real intention of
the user from the conversation history, which is critical for building a dialogue
system. However, the multi-turn conversation understanding in open domain is
still quite challenging, as it requires the system to extract the important
information and resolve the dependencies in contexts across a variety of open
topics. In this paper, we propose the conversational query rewriting model -
MLR, which is a Multi-task model on sequence Labeling and query Rewriting. MLR
reformulates the multi-turn conversational queries into a single turn query,
which conveys the true intention of users concisely and alleviates the
difficulty of the multi-turn dialogue modeling. In the model, we formulate the
query rewriting as a sequence generation problem and introduce word category
information via the auxiliary word category label predicting task. To train our
model, we construct a new Chinese query rewriting dataset and conduct
experiments on it. The experimental results show that our model outperforms
the compared models, and prove the effectiveness of the word category information
in improving the rewriting performance.
| 2,020 | Computation and Language |
Will I Sound Like Me? Improving Persona Consistency in Dialogues through
Pragmatic Self-Consciousness | We explore the task of improving persona consistency of dialogue agents.
Recent models tackling consistency often train with additional Natural Language
Inference (NLI) labels or attach trained extra modules to the generative agent
for maintaining consistency. However, such additional labels and training can
be demanding. Also, we find even the best-performing persona-based agents are
insensitive to contradictory words. Inspired by social cognition and
pragmatics, we endow existing dialogue agents with public self-consciousness on
the fly through an imaginary listener. Our approach, based on the Rational
Speech Acts framework (Frank and Goodman, 2012), can compel dialogue agents to
refrain from uttering contradictions. We further extend the framework by
learning the distractor selection, which has been usually done manually or
randomly. Results on Dialogue NLI (Welleck et al., 2019) and PersonaChat (Zhang
et al., 2018) datasets show that our approach reduces contradiction and improves
consistency of existing dialogue models. Moreover, we show that it can be
generalized to improve context-consistency beyond persona in dialogues.
| 2,020 | Computation and Language |
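A compact sketch of the Rational Speech Acts reweighting underlying the approach, assuming a base speaker matrix S0[u, p] = P(utterance u | persona p) and a persona prior: the imaginary listener infers the persona, and the self-conscious speaker boosts utterances that identify its own persona. Shapes and the rationality parameter beta are illustrative.

    import numpy as np

    def pragmatic_speaker(S0, prior, beta=1.0):
        # imaginary listener: P(persona | utterance) via Bayes' rule
        L = S0 * prior[None, :]
        L = L / L.sum(axis=1, keepdims=True)
        # self-conscious speaker: reweight utterances by how strongly the
        # listener would attribute them to the speaker's own persona
        S1 = S0 * (L ** beta)
        return S1 / S1.sum(axis=0, keepdims=True)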
From Machine Reading Comprehension to Dialogue State Tracking: Bridging
the Gap | Dialogue state tracking (DST) is at the heart of task-oriented dialogue
systems. However, the scarcity of labeled data is an obstacle to building
accurate and robust state tracking systems that work across a variety of
domains. Existing approaches generally require some dialogue data with state
information and their ability to generalize to unknown domains is limited. In
this paper, we propose using machine reading comprehension (RC) in state
tracking from two perspectives: model architectures and datasets. We divide the
slot types in dialogue state into categorical or extractive to borrow the
advantages from both multiple-choice and span-based reading comprehension
models. Our method achieves near the current state-of-the-art in joint goal
accuracy on MultiWOZ 2.1 given full training data. More importantly, by
leveraging machine reading comprehension datasets, our method outperforms the
existing approaches by a large margin in few-shot scenarios when the
availability of in-domain data is limited. Lastly, even without any state
tracking data, i.e., zero-shot scenario, our proposed approach achieves greater
than 90% average slot accuracy in 12 out of 30 slots in MultiWOZ 2.1.
| 2,020 | Computation and Language |
ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation
Networks | In this paper, we present ArCOV-19, an Arabic COVID-19 Twitter dataset that
spans one year, covering the period from 27th of January 2020 till 31st of
January 2021. ArCOV-19 is the first publicly-available Arabic Twitter dataset
covering the COVID-19 pandemic that includes about 2.7M tweets alongside the
propagation networks of the most-popular subset of them (i.e., most-retweeted
and -liked). The propagation networks include both retweets and conversational
threads (i.e., threads of replies). ArCOV-19 is designed to enable research
under several domains including natural language processing, information
retrieval, and social computing. Preliminary analysis shows that ArCOV-19
captures rising discussions associated with the first reported cases of the
disease as they appeared in the Arab world. In addition to the source tweets
and propagation networks, we also release the search queries and
language-independent crawler used to collect the tweets to encourage the
curation of similar datasets.
| 2,021 | Computation and Language |
Frequency-Guided Word Substitutions for Detecting Textual Adversarial
Examples | Recent efforts have shown that neural text processing models are vulnerable
to adversarial examples, but the nature of these examples is poorly understood.
In this work, we show that adversarial attacks against CNN, LSTM and
Transformer-based classification models perform word substitutions that are
identifiable through frequency differences between replaced words and their
corresponding substitutions. Based on these findings, we propose
frequency-guided word substitutions (FGWS), a simple algorithm exploiting the
frequency properties of adversarial word substitutions for the detection of
adversarial examples. FGWS achieves strong performance by accurately detecting
adversarial examples on the SST-2 and IMDb sentiment datasets, with F1
detection scores of up to 91.4% against RoBERTa-based classification models. We
compare our approach against a recently proposed perturbation discrimination
framework and show that we outperform it by up to 13.0% F1.
| 2,021 | Computation and Language |
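A condensed sketch of the FGWS idea under stated assumptions: infrequent words are replaced with their most frequent synonym, and a large drop in the model's confidence on the restored input flags the original as adversarial. `freq`, `synonyms`, and `model_prob` are assumed inputs, and the thresholds are illustrative rather than the paper's tuned values.

    def fgws_detect(tokens, predicted_label, freq, synonyms, model_prob,
                    freq_threshold=10, delta=0.2):
        # replace every infrequent word with its most frequent synonym
        restored = []
        for w in tokens:
            if freq.get(w, 0) < freq_threshold:
                candidates = synonyms.get(w, []) + [w]
                restored.append(max(candidates, key=lambda s: freq.get(s, 0)))
            else:
                restored.append(w)
        # adversarial substitutions tend to use rarer words, so undoing them
        # should noticeably reduce confidence in the (attacked) prediction
        drop = (model_prob(tokens, predicted_label)
                - model_prob(restored, predicted_label))
        return drop > delta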
Keyword Assisted Topic Models | In recent years, fully automated content analysis based on probabilistic
topic models has become popular among social scientists because of their
scalability. The unsupervised nature of the models makes them suitable for
exploring topics in a corpus without prior knowledge. However, researchers find
that these models often fail to measure specific concepts of substantive
interest by inadvertently creating multiple topics with similar content and
combining distinct themes into a single topic. In this paper, we empirically
demonstrate that providing a small number of keywords can substantially enhance
the measurement performance of topic models. An important advantage of the
proposed keyword assisted topic model (keyATM) is that the specification of
keywords requires researchers to label topics prior to fitting a model to the
data. This contrasts with a widespread practice of post-hoc topic
interpretation and adjustments that compromises the objectivity of empirical
findings. In our application, we find that keyATM provides more interpretable
results, has better document classification performance, and is less sensitive
to the number of topics than the standard topic models. Finally, we show that
keyATM can also incorporate covariates and model time trends. An open-source
software package is available for implementing the proposed methodology.
| 2,023 | Computation and Language |
Punctuation Prediction in Spontaneous Conversations: Can We Mitigate ASR
Errors with Retrofitted Word Embeddings? | Automatic Speech Recognition (ASR) systems introduce word errors, which often
confuse punctuation prediction models, turning punctuation restoration into a
challenging task. These errors usually take the form of homonyms. We show how
retrofitting of the word embeddings on the domain-specific data can mitigate
ASR errors. Our main contribution is a method for better alignment of homonym
embeddings and the validation of the presented method on the punctuation
prediction task. We record the absolute improvement in punctuation prediction
accuracy between 6.2% (for question marks) and 9% (for periods) when compared
with the state-of-the-art model.
| 2,020 | Computation and Language |
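The update below sketches classic embedding retrofitting (in the spirit of Faruqui et al., 2015), where each vector is pulled toward its lexicon neighbours while staying close to its original position; the paper's domain-specific variant for aligning ASR-confusable homonyms may differ in its neighbour sets and weighting.

    import numpy as np

    def retrofit(vectors, neighbours, alpha=1.0, beta=1.0, iters=10):
        # vectors: {word: np.ndarray}; neighbours: {word: [related words]}
        new = {w: v.copy() for w, v in vectors.items()}
        for _ in range(iters):
            for w, nbrs in neighbours.items():
                nbrs = [n for n in nbrs if n in new]
                if w not in new or not nbrs:
                    continue
                nbr_sum = np.sum([new[n] for n in nbrs], axis=0)
                # stay close to the original vector (alpha) while moving
                # toward the neighbour average (beta)
                new[w] = ((alpha * vectors[w] + beta * nbr_sum)
                          / (alpha + beta * len(nbrs)))
        return new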
CLUE: A Chinese Language Understanding Evaluation Benchmark | The advent of natural language understanding (NLU) benchmarks for English,
such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a
diverse set of tasks. These comprehensive benchmarks have facilitated a broad
range of research and applications in natural language processing (NLP). The
problem, however, is that most such benchmarks are limited to English, which
has made it difficult to replicate many of the successes in English NLU for
other languages. To help remedy this issue, we introduce the first large-scale
Chinese Language Understanding Evaluation (CLUE) benchmark. CLUE is an
open-ended, community-driven project that brings together 9 tasks spanning
several well-established single-sentence/sentence-pair classification tasks, as
well as machine reading comprehension, all on original Chinese text. To
establish results on these tasks, we report scores using an exhaustive set of
current state-of-the-art pre-trained Chinese models (9 in total). We also
introduce a number of supplementary datasets and additional tools to help
facilitate further progress on Chinese NLU. Our benchmark is released at
https://www.CLUEbenchmarks.com
| 2,020 | Computation and Language |
A Simple Approach to Learning Unsupervised Multilingual Embeddings | Recent progress on unsupervised learning of cross-lingual embeddings in
bilingual setting has given impetus to learning a shared embedding space for
several languages without any supervision. A popular framework to solve the
latter problem is to jointly solve the following two sub-problems: 1) learning
unsupervised word alignment between several pairs of languages, and 2) learning
how to map the monolingual embeddings of every language to a shared
multilingual space. In contrast, we propose a simple, two-stage framework in
which we decouple the above two sub-problems and solve them separately using
existing techniques. The proposed approach obtains surprisingly good
performance in various tasks such as bilingual lexicon induction, cross-lingual
word similarity, multilingual document classification, and multilingual
dependency parsing. When distant languages are involved, the proposed solution
demonstrates robustness and outperforms existing unsupervised multilingual word
embedding approaches. Overall, our experimental results encourage development
of multi-stage models for such challenging problems.
| 2,020 | Computation and Language |
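A sketch of the second stage under stated assumptions: given word pairs already aligned by some unsupervised bilingual method (stage one), every language is mapped into a shared space via orthogonal Procrustes against a pivot. The pivot choice and the pair format are illustrative.

    import numpy as np

    def procrustes(X, Y):
        # orthogonal W minimizing ||XW - Y||_F (closed form via SVD)
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt

    def map_to_pivot(emb_by_lang, aligned_pairs, pivot="en"):
        # aligned_pairs[lang] = (row indices in lang, matching rows in pivot),
        # as produced by an off-the-shelf unsupervised aligner
        dim = emb_by_lang[pivot].shape[1]
        mappings = {pivot: np.eye(dim)}
        for lang, (src_idx, piv_idx) in aligned_pairs.items():
            X = emb_by_lang[lang][src_idx]
            Y = emb_by_lang[pivot][piv_idx]
            mappings[lang] = procrustes(X, Y)
        return mappings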
Toward Subgraph-Guided Knowledge Graph Question Generation with Graph
Neural Networks | Knowledge graph (KG) question generation (QG) aims to generate natural
language questions from KGs and target answers. Previous works mostly focus on
a simple setting which is to generate questions from a single KG triple. In
this work, we focus on a more realistic setting where we aim to generate
questions from a KG subgraph and target answers. In addition, most previous
works build on either RNN-based or Transformer-based models to encode a
linearized KG subgraph, which totally discards the explicit structure
information of a KG subgraph. To address this issue, we propose to apply a
bidirectional Graph2Seq model to encode the KG subgraph. Furthermore, we
enhance our RNN decoder with a node-level copying mechanism to allow directly
copying node attributes from the KG subgraph to the output question. Both
automatic and human evaluation results demonstrate that our model achieves new
state-of-the-art scores, outperforming existing methods by a significant margin
on two QG benchmarks. Experimental results also show that our QG model can
consistently benefit the Question Answering (QA) task as a means of data
augmentation.
| 2,023 | Computation and Language |
BLEU might be Guilty but References are not Innocent | The quality of automatic metrics for machine translation has been
increasingly called into question, especially for high-quality systems. This
paper demonstrates that, while choice of metric is important, the nature of the
references is also critical. We study different methods to collect references
and compare their value in automated evaluation by reporting correlation with
human evaluation for a variety of systems and metrics. Motivated by the finding
that typical references exhibit poor diversity, concentrating around
translationese language, we develop a paraphrasing task for linguists to
perform on existing reference translations, which counteracts this bias. Our
method yields higher correlation with human judgment not only for the
submissions of WMT 2019 English to German, but also for Back-translation and
APE augmented MT output, which have been shown to have low correlation with
automatic metrics using standard references. We demonstrate that our
methodology improves correlation with all modern evaluation metrics we look at,
including embedding-based methods. To complete this picture, we reveal that
multi-reference BLEU does not improve the correlation for high quality output,
and present an alternative multi-reference formulation that is more effective.
| 2,020 | Computation and Language |
Adversarial Augmentation Policy Search for Domain and Cross-Lingual
Generalization in Reading Comprehension | Reading comprehension models often overfit to nuances of training datasets
and fail at adversarial evaluation. Training with an adversarially augmented
dataset improves robustness against those adversarial attacks but hurts
the generalization of the models. In this work, we present several effective
adversaries and automated data augmentation policy search methods with the goal
of making reading comprehension models more robust to adversarial evaluation,
but also improving generalization to the source domain as well as new domains
and languages. We first propose three new methods for generating QA
adversaries, that introduce multiple points of confusion within the context,
show dependence on insertion location of the distractor, and reveal the
compounding effect of mixing adversarial strategies with syntactic and semantic
paraphrasing methods. Next, we find that augmenting the training datasets with
uniformly sampled adversaries improves robustness to the adversarial attacks
but leads to decline in performance on the original unaugmented dataset. We
address this issue via RL and more efficient Bayesian policy search methods for
automatically learning the best augmentation policy combinations of the
transformation probability for each adversary in a large search space. Using
these learned policies, we show that adversarial training can lead to
significant improvements in in-domain, out-of-domain, and cross-lingual
(German, Russian, Turkish) generalization.
| 2,020 | Computation and Language |
Pretrained Transformers Improve Out-of-Distribution Robustness | Although pretrained Transformers such as BERT achieve high accuracy on
in-distribution examples, do they generalize to new distributions? We
systematically measure out-of-distribution (OOD) generalization for seven NLP
datasets by constructing a new robustness benchmark with realistic distribution
shifts. We measure the generalization of previous models including bag-of-words
models, ConvNets, and LSTMs, and we show that pretrained Transformers'
performance declines are substantially smaller. Pretrained Transformers are
also more effective at detecting anomalous or OOD examples, while many previous
models are frequently worse than chance. We examine which factors affect
robustness, finding that larger models are not necessarily more robust,
distillation can be harmful, and more diverse pretraining data can enhance
robustness. Finally, we show where future work can improve OOD robustness.
| 2,020 | Computation and Language |
AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for
Extractive Document Summarization | Redundancy-aware extractive summarization systems score the redundancy of the
sentences to be included in a summary either jointly with their salience
information or separately as an additional sentence scoring step. Previous work
shows the efficacy of jointly scoring and selecting sentences with neural
sequence generation models. It is, however, not well understood whether the gain is
due to better encoding techniques or better redundancy reduction approaches.
Similarly, the contribution of salience versus diversity components on the
created summary is not well studied. Building on the state-of-the-art encoding
methods for summarization, we present two adaptive learning models: AREDSUM-SEQ
that jointly considers salience and novelty during sentence selection; and a
two-step AREDSUM-CTX that scores salience first, then learns to balance
salience and redundancy, enabling the measurement of the impact of each aspect.
Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling
diversity explicitly in a separate step, AREDSUM-CTX achieves significantly
better performance than AREDSUM-SEQ as well as state-of-the-art extractive
summarization baselines.
| 2,021 | Computation and Language |
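For intuition, the greedy loop below separates salience from redundancy in the two-step spirit of AREDSUM-CTX, but replaces the learned balancing with a fixed MMR-style weight lambda; the inputs (per-sentence salience scores and a pairwise similarity matrix) are assumed given.

    def select_sentences(salience, sim, k=3, lam=0.7):
        selected, candidates = [], list(range(len(salience)))
        while candidates and len(selected) < k:
            def score(i):
                # redundancy: worst-case similarity to the summary so far
                redundancy = max((sim[i][j] for j in selected), default=0.0)
                return lam * salience[i] - (1 - lam) * redundancy
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected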
PoKi: A Large Dataset of Poems by Children | Child language studies are crucial in improving our understanding of child
well-being; especially in determining the factors that impact happiness, the
sources of anxiety, techniques of emotion regulation, and the mechanisms to
cope with stress. However, much of this research is stymied by the lack of
availability of large child-written texts. We present a new corpus of
child-written text, PoKi, which includes about 62 thousand poems written by
children from grades 1 to 12. PoKi is especially useful in studying child
language because it comes with information about the age of the child authors
(their grade). We analyze the words in PoKi along several emotion dimensions
(valence, arousal, dominance) and discrete emotions (anger, fear, sadness,
joy). We use non-parametric regressions to model developmental differences from
early childhood to late-adolescence. Results show decreases in valence that are
especially pronounced during mid-adolescence, while arousal and dominance
peak during adolescence. Gender differences in the developmental trajectory
of emotions are also observed. Our results support and extend the current state
of emotion development research.
| 2,020 | Computation and Language |
A Divide-and-Conquer Approach to the Summarization of Long Documents | We present a novel divide-and-conquer method for the neural summarization of
long documents. Our method exploits the discourse structure of the document and
uses sentence similarity to split the problem into an ensemble of smaller
summarization problems. In particular, we break a long document and its summary
into multiple source-target pairs, which are used for training a model that
learns to summarize each part of the document separately. These partial
summaries are then combined in order to produce a final complete summary. With
this approach we can decompose the problem of long document summarization into
smaller and simpler problems, reducing computational complexity and creating
more training examples, which at the same time contain less noise in the target
summaries compared to the standard approach. We demonstrate that this approach
paired with different summarization models, including sequence-to-sequence RNNs
and Transformers, can lead to improved summarization performance. Our best
models achieve results that are on par with the state-of-the-art on two
publicly available datasets of academic articles.
| 2,020 | Computation and Language |
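A sketch of the decomposition described above: summary sentences are assigned to the document section they overlap most with, yielding smaller source-target training pairs, and partial outputs are concatenated at inference time. Token overlap stands in for the paper's discourse-structure and sentence-similarity machinery, and `summarize_fn` is a hypothetical trained model.

    def make_pairs(sections, summary_sentences):
        def overlap(a, b):
            wa, wb = set(a.lower().split()), set(b.lower().split())
            return len(wa & wb) / (len(wa | wb) or 1)
        # assign each summary sentence to its most similar section
        buckets = {i: [] for i in range(len(sections))}
        for sent in summary_sentences:
            best = max(range(len(sections)),
                       key=lambda i: overlap(sections[i], sent))
            buckets[best].append(sent)
        return [(sections[i], " ".join(buckets[i]))
                for i in range(len(sections))]

    def summarize_long(sections, summarize_fn):
        # summarize each part separately, then concatenate
        return " ".join(summarize_fn(s) for s in sections)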
Reverse Engineering Configurations of Neural Text Generation Models | This paper seeks to develop a deeper understanding of the fundamental
properties of neural text generation models. The study of artifacts that
emerge in machine generated text as a result of modeling choices is a nascent
research area. Previously, the extent and degree to which these artifacts
surface in generated text has not been well studied. In the spirit of better
understanding generative text models and their artifacts, we propose the new
task of distinguishing which of several variants of a given model generated a
piece of text, and we conduct an extensive suite of diagnostic tests to observe
whether modeling choices (e.g., sampling methods, top-$k$ probabilities, model
architectures, etc.) leave detectable artifacts in the text they generate. Our
key finding, which is backed by a rigorous set of experiments, is that such
artifacts are present and that different modeling choices can be inferred by
observing the generated text alone. This suggests that neural text generators
may be more sensitive to various modeling choices than previously thought.
| 2,020 | Computation and Language |
Robustly Pre-trained Neural Model for Direct Temporal Relation
Extraction | Background: Identifying relationships between clinical events and temporal
expressions is a key challenge in meaningfully analyzing clinical text for use
in advanced AI applications. While previous studies exist, the state-of-the-art
performance has significant room for improvement.
Methods: We studied several variants of BERT (Bidirectional Encoder
Representations from Transformers), some involving clinical domain
customization and the others involving improved architecture and/or training
strategies. We evaluated these methods using a direct temporal relations
dataset which is a semantically focused subset of the 2012 i2b2 temporal
relations challenge dataset.
Results: Our results show that RoBERTa, which employs better pre-training
strategies including a 10x larger corpus, improved the overall F measure by
0.0864 absolute (on the 1.00 scale), thus reducing the error rate by
24% relative to the previous state-of-the-art performance achieved with an SVM
(support vector machine) model.
Conclusion: Modern contextual language modeling neural networks, pre-trained
on a large corpus, achieve impressive performance even on highly-nuanced
clinical temporal relation tasks.
| 2,020 | Computation and Language |
Cascade Neural Ensemble for Identifying Scientifically Sound Articles | Background: A significant barrier to conducting systematic reviews and
meta-analysis is efficiently finding scientifically sound relevant articles.
Typically, less than 1% of articles match this requirement which leads to a
highly imbalanced task. Although feature-engineered and early neural network
models were studied for this task, there is an opportunity to improve the
results.
Methods: We framed the problem of filtering articles as a classification
task, and trained and tested several ensemble architectures of SciBERT, a
variant of BERT pre-trained on scientific articles, on a manually annotated
dataset of about 50K articles from MEDLINE. Since scientifically sound articles
are identified through a multi-step process we proposed a novel cascade
ensemble analogous to the selection process. We compared the performance of the
cascade ensemble with a single integrated model and other types of ensembles as
well as with results from previous studies.
Results: The cascade ensemble architecture achieved 0.7505 F measure, an
impressive 49.1% error rate reduction, compared to a CNN model that was
previously proposed and evaluated on a selected subset of the 50K articles. On
the full dataset, the cascade ensemble achieved 0.7639 F measure, resulting in
an error rate reduction of 19.7% compared to the best performance reported in a
previous study that used the full dataset.
Conclusion: Pre-trained contextual encoder neural networks (e.g. SciBERT)
perform better than the models studied previously and manually created search
filters in filtering for scientifically sound relevant articles. The superior
performance achieved by the cascade ensemble is a significant result that
generalizes beyond this task and the dataset, and is analogous to query
optimization in IR and databases.
| 2,020 | Computation and Language |
Cross-Lingual Semantic Role Labeling with High-Quality Translated
Training Corpus | Much research effort has been devoted to semantic role labeling (SRL), which is
crucial for natural language understanding. Supervised approaches have achieved
impressive performance when large-scale corpora are available for
resource-rich languages such as English, while for low-resource languages
with no annotated SRL dataset, it is still challenging to obtain competitive
performances. Cross-lingual SRL is one promising way to address the problem,
which has achieved great advances with the help of model transferring and
annotation projection. In this paper, we propose a novel alternative based on
corpus translation, constructing high-quality training datasets for the target
languages from the source gold-standard SRL annotations. Experimental results
on Universal Proposition Bank show that the translation-based method is highly
effective, and the automatic pseudo datasets can improve the target-language
SRL performances significantly.
| 2,020 | Computation and Language |
Quantifying Community Characteristics of Maternal Mortality Using Social
Media | While most mortality rates have decreased in the US, maternal mortality has
increased and is among the highest of any OECD nation. Extensive public health
research is ongoing to better understand the characteristics of communities
with relatively high or low rates. In this work, we explore the role that
social media language can play in providing insights into such community
characteristics. Analyzing pregnancy-related tweets generated in US counties,
we reveal a diverse set of latent topics including Morning Sickness, Celebrity
Pregnancies, and Abortion Rights. We find that rates of mentioning these topics
on Twitter predict maternal mortality rates with higher accuracy than standard
socioeconomic and risk variables such as income, race, and access to
health-care, holding even after reducing the analysis to six topics chosen for
their interpretability and connections to known risk factors. We then
investigate psychological dimensions of community language, finding the use of
less trustful, more stressed, and more negative affective language is
significantly associated with higher mortality rates, while trust and negative
affect also explain a significant portion of racial disparities in maternal
mortality. We discuss the potential for these insights to inform actionable
health interventions at the community-level.
| 2,020 | Computation and Language |
Code Completion using Neural Attention and Byte Pair Encoding | In this paper, we aim to do code completion by implementing a neural
network from Li et al. Our contribution is that we use an encoding that is
in between character and word encoding, called Byte Pair Encoding (BPE). We use
this on the source code files treating them as natural text without first going
through the abstract syntax tree (AST). We have implemented two models: an
attention-enhanced LSTM and a pointer network, where the pointer network was
originally introduced to solve out-of-vocabulary problems. We are interested to
see if BPE can replace the need for the pointer network for code completion.
| 2,020 | Computation and Language |
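For reference, a compact implementation of the BPE merge-learning step the abstract relies on: repeatedly merge the most frequent adjacent symbol pair, which on source code yields subword units between single characters and whole identifiers. This is the standard algorithm, not the authors' specific code.

    from collections import Counter

    def learn_bpe(corpus_tokens, num_merges=100):
        # each token starts as a sequence of single-character symbols
        vocab = Counter(tuple(tok) for tok in corpus_tokens)
        merges = []
        for _ in range(num_merges):
            pairs = Counter()
            for seq, freq in vocab.items():
                for a, b in zip(seq, seq[1:]):
                    pairs[(a, b)] += freq
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]
            merges.append((a, b))
            # apply the merge everywhere it occurs
            new_vocab = Counter()
            for seq, freq in vocab.items():
                out, i = [], 0
                while i < len(seq):
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                        out.append(a + b)
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                new_vocab[tuple(out)] += freq
            vocab = new_vocab
        return merges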
Speech Translation and the End-to-End Promise: Taking Stock of Where We
Are | Over its three decade history, speech translation has experienced several
shifts in its primary research themes; moving from loosely coupled cascades of
speech recognition and machine translation, to exploring questions of tight
coupling, and finally to end-to-end models that have recently attracted much
attention. This paper provides a brief survey of these developments, along with
a discussion of the main challenges of traditional approaches which stem from
committing to intermediate representations from the speech recognizer, and from
training cascaded models separately towards different objectives.
Recent end-to-end modeling techniques promise a principled way of overcoming
these issues by allowing joint training of all model components and removing
the need for explicit intermediate representations. However, a closer look
reveals that many end-to-end models fall short of solving these issues, due to
compromises made to address data scarcity. This paper provides a unifying
categorization and nomenclature that covers both traditional and recent
approaches and that may help researchers by highlighting both trade-offs and
open research questions.
| 2,020 | Computation and Language |
Incorporating Uncertain Segmentation Information into Chinese NER for
Social Media Text | Chinese word segmentation is necessary to provide word-level information for
Chinese named entity recognition (NER) systems. However, segmentation error
propagation is a challenge for Chinese NER while processing colloquial data
like social media text. In this paper, we propose a model (UIcwsNN) that
specializes in identifying entities from Chinese social media text, especially
by leveraging ambiguous information of word segmentation. Such uncertain
information contains all the potential segmentation states of a sentence that
provides a channel for the model to infer deep word-level characteristics. We
propose a trilogy (i.e., candidate position embedding -> position selective
attention -> adaptive word convolution) to encode uncertain word segmentation
information and acquire appropriate word-level representations. Experimental
results on the social media corpus show that our model effectively alleviates
segmentation error propagation and achieves a significant
performance improvement of more than 2% over previous state-of-the-art methods.
| 2,020 | Computation and Language |
Jointly Modeling Aspect and Sentiment with Dynamic Heterogeneous Graph
Neural Networks | Target-Based Sentiment Analysis aims to detect the opinion aspects (aspect
extraction) and the sentiment polarities (sentiment detection) towards them.
Both the previous pipeline and integrated methods fail to precisely model the
innate connection between these two objectives. In this paper, we propose a
novel dynamic heterogeneous graph to jointly model the two objectives in an
explicit way. Both the ordinary words and sentiment labels are treated as nodes
in the heterogeneous graph, so that the aspect words can interact with the
sentiment information. The graph is initialized with multiple types of
dependencies, and dynamically modified during real-time prediction. Experiments
on the benchmark datasets show that our model outperforms the state-of-the-art
models. Further analysis demonstrates that our model obtains significant
performance gain on the challenging instances under multiple-opinion aspects
and no-opinion aspect situations.
| 2,020 | Computation and Language |
Query-Variant Advertisement Text Generation with Association Knowledge | Online advertising is an important revenue source for many IT companies. In
the search advertising scenario, advertisement text that meets the need of the
search query would be more attractive to the user. However, the manual creation
of query-variant advertisement texts for massive items is expensive.
Traditional text generation methods tend to focus on high-frequency general
search needs while ignoring diverse, low-frequency personalized search
needs. In this paper, we propose the query-variant
advertisement text generation task that aims to generate candidate
advertisement texts for different web search queries with various needs based
on queries and item keywords. To solve the problem of ignoring low-frequency
needs, we propose a dynamic association mechanism to expand the receptive field
based on external knowledge, which can obtain associated words to be added to
the input. These associated words can serve as bridges to transfer the ability
of the model from the familiar high-frequency words to the unfamiliar
low-frequency words. With association, the model can make use of various
personalized needs in queries and generate query-variant advertisement texts.
Both automatic and human evaluations show that our model can generate more
attractive advertisement text than baselines.
| 2,021 | Computation and Language |
Two halves of a meaningful text are statistically different | Which statistical features distinguish a meaningful text (possibly written in
an unknown system) from a meaningless set of symbols? Here we answer this
question by comparing features of the first half of a text to its second half.
This comparison can uncover hidden effects, because the halves have the same
values of many parameters (style, genre {\it etc}). We found that the first
half has more different words and more rare words than the second half. Also,
words in the first half are distributed less homogeneously over the text in the
sense of the difference between the frequency and the inverse spatial
period. These differences hold for the significant majority of several hundred
relatively short texts we studied. The statistical significance is confirmed
via the Wilcoxon test. Differences disappear after random permutation of words
that destroys the linear structure of the text. The differences reveal a
temporal asymmetry in meaningful texts, which is confirmed by showing that
texts are much better compressible in their natural way (i.e. along the
narrative) than in the word-inverted form. We conjecture that these results
connect the semantic organization of a text (defined by the flow of its
narrative) to its statistical features.
| 2,021 | Computation and Language |
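The core comparison reduces to a paired test, sketched below: count distinct words in each half of every text and test the differences with the Wilcoxon signed-rank test; the rare-word and homogeneity statistics follow the same pattern.

    from scipy.stats import wilcoxon

    def compare_halves(texts):
        # texts: list of texts, each given as a list of words
        diffs = []
        for words in texts:
            mid = len(words) // 2
            first, second = words[:mid], words[mid:]
            diffs.append(len(set(first)) - len(set(second)))
        stat, p_value = wilcoxon(diffs)  # paired signed-rank test
        share_positive = sum(d > 0 for d in diffs) / len(diffs)
        return share_positive, p_value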
What's so special about BERT's layers? A closer look at the NLP pipeline
in monolingual and multilingual models | Peeking into the inner workings of BERT has shown that its layers resemble
the classical NLP pipeline, with progressively more complex tasks being
concentrated in later layers. To investigate to what extent these results also
hold for a language other than English, we probe a Dutch BERT-based model and
the multilingual BERT model for Dutch NLP tasks. In addition, through a deeper
analysis of part-of-speech tagging, we show that, even within a given task,
information is spread over different parts of the network and the pipeline
might not be as neat as it seems. Each layer has different specialisations, so
that it may be more useful to combine information from different layers,
instead of selecting a single one based on the best overall performance.
| 2,020 | Computation and Language |
Multi-Ontology Refined Embeddings (MORE): A Hybrid Multi-Ontology and
Corpus-based Semantic Representation for Biomedical Concepts | Objective: Currently, a major limitation for natural language processing
(NLP) analyses in clinical applications is that a concept can be referenced in
various forms across different texts. This paper introduces Multi-Ontology
Refined Embeddings (MORE), a novel hybrid framework for incorporating domain
knowledge from multiple ontologies into a distributional semantic model,
learned from a corpus of clinical text.
Materials and Methods: We use the RadCore and MIMIC-III free-text datasets
for the corpus-based component of MORE. For the ontology-based part, we use the
Medical Subject Headings (MeSH) ontology and three state-of-the-art
ontology-based similarity measures. In our approach, we propose a new learning
objective, modified from the Sigmoid cross-entropy objective function.
Results and Discussion: We evaluate the quality of the generated word
embeddings using two established datasets of semantic similarities among
biomedical concept pairs. On the first dataset with 29 concept pairs, with the
similarity scores established by physicians and medical coders, MORE's
similarity scores have the highest combined correlation (0.633), which is 5.0%
higher than that of the baseline model and 12.4% higher than that of the best
ontology-based similarity measure. On the second dataset with 449 concept pairs,
MORE's similarity scores have a correlation of 0.481 with the average of four
medical residents' similarity ratings, which outperforms the skip-gram model
by 8.1% and the best ontology measure by 6.9%.
| 2,020 | Computation and Language |
Multilingual Machine Translation: Closing the Gap between Shared and
Language-specific Encoder-Decoders | State-of-the-art multilingual machine translation relies on a universal
encoder-decoder, which requires retraining the entire system to add new
languages. In this paper, we propose an alternative approach that is based on
language-specific encoder-decoders, and can thus be more easily extended to new
languages by learning their corresponding modules. So as to encourage a common
interlingua representation, we simultaneously train the N initial languages.
Our experiments show that the proposed approach outperforms the universal
encoder-decoder by 3.28 BLEU points on average, and that new languages can be
added without retraining the rest of the modules. All in all, our work
closes the gap between shared and language-specific encoder-decoders, advancing
toward modular multilingual machine translation systems that can be flexibly
extended in lifelong learning settings.
| 2,020 | Computation and Language |
Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation
with Semantic Fidelity | End-to-end neural data-to-text (D2T) generation has recently emerged as an
alternative to pipeline-based architectures. However, it has faced challenges
in generalizing to new domains and generating semantically consistent text. In
this work, we present DataTuner, a neural, end-to-end data-to-text generation
system that makes minimal assumptions about the data representation and the
target domain. We take a two-stage generation-reranking approach, combining a
fine-tuned language model with a semantic fidelity classifier. Each of our
components is learnt end-to-end without the need for dataset-specific
heuristics, entity delexicalization, or post-processing. We show that DataTuner
achieves state-of-the-art results on the automated metrics across four major
D2T datasets (LDC2017T10, WebNLG, ViGGO, and Cleaned E2E), with a fluency
assessed by human annotators nearing or exceeding the human-written reference
texts. We further demonstrate that the model-based semantic fidelity scorer in
DataTuner is a better assessment tool compared to traditional, heuristic-based
measures. Our generated text has a significantly better semantic fidelity than
the state of the art across all four datasets.
| 2,020 | Computation and Language |
Multi-source Attention for Unsupervised Domain Adaptation | Domain adaptation considers the problem of generalising a model learnt using
data from a particular source domain to a different target domain. Often it is
difficult to find a suitable single source to adapt from, and one must consider
multiple sources. Using an unrelated source can result in sub-optimal
performance, known as the \emph{negative transfer}. However, it is challenging
to select the appropriate source(s) for classifying a given target instance in
multi-source unsupervised domain adaptation (UDA). We model source-selection as
an attention-learning problem, where we learn attention over sources for a
given target instance. For this purpose, we first independently learn
source-specific classification models, and a relatedness map between sources
and target domains using pseudo-labelled target domain instances. Next, we
learn attention-weights over the sources for aggregating the predictions of the
source-specific models. Experimental results on cross-domain sentiment
classification benchmarks show that the proposed method outperforms prior
proposals in multi-source UDA.
| 2,020 | Computation and Language |
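A sketch of the final aggregation step, assuming the per-source relatedness scores for a target instance are already learned: softmax attention over sources mixes the source-specific models' predictions.

    import numpy as np

    def aggregate_predictions(source_probs, relatedness, temp=1.0):
        # source_probs: (num_sources, num_classes) per-source predictions
        # relatedness: (num_sources,) target-instance relatedness scores
        w = np.exp(relatedness / temp)
        w = w / w.sum()            # attention weights over sources
        return w @ source_probs    # mixed class distribution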