Titles | Abstracts | Years | Categories
---|---|---|---|
Statistically Profiling Biases in Natural Language Reasoning Datasets
and Models
|
Recent work has indicated that many natural language understanding and
reasoning datasets contain statistical cues that NLP models can exploit, which
means the capabilities of those models may be grossly overestimated. To uncover
such potential weaknesses, human-designed stress tests have been proposed, but
they are expensive to create and do not generalize to arbitrary models. We
propose a lightweight and general statistical profiling framework, ICQ
(I-See-Cue), which automatically identifies possible biases in any
multiple-choice NLU dataset without requiring additional test cases, and
further evaluates, through black-box testing, the extent to which models may
exploit these biases.
| 2021 |
Computation and Language
|
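The statistical cue profiling described in the abstract above can be illustrated with a small, self-contained sketch. The dataset format, field names, and the particular PMI-style statistic below are illustrative assumptions, not the authors' exact ICQ procedure:

```python
import math
from collections import Counter

def lexical_cue_scores(examples, min_count=5):
    """Rank tokens by how strongly their presence in an answer option
    correlates with that option being the correct one (a crude bias probe).

    `examples` is assumed to be a list of dicts with keys "options"
    (list of candidate answers) and "label" (index of the correct option).
    """
    token_total, token_correct = Counter(), Counter()
    n_options, n_correct = 0, 0
    for ex in examples:
        for i, option in enumerate(ex["options"]):
            is_correct = int(i == ex["label"])
            n_options += 1
            n_correct += is_correct
            for tok in set(option.lower().split()):
                token_total[tok] += 1
                token_correct[tok] += is_correct
    p_correct = n_correct / n_options
    scores = {}
    for tok, total in token_total.items():
        if total < min_count:
            continue  # skip rare tokens with unreliable statistics
        # PMI between "token appears in option" and "option is correct"
        scores[tok] = math.log((token_correct[tok] / total + 1e-9) / p_correct)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Tokens with large positive scores are candidate cues; a black-box follow-up could then check whether a model's accuracy drops when those cues are neutralized.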
Efficient Retrieval Augmented Generation from Unstructured Knowledge for
Task-Oriented Dialog
|
This paper summarizes our work on the first track of the ninth Dialog System
Technology Challenge (DSTC 9), "Beyond Domain APIs: Task-oriented
Conversational Modeling with Unstructured Knowledge Access". The goal of the
task is to generate responses to user turns in a task-oriented dialog that
require knowledge from unstructured documents. The task is divided into three
subtasks: detection, selection, and generation. To be computationally
efficient, we formulate the selection problem as a series of hierarchical
classification steps and achieve our best results with this model.
Alternatively, we employ Siamese sequence embedding models, referred to as
Dense Knowledge Retrieval, to retrieve relevant documents. This method further
reduces computation time by more than a factor of 100, at the cost of a 5-6%
degradation in R@1 compared to the first model. For either approach, we use
Retrieval Augmented Generation to generate responses based on multiple
selected snippets, and we show how the method can be used to fine-tune the
trained embeddings.
| 2021 |
Computation and Language
|
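As a rough illustration of the Dense Knowledge Retrieval idea described in the abstract above, the sketch below ranks knowledge snippets by cosine similarity between embeddings produced by a shared (Siamese) encoder. The encoder itself is not shown; random vectors stand in for its outputs, and all names are illustrative:

```python
import numpy as np

def retrieve_top_k(context_vec, snippet_vecs, k=5):
    """Rank knowledge snippets by cosine similarity to the dialog context.

    `context_vec` has shape (d,), `snippet_vecs` has shape (n, d); both are
    assumed to come from the same sentence-embedding model.
    """
    c = context_vec / np.linalg.norm(context_vec)
    s = snippet_vecs / np.linalg.norm(snippet_vecs, axis=1, keepdims=True)
    scores = s @ c
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy usage: random vectors stand in for encoder outputs.
rng = np.random.default_rng(0)
ids, scores = retrieve_top_k(rng.normal(size=256), rng.normal(size=(1000, 256)))
print(ids, scores)
```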
Conversational Query Rewriting with Self-supervised Learning
|
Context modeling plays a critical role in building multi-turn dialogue
systems. Conversational Query Rewriting (CQR) aims to simplify the multi-turn
dialogue modeling into a single-turn problem by explicitly rewriting the
conversational query into a self-contained utterance. However, existing
approaches rely on massive supervised training data that is labor-intensive to
annotate, their detection of important information omitted from the context
can be improved, and they ignore the intent consistency constraint between the
contextual query and the rewritten query. To tackle these issues, we first
propose to construct a large-scale CQR dataset automatically via
self-supervised learning, without human annotation. We then introduce Teresa,
a novel Transformer-based CQR model enhanced by self-attentive keyword
detection and an intent consistency constraint. Finally, we conduct extensive
experiments on two public datasets. The results show that our model
significantly outperforms existing CQR baselines and demonstrate the
effectiveness of self-supervised learning in improving CQR performance.
| 2021 |
Computation and Language
|
Bayesian Transformer Language Models for Speech Recognition
|
State-of-the-art neural language models (LMs) represented by Transformers are
highly complex. Their use of fixed, deterministic parameter estimates fails to
account for model uncertainty and leads to over-fitting and poor generalization
when training data is limited. To address these issues, this paper
proposes a full Bayesian learning framework for Transformer LM estimation.
Efficient variational inference based approaches are used to estimate the
latent parameter posterior distributions associated with different parts of the
Transformer model architecture including multi-head self-attention, feed
forward and embedding layers. Statistically significant word error rate (WER)
reductions of up to 0.5% absolute (3.18% relative) and consistent perplexity
gains were obtained over the baseline Transformer LMs on state-of-the-art
Switchboard corpus trained LF-MMI factored TDNN systems with i-Vector speaker
adaptation. Performance improvements were also obtained on a cross domain LM
adaptation task requiring porting a Transformer LM trained on the Switchboard
and Fisher data to a low-resource DementiaBank elderly speech corpus.
| 2021 |
Computation and Language
|
Broader terms curriculum mapping: Using natural language processing and
visual-supported communication to create representative program planning
experiences
|
Accreditation bodies call for curriculum development processes open to all
stakeholders, reflecting viewpoints of students, industry, university faculty
and society. However, communication difficulties between faculty and
non-faculty groups leave unexplored an immense collaboration potential. Using
classification of learning objectives, natural language processing, and data
visualization, this paper presents a method to deliver program plan
representations that are universal, self-explanatory, and empowering. A simple
example shows how the method contributes to representative program planning
experiences and a case study is used to confirm the method's accuracy and
utility.
| 2022 |
Computation and Language
|
Learning Modality-Specific Representations with Self-Supervised
Multi-Task Learning for Multimodal Sentiment Analysis
|
Representation learning is a significant and challenging task in multimodal
learning. Effective modality representations should capture two kinds of
characteristics: consistency and difference. Because existing methods rely on
a unified multimodal annotation, they are restricted in capturing
differentiated information, while collecting additional unimodal annotations
is costly in time and labor. In this paper, we design a label generation
module based on a self-supervised learning strategy to acquire independent
unimodal supervision. We then jointly train the multimodal and unimodal tasks
to learn consistency and difference, respectively. Moreover, during training
we design a weight-adjustment strategy to balance the learning progress among
the different subtasks, guiding each subtask to focus on samples whose
modality supervisions differ the most. Finally, we conduct extensive
experiments on three public multimodal benchmark datasets. The results
validate the reliability and stability of the auto-generated unimodal
supervision. Our method surpasses the current state of the art on the MOSI and
MOSEI datasets and achieves performance comparable to human-annotated unimodal
labels on the SIMS dataset. The full code is available at
https://github.com/thuiar/Self-MM.
| 2021 |
Computation and Language
|
NewsBERT: Distilling Pre-trained Language Model for Intelligent News
Application
|
Pre-trained language models (PLMs) like BERT have made great progress in NLP.
News articles usually contain rich textual information, and PLMs have the
potential to enhance news text modeling for various intelligent news
applications such as news recommendation and retrieval. However, most existing
PLMs are huge, with hundreds of millions of parameters. Many online news
applications need to serve millions of users with low latency tolerance, which
poses significant challenges to incorporating PLMs in these scenarios.
Knowledge distillation techniques can compress a large PLM into a much smaller
one while maintaining good performance. However, existing language models are
pre-trained and distilled on general corpora like Wikipedia, which differ from
the news domain and may be suboptimal for news intelligence. In this paper, we
propose NewsBERT, which distills PLMs for efficient and effective news
intelligence. In our approach, we design a teacher-student joint learning and
distillation framework to collaboratively learn both teacher and student
models, where the student model can learn from the learning experience of the
teacher model. In addition, we propose a momentum distillation method that
incorporates the gradients of the teacher model into the update of the student
model to better transfer the useful knowledge learned by the teacher.
Extensive experiments on two real-world datasets with three tasks show that
NewsBERT can effectively improve model performance in various intelligent news
applications with much smaller models.
| 2021 |
Computation and Language
|
BembaSpeech: A Speech Recognition Corpus for the Bemba Language
|
We present a preprocessed, ready-to-use automatic speech recognition corpus,
BembaSpeech, consisting of over 24 hours of read speech in the Bemba language,
a written but low-resourced language spoken by over 30% of the population of
Zambia. To assess its usefulness for training and testing ASR systems for
Bemba, we train an end-to-end Bemba ASR system by fine-tuning a pre-trained
DeepSpeech English model on the training portion of the BembaSpeech corpus. Our
best model achieves a word error rate (WER) of 54.78%. The results show that
the corpus can be used for building ASR systems for Bemba. The corpus and
models are publicly released at https://github.com/csikasote/BembaSpeech.
| 2021 |
Computation and Language
|
Leveraging cross-platform data to improve automated hate speech
detection
|
Hate speech is increasingly prevalent online, and its negative outcomes
include increased prejudice, extremism, and even offline hate crime. Automatic
detection of online hate speech can help us to better understand these impacts.
However, while the field has recently progressed through advances in natural
language processing, challenges still remain. In particular, most existing
approaches for hate speech detection focus on a single social media platform in
isolation. This limits both the use of these models and their validity, as the
nature of language varies from platform to platform. Here we propose a new
cross-platform approach to detect hate speech which leverages multiple datasets
and classification models from different platforms and trains a superlearner
that can combine existing and novel training data to improve detection and
increase model applicability. We demonstrate how this approach outperforms
existing models, and achieves good performance when tested on messages from
novel social media platforms not included in the original training data.
| 2021 |
Computation and Language
|
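The "superlearner" that combines models trained on data from different platforms, described in the abstract above, can be approximated with a standard stacked ensemble. The sketch below uses scikit-learn's StackingClassifier over TF-IDF pipelines; the base learners, features, and toy data are assumptions for illustration, not the paper's exact configuration:

```python
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny toy data standing in for labeled posts pooled from several platforms.
texts = ["I respect your view", "thanks for sharing", "have a nice day",
         "interesting point", "you people are vermin", "get out of my country",
         "I hate this group", "they should all disappear"]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 1 = hateful, 0 = not hateful

# Base learners: in practice, each could be trained on a different platform.
base_learners = [
    ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
    ("nb", make_pipeline(TfidfVectorizer(), MultinomialNB())),
]

# The meta-learner ("superlearner") combines the base learners' outputs.
superlearner = StackingClassifier(estimators=base_learners,
                                  final_estimator=LogisticRegression(),
                                  cv=2)
superlearner.fit(texts, labels)
print(superlearner.predict(["you are all vermin", "thanks, good point"]))
```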
Bootstrapping Relation Extractors using Syntactic Search by Examples
|
The advent of neural networks in NLP brought with it substantial improvements
in supervised relation extraction. However, obtaining a sufficient quantity of
training data remains a key challenge. In this work we propose a process for
bootstrapping training datasets that can be performed quickly by
non-NLP experts. We take advantage of search engines over syntactic graphs
(such as Shlain et al. (2020)), which expose a friendly by-example syntax. We
use these to obtain positive examples by searching for sentences that are
syntactically similar to user input examples. We apply this technique to
relations from TACRED and DocRED and show that the resulting models are
competitive with models trained on manually annotated data and on data obtained
from distant supervision. The models also outperform models trained using NLG
data augmentation techniques. Extending the search-based approach with the NLG
method further improves the results.
| 2021 |
Computation and Language
|
AuGPT: Auxiliary Tasks and Data Augmentation for End-To-End Dialogue
with Pre-Trained Language Models
|
Attention-based pre-trained language models such as GPT-2 brought
considerable progress to end-to-end dialogue modelling. However, they also
present considerable risks for task-oriented dialogue, such as lack of
knowledge grounding or diversity. To address these issues, we introduce
modified training objectives for language model finetuning, and we employ
massive data augmentation via back-translation to increase the diversity of the
training data. We further examine the possibility of combining data from
multiple sources to improve performance on the target dataset. We carefully
evaluate our contributions with both human and automatic methods. Our model
substantially outperforms the baseline on the MultiWOZ data and shows
performance competitive with the state of the art in both automatic and human
evaluation.
| 2021 |
Computation and Language
|
Decontextualization: Making Sentences Stand-Alone
|
Models for question answering, dialogue agents, and summarization often
interpret the meaning of a sentence in a rich context and use that meaning in a
new context. Taking excerpts of text can be problematic, as key pieces may not
be explicit in a local window. We isolate and define the problem of sentence
decontextualization: taking a sentence together with its context and rewriting
it to be interpretable out of context, while preserving its meaning. We
describe an annotation procedure, collect data on the Wikipedia corpus, and use
the data to train models to automatically decontextualize sentences. We present
preliminary studies that show the value of sentence decontextualization in a
user facing task, and as preprocessing for systems that perform document
understanding. We argue that decontextualization is an important subtask in
many downstream applications, and that the definitions and resources provided
can benefit tasks that operate on sentences that occur in a richer context.
| 2021 |
Computation and Language
|
SensPick: Sense Picking for Word Sense Disambiguation
|
Word sense disambiguation (WSD) methods identify the most suitable meaning of
a word with respect to the usage of that word in a specific context. Neural
network-based WSD approaches rely on a sense-annotated corpus since they do not
utilize lexical resources. In this study, we utilize both context and related
gloss information of a target word to model the semantic relationship between
the word and the set of glosses. We propose SensPick, a stacked bidirectional
Long Short Term Memory (LSTM) network, to perform the WSD task. The
experimental evaluation demonstrates that SensPick outperforms traditional and
state-of-the-art models on most of the benchmark datasets, with a relative
improvement of 3.5% in F1 score. While the improvement is not significant,
incorporating semantic relationships puts SensPick ahead of the other
approaches.
| 2021 |
Computation and Language
|
Biomedical Question Answering: A Survey of Approaches and Challenges
|
Automatic Question Answering (QA) has been successfully applied in various
domains such as search engines and chatbots. Biomedical QA (BQA), as an
emerging QA task, enables innovative applications to effectively perceive,
access and understand complex biomedical knowledge. There have been tremendous
developments of BQA in the past two decades, which we classify into 5
distinctive approaches: classic, information retrieval, machine reading
comprehension, knowledge base and question entailment approaches. In this
survey, we introduce available datasets and representative methods of each BQA
approach in detail. Despite the developments, BQA systems are still immature
and rarely used in real-life settings. We identify and characterize several key
challenges in BQA that might lead to this issue, and discuss some potential
future directions to explore.
| 2024 |
Computation and Language
|
Language Models for Lexical Inference in Context
|
Lexical inference in context (LIiC) is the task of recognizing textual
entailment between two very similar sentences, i.e., sentences that only differ
in one expression. It can therefore be seen as a variant of the natural
language inference task that is focused on lexical semantics. We formulate and
evaluate the first approaches based on pretrained language models (LMs) for
this task: (i) a few-shot NLI classifier, (ii) a relation induction approach
based on handcrafted patterns expressing the semantics of lexical inference,
and (iii) a variant of (ii) with patterns that were automatically extracted
from a corpus. All our approaches outperform the previous state of the art,
showing the potential of pretrained LMs for LIiC. In an extensive analysis, we
investigate factors of success and failure of our three approaches.
| 2021 |
Computation and Language
|
NUVA: A Naming Utterance Verifier for Aphasia Treatment
|
Anomia (word-finding difficulties) is the hallmark of aphasia, an acquired
language disorder most commonly caused by stroke. Assessment of speech
performance using picture naming tasks is a key method for both diagnosis and
monitoring of responses to treatment interventions by people with aphasia
(PWA). Currently, this assessment is conducted manually by speech and language
therapists (SLT). Surprisingly, despite advancements in automatic speech
recognition (ASR) and artificial intelligence with technologies like deep
learning, research on developing automated systems for this task has been
scarce. Here we present NUVA, an utterance verification system incorporating a
deep learning element that classifies 'correct' versus 'incorrect' naming
attempts from aphasic stroke patients. When tested on eight native
British-English-speaking PWA, the system's accuracy ranged from
83.6% to 93.6%, with a 10-fold cross-validation mean of 89.5%. This performance
was not only significantly better than a baseline created for this study using
one of the leading commercially available ASRs (Google speech-to-text service)
but also comparable in some instances with two independent SLT ratings for the
same dataset.
| 2021 |
Computation and Language
|
Student Sentiment Analysis Using Classification With Feature Extraction
Techniques
|
Technological growth has enabled numerous revolutions in the educational
system by bringing technology into the classroom and elevating the learning
experience. Web-based learning, in particular, is gaining popularity. This
paper examines web-based learning and its effectiveness for students. One of
the prime factors in any education or learning system is feedback, which
benefits learning when used effectively. In this paper, we study how machine
learning techniques such as Logistic Regression (LR), Support Vector Machine
(SVM), Naive Bayes (NB), and Decision Tree (DT) can be applied to web-based
learning, with an emphasis on the sentiment present in student feedback. We
also work with two types of Feature Extraction Techniques (FETs), namely the
Count Vector (CVr), also known as Bag of Words (BoW), and the Term
Frequency-Inverse Document Frequency (TF-IDF) vector. Our goal is for the
proposed LR, SVM, NB, and DT models to classify sentiment in the Student
Feedback Dataset (SFB) with improved accuracy using a cleaned dataset and
these feature extraction techniques. The SFB is one of the significant
resources for student sentiment analysis.
| 2021 |
Computation and Language
|
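A minimal sketch of the classifier and feature-extraction comparison described in the abstract above, using scikit-learn; the toy feedback snippets and default hyperparameters are invented for illustration, since the SFB dataset itself is not reproduced here:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Invented feedback snippets standing in for the Student Feedback Dataset.
texts = ["the lectures are clear and helpful", "I enjoy the online quizzes",
         "great explanations and worked examples", "the platform runs smoothly",
         "the videos keep buffering", "the assignments are confusing",
         "feedback arrives far too late", "the interface is frustrating"]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = positive, 0 = negative

feature_extractors = {"BoW (CVr)": CountVectorizer, "TF-IDF": TfidfVectorizer}
classifiers = {"LR": LogisticRegression, "SVM": LinearSVC,
               "NB": MultinomialNB, "DT": DecisionTreeClassifier}

# Cross-validated accuracy for every feature/classifier combination.
for fname, make_features in feature_extractors.items():
    for cname, make_clf in classifiers.items():
        pipeline = make_pipeline(make_features(), make_clf())
        accuracy = cross_val_score(pipeline, texts, labels, cv=2).mean()
        print(f"{fname:10s} + {cname}: {accuracy:.2f}")
```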
Civil Rephrases Of Toxic Texts With Self-Supervised Transformers
|
Platforms that support online commentary, from social networks to news sites,
are increasingly leveraging machine learning to assist their moderation
efforts. But this process does not typically provide feedback to the author
that would help them contribute according to the community guidelines. This is
prohibitively time-consuming for human moderators to do, and computational
approaches are still nascent. This work focuses on models that can help suggest
rephrasings of toxic comments in a more civil manner. Inspired by recent
progress in unpaired sequence-to-sequence tasks, a self-supervised learning
model is introduced, called CAE-T5. CAE-T5 employs a pre-trained text-to-text
transformer, which is fine-tuned with a denoising and cyclic auto-encoder loss.
Experimenting with the largest toxicity detection dataset to date (Civil
Comments), our model generates sentences that are more fluent and better at
preserving the initial content than earlier text style transfer systems,
as measured by several scoring systems and human evaluation.
| 2021 |
Computation and Language
|
Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge
|
Multi-turn dialogue reading comprehension aims to teach machines to read
dialogue contexts and solve tasks such as response selection and answering
questions. The major challenges are noisy history contexts and the need for
commonsense knowledge that is not present in the given material.
Existing works mainly focus on context and response matching approaches. This
work thus makes the first attempt to tackle the above two challenges by
extracting substantially important turns as pivot utterances and utilizing
external knowledge to enhance the representation of context. We propose a
pivot-oriented deep selection model (PoDS) on top of the Transformer-based
language models for dialogue comprehension. In detail, our model first picks
out the pivot utterances from the conversation history according to the
semantic matching with the candidate response or question, if any. Besides,
knowledge items related to the dialogue context are extracted from a knowledge
graph as external knowledge. Then, the pivot utterances and the external
knowledge are combined with a well-designed mechanism for refining predictions.
Experimental results on four dialogue comprehension benchmark tasks show that
our proposed model achieves substantial improvements over the baselines. A
series of
empirical comparisons are conducted to show how our selection strategies and
the extra knowledge injection influence the results.
| 2021 |
Computation and Language
|
Towards More Fine-grained and Reliable NLP Performance Prediction
|
Performance prediction, the task of estimating a system's performance without
performing experiments, allows us to reduce the experimental burden caused by
the combinatorial explosion of different datasets, languages, tasks, and
models. In this paper, we make two contributions to improving performance
prediction for NLP tasks. First, we examine performance predictors not only for
holistic measures of accuracy like F1 or BLEU but also fine-grained performance
measures such as accuracy over individual classes of examples. Second, we
propose methods to understand the reliability of a performance prediction model
from two angles: confidence intervals and calibration. We perform an analysis
of four types of NLP tasks, demonstrating both the feasibility of fine-grained
performance prediction and the necessity of reliability analysis for
performance prediction methods in the future. We make our code publicly
available at https://github.com/neulab/Reliable-NLPPP.
| 2021 |
Computation and Language
|
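One of the two reliability angles mentioned in the abstract above, confidence intervals, can be obtained for any performance predictor by simple bootstrapping. The sketch below is an illustrative assumption about how this could look, using a linear regressor and invented features, not the paper's actual predictor:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def bootstrap_interval(X, y, x_new, n_boot=1000, alpha=0.1, seed=0):
    """Empirical (1 - alpha) confidence interval for the predicted score of a
    new experimental configuration, via bootstrap refits of the predictor."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))   # resample with replacement
        model = LinearRegression().fit(X[idx], y[idx])
        preds.append(model.predict(x_new.reshape(1, -1))[0])
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])

# Invented features per experiment, e.g. [log(training size), type-token ratio],
# and observed scores (e.g. BLEU) used to fit the performance predictor.
X = np.array([[10.0, 0.30], [11.0, 0.40], [12.0, 0.35], [13.0, 0.50], [14.0, 0.45]])
y = np.array([18.0, 22.0, 25.0, 30.0, 31.0])
print(bootstrap_interval(X, y, np.array([12.5, 0.42])))   # 90% interval
```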
Generating Synthetic Text Data to Evaluate Causal Inference Methods
|
Drawing causal conclusions from observational data requires making
assumptions about the true data-generating process. Causal inference research
typically considers low-dimensional data, such as categorical or numerical
fields in structured medical records. High-dimensional and unstructured data
such as natural language complicates the evaluation of causal inference
methods; such evaluations rely on synthetic datasets with known causal effects.
Models for natural language generation have been widely studied and perform
well empirically. However, existing methods are not immediately applicable to
producing synthetic datasets for causal evaluations, as they do not allow for
quantifying a causal effect on the text itself. In this work, we develop a
framework for adapting existing generation models to produce synthetic text
datasets with known causal effects. We use this framework to perform an
empirical comparison of four recently-proposed methods for estimating causal
effects from text data. We release our code and synthetic datasets.
| 2021 |
Computation and Language
|
Transfer Learning Approach for Arabic Offensive Language Detection
System -- BERT-Based Model
|
Developing a system to detect online offensive language is very important to
the health and the security of online users. Studies have shown that cyberhate,
online harassment and other misuses of technology are on the rise, particularly
during the global Coronavirus pandemic in 2020. According to the latest report
by the Anti-Defamation League (ADL), 35% of online users reported online
harassment related to their identity-based characteristics, which is a 3%
increase over 2019. Applying advanced techniques from the Natural Language
Processing (NLP) field to support the development of an online hate-free
community is a critical task for social justice. Transfer learning enhances the
performance of the classifier by allowing the transfer of knowledge from one
domain or one dataset to others that have not been seen before, thus,
helping the classifier to generalize better. In our study, we apply the
principles of transfer learning across multiple Arabic offensive language
datasets to compare the effects on system performance. This study investigates
the effects of fine-tuning and training a Bidirectional Encoder
Representations from Transformers (BERT) model on multiple Arabic offensive
language datasets individually and testing it on other datasets individually.
Our experiment starts with a comparison among multiple BERT models to guide
the selection of the main model used in our study. The study also investigates
the effect of concatenating all datasets for fine-tuning and training the BERT
model. Our results demonstrate the limited effect of transfer learning on
classifier performance, particularly for highly dialectal comments.
| 2021 |
Computation and Language
|
Differentiable Generative Phonology
|
The goal of generative phonology, as formulated by Chomsky and Halle (1968),
is to specify a formal system that explains the set of attested phonological
strings in a language. Traditionally, a collection of rules (or constraints, in
the case of optimality theory) and underlying forms (UF) are posited to work in
tandem to generate phonological strings. However, the degree of abstraction of
UFs with respect to their concrete realizations is contentious. As the main
contribution of our work, we implement the phonological generative system as a
neural model differentiable end-to-end, rather than as a set of rules or
constraints. Contrary to traditional phonology, in our model, UFs are
continuous vectors in $\mathbb{R}^d$, rather than discrete strings. As a
consequence, UFs are discovered automatically rather than posited by linguists,
and the model can scale to the size of a realistic vocabulary. Moreover, we
compare several modes of the generative process, contemplating: i) the presence
or absence of an underlying representation in between morphemes and surface
forms (SFs); and ii) the conditional dependence or independence of UFs with
respect to SFs. We evaluate the ability of each mode to predict attested
phonological strings on 2 datasets covering 5 and 28 languages, respectively.
The results corroborate two tenets of generative phonology, viz. the necessity
for UFs and their independence from SFs. In general, our neural model of
generative phonology learns both UFs and SFs automatically and at a large
scale.
| 2021 |
Computation and Language
|
Customizing Contextualized Language Models for Legal Document Reviews
|
Inspired by the inductive transfer learning on computer vision, many efforts
have been made to train contextualized language models that boost the
performance of natural language processing tasks. These models are mostly
trained on large general-domain corpora such as news, books, or Wikipedia.
Although these pre-trained generic language models capture the semantic and
syntactic essence of language structure well, exploiting them in a real-world,
domain-specific scenario still requires accounting for practical
considerations such as token distribution shifts, inference time, memory, and
simultaneous proficiency in multiple tasks. In this paper, we focus on the
legal domain and present how different language models trained on
general-domain corpora can best be customized for multiple legal document
review tasks. We compare their efficiency with respect to task performance and
present practical considerations.
| 2021 |
Computation and Language
|
Fused Acoustic and Text Encoding for Multimodal Bilingual Pretraining
and Speech Translation
|
Recently, representation learning for text and speech has successfully
improved many language-related tasks. However, existing methods suffer from
two limitations: (a) they only learn from one input modality, while a unified
representation for both speech and text is needed by tasks such as end-to-end
speech translation; and, as a result, (b) they cannot exploit various
large-scale text and speech data, so their performance is limited by the
scarcity of parallel speech translation data. To address these problems, we
propose a Fused Acoustic and Text Masked Language Model (FAT-MLM), which
jointly learns a unified representation for both acoustic and text input from
various types of corpora, including parallel data for speech recognition and
machine translation and even pure speech and text data. Within this
cross-modal
representation learning framework, we further present an end-to-end model for
Fused Acoustic and Text Speech Translation (FAT-ST). Experiments on three
translation directions show that by fine-tuning from FAT-MLM, our proposed
speech translation models substantially improve translation quality by up to
+5.9 BLEU.
| 2021 |
Computation and Language
|
Toward Improving Coherence and Diversity of Slogan Generation
|
Previous work in slogan generation focused on utilising slogan skeletons
mined from existing slogans. While some generated slogans can be catchy, they
are often not coherent with the company's focus or style across their marketing
communications because the skeletons are mined from other companies' slogans.
We propose a sequence-to-sequence (seq2seq) transformer model to generate
slogans from a brief company description. A naive seq2seq model fine-tuned for
slogan generation is prone to introducing false information. We use company
name delexicalisation and entity masking to alleviate this problem and improve
the generated slogans' quality and truthfulness. Furthermore, we apply
conditional training based on the first words' POS tag to generate
syntactically diverse slogans. Our best model achieved ROUGE-1/-2/-L F1 scores
of 35.58/18.47/33.32. In addition, automatic and human evaluations indicate
that our method generates significantly more factual, diverse, and catchy
slogans than strong LSTM and transformer seq2seq baselines.
| 2021 |
Computation and Language
|
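The company-name delexicalisation step mentioned in the abstract above is essentially a masking/unmasking pass around the seq2seq model. A minimal sketch follows; the placeholder token and helper names are assumptions, not the paper's exact implementation:

```python
import re

COMPANY_TOKEN = "<COMPANY>"  # illustrative placeholder token

def delexicalise(description: str, company_name: str) -> str:
    """Mask the company name so the seq2seq model cannot copy or invent
    brand-specific tokens when generating a slogan."""
    pattern = re.compile(re.escape(company_name), flags=re.IGNORECASE)
    return pattern.sub(COMPANY_TOKEN, description)

def relexicalise(generated_slogan: str, company_name: str) -> str:
    """Restore the real company name in the generated slogan."""
    return generated_slogan.replace(COMPANY_TOKEN, company_name)

description = "Acme Robotics builds affordable warehouse automation."
masked = delexicalise(description, "Acme Robotics")
# masked == "<COMPANY> builds affordable warehouse automation."
print(relexicalise(f"{COMPANY_TOKEN}: automation for everyone", "Acme Robotics"))
```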
Text Compression-aided Transformer Encoding
|
Text encoding is one of the most important steps in Natural Language
Processing (NLP). It has been done well by the self-attention mechanism in the
current state-of-the-art Transformer encoder, which has brought about
significant improvements in the performance of many NLP tasks. Though the
Transformer encoder may effectively capture general information in its
resulting representations, the backbone information, meaning the gist of the
input text, is not specifically focused on. In this paper, we propose explicit
and implicit text compression approaches to enhance the Transformer encoding
and evaluate models using this approach on several typical downstream tasks
that rely on the encoding heavily. Our explicit text compression approaches use
dedicated models to compress text, while our implicit text compression approach
simply adds an additional module to the main model to handle text compression.
We propose three ways of integration, namely backbone source-side fusion,
target-side fusion, and both-side fusion, to integrate the backbone information
into Transformer-based models for various downstream tasks. Our evaluation on
benchmark datasets shows that the proposed explicit and implicit text
compression approaches improve results in comparison to strong baselines. We
therefore conclude that, compared with the baseline models, text compression
helps the encoders learn better language representations.
| 2021 |
Computation and Language
|
An End-to-end Model for Entity-level Relation Extraction using
Multi-instance Learning
|
We present a joint model for entity-level relation extraction from documents.
In contrast to other approaches - which focus on local intra-sentence mention
pairs and thus require annotations on mention level - our model operates on
entity level. To do so, a multi-task approach is followed that builds upon
coreference resolution and gathers relevant signals via multi-instance learning
with multi-level representations combining global entity and local mention
information. We achieve state-of-the-art relation extraction results on the
DocRED dataset and report the first entity-level end-to-end relation extraction
results for future reference. Finally, our experimental results suggest that a
joint approach is on par with task-specific learning, though more efficient due
to shared parameters and training steps.
| 2021 |
Computation and Language
|
Cross-Domain Multi-Task Learning for Sequential Sentence Classification
in Research Papers
|
Sequential sentence classification deals with the categorisation of sentences
based on their content and context. Applied to scientific texts, it enables the
automatic structuring of research papers and the improvement of academic search
engines. However, previous work has not investigated the potential of transfer
learning for sentence classification across different scientific domains or
the issue of the differing text structure of full papers and abstracts. In
this
paper, we derive seven related research questions and present several
contributions to address them: First, we suggest a novel uniform deep learning
architecture and multi-task learning for cross-domain sequential sentence
classification in scientific texts. Second, we tailor two common transfer
learning methods, sequential transfer learning and multi-task learning, to deal
with the challenges of the given task. Semantic relatedness of tasks is a
prerequisite for successful transfer learning of neural models. Consequently,
our third contribution is an approach to semi-automatically identify
semantically related classes from different annotation schemes and we present
an analysis of four annotation schemes. Comprehensive experimental results
indicate that models, which are trained on datasets from different scientific
domains, benefit from one another when using the proposed multi-task learning
architecture. We also report comparisons with several state-of-the-art
approaches. Our approach outperforms the state of the art on full paper
datasets significantly while being on par for datasets consisting of abstracts.
| 2022 |
Computation and Language
|
Unsupervised Extractive Summarization using Pointwise Mutual Information
|
Unsupervised approaches to extractive summarization usually rely on a notion
of sentence importance defined by the semantic similarity between a sentence
and the document. We propose new metrics of relevance and redundancy using
pointwise mutual information (PMI) between sentences, which can be easily
computed by a pre-trained language model. Intuitively, a relevant sentence
allows readers to infer the document content (high PMI with the document), and
a redundant sentence can be inferred from the summary (high PMI with the
summary). We then develop a greedy sentence selection algorithm to maximize
relevance and minimize redundancy of extracted sentences. We show that our
method outperforms similarity-based methods on datasets in a range of domains
including news, medical journal articles, and personal anecdotes.
| 2021 |
Computation and Language
|
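The greedy selection procedure described in the abstract above can be written compactly once a PMI scorer is available. In the sketch below, `pmi(a, b)` is assumed to be supplied by a pretrained language model (e.g. log p(b | a) - log p(b)); the crude word-overlap stand-in is only there so the demo runs:

```python
def greedy_pmi_summary(sentences, pmi, max_sentences=3, redundancy_weight=1.0):
    """Pick sentences with high PMI to the document (relevance) and low PMI
    to the sentences already selected (redundancy)."""
    document = " ".join(sentences)
    selected, remaining = [], list(sentences)
    while remaining and len(selected) < max_sentences:
        def gain(s):
            relevance = pmi(s, document)
            redundancy = max((pmi(t, s) for t in selected), default=0.0)
            return relevance - redundancy_weight * redundancy
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected

# Crude stand-in scorer for demonstration only (word overlap, not true PMI).
def toy_pmi(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

sentences = ["The storm closed the airport.", "Flights were cancelled all day.",
             "The airport reopened the next morning.", "A dog was adopted."]
print(greedy_pmi_summary(sentences, toy_pmi, max_sentences=2))
```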
A reproduction of Apple's bi-directional LSTM models for language
identification in short strings
|
Language Identification is the task of identifying a document's language. For
applications like automatic spell checker selection, language identification
must use very short strings such as text message fragments. In this work, we
reproduce a language identification architecture that Apple briefly sketched in
a blog post. We confirm the bi-LSTM model's performance and find that it
outperforms current open-source language identifiers. We further find that its
language identification mistakes are due to confusion between related
languages.
| 2021 |
Computation and Language
|
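A minimal PyTorch sketch of the kind of character-level bi-LSTM classifier reproduced in the paper above; the hyperparameters, vocabulary handling, and language count are illustrative guesses, not the exact architecture from Apple's post:

```python
import torch
import torch.nn as nn

class CharBiLSTMLanguageID(nn.Module):
    """Character-level bi-LSTM classifier for very short strings."""

    def __init__(self, n_chars=256, emb_dim=32, hidden=64, n_languages=20):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_languages)

    def forward(self, char_ids):                 # (batch, seq_len), int64
        x = self.embed(char_ids)
        _, (h, _) = self.lstm(x)                 # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)      # final fwd/bwd hidden states
        return self.out(h)                       # (batch, n_languages) logits

# Toy usage: byte-level ids for a short text fragment.
text = "bonjour"
ids = torch.tensor([[min(ord(c), 255) for c in text]])
logits = CharBiLSTMLanguageID()(ids)
print(logits.shape)                              # torch.Size([1, 20])
```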
Speech-language Pre-training for End-to-end Spoken Language
Understanding
|
End-to-end (E2E) spoken language understanding (SLU) can infer semantics
directly from the speech signal without cascading an automatic speech
recognizer (ASR) with a natural language understanding (NLU) module. However,
paired
utterance recordings and corresponding semantics may not always be available or
sufficient to train an E2E SLU model in a real production environment. In this
paper, we propose to unify a well-optimized E2E ASR encoder (speech) and a
pre-trained language model encoder (language) into a transformer decoder. The
unified speech-language pre-trained model (SLP) is continually enhanced on
limited labeled data from a target domain by using a conditional masked
language model (MLM) objective, and thus can effectively generate a sequence of
intent, slot type, and slot value for a given input speech at inference time.
The
experimental results on two public corpora show that our approach to E2E SLU is
superior to the conventional cascaded method. It also outperforms the present
state-of-the-art approaches to E2E SLU with much less paired data.
| 2021 |
Computation and Language
|
Embracing Domain Differences in Fake News: Cross-domain Fake News
Detection using Multi-modal Data
|
With the rapid evolution of social media, fake news has become a significant
social problem, which cannot be addressed in a timely manner using manual
investigation. This has motivated numerous studies on automating fake news
detection. Most studies explore supervised training models with different
modalities (e.g., text, images, and propagation networks) of news records to
identify fake news. However, the performance of such techniques generally
drops when news records come from different domains (e.g., politics,
entertainment), especially for domains that are unseen or rarely seen during
training. As motivation, we empirically show that news records from different
domains have significantly different word usage and propagation patterns.
Furthermore, due to the sheer volume of unlabelled news records, it is
challenging to select news records for manual labelling so that the
domain-coverage of the labelled dataset is maximized. Hence, this work: (1)
proposes a novel framework that jointly preserves domain-specific and
cross-domain knowledge in news records to detect fake news from different
domains; and (2) introduces an unsupervised technique to select a set of
unlabelled informative news records for manual labelling, which can be
ultimately used to train a fake news detection model that performs well for
many domains while minimizing the labelling cost. Our experiments show that the
integration of the proposed fake news model and the selective annotation
approach achieves state-of-the-art performance for cross-domain news datasets,
while yielding notable improvements for rarely-appearing domains in news
datasets.
| 2021 |
Computation and Language
|
Neural Inverse Text Normalization
|
While there have been several contributions exploring state-of-the-art
techniques for text normalization, the problem of inverse text normalization
(ITN) remains relatively unexplored. The best-known approaches leverage finite
state transducer (FST) based models which rely on manually curated rules and
are hence not scalable. We propose an efficient and robust neural solution for
ITN leveraging transformer based seq2seq models and FST-based text
normalization techniques for data preparation. We show that this can be easily
extended to other languages without the need for a linguistic expert to
manually curate them. We then present a hybrid framework for integrating Neural
ITN with an FST to overcome common recoverable errors in production
environments. Our empirical evaluations show that the proposed solution
minimizes incorrect perturbations (insertions, deletions and substitutions) to
ASR output and maintains high quality even on out of domain data. A transformer
based model infused with pretraining consistently achieves a lower WER across
several datasets and is able to outperform baselines on English, Spanish,
German and Italian datasets.
| 2021 |
Computation and Language
|
Emoji-Based Transfer Learning for Sentiment Tasks
|
Sentiment tasks such as hate speech detection and sentiment analysis,
especially when performed on languages other than English, are often
low-resource. In this study, we exploit the emotional information encoded in
emojis to enhance the performance on a variety of sentiment tasks. This is done
using a transfer learning approach, where the parameters learned by an
emoji-based source task are transferred to a sentiment target task. We analyse
the efficacy of the transfer under three conditions, i.e. i) the emoji content
and ii) label distribution of the target task as well as iii) the difference
between monolingually and multilingually learned source tasks. We find, among
other things, that the transfer is most beneficial if the target task is
balanced and has high emoji content. Monolingually learned source tasks have
the benefit of taking into account the culturally specific use of emojis and
gain up to +0.280 F1 over the baseline.
| 2021 |
Computation and Language
|
Transformer Language Models with LSTM-based Cross-utterance Information
Representation
|
The effective incorporation of cross-utterance information has the potential
to improve language models (LMs) for automatic speech recognition (ASR). To
extract more powerful and robust cross-utterance representations for the
Transformer LM (TLM), this paper proposes the R-TLM which uses hidden states in
a long short-term memory (LSTM) LM. To encode the cross-utterance information,
the R-TLM incorporates an LSTM module together with a segment-wise recurrence
in some of the Transformer blocks. In addition to the LSTM module output, a
shortcut connection using a fusion layer that bypasses the LSTM module is also
investigated. The proposed system was evaluated on the AMI meeting corpus, the
Eval2000 and the RT03 telephone conversation evaluation sets. The best R-TLM
achieved 0.9%, 0.6%, and 0.8% absolute WER reductions over the single-utterance
TLM baseline, and 0.5%, 0.3%, 0.2% absolute WER reductions over a strong
cross-utterance TLM baseline on the AMI evaluation set, Eval2000 and RT03
respectively. Improvements on Eval2000 and RT03 were further supported by
significance tests. R-TLMs were found to have better LM scores on words where
recognition errors are more likely to occur. The R-TLM WER can be further
reduced by interpolation with an LSTM-LM.
| 2021 |
Computation and Language
|
Two Training Strategies for Improving Relation Extraction over Universal
Graph
|
This paper explores how the Distantly Supervised Relation Extraction (DS-RE)
can benefit from the use of a Universal Graph (UG), the combination of a
Knowledge Graph (KG) and a large-scale text collection. A straightforward
extension of a current state-of-the-art neural model for DS-RE with a UG may
lead to degradation in performance. We first report that this degradation is
associated with the difficulty in learning a UG and then propose two training
strategies: (1) Path Type Adaptive Pretraining, which sequentially trains the
model with different types of UG paths so as to prevent the reliance on a
single type of UG path; and (2) Complexity Ranking Guided Attention mechanism,
which restricts the attention span according to the complexity of a UG path so
as to force the model to extract features not only from simple UG paths but
also from complex ones. Experimental results on both the biomedical and NYT10
datasets demonstrate the robustness of our methods and establish a new
state-of-the-art result on the NYT10 dataset. The code and datasets used in
this paper are
available at https://github.com/baodaiqin/UGDSRE.
| 2021 |
Computation and Language
|
A Little Pretraining Goes a Long Way: A Case Study on Dependency Parsing
Task for Low-resource Morphologically Rich Languages
|
Neural dependency parsing has achieved remarkable performance for many
domains and languages. The requirement of massive labeled data limits the
effectiveness of these approaches for low-resource languages. In this work, we
focus on dependency parsing for morphologically rich languages (MRLs) in a
low-resource setting. Although morphological information is essential for the
dependency parsing task, morphological disambiguation and the lack of powerful
analyzers make it challenging to obtain this information for MRLs. To address
these challenges, we propose simple auxiliary tasks for pretraining. We
perform
experiments on 10 MRLs in low-resource settings to measure the efficacy of our
proposed pretraining method and observe an average absolute gain of 2 points
(UAS) and 3.6 points (LAS). Code and data available at:
https://github.com/jivnesh/LCM
| 2021 |
Computation and Language
|
Continuous Learning in Neural Machine Translation using Bilingual
Dictionaries
|
While recent advances in deep learning led to significant improvements in
machine translation, neural machine translation is often still not able to
continuously adapt to the environment. For humans, as well as for machine
translation, bilingual dictionaries are a promising knowledge source to
continuously integrate new knowledge. However, their exploitation poses several
challenges: The system needs to be able to perform one-shot learning as well as
model the morphology of source and target language.
In this work, we propose an evaluation framework to assess the ability of
neural machine translation to continuously learn new phrases. We integrate
one-shot learning methods for neural machine translation with different word
representations and show that it is important to address both challenges in
order to successfully make use of bilingual dictionaries. By addressing both,
we are able to improve the accuracy of translating new, rare words and phrases
from 30% to up to 70%. The correct lemma is even generated in more than 90% of
cases.
| 2021 |
Computation and Language
|
Improving Zero-shot Neural Machine Translation on Language-specific
Encoders-Decoders
|
Recently, universal neural machine translation (NMT) with a shared
encoder-decoder has achieved good performance on zero-shot translation. Unlike
universal NMT, jointly trained language-specific encoders-decoders aim to
achieve universal representation across non-shared modules, each of which is
for a language or language family. The non-shared architecture has the
advantage of mitigating internal language competition, especially when the
shared vocabulary and model parameters are restricted in their size. However,
the performance of using multiple encoders and decoders on zero-shot
translation still lags behind universal NMT. In this work, we study zero-shot
translation using language-specific encoders-decoders. We propose to generalize
the non-shared architecture and universal NMT by differentiating the
Transformer layers between language-specific and interlingua. By selectively
sharing parameters and applying cross-attentions, we explore maximizing the
representation universality and realizing the best alignment of
language-agnostic information. We also introduce a denoising auto-encoding
(DAE) objective to jointly train the model with the translation task in a
multi-task manner. Experiments on two public multilingual parallel datasets
show that our proposed model achieves competitive or better results than
universal NMT and a strong pivot baseline. Moreover, we experiment with
incrementally adding a new language to the trained model by updating only the
new model parameters. With this little effort, zero-shot translation between
the newly added language and the existing languages achieves results
comparable to a model trained jointly from scratch on all languages.
| 2021 |
Computation and Language
|
Optimizing Inference Performance of Transformers on CPUs
|
The Transformer architecture revolutionized the field of natural language
processing (NLP). Transformers-based models (e.g., BERT) power many important
Web services, such as search, translation, question-answering, etc. While
enormous research attention is paid to the training of those models,
relatively little effort has been made to improve their inference performance.
This paper addresses this gap by presenting an empirical analysis of the
scalability and performance of Transformer-based model inference on CPUs.
Focusing on the highly popular BERT model, we identify key components of the
Transformer architecture where the bulk of the computation happens, and
propose three optimizations to speed them up. The optimizations are evaluated
using the inference benchmark from HuggingFace and are shown to achieve a
speedup of up to 2.37x. The considered optimizations require no changes to the
implementation of the models and do not affect their accuracy.
| 2021 |
Computation and Language
|
Structural Information Preserving for Graph-to-Text Generation
|
The task of graph-to-text generation aims at producing sentences that
preserve the meaning of input graphs. As a crucial defect, the current
state-of-the-art models may mess up or even drop the core structural
information of input graphs when generating outputs. We propose to tackle this
problem by leveraging richer training signals that can guide our model for
preserving input information. In particular, we introduce two types of
autoencoding losses, each individually focusing on different aspects (a.k.a.
views) of input graphs. The losses are then back-propagated to better calibrate
our model via multi-task training. Experiments on two benchmarks for
graph-to-text generation show the effectiveness of our approach over a
state-of-the-art baseline. Our code is available at
http://github.com/Soistesimmer/AMR-multiview.
| 2021 |
Computation and Language
|
Do as I mean, not as I say: Sequence Loss Training for Spoken Language
Understanding
|
Spoken language understanding (SLU) systems extract transcriptions, as well
as semantics of intent or named entities from speech, and are essential
components of voice activated systems. SLU models, which either directly
extract semantics from audio or are composed of pipelined automatic speech
recognition (ASR) and natural language understanding (NLU) models, are
typically trained via differentiable cross-entropy losses, even when the
relevant performance metrics of interest are word or semantic error rates. In
this work, we propose non-differentiable sequence losses based on SLU metrics
as a proxy for semantic error and use the REINFORCE trick to train ASR and SLU
models with this loss. We show that custom sequence loss training is the
state-of-the-art on open SLU datasets and leads to 6% relative improvement in
both ASR and NLU performance metrics on large proprietary datasets. We also
demonstrate how the semantic sequence loss training paradigm can be used to
update ASR and SLU models without transcripts, using semantic feedback alone.
| 2021 |
Computation and Language
|
They, Them, Theirs: Rewriting with Gender-Neutral English
|
Responsible development of technology involves applications being inclusive
of the diverse set of users they hope to support. An important part of this is
understanding the many ways to refer to a person and being able to fluently
change between the different forms as needed. We perform a case study on the
singular they, a common way to promote gender inclusion in English. We define a
re-writing task, create an evaluation benchmark, and show how a model can be
trained to produce gender-neutral English with <1% word error rate with no
human-labeled data. We discuss the practical applications and ethical
considerations of the task, providing direction for future work into inclusive
natural language systems.
| 2021 |
Computation and Language
|
Exploring Classic and Neural Lexical Translation Models for Information
Retrieval: Interpretability, Effectiveness, and Efficiency Benefits
|
We study the utility of the lexical translation model (IBM Model 1) for
English text retrieval, in particular, its neural variants that are trained
end-to-end. We use the neural Model 1 as an aggregator layer applied to
context-free or contextualized query/document embeddings. This new approach to
designing a neural ranking system has benefits for effectiveness, efficiency,
and
interpretability. Specifically, we show that adding an interpretable neural
Model 1 layer on top of BERT-based contextualized embeddings (1) does not
decrease accuracy and/or efficiency; and (2) may overcome the limitation on the
maximum sequence length of existing BERT models. The context-free neural Model
1 is less effective than a BERT-based ranking model, but it can run efficiently
on a CPU (without expensive index-time precomputation or query-time operations
on large tensors). Using Model 1, we produced the best neural and non-neural
runs on the MS MARCO document ranking leaderboard in late 2020.
| 2021 |
Computation and Language
|
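For reference, the classic (non-neural) Model 1 relevance score underlying the work above can be computed directly from a table of translation probabilities. The sketch below is a simplified illustration with a hand-made toy table; the neural variant described in the abstract replaces the table with scores computed from token embeddings:

```python
import math

def model1_log_score(query_tokens, doc_tokens, trans_prob, self_prob=0.5):
    """Lexical translation (IBM Model 1) ranking score:
        log P(Q | D) = sum_{q in Q} log( (1/|D|) * sum_{d in D} T(q | d) )
    `trans_prob[d][q]` holds translation probabilities T(q | d); a fixed
    `self_prob` is used for exact matches, a common simplification."""
    score = 0.0
    for q in query_tokens:
        p_q = 0.0
        for d in doc_tokens:
            p_q += self_prob if q == d else trans_prob.get(d, {}).get(q, 1e-9)
        score += math.log(max(p_q / len(doc_tokens), 1e-12))
    return score

# Toy translation table standing in for EM-trained (or neural) probabilities.
trans_prob = {"car": {"automobile": 0.3, "vehicle": 0.2},
              "doctor": {"physician": 0.4}}
print(model1_log_score(["automobile", "physician"],
                       ["car", "doctor", "insurance"], trans_prob))
```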
Characterizing English Variation across Social Media Communities with
BERT
|
Much previous work characterizing language variation across Internet social
groups has focused on the types of words used by these groups. We extend this
type of study by employing BERT to characterize variation in the senses of
words as well, analyzing two months of English comments in 474 Reddit
communities. The specificity of different sense clusters to a community,
combined with the specificity of a community's unique word types, is used to
identify cases where a social group's language deviates from the norm. We
validate our metrics using user-created glossaries and draw on sociolinguistic
theories to connect language variation with trends in community behavior. We
find that communities with highly distinctive language are medium-sized, and
their loyal and highly engaged users interact in dense networks.
| 2021 |
Computation and Language
|
Generating Diversified Comments via Reader-Aware Topic Modeling and
Saliency Detection
|
Automatic comment generation is a special and challenging task for verifying
a model's ability in news content comprehension and language generation.
Comments not only convey salient and interesting information in news articles,
but also imply varied reader characteristics, which we treat as essential
clues for diversity. However, most comment generation approaches focus only on
saliency information extraction, while the reader-aware factors implied by
comments are neglected. To address this issue,
we propose a unified reader-aware topic modeling and saliency information
detection framework to enhance the quality of generated comments. For
reader-aware topic modeling, we design a variational generative clustering
algorithm for latent semantic learning and topic mining from reader comments.
For saliency information detection, we introduce Bernoulli distribution
estimation on the news content to select salient information. The obtained
topic
representations as well as the selected saliency information are incorporated
into the decoder to generate diversified and informative comments. Experimental
results on three datasets show that our framework outperforms existing baseline
methods in terms of both automatic metrics and human evaluation. The potential
ethical issues are also discussed in detail.
| 2021 |
Computation and Language
|
Capturing Label Distribution: A Case Study in NLI
|
We study estimating inherent human disagreement (the annotation label
distribution) in the natural language inference task. Post-hoc smoothing of the
predicted label distribution to match the expected label entropy is very
effective. Such a simple manipulation can reduce KL divergence by almost half,
yet it does not improve majority-label prediction accuracy or learn the label
distributions. To this end, we introduce a small number of examples with
multiple references into training. We depart from the standard practice of
collecting a single reference per training example, and find that
collecting multiple references can achieve better accuracy under the fixed
annotation budget. Lastly, we provide rich analyses comparing these two methods
for improving label distribution estimation.
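As a concrete illustration of the post-hoc smoothing step above, here is a minimal sketch that rescales an over-confident prediction with a temperature chosen by bisection so that its entropy matches a target entropy; the bisection procedure and function names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def smooth_to_entropy(probs, target_entropy, iters=50):
    # Bisection on a temperature: a higher temperature flattens the
    # distribution and therefore raises its entropy.
    logp = np.log(np.clip(probs, 1e-12, 1.0))
    lo, hi = 1e-3, 1e3
    for _ in range(iters):
        t = (lo + hi) / 2.0
        q = np.exp(logp / t)
        q /= q.sum()
        if entropy(q) < target_entropy:
            lo = t   # still too peaked: raise the temperature
        else:
            hi = t
    return q

pred = np.array([0.90, 0.07, 0.03])   # over-confident model prediction
print(smooth_to_entropy(pred, target_entropy=0.8))
```

The target entropy would come from the human annotation distributions (e.g. their mean entropy), which is the quantity the abstract calls the expected label entropy.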
| 2,021 |
Computation and Language
|
The first large scale collection of diverse Hausa language datasets
|
The Hausa language belongs to the Afroasiatic phylum and has more
first-language speakers than any other sub-Saharan African language. With the
majority of its speakers residing in the northern and southern areas of Nigeria
and the Republic of Niger, respectively, it is estimated that over 100 million
people speak the language, making it one of the most widely spoken Chadic
languages. While Hausa is considered a well-studied and documented language among
the sub-Saharan African languages, it is viewed as a low resource language from
the perspective of natural language processing (NLP) due to limited resources
to utilise in NLP-related tasks. This is common to most languages in Africa;
thus, it is crucial to enrich such languages with resources that will support
and speed the pace of conducting various downstream tasks to meet the demand of
the modern society. While there exist useful datasets, notably from news sites
and religious texts, more diversity is needed in the corpus.
We provide an expansive collection of curated datasets consisting of both
formal and informal forms of the language from reputable websites and online
social media networks, respectively. The collection is large and more diverse
than the existing corpora by providing the first and largest set of Hausa
social media data posts to capture the peculiarities in the language. The
collection also consists of a parallel dataset, which can be used for tasks
such as machine translation with applications in areas such as the detection of
spurious or inciting online content. We describe the curation process -- from
the collection, preprocessing and how to obtain the data -- and proffer some
research problems that could be addressed using the data.
| 2,021 |
Computation and Language
|
Interactive Learning from Activity Description
|
We present a novel interactive learning protocol that enables training
request-fulfilling agents by verbally describing their activities. Unlike
imitation learning (IL), our protocol allows the teaching agent to provide
feedback in a language that is most appropriate for them. Compared with reward
in reinforcement learning (RL), the description feedback is richer and allows
for improved sample complexity. We develop a probabilistic framework and an
algorithm that practically implements our protocol. Empirical results in two
challenging request-fulfilling problems demonstrate the strengths of our
approach: compared with RL baselines, it is more sample-efficient; compared
with IL baselines, it achieves competitive success rates without requiring the
teaching agent to be able to demonstrate the desired behavior using the
learning agent's actions. Apart from empirical evaluation, we also provide
theoretical guarantees for our algorithm under certain assumptions about the
teacher and the environment.
| 2,021 |
Computation and Language
|
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them
|
Open-domain Question Answering models which directly leverage question-answer
(QA) pairs, such as closed-book QA (CBQA) models and QA-pair retrievers, show
promise in terms of speed and memory compared to conventional models which
retrieve and read from text corpora. QA-pair retrievers also offer
interpretable answers, a high degree of control, and are trivial to update at
test time with new knowledge. However, these models lack the accuracy of
retrieve-and-read systems, as substantially less knowledge is covered by the
available QA-pairs relative to text corpora like Wikipedia. To facilitate
improved QA-pair models, we introduce Probably Asked Questions (PAQ), a very
large resource of 65M automatically-generated QA-pairs. We introduce a new
QA-pair retriever, RePAQ, to complement PAQ. We find that PAQ preempts and
caches test questions, enabling RePAQ to match the accuracy of recent
retrieve-and-read models, whilst being significantly faster. Using PAQ, we
train CBQA models which outperform comparable baselines by 5%, but trail RePAQ
by over 15%, indicating the effectiveness of explicit retrieval. RePAQ can be
configured for size (under 500MB) or speed (over 1K questions per second)
whilst retaining high accuracy. Lastly, we demonstrate RePAQ's strength at
selective QA, abstaining from answering when it is likely to be incorrect. This
enables RePAQ to "back off" to a more expensive state-of-the-art model,
leading to a combined system which is both more accurate and 2x faster than the
state-of-the-art model alone.
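The selective-QA back-off described above is, in essence, a confidence-thresholded cascade. The sketch below is a hypothetical illustration; the model callables, confidence scale, and threshold are assumptions rather than RePAQ's actual interface.

```python
def answer_with_backoff(question, fast_model, slow_model, threshold=0.6):
    """Use the cheap QA-pair retriever when it is confident enough,
    otherwise fall back to the expensive retrieve-and-read model."""
    answer, confidence = fast_model(question)
    if confidence >= threshold:
        return answer, "fast"
    return slow_model(question), "slow"

# Toy stand-ins for the two systems (purely hypothetical).
fast = lambda q: ("Paris", 0.92) if "capital of France" in q else ("unknown", 0.10)
slow = lambda q: "answer from the expensive reader model"
print(answer_with_backoff("What is the capital of France?", fast, slow))
print(answer_with_backoff("Who proposed PAQ?", fast, slow))
```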
| 2,021 |
Computation and Language
|
Query-by-Example Keyword Spotting system using Multi-head Attention and
Softtriple Loss
|
This paper proposes a neural network architecture for tackling the
query-by-example user-defined keyword spotting task. A multi-head attention
module is added on top of a multi-layered GRU for effective feature extraction,
and a normalized multi-head attention module is proposed for feature
aggregation. We also adopt the softtriple loss - a combination of triplet loss
and softmax loss - and showcase its effectiveness. We demonstrate the
performance of our model on internal datasets with different languages and the
public Hey-Snips dataset. We compare the performance of our model to a baseline
system and conduct an ablation study to show the benefit of each component in
our architecture. The proposed work shows solid performance while preserving
simplicity.
| 2,021 |
Computation and Language
|
indicnlp@kgp at DravidianLangTech-EACL2021: Offensive Language
Identification in Dravidian Languages
|
The paper presents the submission of the team indicnlp@kgp to the EACL 2021
shared task "Offensive Language Identification in Dravidian Languages." The
task aimed to classify different offensive content types in 3 code-mixed
Dravidian language datasets. The work leverages existing state-of-the-art
approaches in text classification by incorporating additional data and transfer
learning on pre-trained models. Our final submission is an ensemble of an
AWD-LSTM based model along with 2 different transformer model architectures
based on BERT and RoBERTa. We achieved weighted-average F1 scores of 0.97,
0.77, and 0.72 in the Malayalam-English, Tamil-English, and Kannada-English
datasets ranking 1st, 2nd, and 3rd on the respective tasks.
| 2,021 |
Computation and Language
|
Error-driven Pruning of Language Models for Virtual Assistants
|
Language models (LMs) for virtual assistants (VAs) are typically trained on
large amounts of data, resulting in prohibitively large models which require
excessive memory and/or cannot be used to serve user requests in real-time.
Entropy pruning results in smaller models but with significant degradation of
effectiveness in the tail of the user request distribution. We customize
entropy pruning by allowing for a keep list of infrequent n-grams that require
a more relaxed pruning threshold, and propose three methods to construct the
keep list. Each method has its own advantages and disadvantages with respect to
LM size, ASR accuracy and cost of constructing the keep list. Our best LM gives
8% average Word Error Rate (WER) reduction on a targeted test set, but is 3
times larger than the baseline. We also propose discriminative methods to
reduce the size of the LM while retaining the majority of the WER gains
achieved by the largest LM.
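A rough sketch of the keep-list idea: standard entropy pruning drops n-grams whose removal barely changes model entropy, while keep-list entries are judged against a more permissive threshold. The costs, thresholds, and names below are illustrative assumptions, not the paper's implementation.

```python
def prune_ngrams(costs, keep_list, base_threshold, relaxed_threshold):
    """costs: n-gram -> (precomputed) increase in model entropy if it is pruned.
    An n-gram survives when pruning it is too costly; keep-list entries use the
    smaller, more permissive threshold."""
    kept = {}
    for ngram, cost in costs.items():
        threshold = relaxed_threshold if ngram in keep_list else base_threshold
        if cost >= threshold:
            kept[ngram] = cost
    return kept

costs = {("play", "some", "jazz"): 2e-9, ("call", "mom"): 5e-7, ("the", "the"): 1e-10}
keep = {("play", "some", "jazz")}   # infrequent tail request we must not lose
print(prune_ngrams(costs, keep, base_threshold=1e-7, relaxed_threshold=1e-9))
```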
| 2,021 |
Computation and Language
|
MATCH: Metadata-Aware Text Classification in A Large Hierarchy
|
Multi-label text classification refers to the problem of assigning each given
document its most relevant labels from the label set. Commonly, the metadata of
the given documents and the hierarchy of the labels are available in real-world
applications. However, most existing studies focus on only modeling the text
information, with a few attempts to utilize either metadata or hierarchy
signals, but not both of them. In this paper, we bridge the gap by formalizing
the problem of metadata-aware text classification in a large label hierarchy
(e.g., with tens of thousands of labels). To address this problem, we present
the MATCH solution -- an end-to-end framework that leverages both metadata and
hierarchy information. To incorporate metadata, we pre-train the embeddings of
text and metadata in the same space and also leverage the fully-connected
attentions to capture the interrelations between them. To leverage the label
hierarchy, we propose different ways to regularize the parameters and output
probability of each child label by its parents. Extensive experiments on two
massive text datasets with large-scale label hierarchies demonstrate the
effectiveness of MATCH over state-of-the-art deep learning baselines.
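One plausible reading of the output regularization above is a hinge penalty that discourages a child label from being predicted more strongly than its parent; the function below is a sketch under that assumption, with hypothetical names and an added margin parameter.

```python
import torch

def hierarchy_regularizer(probs, parent_of, margin=0.0):
    """probs: (batch, num_labels) multi-label probabilities.
    parent_of: child label index -> parent label index.
    Penalizes cases where a child is scored higher than its parent."""
    penalty = probs.new_zeros(())
    for child, parent in parent_of.items():
        gap = probs[:, child] - probs[:, parent] + margin
        penalty = penalty + torch.clamp(gap, min=0).mean()
    return penalty

probs = torch.sigmoid(torch.randn(4, 6))                  # 4 documents, 6 labels
reg = hierarchy_regularizer(probs, parent_of={3: 1, 4: 1, 5: 2})
print(float(reg))
```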
| 2,023 |
Computation and Language
|
Prompt Programming for Large Language Models: Beyond the Few-Shot
Paradigm
|
Prevailing methods for mapping large generative language models to supervised
tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as
a case study, we show that 0-shot prompts can significantly outperform few-shot
prompts. We suggest that the function of few-shot examples in these cases is
better described as locating an already learned task rather than meta-learning.
This analysis motivates rethinking the role of prompts in controlling and
evaluating powerful language models. In this work, we discuss methods of prompt
programming, emphasizing the usefulness of considering prompts through the lens
of natural language. We explore techniques for exploiting the capacity of
narratives and cultural anchors to encode nuanced intentions and techniques for
encouraging deconstruction of a problem into components before producing a
verdict. Informed by this more encompassing theory of prompt programming, we
also introduce the idea of a metaprompt that seeds the model to generate its
own natural language prompts for a range of tasks. Finally, we discuss how
these more general methods of interacting with language models can be
incorporated into existing and future benchmarks and practical applications.
| 2,021 |
Computation and Language
|
Leveraging Acoustic and Linguistic Embeddings from Pretrained speech and
language Models for Intent Classification
|
Intent classification is a task in spoken language understanding. An intent
classification system is usually implemented as a pipeline process, with a
speech recognition module followed by text processing that classifies the
intents. There are also studies of end-to-end systems that take acoustic
features as input and classify the intents directly. Such systems do not take
advantage of relevant linguistic information and suffer from limited training
data. In this work, we propose a novel intent classification framework that
employs acoustic features extracted from a pretrained speech recognition system
and linguistic features learned from a pretrained language model. We use a
knowledge distillation technique to map the acoustic embeddings towards the
linguistic embeddings. We fuse the acoustic and linguistic embeddings through a
cross-attention approach to classify intents. With the
proposed method, we achieve 90.86% and 99.07% accuracy on ATIS and Fluent
speech corpus, respectively.
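The distillation step above (mapping acoustic embeddings towards linguistic embeddings) can be sketched as a learned projection trained with a regression loss against frozen language-model embeddings; the dimensions, MSE objective, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

acoustic = torch.randn(8, 512)     # pooled embeddings from a pretrained ASR encoder
linguistic = torch.randn(8, 768)   # pooled embeddings from a pretrained language model

proj = nn.Linear(512, 768)         # maps the acoustic space into the linguistic space
loss = F.mse_loss(proj(acoustic), linguistic.detach())   # the teacher side is frozen
loss.backward()                    # updates only the projection parameters
```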
| 2,021 |
Computation and Language
|
MAPGN: MAsked Pointer-Generator Network for sequence-to-sequence
pre-training
|
This paper presents a self-supervised learning method for pointer-generator
networks to improve spoken-text normalization. Spoken-text normalization that
converts spoken-style text into style normalized text is becoming an important
technology for improving subsequent processing such as machine translation and
summarization. The most successful spoken-text normalization method to date is
sequence-to-sequence (seq2seq) mapping using pointer-generator networks that
possess a copy mechanism from an input sequence. However, these models require
a large amount of paired data of spoken-style text and style normalized text,
and it is difficult to prepare such a volume of data. In order to construct a
spoken-text normalization model from the limited paired data, we focus on
self-supervised learning which can utilize unpaired text data to improve
seq2seq models. Unfortunately, conventional self-supervised learning methods do
not assume that pointer-generator networks are utilized. Therefore, we propose
a novel self-supervised learning method, MAsked Pointer-Generator Network
(MAPGN). The proposed method can effectively pre-train the pointer-generator
network by learning to fill masked tokens using the copy mechanism. Our
experiments demonstrate that MAPGN is more effective for pointer-generator
networks than the conventional self-supervised learning methods in two
spoken-text normalization tasks.
| 2,021 |
Computation and Language
|
Beyond the English Web: Zero-Shot Cross-Lingual and Lightweight
Monolingual Classification of Registers
|
We explore cross-lingual transfer of register classification for web
documents. Registers, that is, text varieties such as blogs or news, are one of
the primary predictors of linguistic variation and thus affect the automatic
processing of language. We introduce two new register annotated corpora,
FreCORE and SweCORE, for French and Swedish. We demonstrate that deep
pre-trained language models perform strongly in these languages and outperform
previous state-of-the-art in English and Finnish. Specifically, we show 1) that
zero-shot cross-lingual transfer from the large English CORE corpus can match
or surpass previously published monolingual models, and 2) that lightweight
monolingual classification requiring very little training data can reach or
surpass our zero-shot performance. We further analyse classification results
finding that certain registers continue to pose challenges in particular for
cross-lingual transfer.
| 2,021 |
Computation and Language
|
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
|
Recent advances in self-supervised learning have dramatically improved the
state of the art on a wide variety of tasks. However, research in language
model pre-training has mostly focused on natural languages, and it is unclear
whether models like BERT and its variants provide the best pre-training when
applied to other modalities, such as source code. In this paper, we introduce a
new pre-training objective, DOBF, that leverages the structural aspect of
programming languages and pre-trains a model to recover the original version of
obfuscated source code. We show that models pre-trained with DOBF significantly
outperform existing approaches on multiple downstream tasks, providing relative
improvements of up to 13% in unsupervised code translation, and 24% in natural
language code search. Incidentally, we found that our pre-trained model is able
to de-obfuscate fully obfuscated source files, and to suggest descriptive
variable names.
| 2,021 |
Computation and Language
|
Fast End-to-End Speech Recognition via Non-Autoregressive Models and
Cross-Modal Knowledge Transferring from BERT
|
Attention-based encoder-decoder (AED) models have achieved promising
performance in speech recognition. However, because the decoder predicts text
tokens (such as characters or words) in an autoregressive manner, it is
difficult for an AED model to predict all tokens in parallel. This makes the
inference speed relatively slow. We believe that because the encoder already
captures the whole speech utterance, which has the token-level relationship
implicitly, we can predict a token without explicitly autoregressive language
modeling. When the prediction of a token does not rely on other tokens, the
parallel prediction of all tokens in the sequence is realizable. Based on this
idea, we propose a non-autoregressive speech recognition model called LASO
(Listen Attentively, and Spell Once). The model consists of an encoder, a
decoder, and a position dependent summarizer (PDS). The three modules are based
on basic attention blocks. The encoder extracts high-level representations from
the speech. The PDS uses positional encodings corresponding to tokens to
convert the acoustic representations into token-level representations. The
decoder further captures token-level relationships with the self-attention
mechanism. Finally, the probability distribution over the vocabulary is computed
for each token position. Therefore, speech recognition is re-formulated as a
position-wise classification problem. Further, we propose a cross-modal
transfer learning method to refine semantics from a large-scale pre-trained
language model BERT for improving the performance.
| 2,021 |
Computation and Language
|
Improved Customer Transaction Classification using Semi-Supervised
Knowledge Distillation
|
In pickup and delivery services, transaction classification based on customer
provided free text is a challenging problem. It involves the association of a
wide variety of customer inputs to a fixed set of categories while adapting to
the various customer writing styles. This categorization is important for the
business: it helps understand market needs and trends, and also assists in
building a personalized experience for different segments of the customers.
Hence, it is vital to capture these category information trends at scale, with
high precision and recall. In this paper, we focus on a specific use-case where
a single category drives each transaction. We propose a cost-effective
transaction classification approach based on semi-supervision and knowledge
distillation frameworks. The approach identifies the category of a transaction
using free text input given by the customer. We use weak labelling and notice
that the performance gains are similar to that of using human-annotated
samples. On a large internal dataset and on 20Newsgroup dataset, we see that
RoBERTa performs the best for the categorization tasks. Further, using an
ALBERT model (which has 33x fewer parameters than RoBERTa), with RoBERTa as the
teacher, we see performance similar to that of RoBERTa
and better performance over unadapted ALBERT. This framework, with ALBERT as a
student and RoBERTa as teacher, is further referred to as R-ALBERT in this
paper. The model is in production and is used by the business to understand
changing trends and take appropriate decisions.
| 2,021 |
Computation and Language
|
Personalization Strategies for End-to-End Speech Recognition Systems
|
The recognition of personalized content, such as contact names, remains a
challenging problem for end-to-end speech recognition systems. In this work, we
demonstrate how first and second-pass rescoring strategies can be leveraged
together to improve the recognition of such words. Following previous work, we
use a shallow fusion approach to bias towards recognition of personalized
content in the first-pass decoding. We show that such an approach can improve
personalized content recognition by up to 16% with minimal degradation on the
general use case. We describe a fast and scalable algorithm that enables our
biasing models to remain at the word-level, while applying the biasing at the
subword level. This has the advantage of not requiring the biasing models to be
dependent on any subword symbol table. We also describe a novel second-pass
de-biasing approach: used in conjunction with a first-pass shallow fusion that
optimizes on oracle WER, we can achieve an additional 14% improvement on
personalized content recognition, and even improve accuracy for the general use
case by up to 2.5%.
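In spirit, the first-pass biasing above resembles shallow fusion: during beam search a hypothesis receives an extra score when it extends a personalized phrase such as a contact name. The boost weight and prefix matching below are simplified assumptions and do not reproduce the paper's word-to-subword algorithm.

```python
import math

def fused_score(asr_logprob, hypothesis, bias_phrases, weight=2.0):
    """Add a fixed bonus when the partial hypothesis is a prefix of any
    personalized phrase on the biasing list."""
    boosted = any(phrase.startswith(hypothesis) for phrase in bias_phrases)
    return asr_logprob + (weight if boosted else 0.0)

contacts = ["call thaddeus", "call aunt mabel"]
print(fused_score(math.log(0.02), "call thaddeus", contacts))  # boosted
print(fused_score(math.log(0.02), "call taxi", contacts))      # not boosted
```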
| 2,021 |
Computation and Language
|
How COVID-19 Is Changing Our Language : Detecting Semantic Shift in
Twitter Word Embeddings
|
Words are malleable objects, influenced by events that are reflected in
written texts. Situated in the global outbreak of COVID-19, our research aims
at detecting semantic shifts in social media language triggered by the health
crisis. With COVID-19 related big data extracted from Twitter, we train
separate word embedding models for different time periods after the outbreak.
We employ an alignment-based approach to compare these embeddings with a
general-purpose Twitter embedding unrelated to COVID-19. We also compare our
trained embeddings with one another to observe diachronic evolution. Carrying out
case studies on a set of words chosen by topic detection, we verify that our
alignment approach is valid. Finally, we quantify the size of global semantic
shift by a stability measure based on back-and-forth rotational alignment.
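Alignment-based comparisons of this kind are commonly done with orthogonal Procrustes: rotate one embedding space onto the other over the shared vocabulary and measure per-word cosine distance as the amount of shift. The sketch below assumes row-aligned matrices for a shared vocabulary and is a generic illustration rather than the paper's exact pipeline.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal rotation R minimising ||XR - Y||_F over a shared vocabulary."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def semantic_shift(X, Y):
    Xa = X @ procrustes_align(X, Y)
    cos = np.sum(Xa * Y, axis=1) / (np.linalg.norm(Xa, axis=1) * np.linalg.norm(Y, axis=1))
    return 1.0 - cos          # larger value = stronger shift for that word

# Toy example: 5 shared words with 50-dimensional embeddings from two periods.
X, Y = np.random.randn(5, 50), np.random.randn(5, 50)
print(semantic_shift(X, Y))
```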
| 2,021 |
Computation and Language
|
Meta Back-translation
|
Back-translation is an effective strategy to improve the performance of
Neural Machine Translation~(NMT) by generating pseudo-parallel data. However,
several recent works have found that better translation quality of the
pseudo-parallel data does not necessarily lead to better final translation
models, while lower-quality but more diverse data often yields stronger
results. In this paper, we propose a novel method to generate pseudo-parallel
data from a pre-trained back-translation model. Our method is a meta-learning
algorithm which adapts a pre-trained back-translation model so that the
pseudo-parallel data it generates would train a forward-translation model to do
well on a validation set. In our evaluations in both the standard datasets WMT
En-De'14 and WMT En-Fr'14, as well as a multilingual translation setting, our
method leads to significant improvements over strong baselines. Our code will
be made available.
| 2,021 |
Computation and Language
|
Have Attention Heads in BERT Learned Constituency Grammar?
|
With the success of pre-trained language models in recent years, more and
more researchers focus on opening the "black box" of these models. Following
this interest, we carry out a qualitative and quantitative analysis of
constituency grammar in attention heads of BERT and RoBERTa. We employ the
syntactic distance method to extract implicit constituency grammar from the
attention weights of each head. Our results show that there exist heads that
can induce some grammar types much better than baselines, suggesting that some
heads act as a proxy for constituency grammar. We also analyze how attention
heads' constituency grammar inducing (CGI) ability changes after fine-tuning
with two kinds of tasks, including sentence meaning similarity (SMS) tasks and
natural language inference (NLI) tasks. Our results suggest that SMS tasks
decrease the average CGI ability of upper layers, while NLI tasks increase it.
Lastly, we investigate the connections between CGI ability and natural language
understanding ability on QQP and MNLI tasks.
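A rough sketch of turning one head's attention weights into a constituency bracketing: compute a "syntactic distance" between adjacent tokens from their attention distributions, then split the sentence greedily at the largest distance. The Euclidean distance and the greedy top-down tree builder are assumptions for illustration.

```python
import numpy as np

def syntactic_distances(attn):
    """Distance between the attention distributions of adjacent tokens; a larger
    value suggests a constituent boundary."""
    return [np.linalg.norm(attn[i] - attn[i + 1]) for i in range(len(attn) - 1)]

def to_tree(words, dists):
    """Greedy top-down split at the largest distance, yielding a binary bracketing."""
    if len(words) == 1:
        return words[0]
    k = int(np.argmax(dists))
    return [to_tree(words[:k + 1], dists[:k]), to_tree(words[k + 1:], dists[k + 1:])]

attn = np.random.dirichlet(np.ones(5), size=5)   # one head's 5x5 attention matrix
words = "the cat sat on mats".split()
print(to_tree(words, syntactic_distances(attn)))
```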
| 2,021 |
Computation and Language
|
Hierarchical Transformer-based Large-Context End-to-end ASR with
Large-Context Knowledge Distillation
|
We present a novel large-context end-to-end automatic speech recognition
(E2E-ASR) model and its effective training method based on knowledge
distillation. Common E2E-ASR models have mainly focused on utterance-level
processing in which each utterance is independently transcribed. On the other
hand, large-context E2E-ASR models, which take into account long-range
sequential contexts beyond utterance boundaries, well handle a sequence of
utterances such as discourses and conversations. However, the transformer
architecture, which has recently achieved state-of-the-art ASR performance
among utterance-level ASR systems, has not yet been introduced into the
large-context ASR systems. We can expect that the transformer architecture can
be leveraged for effectively capturing not only input speech contexts but also
long-range sequential contexts beyond utterance boundaries. Therefore, this
paper proposes a hierarchical transformer-based large-context E2E-ASR model
that combines the transformer architecture with hierarchical encoder-decoder
based large-context modeling. In addition, in order to enable the proposed
model to use long-range sequential contexts, we also propose a large-context
knowledge distillation that distills the knowledge from a pre-trained
large-context language model in the training phase. We evaluate the
effectiveness of the proposed model and proposed training method on Japanese
discourse ASR tasks.
| 2,021 |
Computation and Language
|
FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the
Dictionary
|
Current models for Word Sense Disambiguation (WSD) struggle to disambiguate
rare senses, despite reaching human performance on global WSD metrics. This
stems from a lack of data for both modeling and evaluating rare senses in
existing WSD datasets. In this paper, we introduce FEWS (Few-shot Examples of
Word Senses), a new low-shot WSD dataset automatically extracted from example
sentences in Wiktionary. FEWS has high sense coverage across different natural
language domains and provides: (1) a large training set that covers many more
senses than previous datasets and (2) a comprehensive evaluation set containing
few- and zero-shot examples of a wide variety of senses. We establish baselines
on FEWS with knowledge-based and neural WSD approaches and present transfer
learning experiments demonstrating that models additionally trained with FEWS
better capture rare senses in existing WSD datasets. Finally, we find humans
outperform the best baseline models on FEWS, indicating that FEWS will support
significant future work on low-shot WSD.
| 2,021 |
Computation and Language
|
Exploring Transformers in Natural Language Generation: GPT, BERT, and
XLNet
|
Recent years have seen a proliferation of attention mechanisms and the rise
of Transformers in Natural Language Generation (NLG). Previously,
state-of-the-art NLG architectures such as RNNs and LSTMs ran into vanishing
gradient problems; as sentences grew larger, distance between positions
remained linear, and sequential computation hindered parallelization since
sentences were processed word by word. Transformers usher in a new era. In this
paper, we explore three major Transformer-based models, namely GPT, BERT, and
XLNet, that carry significant implications for the field. NLG is a burgeoning
area that is now bolstered with rapid developments in attention mechanisms.
From poetry generation to summarization, text generation derives benefit as
Transformer-based language models achieve groundbreaking results.
| 2,021 |
Computation and Language
|
Large-Context Conversational Representation Learning: Self-Supervised
Learning for Conversational Documents
|
This paper presents a novel self-supervised learning method for handling
conversational documents consisting of transcribed text of human-to-human
conversations. One of the key technologies for understanding conversational
documents is utterance-level sequential labeling, where labels are estimated
from the documents in an utterance-by-utterance manner. The main issue with
utterance-level sequential labeling is the difficulty of collecting labeled
conversational documents, as manual annotations are very costly. To deal with
this issue, we propose large-context conversational representation learning
(LC-CRL), a self-supervised learning method specialized for conversational
documents. A self-supervised learning task in LC-CRL involves the estimation of
an utterance using all the surrounding utterances based on large-context
language modeling. In this way, LC-CRL enables us to effectively utilize
unlabeled conversational documents and thereby enhances the utterance-level
sequential labeling. The results of experiments on scene segmentation tasks
using contact center conversational datasets demonstrate the effectiveness of
the proposed method.
| 2,021 |
Computation and Language
|
End-to-End Automatic Speech Recognition with Deep Mutual Learning
|
This paper is the first study to apply deep mutual learning (DML) to
end-to-end ASR models. In DML, multiple models are trained simultaneously and
collaboratively by mimicking each other throughout the training process, which
helps to attain the global optimum and prevent models from making
over-confident predictions. While previous studies applied DML to simple
multi-class classification problems, there are no studies that have used it on
more complex sequence-to-sequence mapping problems. For this reason, this paper
presents a method to apply DML to state-of-the-art Transformer-based end-to-end
ASR models. In particular, we propose to combine DML with recent representative
training techniques, i.e., label smoothing, scheduled sampling, and
SpecAugment, each of which is essential for powerful end-to-end ASR models. We
expect that these training techniques work well with DML because DML has
complementary characteristics. We experimented with two setups for Japanese ASR
tasks: large-scale modeling and compact modeling. We demonstrate that DML
improves the ASR performance of both modeling setups compared with conventional
learning methods including knowledge distillation. We also show that combining
DML with the existing training techniques effectively improves ASR performance.
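The mutual mimicry in DML can be sketched as each model adding a KL term towards the other model's (detached) output distribution on top of its usual cross-entropy loss; the equal weighting and the detaching choice below are illustrative assumptions rather than the exact recipe.

```python
import torch
import torch.nn.functional as F

def dml_losses(logits_a, logits_b, labels):
    """Cross-entropy plus a KL term pulling each model towards the other."""
    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)
    kl_a = F.kl_div(F.log_softmax(logits_a, dim=-1),
                    F.softmax(logits_b, dim=-1).detach(), reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(logits_b, dim=-1),
                    F.softmax(logits_a, dim=-1).detach(), reduction="batchmean")
    return ce_a + kl_a, ce_b + kl_b

logits_a = torch.randn(4, 10, requires_grad=True)   # stand-ins for two model outputs
logits_b = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 10, (4,))
loss_a, loss_b = dml_losses(logits_a, logits_b, labels)
```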
| 2,021 |
Computation and Language
|
Non-Autoregressive Text Generation with Pre-trained Language Models
|
Non-autoregressive generation (NAG) has recently attracted great attention
due to its fast inference speed. However, the generation quality of existing
NAG models still lags behind their autoregressive counterparts. In this work,
we show that BERT can be employed as the backbone of a NAG model to greatly
improve performance. Additionally, we devise mechanisms to alleviate the two
common problems of vanilla NAG models: the inflexibility of prefixed output
length and the conditional independence of individual token predictions.
Lastly, to further increase the speed advantage of the proposed model, we
propose a new decoding strategy, ratio-first, for applications where the output
lengths can be approximately estimated beforehand. For a comprehensive
evaluation, we test the proposed model on three text generation tasks,
including text summarization, sentence compression and machine translation.
Experimental results show that our model significantly outperforms existing
non-autoregressive baselines and achieves competitive performance with many
strong autoregressive models. In addition, we also conduct extensive analysis
experiments to reveal the effect of each proposed component.
| 2,021 |
Computation and Language
|
NoiseQA: Challenge Set Evaluation for User-Centric Question Answering
|
When Question-Answering (QA) systems are deployed in the real world, users
query them through a variety of interfaces, such as speaking to voice
assistants, typing questions into a search engine, or even translating
questions to languages supported by the QA system. While there has been
significant community attention devoted to identifying correct answers in
passages assuming a perfectly formed question, we show that components in the
pipeline that precede an answering engine can introduce varied and considerable
sources of error, and performance can degrade substantially based on these
upstream noise sources even for powerful pre-trained QA models. We conclude
that there is substantial room for progress before QA systems can be
effectively deployed, highlight the need for QA evaluation to expand to
consider real-world use, and hope that our findings will spur greater community
interest in the issues that arise when our systems actually need to be of
utility to humans.
| 2,021 |
Computation and Language
|
Revisiting Language Encoding in Learning Multilingual Representations
|
Transformer has demonstrated its great power to learn contextual word
representations for multiple languages in a single model. To process
multilingual sentences in the model, a learnable vector is usually assigned to
each language, which is called "language embedding". The language embedding can
be either added to the word embedding or attached at the beginning of the
sentence. It serves as a language-specific signal for the Transformer to
capture contextual representations across languages. In this paper, we revisit
the use of language embedding and identify several problems in the existing
formulations. By investigating the interaction between language embedding and
word embedding in the self-attention module, we find that the current methods
cannot reflect the language-specific word correlation well. Given these
findings, we propose a new approach called Cross-lingual Language Projection
(XLP) to replace language embedding. For a sentence, XLP projects the word
embeddings into language-specific semantic space, and then the projected
embeddings will be fed into the Transformer model to process with their
language-specific meanings. In such a way, XLP achieves the purpose of
appropriately encoding "language" in a multilingual Transformer model.
Experimental results show that XLP can consistently and significantly boost model
performance on extensive multilingual benchmark datasets. Codes and models will
be released at https://github.com/lsj2408/XLP.
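The core idea, replacing an additive language embedding with a language-specific projection of the word embeddings, can be sketched as below; initializing each projection to the identity and using one plain linear map per language are assumptions for illustration, and the released code should be consulted for the actual formulation.

```python
import torch
import torch.nn as nn

class LanguageProjection(nn.Module):
    """Project word embeddings into a language-specific semantic space."""
    def __init__(self, num_languages, dim):
        super().__init__()
        # One projection matrix per language, initialized to the identity.
        self.proj = nn.Parameter(torch.stack([torch.eye(dim) for _ in range(num_languages)]))

    def forward(self, word_emb, lang_id):
        # word_emb: (B, T, D); lang_id: (B,) language index of each sentence.
        return torch.einsum("btd,bde->bte", word_emb, self.proj[lang_id])

xlp = LanguageProjection(num_languages=15, dim=64)
out = xlp(torch.randn(2, 5, 64), torch.tensor([0, 3]))   # fed to the Transformer next
```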
| 2,021 |
Computation and Language
|
Boosting Low-Resource Biomedical QA via Entity-Aware Masking Strategies
|
Biomedical question-answering (QA) has gained increased attention for its
capability to provide users with high-quality information from a vast
scientific literature. Although an increasing number of biomedical QA datasets
have recently been made available, those resources are still rather limited and
expensive to produce. Transfer learning via pre-trained language models (LMs)
has been shown to be a promising approach to leverage existing general-purpose
knowledge. However, finetuning these large models can be costly and
time-consuming, often yielding limited benefits when adapting to specific themes
of specialised domains, such as the COVID-19 literature. To further bootstrap
their domain adaptation, we propose a simple yet unexplored approach, which we
call biomedical entity-aware masking (BEM). We encourage masked language models
to learn entity-centric knowledge based on the pivotal entities characterizing
the domain at hand, and employ those entities to drive the LM fine-tuning. The
resulting strategy is a downstream process applicable to a wide variety of
masked LMs, not requiring additional memory or components in the neural
architectures. Experimental results show performance on par with
state-of-the-art models on several biomedical QA datasets.
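The entity-aware masking strategy can be pictured as preferentially masking tokens that fall inside pivotal domain entities rather than sampling positions uniformly; the span format, masking probability, and helper names below are hypothetical.

```python
import random

def entity_aware_mask(tokens, entity_spans, mask_token="[MASK]", mask_prob=0.5):
    """Mask tokens inside the given entity spans; MLM targets exist only at
    masked positions (None elsewhere)."""
    entity_positions = {i for start, end in entity_spans for i in range(start, end)}
    masked, labels = [], []
    for i, tok in enumerate(tokens):
        if i in entity_positions and random.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

tokens = "the spike protein binds the ace2 receptor".split()
print(entity_aware_mask(tokens, entity_spans=[(1, 3), (5, 7)]))
```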
| 2,021 |
Computation and Language
|
Searching for Search Errors in Neural Morphological Inflection
|
Neural sequence-to-sequence models are currently the predominant choice for
language generation tasks. Yet, on word-level tasks, exact inference of these
models reveals the empty string is often the global optimum. Prior works have
speculated this phenomenon is a result of the inadequacy of neural models for
language generation. However, in the case of morphological inflection, we find
that the empty string is almost never the most probable solution under the
model. Further, greedy search often finds the global optimum. These
observations suggest that the poor calibration of many neural models may stem
from characteristics of a specific subset of tasks rather than general
ill-suitedness of such models for language generation.
| 2,021 |
Computation and Language
|
COCO-LM: Correcting and Contrasting Text Sequences for Language Model
Pretraining
|
We present a self-supervised learning framework, COCO-LM, that pretrains
Language Models by COrrecting and COntrasting corrupted text sequences.
Following ELECTRA-style pretraining, COCO-LM employs an auxiliary language
model to corrupt text sequences, upon which it constructs two new tasks for
pretraining the main model. The first token-level task, Corrective Language
Modeling, is to detect and correct tokens replaced by the auxiliary model, in
order to better capture token-level semantics. The second sequence-level task,
Sequence Contrastive Learning, is to align text sequences originating from the
same source input while ensuring uniformity in the representation space.
Experiments on GLUE and SQuAD demonstrate that COCO-LM not only outperforms
recent state-of-the-art pretrained models in accuracy, but also improves
pretraining efficiency. It achieves the MNLI accuracy of ELECTRA with 50% of
its pretraining GPU hours. With the same pretraining steps of standard
base/large-sized models, COCO-LM outperforms the previous best models by 1+
GLUE average points.
| 2,021 |
Computation and Language
|
A Context-Enhanced De-identification System
|
Many modern entity recognition systems, including the current
state-of-the-art de-identification systems, are based on bidirectional long
short-term memory (biLSTM) units augmented by a conditional random field (CRF)
sequence optimizer. These systems process the input sentence by sentence. This
approach prevents the systems from capturing dependencies over sentence
boundaries and makes accurate sentence boundary detection a prerequisite. Since
sentence boundary detection can be problematic especially in clinical reports,
where dependencies and co-references across sentence boundaries are abundant,
these systems have clear limitations. In this study, we built a new system on
the framework of one of the current state-of-the-art de-identification systems,
NeuroNER, to overcome these limitations. This new system incorporates context
embeddings through forward and backward n-grams without using sentence
boundaries. Our context-enhanced de-identification (CEDI) system captures
dependencies over sentence boundaries and bypasses the sentence boundary
detection problem altogether. We enhanced this system with deep affix features
and an attention mechanism to capture the pertinent parts of the input. The
CEDI system outperforms NeuroNER on the 2006 i2b2 de-identification challenge
dataset, the 2014 i2b2 shared task de-identification dataset, and the 2016 CEGS
N-GRID de-identification dataset (p<0.01). All datasets comprise narrative
clinical reports in English but contain different note types varying from
discharge summaries to psychiatric notes. Enhancing CEDI with deep affix
features and the attention mechanism further increased performance.
| 2,021 |
Computation and Language
|
Transferability of Neural Network Clinical De-identification Systems
|
Objective: Neural network de-identification studies have focused on
individual datasets. These studies assume the availability of a sufficient
amount of human-annotated data to train models that can generalize to
corresponding test data. In real-world situations, however, researchers often
have limited or no in-house training data. Existing systems and external data
can help jump-start de-identification on in-house data; however, the most
efficient way of utilizing existing systems and external data is unclear. This
article investigates the transferability of a state-of-the-art neural clinical
de-identification system, NeuroNER, across a variety of datasets, when it is
modified architecturally for domain generalization and when it is trained
strategically for domain transfer. Methods and Materials: We conducted a
comparative study of the transferability of NeuroNER using four clinical note
corpora with multiple note types from two institutions. We modified NeuroNER
architecturally to integrate two types of domain generalization approaches. We
evaluated each architecture using three training strategies. We measured:
transferability from external sources; transferability across note types; the
contribution of external source data when in-domain training data are
available; and transferability across institutions. Results and Conclusions:
Transferability from a single external source gave inconsistent results. Using
additional external sources consistently yielded an F1-score of approximately
80%. Fine-tuning emerged as a dominant transfer strategy, with or without
domain generalization. We also found that external sources were useful even in
cases where in-domain training data were available. Transferability across
institutions differed by note type and annotation label but resulted in
improved performance.
| 2,021 |
Computation and Language
|
ATCSpeechNet: A multilingual end-to-end speech recognition framework for
air traffic control systems
|
In this paper, a multilingual end-to-end framework, called ATCSpeechNet, is
proposed to tackle the issue of translating communication speech into
human-readable text in air traffic control (ATC) systems. In the proposed
framework, we focus on integrating the multilingual automatic speech
recognition (ASR) into one model, in which an end-to-end paradigm is developed
to convert speech waveform into text directly, without any feature engineering
or lexicon. In order to make up for the deficiency of the handcrafted feature
engineering caused by ATC challenges, a speech representation learning (SRL)
network is proposed to capture robust and discriminative speech representations
from the raw wave. The self-supervised training strategy is adopted to optimize
the SRL network from unlabeled data, and further to predict the speech
features, i.e., wave-to-feature. An end-to-end architecture is improved to
complete the ASR task, in which a grapheme-based modeling unit is applied to
address the multilingual ASR issue. Facing the problem of small transcribed
samples in the ATC domain, an unsupervised approach with mask prediction is
applied to pre-train the backbone network of the ASR model on unlabeled data by
a feature-to-feature process. Finally, by integrating the SRL with ASR, an
end-to-end multilingual ASR framework is formulated in a supervised manner,
which is able to translate the raw wave into text in one model, i.e.,
wave-to-text. Experimental results on the ATCSpeech corpus demonstrate that the
proposed approach achieves a high performance with a very small labeled corpus
and less resource consumption, only 4.20% label error rate on the 58-hour
transcribed corpus. Compared to the baseline model, the proposed approach
obtains over 100% relative performance improvement, which can be further
enhanced as the size of the transcribed corpus increases.
| 2,021 |
Computation and Language
|
First Target and Opinion then Polarity: Enhancing Target-opinion
Correlation for Aspect Sentiment Triplet Extraction
|
Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from a
sentence, including target entities, associated sentiment polarities, and
opinion spans which rationalize the polarities. Existing methods fall short in
building correlations between target-opinion pairs and neglect the mutual
interference among different sentiment triplets. To address these issues, we
utilize a two-stage framework to enhance the correlation between targets and
opinions: at stage one, we extract targets and opinions through sequence
tagging; then we append a group of artificial tags named Perceivable Pair,
which indicate the span of a specific target-opinion tuple, to the input
sentence to obtain closer correlated target-opinion pair representation.
Meanwhile, we reduce the negative interference between triplets by restricting
tokens' attention field. Finally, the polarity is identified according to the
representation of the Perceivable Pair. We conduct experiments on four
datasets, and the experimental results show the effectiveness of our model.
| 2,021 |
Computation and Language
|
Integrating Pre-trained Model into Rule-based Dialogue Management
|
Rule-based dialogue management is still the most popular solution for
industrial task-oriented dialogue systems for their interpretability. However,
it is hard for developers to maintain the dialogue logic when the scenarios get
more and more complex. On the other hand, data-driven dialogue systems, usually
with end-to-end structures, are popular in academic research and handle complex
conversations more easily, but such methods require plenty of training data and
their behaviors are less interpretable. In this paper, we propose a method that
leverages the strengths of both rule-based and data-driven dialogue managers
(DM). We first introduce the DM of the Carina Dialog System (CDS, an advanced
industrial dialogue system built by Microsoft). Then we propose the
"model-trigger" design to make the DM trainable thus scalable to scenario
changes. Furthermore, we integrate pre-trained models and empower the DM with
few-shot capability. The experimental results demonstrate the effectiveness and
strong few-shot capability of our method.
| 2,021 |
Computation and Language
|
Contextual Skipgram: Training Word Representation Using Context
Information
|
The skip-gram (SG) model learns word representation by predicting the words
surrounding a center word from unstructured text data. However, not all words
in the context window contribute to the meaning of the center word. For
example, less relevant words could be in the context window, hindering the SG
model from learning a better quality representation. In this paper, we propose
an enhanced version of the SG that leverages context information to produce
word representation. The proposed model, Contextual Skip-gram, is designed to
predict contextual words with both the center words and the context
information. This simple idea helps to reduce the impact of irrelevant words on
the training process, thus enhancing the final performance.
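A minimal sketch of the idea, assuming the context information is summarized by mean-pooling the other window words and added to the center-word embedding before scoring a target word (the exact combination used by the proposed model may differ):

```python
import torch
import torch.nn as nn

class ContextualSkipGram(nn.Module):
    """Score how likely `target` co-occurs with `center`, using both the center
    word and a summary of the other words in its window."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.in_emb = nn.Embedding(vocab_size, dim)    # input-side embeddings
        self.out_emb = nn.Embedding(vocab_size, dim)   # output-side embeddings

    def forward(self, center, window, target):
        # center: (B,), window: (B, W) other words in the window, target: (B,)
        ctx_info = self.in_emb(window).mean(dim=1)     # summarize the context
        query = self.in_emb(center) + ctx_info         # center word + context info
        return torch.sigmoid((query * self.out_emb(target)).sum(dim=-1))

model = ContextualSkipGram(vocab_size=10000, dim=100)
prob = model(torch.tensor([5]), torch.tensor([[7, 9, 11, 13]]), torch.tensor([42]))
```

Training would then proceed as in skip-gram with negative sampling.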
| 2,021 |
Computation and Language
|
Towards Faithfulness in Open Domain Table-to-text Generation from an
Entity-centric View
|
In open domain table-to-text generation, we notice that the unfaithful
generation usually contains hallucinated content which cannot be aligned to
any input table record. We thus try to evaluate the generation faithfulness
with two entity-centric metrics: table record coverage and the ratio of
hallucinated entities in text, both of which are shown to have strong agreement
with human judgements. Then based on these metrics, we quantitatively analyze
the correlation between training data quality and generation fidelity which
indicates the potential usage of entity information in faithful generation.
Motivated by these findings, we propose two methods for faithful generation: 1)
augmented training by incorporating the auxiliary entity information, including
both an augmented plan-based model and an unsupervised model and 2) training
instance selection based on faithfulness ranking. We show these approaches
improve generation fidelity in both the full-dataset setting and few-shot learning
settings by both automatic and human evaluations.
| 2,021 |
Computation and Language
|
Open-Retrieval Conversational Machine Reading
|
In conversational machine reading, systems need to interpret natural language
rules, answer high-level questions such as "May I qualify for VA health care
benefits?", and ask follow-up clarification questions whose answer is necessary
to answer the original question. However, existing works assume the rule text
is provided for each user question, which neglects the essential retrieval step
in real scenarios. In this work, we propose and investigate an open-retrieval
setting of conversational machine reading. In the open-retrieval setting, the
relevant rule texts are unknown so that a system needs to retrieve
question-relevant evidence from a collection of rule texts, and answer users'
high-level questions according to multiple retrieved rule texts in a
conversational manner. We propose MUDERN, a Multi-passage Discourse-aware
Entailment Reasoning Network which extracts conditions in the rule texts
through discourse segmentation, conducts multi-passage entailment reasoning to
answer user questions directly, or asks clarification follow-up questions to
elicit more information. On our created OR-ShARC dataset, MUDERN achieves the
state-of-the-art performance, outperforming existing single-passage
conversational machine reading models as well as a new multi-passage
conversational machine reading baseline by a large margin. In addition, we
conduct in-depth analyses to provide new insights into this new setting and our
model.
| 2,021 |
Computation and Language
|
Decoding EEG Brain Activity for Multi-Modal Natural Language Processing
|
Until recently, human behavioral data from reading has mainly been of
interest to researchers to understand human cognition. However, these human
language processing signals can also be beneficial in machine learning-based
natural language processing tasks. Using EEG brain activity for this purpose is
still largely unexplored. In this paper, we present the first large-scale
study of systematically analyzing the potential of EEG brain activity data for
improving natural language processing tasks, with a special focus on which
features of the signal are most beneficial. We present a multi-modal machine
learning architecture that learns jointly from textual input as well as from
EEG features. We find that filtering the EEG signals into frequency bands is
more beneficial than using the broadband signal. Moreover, for a range of word
embedding types, EEG data improves binary and ternary sentiment classification
and outperforms multiple baselines. For more complex tasks such as relation
detection, further research is needed. Finally, EEG data proves to be
particularly promising when limited training data is available.
| 2,021 |
Computation and Language
|
Predicting Lexical Complexity in English Texts: The Complex 2.0 Dataset
|
Identifying words which may cause difficulty for a reader is an essential
step in most lexical text simplification systems prior to lexical substitution
and can also be used for assessing the readability of a text. This task is
commonly referred to as Complex Word Identification (CWI) and is often modelled
as a supervised classification problem. For training such systems, annotated
datasets in which words and sometimes multi-word expressions are labelled
regarding complexity are required. In this paper we analyze previous work
carried out in this task and investigate the properties of CWI datasets for
English. We develop a protocol for the annotation of lexical complexity and use
this to annotate a new dataset, CompLex 2.0. We present experiments using both
new and old datasets to investigate the nature of lexical complexity. We found
that a Likert-scale annotation protocol provides an objective setting that is
superior for identifying the complexity of words compared to a binary
annotation protocol. We release a new dataset using our new protocol to promote
the task of Lexical Complexity Prediction.
| 2,022 |
Computation and Language
|
SciDr at SDU-2020: IDEAS -- Identifying and Disambiguating Everyday
Acronyms for Scientific Domain
|
We present our systems submitted for the shared tasks of Acronym
Identification (AI) and Acronym Disambiguation (AD) held under Workshop on SDU.
We mainly experiment with BERT and SciBERT. In addition, we assess the
effectiveness of "BIOless" tagging and blending along with the prowess of
ensembling in AI. For AD, we formulate the problem as a span prediction task,
experiment with different training techniques and also leverage the use of
external data. Our systems rank 11th and 3rd in AI and AD tasks respectively.
| 2,021 |
Computation and Language
|
Metrical Tagging in the Wild: Building and Annotating Poetry Corpora
with Rhythmic Features
|
A prerequisite for the computational study of literature is the availability
of properly digitized texts, ideally with reliable meta-data and ground-truth
annotation. Poetry corpora do exist for a number of languages, but larger
collections lack consistency and are encoded in various standards, while
annotated corpora are typically constrained to a particular genre and/or were
designed for the analysis of certain linguistic features (like rhyme). In this
work, we provide large poetry corpora for English and German, and annotate
prosodic features in smaller corpora to train corpus driven neural models that
enable robust large scale analysis.
We show that BiLSTM-CRF models with syllable embeddings outperform a CRF
baseline and different BERT-based approaches. In a multi-task setup,
particularly beneficial task relations illustrate the inter-dependence of poetic
features. A
model learns foot boundaries better when jointly predicting syllable stress,
aesthetic emotions and verse measures benefit from each other, and we find that
caesuras are quite dependent on syntax and also integral to shaping the overall
measure of the line.
| 2,021 |
Computation and Language
|
Towards generalisable hate speech detection: a review on obstacles and
solutions
|
Hate speech is one type of harmful online content which directly attacks or
promotes hate towards a group or an individual member based on their actual or
perceived aspects of identity, such as ethnicity, religion, and sexual
orientation. With online hate speech on the rise, its automatic detection as a
natural language processing task is gaining increasing interest. However, it is
only recently that it has been shown that existing models generalise poorly to
unseen data. This survey paper summarises how generalisable existing
hate speech detection models are, reasons about why hate speech models struggle
to generalise, sums up existing attempts at addressing the main obstacles, and
then proposes directions of future research to improve generalisation in hate
speech detection.
| 2,021 |
Computation and Language
|
THEaiTRE 1.0: Interactive generation of theatre play scripts
|
We present the first version of a system for interactive generation of
theatre play scripts. The system is based on a vanilla GPT-2 model with several
adjustments, targeting specific issues we encountered in practice. We also list
other issues we encountered but plan to only solve in a future version of the
system. The presented system was used to generate a theatre play script planned
for premiere in February 2021.
| 2,021 |
Computation and Language
|
Cross-SEAN: A Cross-Stitch Semi-Supervised Neural Attention Model for
COVID-19 Fake News Detection
|
As the COVID-19 pandemic sweeps across the world, it has been accompanied by
a tsunami of fake news and misinformation on social media. At the time when
reliable information is vital for public health and safety, COVID-19 related
fake news has been spreading even faster than the facts. During times such as
the COVID-19 pandemic, fake news can not only cause intellectual confusion but
can also place lives of people at risk. This calls for an immediate need to
contain the spread of such misinformation on social media. We introduce CTF,
the first COVID-19 Twitter fake news dataset with labeled genuine and fake
tweets. Additionally, we propose Cross-SEAN, a cross-stitch based
semi-supervised end-to-end neural attention model, which leverages the large
amount of unlabelled data. Cross-SEAN partially generalises to emerging fake
news as it learns from relevant external knowledge. We compare Cross-SEAN with
seven state-of-the-art fake news detection methods. We observe that it achieves
$0.95$ F1 Score on CTF, outperforming the best baseline by $9\%$. We also
develop Chrome-SEAN, a Cross-SEAN based chrome extension for real-time
detection of fake tweets.
| 2,021 |
Computation and Language
|
Sparsely Factored Neural Machine Translation
|
The standard approach to incorporate linguistic information to neural machine
translation systems consists in maintaining separate vocabularies for each of
the annotated features to be incorporated (e.g. POS tags, dependency relation
label), embed them, and then aggregate them with each subword in the word they
belong to. This approach, however, cannot easily accommodate annotation schemes
that are not dense for every word.
We propose a method suited for such a case, showing large improvements in
out-of-domain data, and comparable quality for the in-domain data. Experiments
are performed in morphologically-rich languages like Basque and German, for the
case of low-resource scenarios.
| 2,021 |
Computation and Language
|
Quiz-Style Question Generation for News Stories
|
A large majority of American adults get at least some of their news from the
Internet. Even though many online news products have the goal of informing
their users about the news, they lack scalable and reliable tools for measuring
how well they are achieving this goal, and therefore have to resort to noisy
proxy metrics (e.g., click-through rates or reading time) to track their
performance.
As a first step towards measuring news informedness at a scale, we study the
problem of quiz-style multiple-choice question generation, which may be used to
survey users about their knowledge of recent news. In particular, we formulate
the problem as two sequence-to-sequence tasks: question-answer generation (QAG)
and distractor, or incorrect answer, generation (DG). We introduce NewsQuizQA,
the first dataset intended for quiz-style question-answer generation,
containing 20K human-written question-answer pairs from 5K news article
summaries. Using this dataset, we propose a series of novel techniques for
applying large pre-trained Transformer encoder-decoder models, namely PEGASUS
and T5, to the tasks of question-answer generation and distractor generation.
We show that our models outperform strong baselines using both automated
metrics and human raters. We provide a case study of running weekly quizzes on
real-world users via the Google Surveys platform over the course of two months.
Users generally found the automatically generated questions to be educational
and enjoyable. Finally, to serve the research community, we are
releasing the NewsQuizQA dataset.
| 2,021 |
Computation and Language
|
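As an illustration of the sequence-to-sequence framing described above, the sketch
below runs question-answer generation with an off-the-shelf T5 checkpoint from
Hugging Face transformers; the "t5-small" checkpoint, the task prefix, and the
example summary are placeholders, not the NewsQuizQA models or data format.

```python
# Minimal seq2seq question-answer generation sketch with a placeholder T5 checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

summary = ("The city council approved a new transit plan on Tuesday, "
           "funding three additional bus lines starting next spring.")
# Hypothetical task prefix; a fine-tuned QAG model would define its own.
inputs = tokenizer("generate question and answer: " + summary, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=48, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```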
Echo State Speech Recognition
|
We propose automatic speech recognition (ASR) models inspired by the echo
state network (ESN), in which a subset of the recurrent neural network (RNN)
layers in the models are randomly initialized and left untrained. Our study
focuses on RNN-T and Conformer models, and we show that model quality does not
drop even when the decoder is fully randomized. Furthermore, such models can be
trained more efficiently, as the decoders do not need to be updated. By
contrast, randomizing encoders hurts model quality, indicating that optimizing
encoders to learn proper representations of acoustic inputs is more vital for
speech recognition. Overall, we challenge the common practice of training all
components of ASR models, and demonstrate that ESN-based models can perform
equally well while enabling more efficient training and storage than fully
trainable counterparts.
| 2,021 |
Computation and Language
|
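The core trick in the abstract above, keeping a randomly initialised decoder frozen,
can be sketched in a few lines of PyTorch; the layer sizes are illustrative and this
is not the RNN-T or Conformer configuration from the paper.

```python
# Freeze a randomly initialised recurrent decoder so training never updates it.
import torch
import torch.nn as nn

class FrozenRandomDecoder(nn.Module):
    def __init__(self, input_dim=256, hidden_dim=512):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, num_layers=2, batch_first=True)
        # Keep the random initialisation as-is: exclude these weights from training.
        for p in self.rnn.parameters():
            p.requires_grad = False

    def forward(self, x):
        out, _ = self.rnn(x)
        return out

decoder = FrozenRandomDecoder()
trainable = [p for p in decoder.parameters() if p.requires_grad]
print(len(trainable))  # 0 -- the decoder contributes nothing to the optimiser step
```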
Entity-level Factual Consistency of Abstractive Text Summarization
|
A key challenge for abstractive summarization is ensuring factual consistency
of the generated summary with respect to the original document. For example,
state-of-the-art models trained on existing datasets exhibit entity
hallucination, generating names of entities that are not present in the source
document. We propose a set of new metrics to quantify the entity-level factual
consistency of generated summaries and we show that the entity hallucination
problem can be alleviated by simply filtering the training data. In addition,
we propose adding a summary-worthy entity classification task to the training
process, as well as a joint entity and summary generation approach, both of
which yield further improvements in entity-level metrics.
| 2,021 |
Computation and Language
|
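A minimal sketch of one entity-level consistency check in the spirit of the abstract
above: the fraction of named entities in a generated summary that also appear in the
source document, computed here with spaCy. The metric definition and the example
texts are illustrative assumptions, not the authors' exact formulation.

```python
# Entity precision: share of summary entities that also occur in the source.
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_precision(source: str, summary: str) -> float:
    source_ents = {e.text.lower() for e in nlp(source).ents}
    summary_ents = [e.text.lower() for e in nlp(summary).ents]
    if not summary_ents:
        return 1.0  # nothing was generated that could be hallucinated
    hits = sum(1 for e in summary_ents if e in source_ents)
    return hits / len(summary_ents)

src = "Acme Corp. reported record profits in Berlin on Monday."
print(entity_precision(src, "Acme Corp. posted record profits in Berlin."))
print(entity_precision(src, "Globex posted record profits in Paris."))
```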
From Extreme Multi-label to Multi-class: A Hierarchical Approach for
Automated ICD-10 Coding Using Phrase-level Attention
|
Clinical coding is the task of assigning a set of alphanumeric codes,
referred to as ICD (International Classification of Diseases), to a medical
event based on the context captured in a clinical narrative. The latest version
of ICD, ICD-10, includes more than 70,000 codes. As this is a labor-intensive
and error-prone task, automatic ICD coding of medical reports using machine
learning has gained significant interest in the last decade. Existing
literature has modeled this problem as a multi-label task. Nevertheless, such a
multi-label approach is challenging due to the extremely large label set size.
Furthermore, the interpretability of the predictions is essential for the end
users (e.g., healthcare providers and insurance companies). In this paper,
we propose a novel approach for automatic ICD coding by reformulating the
extreme multi-label problem into a simpler multi-class problem using a
hierarchical solution. We made this approach viable through extensive data
collection to acquire phrase-level human coder annotations to supervise our
models on learning the specific relations between the input text and predicted
ICD codes. Our approach employs two independently trained networks, the
sentence tagger and the ICD classifier, stacked hierarchically to predict a
codeset for a medical report. The sentence tagger identifies focus sentences
containing a medical event or concept relevant to ICD coding. Using a
supervised attention mechanism, the ICD classifier then assigns an ICD code to
each focus sentence. The proposed approach outperforms strong baselines by
large margins of 23% in subset accuracy, 18% in micro-F1, and 15% in
instance-based F1. With our proposed approach, interpretability is achieved
not through implicitly learned attention scores but by attributing each
prediction to a particular sentence and words selected by human coders.
| 2,022 |
Computation and Language
|
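The two-stage pipeline described above can be summarised as a short sketch: a
sentence tagger selects focus sentences, a per-sentence classifier assigns one ICD
code each, and the report's codeset is the union of those codes. Both models and
their predict() interfaces are hypothetical placeholders, not the trained networks
from the paper.

```python
# Hierarchical ICD coding sketch: tag focus sentences, then classify each one.
from typing import List, Set

def predict_codeset(report_sentences: List[str],
                    sentence_tagger, icd_classifier) -> Set[str]:
    codes = set()
    for sentence in report_sentences:
        # Stage 1: does this sentence mention a codable medical event or concept?
        if sentence_tagger.predict(sentence):
            # Stage 2: a multi-class (not multi-label) decision for this sentence.
            codes.add(icd_classifier.predict(sentence))
    return codes
```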
Learning to Select Context in a Hierarchical and Global Perspective for
Open-domain Dialogue Generation
|
Open-domain multi-turn conversations mainly have three features: hierarchical
semantic structure, redundant information, and long-term dependency. Given
these, selecting the relevant context becomes a challenging step for multi-turn
dialogue generation. However, existing methods cannot identify both the useful
words and the useful utterances that lie far from the response. Besides,
previous work performs context selection based only on a state in the decoder,
which lacks global guidance and can lead the model to focus on irrelevant or
unnecessary information. In this paper, we propose a novel model with a
hierarchical self-attention mechanism and distant supervision that not only
detects relevant words and utterances at short and long distances, but also
discerns related information globally when decoding. Experimental results of
both automatic and human evaluations on two public datasets show that our model
significantly outperforms other baselines in terms of fluency, coherence, and
informativeness.
| 2,021 |
Computation and Language
|
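To illustrate the hierarchical part of the model described above, here is a minimal
two-level attention sketch that first attends over words within each utterance and
then over utterance vectors; the dimensions, mean pooling, and use of standard
multi-head attention are assumptions, not the paper's architecture.

```python
# Word-level then utterance-level self-attention over a dialogue context.
import torch
import torch.nn as nn

class HierarchicalContextEncoder(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.word_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.utt_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, context):
        # context: (num_utterances, num_words, dim) word embeddings of one dialogue.
        word_out, _ = self.word_attn(context, context, context)
        utt_vecs = word_out.mean(dim=1).unsqueeze(0)   # (1, num_utterances, dim)
        utt_out, _ = self.utt_attn(utt_vecs, utt_vecs, utt_vecs)
        return utt_out.squeeze(0)                      # (num_utterances, dim)

enc = HierarchicalContextEncoder()
ctx = torch.randn(5, 12, 256)   # 5 utterances of 12 words each
print(enc(ctx).shape)           # torch.Size([5, 256])
```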
UnibucKernel: Geolocating Swiss German Jodels Using Ensemble Learning
|
In this work, we describe our approach addressing the Social Media Variety
Geolocation task featured in the 2021 VarDial Evaluation Campaign. We focus on
the second subtask, which is based on a data set formed of approximately 30
thousand Swiss German Jodels. The dialect identification task is about
accurately predicting the latitude and longitude of test samples. We frame the
task as a double regression problem, employing an XGBoost meta-learner with the
combined power of a variety of machine learning approaches to predict both
latitude and longitude. The models included in our ensemble range from simple
regression techniques, such as Support Vector Regression, to deep neural
models, such as a hybrid neural network and a neural transformer. To minimize
the prediction error, we approach the problem from a few different perspectives
and consider various types of features, from low-level character n-grams to
high-level BERT embeddings. The XGBoost ensemble resulting from combining the
aforementioned methods achieves a median distance of 23.6 km on the test data,
which places us third in the ranking, 6.05 km and 2.9 km behind the first- and
second-place submissions, respectively.
| 2,021 |
Computation and Language
|
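The double-regression framing above can be sketched with character n-gram features
and an XGBoost regressor predicting latitude and longitude jointly; the toy texts
and coordinates are placeholders, and the real submission additionally stacks SVR,
neural models, and BERT features, which are omitted here.

```python
# Character n-grams -> XGBoost double regression for (latitude, longitude).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

texts = ["grüezi mitenand", "hoi zäme wie gahts", "salü zäme"]      # toy messages
coords = np.array([[47.37, 8.54], [47.05, 8.31], [46.95, 7.44]])    # lat, lon

features = TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)).fit_transform(texts)
model = MultiOutputRegressor(XGBRegressor(n_estimators=50))
model.fit(features, coords)
print(model.predict(features[:1]))  # predicted (latitude, longitude) for the first text
```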
Meta-Transfer Learning for Low-Resource Abstractive Summarization
|
Neural abstractive summarization has been studied extensively in the
literature and achieves great success with the aid of large corpora. However,
when encountering novel tasks, one may not always benefit from transfer
learning due to domain shift, and overfitting can occur without adequate
labeled examples. Furthermore, annotation for abstractive summarization is
costly and often demands domain knowledge to ensure ground-truth quality.
Thus, there is growing interest in Low-Resource Abstractive Summarization,
which aims to leverage past experience to improve performance with limited
labeled examples from the target corpus. In this paper, we propose to utilize
two knowledge-rich sources to tackle this problem: large pre-trained models
and diverse existing corpora. The former provides the primary ability to
tackle summarization tasks; the latter helps discover common syntactic or
semantic information that improves generalization. We conduct extensive
experiments on various summarization corpora with different writing styles and
forms. The results demonstrate that our approach achieves state-of-the-art
results on 6 corpora in low-resource scenarios, with only 0.7% of the
trainable parameters compared to previous work.
| 2,021 |
Computation and Language
|
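One way to make the meta-transfer idea above concrete is a Reptile-style outer loop
over several source summarization corpora, nudging a pre-trained model toward
parameters that adapt quickly; the corpus and loss helpers (sample_batches,
summarization_loss) are hypothetical, and this is a sketch of a generic
meta-learning loop rather than the authors' exact procedure.

```python
# Reptile-style meta-training sketch over multiple source summarization corpora.
import copy
import torch

def meta_train(model, corpora, inner_steps=5, inner_lr=1e-4, meta_lr=0.1, rounds=100):
    for _ in range(rounds):
        corpus = corpora[torch.randint(len(corpora), (1,)).item()]
        clone = copy.deepcopy(model)
        opt = torch.optim.SGD(clone.parameters(), lr=inner_lr)
        for batch in corpus.sample_batches(inner_steps):    # hypothetical helper
            loss = clone.summarization_loss(batch)          # hypothetical helper
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Reptile update: move the meta-parameters toward the adapted parameters.
        with torch.no_grad():
            for p, q in zip(model.parameters(), clone.parameters()):
                p.add_(meta_lr * (q - p))
    return model
```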