Titles | Abstracts | Years | Categories
---|---|---|---|
Analyzing Zero-shot Cross-lingual Transfer in Supervised NLP Tasks
|
In zero-shot cross-lingual transfer, a supervised NLP task trained on a
corpus in one language is directly applicable to another language without any
additional training. A source of cross-lingual transfer can be as
straightforward as lexical overlap between languages (e.g., use of the same
scripts, shared subwords) that naturally forces text embeddings to occupy a
similar representation space. Recently introduced cross-lingual language model
(XLM) pretraining brings out neural parameter sharing in Transformer-style
networks as the most important factor for the transfer. In this paper, we aim
to validate the hypothetically strong cross-lingual transfer properties induced
by XLM pretraining. In particular, our experiments use XLM-RoBERTa (XLMR) and
extend semantic textual similarity (STS), SQuAD and KorQuAD for machine reading
comprehension, sentiment analysis, and alignment of sentence embeddings under
various cross-lingual settings. Our results indicate that cross-lingual
transfer is most pronounced in STS, followed by sentiment analysis, with MRC
last. That is, the more complex the downstream task, the weaker the degree of
cross-lingual transfer. All of our results are empirically observed
and measured, and we make our code and data publicly available.
| 2,021 |
Computation and Language
|
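An illustrative way to probe the STS-style cross-lingual transfer described in the abstract above is to embed sentences from two languages with XLM-RoBERTa and compare them by cosine similarity. The sketch below uses mean pooling over the last hidden states with the Hugging Face `transformers` library; the checkpoint, pooling choice, and example sentences are assumptions for illustration, not the paper's evaluation protocol.

```python
# Illustrative sketch: cross-lingual sentence similarity with XLM-RoBERTa.
# Mean-pools the last hidden states; not the paper's exact evaluation setup.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)     # mean pooling -> (1, dim)

en = embed("A man is playing a guitar.")
ko = embed("한 남자가 기타를 치고 있다.")           # same meaning in Korean
print(torch.cosine_similarity(en, ko).item())       # higher = more similar
```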
Neural machine translation, corpus and frugality
|
In the machine translation field, in both academia and industry, there is
growing interest in increasingly powerful systems that use corpora of several
hundred million to several billion examples. These systems represent the
state of the art. Here we defend the idea of developing, in parallel, "frugal"
bilingual translation systems trained with relatively small corpora. Based on
the observation of a typical human professional translator, we estimate that
the corpora should comprise at most a monolingual sub-corpus of 75
million examples for the source language, a second monolingual sub-corpus of 6
million examples for the target language, and an aligned bilingual sub-corpus
of 6 million bi-examples. A less desirable alternative would be an aligned
bilingual corpus of 47.5 million bi-examples.
| 2,021 |
Computation and Language
|
Few-Shot Semantic Parsing for New Predicates
|
In this work, we investigate the problems of semantic parsing in a few-shot
learning setting, in which we are provided with k utterance-logical form pairs
per new predicate. The state-of-the-art neural semantic parsers achieve less
than 25% accuracy on benchmark datasets when k = 1. To tackle this problem, we
propose to i) apply a designated meta-learning method to train the model;
ii) regularize attention scores with alignment statistics; iii) apply a
smoothing technique in pre-training. As a result, our method consistently
outperforms all the baselines in both one and two-shot settings.
| 2,021 |
Computation and Language
|
Exploring Transitivity in Neural NLI Models through Veridicality
|
Despite the recent success of deep neural networks in natural language
processing, the extent to which they can demonstrate human-like generalization
capacities for natural language understanding remains unclear. We explore this
issue in the domain of natural language inference (NLI), focusing on the
transitivity of inference relations, a fundamental property for systematically
drawing inferences. A model capturing transitivity can compose basic inference
patterns and draw new inferences. We introduce an analysis method using
synthetic and naturalistic NLI datasets involving clause-embedding verbs to
evaluate whether models can perform transitivity inferences composed of
veridical inferences and arbitrary inference types. We find that current NLI
models do not perform consistently well on transitivity inference tasks,
suggesting that they lack the generalization capacity for drawing composite
inferences from provided training examples. The data and code for our analysis
are publicly available at https://github.com/verypluming/transitivity.
| 2,021 |
Computation and Language
|
Combining Deep Generative Models and Multi-lingual Pretraining for
Semi-supervised Document Classification
|
Semi-supervised learning through deep generative models and multi-lingual
pretraining techniques have achieved tremendous success across different areas
of NLP. Nonetheless, their development has happened in isolation, while the
combination of the two could potentially be effective for tackling
task-specific labelled data shortages. To bridge this gap, we combine
semi-supervised deep generative models and multi-lingual pretraining to form a
pipeline for the document classification task. Compared to strong supervised
learning baselines, our semi-supervised classification framework is highly
competitive and outperforms the state-of-the-art counterparts in low-resource
settings across several languages.
| 2,021 |
Computation and Language
|
Regulatory Compliance through Doc2Doc Information Retrieval: A case
study in EU/UK legislation where text similarity has limitations
|
Major scandals in corporate history have urged the need for regulatory
compliance, where organizations need to ensure that their controls (processes)
comply with relevant laws, regulations, and policies. However, keeping track of
the constantly changing legislation is difficult, thus organizations are
increasingly adopting Regulatory Technology (RegTech) to facilitate the
process. To this end, we introduce regulatory information retrieval (REG-IR),
an application of document-to-document information retrieval (DOC2DOC IR),
where the query is an entire document, making the task more challenging than
traditional IR, where queries are short. Furthermore, we compile and release
two datasets based on the relationships between EU directives and UK
legislation. We experiment on these datasets using a typical two-step pipeline
approach comprising a pre-fetcher and a neural re-ranker. Experimenting with
various pre-fetchers from BM25 to k nearest neighbors over representations from
several BERT models, we show that fine-tuning a BERT model on an in-domain
classification task produces the best representations for IR. We also show that
neural re-rankers under-perform due to contradictory supervision, i.e.,
similar query-document pairs with opposite labels, and are thus biased towards
the pre-fetcher's score. Interestingly, applying a date filter further improves the
performance, showcasing the importance of the time dimension.
| 2,021 |
Computation and Language
|
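As a rough sketch of the pre-fetcher stage described above, the following implements plain Okapi BM25 scoring from scratch; the whitespace tokenization, the k1/b defaults, and the toy documents are assumptions, not the paper's configuration.

```python
# Minimal BM25 pre-fetcher sketch (whitespace tokenization, default k1/b).
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(tokenized)
    df = Counter(t for d in tokenized for t in set(d))  # document frequency per term
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores  # rank docs by descending score, keep top-k for the re-ranker

docs = ["EU directive on data protection", "UK regulation on working time"]
print(bm25_scores("data protection directive", docs))
```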
Summarising Historical Text in Modern Languages
|
We introduce the task of historical text summarisation, where documents in
historical forms of a language are summarised in the corresponding modern
language. This is a fundamentally important routine to historians and digital
humanities researchers but has never been automated. We compile a high-quality
gold-standard text summarisation dataset, which consists of historical German
and Chinese news from hundreds of years ago summarised in modern German or
Chinese. Based on cross-lingual transfer learning techniques, we propose a
summarisation model that can be trained even with no cross-lingual (historical
to modern) parallel data, and further benchmark it against state-of-the-art
algorithms. We report automatic and human evaluations that distinguish the
historic to modern language summarisation task from standard cross-lingual
summarisation (i.e., modern to modern language), highlight the distinctness and
value of our dataset, and demonstrate that our transfer learning approach
outperforms standard cross-lingual benchmarks on this task.
| 2,021 |
Computation and Language
|
Spark NLP: Natural Language Understanding at Scale
|
Spark NLP is a Natural Language Processing (NLP) library built on top of
Apache Spark ML. It provides simple, performant and accurate NLP annotations
for machine learning pipelines that can scale easily in a distributed
environment. Spark NLP comes with 1,100 pre-trained pipelines and models in
more than 192 languages. It supports nearly all the NLP tasks and modules that
can be used seamlessly in a cluster. Downloaded more than 2.7 million times and
showing nine-fold growth since January 2020, Spark NLP is used by 54% of
healthcare organizations and is the world's most widely used NLP library in
the enterprise.
| 2,021 |
Computation and Language
|
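A typical way to load one of the pre-trained pipelines mentioned above is sketched below, assuming `spark-nlp` and `pyspark` are installed; the pipeline name `explain_document_dl` is one commonly documented example and is an assumption rather than a recommendation.

```python
# Sketch of loading a Spark NLP pre-trained pipeline (requires spark-nlp and pyspark).
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()  # starts a SparkSession with Spark NLP loaded
pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("Spark NLP annotates text at scale on Apache Spark.")
print(sorted(result.keys()))       # available annotation types depend on the pipeline
print(result.get("entities"))      # e.g. named entities, if the pipeline produces them
```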
I Beg to Differ: A study of constructive disagreement in online
conversations
|
Disagreements are pervasive in human communication. In this paper we
investigate what makes disagreement constructive. To this end, we construct
WikiDisputes, a corpus of 7,425 Wikipedia Talk page conversations that contain
content disputes, and define the task of predicting whether disagreements will
be escalated to mediation by a moderator. We evaluate feature-based models with
linguistic markers from previous work, and demonstrate that their performance
is improved by using features that capture changes in linguistic markers
throughout the conversations, as opposed to averaged values. We develop a
variety of neural models and show that taking into account the structure of the
conversation improves predictive accuracy, exceeding that of feature-based
models. We assess our best neural model in terms of both predictive accuracy
and uncertainty by evaluating its behaviour when it is only exposed to the
beginning of the conversation, finding that model accuracy improves and
uncertainty reduces as models are exposed to more information.
| 2,021 |
Computation and Language
|
Attention Can Reflect Syntactic Structure (If You Let It)
|
Since the popularization of the Transformer as a general-purpose feature
encoder for NLP, many studies have attempted to decode linguistic structure
from its novel multi-head attention mechanism. However, much of such work
focused almost exclusively on English -- a language with rigid word order and a
lack of inflectional morphology. In this study, we present decoding experiments
for multilingual BERT across 18 languages in order to test the generalizability
of the claim that dependency syntax is reflected in attention patterns. We show
that full trees can be decoded above baseline accuracy from single attention
heads, and that individual relations are often tracked by the same heads across
languages. Furthermore, in an attempt to address recent debates about the
status of attention as an explanatory mechanism, we experiment with fine-tuning
mBERT on a supervised parsing objective while freezing different series of
parameters. Interestingly, in steering the objective to learn explicit
linguistic structure, we find much of the same structure represented in the
resulting attention patterns, with interesting differences with respect to
which parameters are frozen.
| 2,021 |
Computation and Language
|
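The attention analysis described above starts from the raw attention tensors; the snippet below shows one way to pull them out of multilingual BERT with Hugging Face `transformers` and take a naive per-token "most attended" guess from a single head. The layer and head indices are arbitrary, and the argmax read-out is a simplification, not the paper's tree decoding.

```python
# Sketch: extract attention from mBERT and read off a naive head-word guess.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                  output_attentions=True)
model.eval()

sentence = "The quick brown fox jumps over the lazy dog"
inputs = tok(sentence, return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # tuple of (1, heads, seq, seq) per layer

layer, head = 7, 5                           # arbitrary choices for illustration
att = attentions[layer][0, head]             # (seq, seq) attention matrix
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
for i, t in enumerate(tokens):
    j = int(att[i].argmax())                 # naive: most-attended token as "head"
    print(f"{t:>12} -> {tokens[j]}")
```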
"Laughing at you or with you": The Role of Sarcasm in Shaping the
Disagreement Space
|
Detecting arguments in online interactions is useful to understand how
conflicts arise and get resolved. Users often use figurative language, such as
sarcasm, either as persuasive devices or to attack the opponent by an ad
hominem argument. To further our understanding of the role of sarcasm in
shaping the disagreement space, we present a thorough experimental setup using
a corpus annotated with both argumentative moves (agree/disagree) and sarcasm.
We exploit joint modeling in terms of (a) applying discrete features that are
useful in detecting sarcasm to the task of argumentative relation
classification (agree/disagree/none), and (b) multitask learning for
argumentative relation classification and sarcasm detection using deep learning
architectures (e.g., dual Long Short-Term Memory (LSTM) with hierarchical
attention and Transformer-based architectures). We demonstrate that modeling
sarcasm improves the argumentative relation classification task
(agree/disagree/none) in all setups.
| 2,021 |
Computation and Language
|
Muppet: Massive Multi-task Representations with Pre-Finetuning
|
We propose pre-finetuning, an additional large-scale learning stage between
language model pre-training and fine-tuning. Pre-finetuning is massively
multi-task learning (around 50 datasets, over 4.8 million total labeled
examples), and is designed to encourage learning of representations that
generalize better to many different tasks. We show that pre-finetuning
consistently improves performance for pretrained discriminators (e.g.~RoBERTa)
and generation models (e.g.~BART) on a wide range of tasks (sentence
prediction, commonsense reasoning, MRC, etc.), while also significantly
improving sample efficiency during fine-tuning. We also show that large-scale
multi-tasking is crucial; pre-finetuning can hurt performance when few tasks
are used, up to a critical point (usually above 15), after which performance
improves linearly in the number of tasks.
| 2,021 |
Computation and Language
|
A Comparison of Approaches to Document-level Machine Translation
|
Document-level machine translation conditions on surrounding sentences to
produce coherent translations. There has been much recent work in this area
with the introduction of custom model architectures and decoding algorithms.
This paper presents a systematic comparison of selected approaches from the
literature on two benchmarks for which document-level phenomena evaluation
suites exist. We find that a simple method based purely on back-translating
monolingual document-level data performs as well as much more elaborate
alternatives, both in terms of document-level metrics as well as human
evaluation.
| 2,021 |
Computation and Language
|
Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT
|
We investigate how Multilingual BERT (mBERT) encodes grammar by examining how
the high-order grammatical feature of morphosyntactic alignment (how different
languages define what counts as a "subject") is manifested across the embedding
spaces of different languages. To understand if and how morphosyntactic
alignment affects contextual embedding spaces, we train classifiers to recover
the subjecthood of mBERT embeddings in transitive sentences (which do not
contain overt information about morphosyntactic alignment) and then evaluate
them zero-shot on intransitive sentences (where subjecthood classification
depends on alignment), within and across languages. We find that the resulting
classifier distributions reflect the morphosyntactic alignment of their
training languages. Our results demonstrate that mBERT representations are
influenced by high-level grammatical features that are not manifested in any
one input sentence, and that this is robust across languages. Further examining
the characteristics that our classifiers rely on, we find that features such as
passive voice, animacy and case strongly correlate with classification
decisions, suggesting that mBERT does not encode subjecthood purely
syntactically, but that subjecthood embedding is continuous and dependent on
semantic and discourse factors, as is proposed in much of the functional
linguistics literature. Together, these results provide insight into how
grammatical features manifest in contextual embedding spaces, at a level of
abstraction not covered by previous work.
| 2,021 |
Computation and Language
|
Event-Driven News Stream Clustering using Entity-Aware Contextual
Embeddings
|
We propose a method for online news stream clustering that is a variant of
the non-parametric streaming K-means algorithm. Our model uses a combination of
sparse and dense document representations, aggregates document-cluster
similarity along these multiple representations and makes the clustering
decision using a neural classifier. The weighted document-cluster similarity
model is learned using a novel adaptation of the triplet loss into a linear
classification objective. We show that the use of a suitable fine-tuning
objective and external knowledge in pre-trained transformer models yields
significant improvements in the effectiveness of contextual embeddings for
clustering. Our model achieves a new state-of-the-art on a standard stream
clustering dataset of English documents.
| 2,021 |
Computation and Language
|
First Align, then Predict: Understanding the Cross-Lingual Ability of
Multilingual BERT
|
Multilingual pretrained language models have demonstrated remarkable
zero-shot cross-lingual transfer capabilities. Such transfer emerges by
fine-tuning on a task of interest in one language and evaluating on a distinct
language, not seen during the fine-tuning. Despite promising results, we still
lack a proper understanding of the source of this transfer. Using a novel layer
ablation technique and analyses of the model's internal representations, we
show that multilingual BERT, a popular multilingual language model, can be
viewed as the stacking of two sub-networks: a multilingual encoder followed by
a task-specific language-agnostic predictor. While the encoder is crucial for
cross-lingual transfer and remains mostly unchanged during fine-tuning, the
task predictor has little importance for the transfer and can be reinitialized
during fine-tuning. We present extensive experiments with three distinct tasks,
seventeen typologically diverse languages and multiple domains to support our
hypothesis.
| 2,021 |
Computation and Language
|
Cross-Lingual Named Entity Recognition Using Parallel Corpus: A New
Approach Using XLM-RoBERTa Alignment
|
We propose a novel approach for cross-lingual Named Entity Recognition (NER)
zero-shot transfer using parallel corpora. We built an entity alignment model
on top of XLM-RoBERTa to project the entities detected on the English part of
the parallel data to the target language sentences, whose accuracy surpasses
all previous unsupervised models. With the alignment model we can obtain a
pseudo-labeled NER data set in the target language to train a task-specific
model. Unlike translation-based methods, this approach benefits from the
natural fluency and nuances of an original target-language corpus. We also
propose a modified loss function that is similar to focal loss but assigns
weights in the opposite direction, to further improve model training on the
noisy pseudo-labeled data set. We evaluated the proposed approach on benchmark
data sets for 4 target languages and obtained competitive F1 scores compared
to the most recent SOTA models. We also discuss the impact of parallel corpus
size and domain on the final transfer performance.
| 2,021 |
Computation and Language
|
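The "opposite-direction" weighting mentioned above can be illustrated with a small loss function: standard focal loss down-weights confident (easy) examples by a (1 - p)^gamma factor, whereas a reversed variant down-weights low-confidence examples instead, which is one plausible way to soften the effect of noisy pseudo-labels. The exact form used by the authors is not specified here, so this is an assumption-laden sketch.

```python
# Sketch of a "reversed" focal loss: down-weight low-confidence (possibly noisy)
# examples instead of easy ones. Not the authors' exact formulation.
import torch
import torch.nn.functional as F

def reversed_focal_loss(logits, targets, gamma=2.0):
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # cross-entropy per example
    p_t = (-nll).exp()                                           # probability of the true class
    # focal loss would use (1 - p_t) ** gamma; here the weight grows with confidence
    return ((p_t ** gamma) * nll).mean()

logits = torch.randn(4, 9)            # e.g., 9 BIO tags for a toy NER batch
targets = torch.tensor([0, 3, 3, 8])
print(reversed_focal_loss(logits, targets))
```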
Named Entity Recognition in the Style of Object Detection
|
In this work, we propose a two-stage method for named entity recognition
(NER), especially for nested NER. We borrow the idea from two-stage object
detection in computer vision and the way it constructs the loss
function. First, a region proposal network generates region candidates and then
a second-stage model discriminates and classifies the entity and makes the
final prediction. We also designed a special loss function for the second-stage
training that predicts the entityness and entity type at the same time. The
model is built on top of pretrained BERT encoders, and we tried both BERT base
and BERT large models. For experiments, we first applied it to flat NER tasks
such as CoNLL2003 and OntoNotes 5.0 and got comparable results with traditional
NER models using sequence labeling methodology. We then tested the model on the
nested named entity recognition tasks ACE2005 and Genia, and obtained F1 scores
of 85.6$\%$ and 76.8$\%$, respectively. In terms of the second-stage training,
we found that adding extra randomly selected regions plays an important role in
improving the precision. We also performed error profiling to better evaluate the
performance of the model in different circumstances for potential improvements
in the future.
| 2,021 |
Computation and Language
|
CLiMP: A Benchmark for Chinese Language Model Evaluation
|
Linguistically informed analyses of language models (LMs) contribute to the
understanding and improvement of these models. Here, we introduce the corpus of
Chinese linguistic minimal pairs (CLiMP), which can be used to investigate what
knowledge Chinese LMs acquire. CLiMP consists of sets of 1,000 minimal pairs
(MPs) for 16 syntactic contrasts in Mandarin, covering 9 major Mandarin
linguistic phenomena. The MPs are semi-automatically generated, and human
agreement with the labels in CLiMP is 95.8%. We evaluated 11 different LMs on
CLiMP, covering n-grams, LSTMs, and Chinese BERT. We find that classifier-noun
agreement and verb complement selection are the phenomena that models generally
perform best at. However, models struggle the most with the ba construction,
binding, and filler-gap dependencies. Overall, Chinese BERT achieves an 81.8%
average accuracy, while the performances of LSTMs and 5-grams are only
moderately above chance level.
| 2,021 |
Computation and Language
|
Open-domain Topic Identification of Out-of-domain Utterances using
Wikipedia
|
Users of spoken dialogue systems (SDS) expect high quality interactions
across a wide range of diverse topics. However, the implementation of SDS
capable of responding to every conceivable user utterance in an informative way
is a challenging problem. Multi-domain SDS must necessarily identify and deal
with out-of-domain (OOD) utterances to generate appropriate responses as users
do not always know in advance what domains the SDS can handle. To address this
problem, we extend the current state-of-the-art in multi-domain SDS by
estimating the topic of OOD utterances using external knowledge representation
from Wikipedia. Experimental results on real human-to-human dialogues showed
that our approach does not degrade domain prediction performance when compared
to the base model. More significantly, our joint training improves the accuracy
of predicting the nearest Wikipedia article by up to about 30% when compared to
the benchmarks.
| 2,021 |
Computation and Language
|
Exploring multi-task multi-lingual learning of transformer models for
hate speech and offensive speech identification in social media
|
Hate Speech has become a major content moderation issue for online social
media platforms. Given the volume and velocity of online content production, it
is impossible to manually moderate hate speech related content on any platform.
In this paper we utilize a multi-task and multi-lingual approach based on
recently proposed Transformer Neural Networks to solve three sub-tasks for hate
speech. These sub-tasks were part of the 2019 shared task on hate speech and
offensive content (HASOC) identification in Indo-European languages. We expand
on our submission to that competition by utilizing multi-task models which are
trained using three approaches, a) multi-task learning with separate task
heads, b) back-translation, and c) multi-lingual training. Finally, we
investigate the performance of various models and identify instances where the
Transformer based models perform differently and better. We show that it is
possible to utilize different combined approaches to obtain models that can
generalize easily across different languages and tasks, while trading off a
slight loss in accuracy (in some cases) for a much reduced inference-time
compute cost. We
open source an updated version of our HASOC 2019 code with the new improvements
at https://github.com/socialmediaie/MTML_HateSpeech.
| 2,021 |
Computation and Language
|
LSOIE: A Large-Scale Dataset for Supervised Open Information Extraction
|
Open Information Extraction (OIE) systems seek to compress the factual
propositions of a sentence into a series of n-ary tuples. These tuples are
useful for downstream tasks in natural language processing like knowledge base
creation, textual entailment, and natural language understanding. However,
current OIE datasets are limited in both size and diversity. We introduce a new
dataset by converting the QA-SRL 2.0 dataset to a large-scale OIE dataset
(LSOIE). Our LSOIE dataset is 20 times larger than the next largest
human-annotated OIE dataset. We construct and evaluate several benchmark OIE
models on LSOIE, providing baselines for future improvements on the task. Our
LSOIE data, models, and code are made publicly available.
| 2,021 |
Computation and Language
|
Neural Sentence Ordering Based on Constraint Graphs
|
Sentence ordering aims at arranging a list of sentences in the correct order.
Based on the observation that sentence order at different distances may rely on
different types of information, we devise a new approach based on
multi-granular orders between sentences. These orders form multiple constraint
graphs, which are then encoded by Graph Isomorphism Networks and fused into
sentence representations. Finally, sentence order is determined using the
order-enhanced sentence representations. Our experiments on five benchmark
datasets show that our method outperforms all the existing baselines
significantly, achieving a new state-of-the-art performance. The results
demonstrate the advantage of considering multiple types of order information
and using graph neural networks to integrate sentence content and order
information for the task. Our code is available at
https://github.com/DaoD/ConstraintGraph4NSO.
| 2,021 |
Computation and Language
|
Joint Coreference Resolution and Character Linking for Multiparty
Conversation
|
Character linking, the task of linking mentioned people in conversations to
the real world, is crucial for understanding the conversations. For the
efficiency of communication, humans often choose to use pronouns (e.g., "she")
or normal phrases (e.g., "that girl") rather than named entities (e.g.,
"Rachel") in the spoken language, which makes linking those mentions to real
people a much more challenging than a regular entity linking task. To address
this challenge, we propose to incorporate the richer context from the
coreference relations among different mentions to help the linking. On the
other hand, considering that finding coreference clusters itself is not a
trivial task and could benefit from the global character information, we
propose to jointly solve these two tasks. Specifically, we propose C$^2$, the
joint learning model of Coreference resolution and Character linking. The
experimental results demonstrate that C$^2$ can significantly outperform
previous works on both tasks. Further analyses are conducted to analyze the
contribution of all modules in the proposed model and the effect of all
hyper-parameters.
| 2,021 |
Computation and Language
|
FGNET-RH: Fine-Grained Named Entity Typing via Refinement in Hyperbolic
Space
|
Fine-Grained Named Entity Typing (FG-NET) aims at classifying the entity
mentions into a wide range of entity types (usually hundreds) depending upon
the context. While distant supervision is the most common way to acquire
supervised training data, it brings in label noise, as it assigns type labels
to the entity mentions irrespective of the mention's context. In attempts to
deal with the label noise, leading research on FG-NET assumes that the
fine-grained entity typing data possesses a Euclidean nature, which restricts
the ability of the existing models to combat the label noise. Given that the
fine-grained type inventory exhibits a hierarchical structure, hyperbolic
space is a natural choice to model the FG-NET data. In this
research, we propose FGNET-RH, a novel framework that benefits from the
hyperbolic geometry in combination with the graph structures to perform entity
typing in a performance-enhanced fashion. FGNET-RH initially uses LSTM networks
to encode the mention in relation with its context, later it forms a graph to
distill/refine the mention encodings in the hyperbolic space. Finally, the
refined mention encoding is used for entity typing. Experimentation using
different benchmark datasets shows that FGNET-RH improves the performance on
FG-NET by up to 3.5% in terms of strict accuracy.
| 2,022 |
Computation and Language
|
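Since the refinement described above operates in hyperbolic space, the basic primitive is the distance function of the Poincare ball; a direct implementation of the standard formula is sketched below. This is generic geometry, not the FGNET-RH refinement module itself.

```python
# Poincare-ball distance, the basic primitive for hyperbolic refinement.
# d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))

# points near the boundary of the unit ball are "far" even if Euclidean-close
print(poincare_distance([0.10, 0.0], [0.20, 0.0]))
print(poincare_distance([0.90, 0.0], [0.95, 0.0]))
```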
Towards Robustness to Label Noise in Text Classification via Noise
Modeling
|
Large datasets in NLP suffer from noisy labels, due to erroneous automatic
and human annotation procedures. We study the problem of text classification
with label noise, and aim to capture this noise through an auxiliary noise
model over the classifier. We first assign a probability score to each training
sample of having a noisy label, through a beta mixture model fitted on the
losses at an early epoch of training. Then, we use this score to selectively
guide the learning of the noise model and classifier. Our empirical evaluation
on two text classification tasks shows that our approach can improve over the
baseline accuracy, and prevent over-fitting to the noise.
| 2,022 |
Computation and Language
|
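A minimal version of the beta-mixture step described above might look like the sketch below: per-sample losses from an early epoch are rescaled to (0, 1), a two-component beta mixture is fitted with a simple EM using method-of-moments updates, and the posterior of the high-loss component serves as the noisy-label score. The update scheme, initialization, and constants are assumptions, not the authors' implementation.

```python
# Sketch: fit a two-component beta mixture to per-sample training losses and
# use the posterior of the high-loss component as a noisy-label score.
import numpy as np
from scipy.stats import beta

def noisy_label_scores(losses, n_iter=20):
    x = (losses - losses.min()) / (losses.max() - losses.min() + 1e-12)
    x = np.clip(x, 1e-4, 1 - 1e-4)
    params = [(2.0, 5.0), (5.0, 2.0)]          # init: low-loss and high-loss components
    weights = np.array([0.5, 0.5])

    def component_pdfs():
        return np.stack([w * beta.pdf(x, a, b)
                         for w, (a, b) in zip(weights, params)])

    for _ in range(n_iter):
        resp = component_pdfs()
        resp /= resp.sum(axis=0, keepdims=True)          # E-step: responsibilities
        new_params = []
        for k in range(2):                               # M-step: moment matching
            m = np.average(x, weights=resp[k])
            v = np.average((x - m) ** 2, weights=resp[k])
            common = max(m * (1 - m) / max(v, 1e-6) - 1, 1e-2)
            new_params.append((m * common, (1 - m) * common))
        params, weights = new_params, resp.mean(axis=1)

    post = component_pdfs()
    noisy = int(np.argmax([a / (a + b) for a, b in params]))  # higher mean = high loss
    return post[noisy] / post.sum(axis=0)

losses = np.concatenate([np.random.gamma(1.0, 0.2, 900),      # mostly clean samples
                         np.random.gamma(5.0, 0.6, 100)])     # a noisy, high-loss tail
print(noisy_label_scores(losses)[:5])
```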
PPT: Parsimonious Parser Transfer for Unsupervised Cross-Lingual
Adaptation
|
Cross-lingual transfer is a leading technique for parsing low-resource
languages in the absence of explicit supervision. Simple `direct transfer' of a
learned model based on a multilingual input encoding has provided a strong
benchmark. This paper presents a method for unsupervised cross-lingual transfer
that improves over direct transfer systems by using their output as implicit
supervision as part of self-training on unlabelled text in the target language.
The method assumes minimal resources and provides maximal flexibility by (a)
accepting any pre-trained arc-factored dependency parser; (b) assuming no
access to source language data; (c) supporting both projective and
non-projective parsing; and (d) supporting multi-source transfer. With English
as the source language, we show significant improvements over state-of-the-art
transfer models on both distant and nearby languages, despite our conceptually
simpler approach. We provide analyses of the choice of source languages for
multi-source transfer, and the advantage of non-projective parsing. Our code is
available online.
| 2,021 |
Computation and Language
|
Enquire One's Parent and Child Before Decision: Fully Exploit
Hierarchical Structure for Self-Supervised Taxonomy Expansion
|
Taxonomy is a hierarchically structured knowledge graph that plays a crucial
role in machine intelligence. The taxonomy expansion task aims to find a
position for a new term in an existing taxonomy to capture the emerging
knowledge in the world and keep the taxonomy dynamically updated. Previous
taxonomy expansion solutions neglect valuable information brought by the
hierarchical structure and evaluate the correctness of merely an added edge,
which downgrades the problem to node-pair scoring or mini-path classification.
In this paper, we propose the Hierarchy Expansion Framework (HEF), which fully
exploits the hierarchical structure's properties to maximize the coherence of
expanded taxonomy. HEF makes use of taxonomy's hierarchical structure in
multiple aspects: i) HEF utilizes subtrees containing the most relevant nodes as
self-supervision data for a complete comparison of parental and sibling
relations; ii) HEF adopts a coherence modeling module to evaluate the coherence
of a taxonomy's subtree by integrating hypernymy relation detection and several
tree-exclusive features; iii) HEF introduces the Fitting Score for position
selection, which explicitly evaluates both path and level selections and takes
full advantage of parental relations to interchange information for
disambiguation and self-correction. Extensive experiments show that by better
exploiting the hierarchical structure and optimizing taxonomy's coherence, HEF
vastly surpasses the prior state-of-the-art on three benchmark datasets by an
average improvement of 46.7% in accuracy and 32.3% in mean reciprocal rank.
| 2,022 |
Computation and Language
|
VisualMRC: Machine Reading Comprehension on Document Images
|
Recent studies on machine reading comprehension have focused on text-level
understanding but have not yet reached the level of human understanding of the
visual layout and content of real-world documents. In this study, we introduce
a new visual machine reading comprehension dataset, named VisualMRC, wherein
given a question and a document image, a machine reads and comprehends texts in
the image to answer the question in natural language. Compared with existing
visual question answering (VQA) datasets that contain texts in images,
VisualMRC focuses more on developing natural language understanding and
generation abilities. It contains 30,000+ pairs of a question and an
abstractive answer for 10,000+ document images sourced from multiple domains of
webpages. We also introduce a new model that extends existing
sequence-to-sequence models, pre-trained with large-scale text corpora, to take
into account the visual layout and content of documents. Experiments with
VisualMRC show that this model outperformed the base sequence-to-sequence
models and a state-of-the-art VQA model. However, its performance is still
below that of humans on most automatic evaluation metrics. The dataset will
facilitate research aimed at connecting vision and language understanding.
| 2,021 |
Computation and Language
|
Language Modelling as a Multi-Task Problem
|
In this paper, we propose to study language modelling as a multi-task
problem, bringing together three strands of research: multi-task learning,
linguistics, and interpretability. Based on hypotheses derived from linguistic
theory, we investigate whether language models adhere to learning principles of
multi-task learning during training. To showcase the idea, we analyse the
generalisation behaviour of language models as they learn the linguistic
concept of Negative Polarity Items (NPIs). Our experiments demonstrate that a
multi-task setting naturally emerges within the objective of the more general
task of language modelling. We argue that this insight is valuable for
multi-task learning, linguistics and interpretability research and can lead to
exciting new findings in all three domains.
| 2,021 |
Computation and Language
|
How to Evaluate a Summarizer: Study Design and Statistical Analysis for
Manual Linguistic Quality Evaluation
|
Manual evaluation is essential to judge progress on automatic text
summarization. However, we conduct a survey on recent summarization system
papers that reveals little agreement on how to perform such evaluation studies.
We conduct two evaluation experiments on two aspects of summaries' linguistic
quality (coherence and repetitiveness) to compare Likert-type and ranking
annotations and show that the best choice of evaluation method can vary from one
aspect to another. In our survey, we also find that study parameters such as
the overall number of annotators and distribution of annotators to annotation
items are often not fully reported and that subsequent statistical analysis
ignores grouping factors arising from one annotator judging multiple summaries.
Using our evaluation experiments, we show that the total number of annotators
can have a strong impact on study power and that current statistical analysis
methods can inflate type I error rates up to eight-fold. In addition, we
highlight that for the purpose of system comparison the current practice of
eliciting multiple judgements per summary leads to less powerful and reliable
annotations given a fixed study budget.
| 2,021 |
Computation and Language
|
Multilingual and cross-lingual document classification: A meta-learning
approach
|
The great majority of languages in the world are considered under-resourced
for the successful application of deep learning methods. In this work, we
propose a meta-learning approach to document classification in a
limited-resource setting and demonstrate its effectiveness in two different settings: few-shot,
cross-lingual adaptation to previously unseen languages; and multilingual joint
training when limited target-language data is available during training. We
conduct a systematic comparison of several meta-learning methods, investigate
multiple settings in terms of data availability and show that meta-learning
thrives in settings with a heterogeneous task distribution. We propose a
simple, yet effective adjustment to existing meta-learning methods which allows
for better and more stable learning, and set a new state of the art on several
languages while performing on-par on others, using only a small amount of
labeled data.
| 2,021 |
Computation and Language
|
Adversarial Stylometry in the Wild: Transferable Lexical Substitution
Attacks on Author Profiling
|
Written language contains stylistic cues that can be exploited to
automatically infer a variety of potentially sensitive author information.
Adversarial stylometry intends to attack such models by rewriting an author's
text. Our research proposes several components to facilitate deployment of
these adversarial attacks in the wild, where neither data nor target models are
accessible. We introduce a transformer-based extension of a lexical replacement
attack, and show it achieves high transferability when trained on a weakly
labeled corpus -- decreasing target model performance below chance. While not
completely inconspicuous, our more successful attacks also prove notably less
detectable by humans. Our framework therefore provides a promising direction
for future privacy-preserving adversarial attacks.
| 2,021 |
Computation and Language
|
A phonetic model of non-native spoken word processing
|
Non-native speakers show difficulties with spoken word processing. Many
studies attribute these difficulties to imprecise phonological encoding of
words in the lexical memory. We test an alternative hypothesis: that some of
these difficulties can arise from the non-native speakers' phonetic perception.
We train a computational model of phonetic learning, which has no access to
phonology, on either one or two languages. We first show that the model
exhibits predictable behaviors on phone-level and word-level discrimination
tasks. We then test the model on a spoken word processing task, showing that
phonology may not be necessary to explain some of the word processing effects
observed in non-native speakers. We run an additional analysis of the model's
lexical representation space, showing that the two training languages are not
fully separated in that space, similarly to the languages of a bilingual human
speaker.
| 2,021 |
Computation and Language
|
Triangular Bidword Generation for Sponsored Search Auction
|
Sponsored search auction is a crucial component of modern search engines. It
requires a set of candidate bidwords that advertisers can place bids on.
Existing methods generate bidwords from search queries or advertisement
content. However, they suffer from the data noise in <query, bidword> and
<advertisement, bidword> pairs. In this paper, we propose a triangular bidword
generation model (TRIDENT), which takes the high-quality data of paired <query,
advertisement> as a supervision signal to indirectly guide the bidword
generation process. Our proposed model is simple yet effective: by using
bidword as the bridge between search query and advertisement, the generation of
search query, advertisement and bidword can be jointly learned in the
triangular training framework. This alleviates the problem that the training
data of bidword may be noisy. Experimental results, including automatic and
human evaluations, show that our proposed TRIDENT can generate relevant and
diverse bidwords for both search queries and advertisements. Our evaluation on
online real data validates the effectiveness of the TRIDENT's generated
bidwords for product search.
| 2,021 |
Computation and Language
|
An Empirical Study of Cross-Lingual Transferability in Generative
Dialogue State Tracker
|
There has been a rapid development in data-driven task-oriented dialogue
systems with the benefit of large-scale datasets. However, the progress of
dialogue systems in low-resource languages lags far behind due to the lack of
high-quality data. To advance the cross-lingual technology in building dialog
systems, DSTC9 introduces the task of cross-lingual dialog state tracking,
where we test the DST module in a low-resource language given the rich-resource
training dataset.
This paper studies the transferability of a cross-lingual generative dialogue
state tracking system using a multilingual pre-trained seq2seq model. We
experiment under different settings, including joint-training or pre-training
on cross-lingual and cross-ontology datasets. We also find that our approaches
show low cross-lingual transferability, and we provide an investigation and
discussion of this result.
| 2,021 |
Computation and Language
|
KoreALBERT: Pretraining a Lite BERT Model for Korean Language
Understanding
|
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional
representation learning for natural languages. Due to the lack of pretrained
ALBERT models for the Korean language, the best available practice is to use
the multilingual model or to resort to another BERT-based model. In this
paper, we develop and pretrain KoreALBERT, a monolingual ALBERT model
specifically for Korean language understanding. We introduce a new training
objective, namely Word Order Prediction (WOP), and use it alongside the
existing MLM and SOP criteria with the same architecture and model parameters.
Despite
having significantly fewer model parameters (thus, quicker to train), our
pretrained KoreALBERT outperforms its BERT counterpart on 6 different NLU
tasks. Consistent with the empirical results in English by Lan et al.,
KoreALBERT seems to improve downstream task performance involving
multi-sentence encoding for Korean language. The pretrained KoreALBERT is
publicly available to encourage research and application development for Korean
NLP.
| 2,021 |
Computation and Language
|
Inheritance-guided Hierarchical Assignment for Clinical Automatic
Diagnosis
|
Clinical diagnosis, which aims to assign diagnosis codes for a patient based
on the clinical note, plays an essential role in clinical decision-making.
Considering that manual diagnosis could be error-prone and time-consuming, many
intelligent approaches based on clinical text mining have been proposed to
perform automatic diagnosis. However, these methods may not achieve
satisfactory results due to the following challenges. First, most of the
diagnosis codes are rare, and the distribution is extremely unbalanced. Second,
it is challenging for existing methods to capture the correlation between diagnosis
codes. Third, the lengthy clinical note leads to the excessive dispersion of
key information related to codes. To tackle these challenges, we propose a
novel framework to combine the inheritance-guided hierarchical assignment and
co-occurrence graph propagation for clinical automatic diagnosis. Specifically,
we propose a hierarchical joint prediction strategy to address the challenge of
unbalanced codes distribution. Then, we utilize graph convolutional neural
networks to obtain the correlation and semantic representations of medical
ontology. Furthermore, we introduce multi-attention mechanisms to extract
crucial information. Finally, extensive experiments on MIMIC-III dataset
clearly validate the effectiveness of our method.
| 2,021 |
Computation and Language
|
Recent Trends in Named Entity Recognition (NER)
|
The availability of large amounts of computer-readable textual data and
hardware that can process the data has shifted the focus of knowledge projects
towards deep learning architecture. Natural Language Processing, particularly
the task of Named Entity Recognition is no exception. The bulk of the learning
methods that have produced state-of-the-art results have changed the deep
learning model, the training method used, the training data itself or the
encoding of the output of the NER system. In this paper, we review significant
learning methods that have been employed for NER in the recent past and how
they came about from the linear learning methods of the past. We also cover the
progress of related tasks that are upstream or downstream to NER, e.g.,
sequence tagging, entity linking, etc., wherever the processes in question have
also improved NER results.
| 2,021 |
Computation and Language
|
Cisco at AAAI-CAD21 shared task: Predicting Emphasis in Presentation
Slides using Contextualized Embeddings
|
This paper describes our proposed system for the AAAI-CAD21 shared task:
Predicting Emphasis in Presentation Slides. In this specific task, given the
contents of a slide we are asked to predict the degree of emphasis to be laid
on each word in the slide. We propose two approaches to this problem: a
BiLSTM-ELMo approach and a transformer-based approach built on RoBERTa and
XLNet architectures. We achieve a score of 0.518 on the evaluation leaderboard
which ranks us 3rd and 0.543 on the post-evaluation leaderboard which ranks us
1st at the time of writing the paper.
| 2,021 |
Computation and Language
|
A More Efficient Chinese Named Entity Recognition base on BERT and
Syntactic Analysis
|
We propose a new named entity recognition (NER) method to effectively make
use of the results of part-of-speech (POS) tagging, Chinese word segmentation
(CWS) and parsing while avoiding NER errors caused by POS tagging errors. This
paper first uses the Stanford natural language processing (NLP) tool to
annotate large-scale untagged data so as to reduce the dependence on tagged
data; then a new NLP model, the g-BERT model, is designed to compress the
Bidirectional Encoder Representations from Transformers (BERT) model in order
to reduce the computation; finally, the model is evaluated on a Chinese NER
dataset. The experimental results show that the computation of the g-BERT
model is reduced by 60% and its performance improves by 2%, reaching a test F1
of 96.5, compared with the BERT model.
| 2,021 |
Computation and Language
|
geoGAT: Graph Model Based on Attention Mechanism for Geographic Text
Classification
|
In the area of geographic information processing, there is little research on
geographic text classification, and the application of this task to Chinese is
relatively rare. In our work, we implement a method to extract text containing
geographical entities from large amounts of web text. The geographic
information in these texts is of great practical significance to
transportation, urban and rural planning, disaster relief and other fields. We
use a graph convolutional neural network with an attention mechanism to
achieve this. Graph attention networks are an improvement over graph
convolutional networks: compared with GCN, the advantage of GAT is that an
attention mechanism is used to weight the sum of the features of adjacent
nodes. In addition, we construct a Chinese dataset with geographical
classification labels built from multiple datasets of
Chinese text classification. The Macro-F Score of the geoGAT we used reached
95\% on the new Chinese dataset.
| 2,021 |
Computation and Language
|
Fake News Detection System using XLNet model with Topic Distributions:
CONSTRAINT@AAAI2021 Shared Task
|
With the ease of access to information, and its rapid dissemination over the
internet (both velocity and volume), it has become challenging to filter out
truthful information from fake ones. The research community is now faced with
the task of automatic detection of fake news, which carries real-world
socio-political impact. One such research contribution came in the form of the
Constraint@AAAI2021 Shared Task on COVID19 Fake News Detection in English. In
this paper, we shed light on a novel method we proposed as a part of this
shared task. Our team introduced an approach to combine topical distributions
from Latent Dirichlet Allocation (LDA) with contextualized representations from
XLNet. We also compared our method with existing baselines to show that XLNet +
Topic Distributions outperforms other approaches by attaining an F1-score of
0.967.
| 2,021 |
Computation and Language
|
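One straightforward way to combine topic distributions with contextual features, in the spirit of the abstract above, is to concatenate LDA topic proportions with pooled transformer states and train a simple classifier on top. The checkpoint name, naive mean pooling, and toy data below are assumptions, not the authors' system.

```python
# Sketch: concatenate LDA topic distributions with XLNet sentence features
# and feed a simple classifier. Toy data; not the shared-task pipeline itself.
import numpy as np
import torch
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

texts = ["vaccine rumours spread online", "official covid guidance updated"]
labels = [1, 0]  # 1 = fake, 0 = real (toy labels)

# topic features from LDA over word counts
counts = CountVectorizer().fit_transform(texts)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

# contextual features (naive mean pooling over last hidden states, padding included)
tok = AutoTokenizer.from_pretrained("xlnet-base-cased")
enc = AutoModel.from_pretrained("xlnet-base-cased")
with torch.no_grad():
    batch = tok(texts, return_tensors="pt", padding=True)
    hidden = enc(**batch).last_hidden_state.mean(dim=1).numpy()

features = np.hstack([hidden, topics])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```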
TSQA: Tabular Scenario Based Question Answering
|
Scenario-based question answering (SQA) has attracted an increasing research
interest. Compared with the well-studied machine reading comprehension (MRC),
SQA is a more challenging task: a scenario may contain not only a textual
passage to read but also structured data like tables, i.e., tabular scenario
based question answering (TSQA). AI applications of TSQA such as answering
multiple-choice questions in high-school exams require synthesizing data in
multiple cells and combining tables with texts and domain knowledge to infer
answers. To support the study of this task, we construct GeoTSQA. This dataset
contains 1k real questions contextualized by tabular scenarios in the geography
domain. To solve the task, we extend state-of-the-art MRC methods with TTGen, a
novel table-to-text generator. It generates sentences from variously
synthesized tabular data and feeds the downstream MRC method with the most
useful sentences. Its sentence ranking model fuses the information in the
scenario, question, and domain knowledge. Our approach outperforms a variety of
strong baseline methods on GeoTSQA.
| 2,021 |
Computation and Language
|
An Explainable CNN Approach for Medical Codes Prediction from Clinical
Text
|
Method: We develop CNN-based methods for automatic ICD coding based on
clinical text from intensive care unit (ICU) stays. We come up with the Shallow
and Wide Attention convolutional Mechanism (SWAM), which allows our model to
learn local and low-level features for each label. The key idea behind our
model design is to look for the presence of informative snippets in the
clinical text that correlate with each code, and we infer that there exists a
correspondence between "informative snippet" and convolution filter. Results:
We evaluate our approach on MIMIC-III, an open-access dataset of ICU medical
records. Our approach substantially outperforms previous results on top-50
medical code prediction on the MIMIC-III dataset. We attribute this
improvement to SWAM, whose wide architecture gives the model the ability to
learn the unique features of different codes more extensively, and we verify
this with an ablation experiment. In addition, we perform a manual analysis of
the performance imbalance between different codes, and preliminarily identify
the characteristics that determine the difficulty of learning specific codes.
Conclusions: We
present SWAM, an explainable CNN approach for multi-label document
classification, which employs a wide convolution layer to learn local and
low-level features for each label, yields strong improvements over previous
metrics on the ICD-9 code prediction task, while providing satisfactory
explanations for its internal mechanics.
| 2,021 |
Computation and Language
|
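A compact per-label attention CNN in the spirit of the description above (and of the CAML-style models it builds on) looks roughly like the sketch below; the dimensions and single convolution layer are illustrative choices, not the SWAM architecture itself.

```python
# CAML-style per-label attention over CNN features; illustrative, not SWAM itself.
import torch
import torch.nn as nn

class LabelAttentionCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, num_filters=50,
                 kernel_size=5, num_labels=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)
        self.attn = nn.Linear(num_filters, num_labels, bias=False)  # one query per label
        self.out = nn.Linear(num_filters, num_labels)

    def forward(self, token_ids):                                         # (B, T)
        h = torch.tanh(self.conv(self.embed(token_ids).transpose(1, 2)))  # (B, F, T)
        alpha = torch.softmax(self.attn.weight @ h, dim=2)                # (B, L, T)
        m = alpha @ h.transpose(1, 2)                                     # (B, L, F)
        # per-label score from that label's own attended representation
        return (self.out.weight * m).sum(dim=2) + self.out.bias           # (B, L)

model = LabelAttentionCNN()
logits = model(torch.randint(1, 5000, (2, 400)))   # two toy notes of 400 tokens
print(logits.shape)                                 # torch.Size([2, 50])
```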
SkillNER: Mining and Mapping Soft Skills from any Text
|
In today's digital world, there is an increasing focus on soft skills. On the
one hand, they facilitate innovation at companies, but on the other, they are
unlikely to be automated soon. Researchers struggle to approach the study of
soft skills quantitatively and accurately due to the lack of data-driven methods
to retrieve them. This limits the possibility for psychologists and HR managers
to understand the relation between humans and digitalisation. This paper
presents SkillNER, a novel data-driven method for automatically extracting soft
skills from text. It is a named entity recognition (NER) system trained with a
support vector machine (SVM) on a corpus of more than 5000 scientific papers.
We developed this system by measuring the performance of our approach against
different training models and validating the results together with a team of
psychologists. Finally, SkillNER was tested in a real-world case study using
the job descriptions of ESCO (European Skill/Competence Qualification and
Occupation) as textual source. The system enabled the detection of communities
of job profiles based on their shared soft skills and communities of soft
skills based on their shared job profiles. This case study demonstrates that
the tool can automatically retrieve soft skills from a large corpus in an
efficient way, proving useful for firms, institutions, and workers. The tool is
open and available online to foster quantitative methods for the study of soft
skills.
| 2,021 |
Computation and Language
|
Transformer-Based Models for Question Answering on COVID19
|
In response to the Kaggle's COVID-19 Open Research Dataset (CORD-19)
challenge, we have proposed three transformer-based question-answering systems
using BERT, ALBERT, and T5 models. Since the CORD-19 dataset is unlabeled, we
have evaluated the question-answering models' performance on two labeled
question-answer datasets \textemdash CovidQA and CovidGQA. The BERT-based QA
system achieved the highest F1 score (26.32), while the ALBERT-based QA system
achieved the highest Exact Match (13.04). However, numerous challenges are
associated with developing high-performance question-answering systems for the
ongoing COVID-19 pandemic and future pandemics. At the end of this paper, we
discuss these challenges and suggest potential solutions to address them.
| 2,021 |
Computation and Language
|
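An extractive QA baseline of the kind evaluated above can be run in a few lines with the Hugging Face pipeline API; the checkpoint named below is a public SQuAD-finetuned model chosen for illustration, not the one used in the paper.

```python
# Minimal extractive QA baseline; checkpoint choice is an assumption.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
context = ("Coronaviruses are a large family of viruses. COVID-19 is caused by "
           "the SARS-CoV-2 virus, first identified in late 2019.")
print(qa(question="What virus causes COVID-19?", context=context))
# roughly: {'score': ..., 'start': ..., 'end': ..., 'answer': 'the SARS-CoV-2 virus'}
```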
Analysis of Basic Emotions in Texts Based on BERT Vector Representation
|
In the following paper the authors present a GAN-type model and the most
important stages of its development for the task of emotion recognition in
text. In particular, we propose an approach for generating a synthetic dataset
of all possible emotions combinations based on manually labelled incomplete
data.
| 2,021 |
Computation and Language
|
Exploratory Arabic Offensive Language Dataset Analysis
|
This paper adds insights about the resources and datasets used in Arabic
offensive language research. The main goal of this paper is to guide
researchers in Arabic offensive language in selecting appropriate datasets
based on their content, and in creating new Arabic offensive language resources
to support and complement the available ones.
| 2,021 |
Computation and Language
|
Challenges Encountered in Turkish Natural Language Processing Studies
|
Natural language processing is a branch of computer science that combines
artificial intelligence with linguistics. It aims to analyze a language element
such as writing or speaking with software and convert it into information.
Considering that each language has its own grammatical rules and vocabulary
diversity, the complexity of the studies in this field is somewhat
understandable. For instance, Turkish is a very interesting language in many
ways. Examples of this are agglutinative word structure, consonant/vowel
harmony, a large number of productive derivational morphemes (practically
infinite vocabulary), derivation and syntactic relations, a complex emphasis on
vocabulary and phonological rules. In this study, the interesting features of
Turkish in terms of natural language processing are discussed. In addition, we
give summary information about natural language processing techniques, systems
and various resources developed for Turkish.
| 2,020 |
Computation and Language
|
Using Finite-State Machines to Automatically Scan Classical Greek
Hexameter
|
This paper presents a fully automatic approach to the scansion of Classical
Greek hexameter verse. In particular, the paper describes an algorithm that
uses deterministic finite-state automata and local linguistic rules to
implement a targeted search for valid spondeus patterns and, in addition, a
weighted finite-state transducer to correct and complete partial analyses and
to reject invalid candidates. The paper also details the results of an
empirical evaluation of the annotation quality resulting from this approach on
hand-annotated data. It is shown that a finite-state approach provides quick
and linguistically sound analyses of hexameter verses as well as an efficient
formalisation of linguistic knowledge. The project code is available (see
https://github.com/anetschka/greek_scansion).
| 2,021 |
Computation and Language
|
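The foot structure that the finite-state approach above encodes can be illustrated with a tiny deterministic matcher over syllable weights (L for long, S for short): feet one to five are each a dactyl (LSS) or a spondee (LL), and the sixth foot is a long syllable plus an anceps. The sketch ignores the weighted correction transducer and all local linguistic rules.

```python
# Toy deterministic matcher for hexameter foot patterns over syllable weights.
# 'L' = long syllable, 'S' = short syllable. Ignores linguistic rules entirely.
def is_hexameter(weights: str) -> bool:
    def feet(rest, count):
        if count == 5:
            # sixth foot: long + anceps (long or short), then nothing left over
            return len(rest) == 2 and rest[0] == "L"
        for foot in ("LSS", "LL"):          # dactyl or spondee
            if rest.startswith(foot) and feet(rest[len(foot):], count + 1):
                return True
        return False
    return feet(weights, 0)

print(is_hexameter("LSS" * 5 + "LL"))   # fully dactylic line, spondaic close: True
print(is_hexameter("LL" * 6))           # all spondees: True
print(is_hexameter("LS" * 6))           # invalid feet: False
```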
Medical Segment Coloring of Clinical Notes
|
This paper proposes a deep learning-based method to identify the segments of
a clinical note corresponding to ICD-9 broad categories which are further
color-coded with respect to 17 ICD-9 categories. The proposed Medical Segment
Colorer (MSC) architecture is a pipeline framework that works in three stages:
(1) word categorization, (2) phrase allocation, and (3) document
classification. MSC uses gated recurrent unit neural networks (GRUs) to map
from an input document to word multi-labels to phrase allocations, and uses
statistical median to map phrase allocation to document multi-label. We compute
variable length segment coloring from overlapping phrase allocation
probabilities. These cross-level bidirectional contextual links identify
adaptive context and then produce segment coloring. We train and evaluate MSC
using document-labeled MIMIC-III clinical notes. Training is conducted
solely using document multi-labels without any information on phrases,
segments, or words. In addition to coloring a clinical note, MSC generates as
byproducts document multi-labeling and word tagging -- creation of ICD9
category keyword lists based on segment coloring. Performance comparison of MSC
byproduct document multi-labels versus methods whose purpose is to produce
justifiable document multi-labels is 64% vs 52.4% micro-average F1-score
against the CAML (CNN attention multi label) method. For evaluation of MSC
segment coloring results, medical practitioners independently assigned the
colors to broad ICD9 categories given a sample of 40 colored notes and a sample
of 50 words related to each category based on the word tags. Binary scoring of
this evaluation has a median value of 83.3% and mean of 63.7%.
| 2,021 |
Computation and Language
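As a rough, hedged illustration of the aggregation idea described above (per-word category probabilities pooled into overlapping phrase allocations and then into a document multi-label via the median), here is a small NumPy sketch. The window size, threshold, and random probabilities are illustrative assumptions, not values from the MSC paper, and the GRU stages themselves are not reproduced.

```python
import numpy as np

def phrase_allocations(word_probs: np.ndarray, window: int = 5) -> np.ndarray:
    """Average per-word category probabilities over overlapping windows
    ('phrases') sliding across the document."""
    n_words, _ = word_probs.shape
    phrases = [
        word_probs[i:i + window].mean(axis=0)
        for i in range(max(n_words - window + 1, 1))
    ]
    return np.stack(phrases)

def document_multilabel(phrase_probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Map phrase allocations to a document multi-label via the median."""
    return (np.median(phrase_probs, axis=0) >= threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Pretend output of a word-level model: 40 words x 17 broad categories.
    word_probs = rng.random((40, 17))
    phrases = phrase_allocations(word_probs)
    print(document_multilabel(phrases))
```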
|
On the Evolution of Syntactic Information Encoded by BERT's
Contextualized Representations
|
The adaptation of pretrained language models to solve supervised tasks has
become a baseline in NLP, and many recent works have focused on studying how
linguistic information is encoded in the pretrained sentence representations.
Among other information, it has been shown that entire syntax trees are
implicitly embedded in the geometry of such models. As these models are often
fine-tuned, it becomes increasingly important to understand how the encoded
knowledge evolves along the fine-tuning. In this paper, we analyze the
evolution of the embedded syntax trees along the fine-tuning process of BERT
for six different tasks, covering all levels of the linguistic structure.
Experimental results show that the encoded syntactic information is forgotten
(PoS tagging), reinforced (dependency and constituency parsing) or preserved
(semantics-related tasks) in different ways along the fine-tuning process
depending on the task.
| 2,021 |
Computation and Language
|
Mining Large-Scale Low-Resource Pronunciation Data From Wikipedia
|
Pronunciation modeling is a key task for building speech technology in new
languages, and while solid grapheme-to-phoneme (G2P) mapping systems exist,
language coverage can stand to be improved. The information needed to build G2P
models for many more languages can easily be found on Wikipedia, but
unfortunately, it is stored in disparate formats. We report on a system we
built to mine a pronunciation data set in 819 languages from loosely structured
tables within Wikipedia. The data includes phoneme inventories, and for 63
low-resource languages, also includes the grapheme-to-phoneme (G2P) mapping. 54
of these languages do not have easily findable G2P mappings online otherwise.
We turned the information from Wikipedia into a structured, machine-readable
TSV format, and make the resulting data set publicly available so it can be
improved further and used in a variety of applications involving low-resource
languages.
| 2,021 |
Computation and Language
|
Transformer Based Deliberation for Two-Pass Speech Recognition
|
Interactive speech recognition systems must generate words quickly while also
producing accurate results. Two-pass models excel at these requirements by
employing a first-pass decoder that quickly emits words, and a second-pass
decoder that requires more context but is more accurate. Previous work has
established that a deliberation network can be an effective second-pass model.
The model attends to two kinds of inputs at once: encoded audio frames and the
hypothesis text from the first-pass model. In this work, we explore using
transformer layers instead of long short-term memory (LSTM) layers for
deliberation rescoring. In transformer layers, we generalize the
"encoder-decoder" attention to attend to both encoded audio and first-pass text
hypotheses. The output context vectors are then combined by a merger layer.
Compared to LSTM-based deliberation, our best transformer deliberation achieves
7% relative word error rate improvements along with a 38% reduction in
computation. We also compare against non-deliberation transformer rescoring,
and find a 9% relative improvement.
| 2,021 |
Computation and Language
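Purely as a sketch of the "attend to both encoded audio and first-pass text" idea described above (not the authors' implementation), the following PyTorch module applies two cross-attention blocks in a decoder layer, one over acoustic encodings and one over first-pass hypothesis embeddings, and merges the two context vectors with a linear layer. All dimensions and the concatenation-based merger are assumptions; layer norms and dropout are omitted for brevity.

```python
import torch
import torch.nn as nn

class DeliberationLayer(nn.Module):
    """One decoder layer with two cross-attention sources and a merger."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.hypo_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.merge = nn.Linear(2 * d_model, d_model)   # merger layer
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                 nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, tgt, audio_enc, hypo_enc):
        x, _ = self.self_attn(tgt, tgt, tgt)
        # Generalized "encoder-decoder" attention over two input streams.
        ctx_audio, _ = self.audio_attn(x, audio_enc, audio_enc)
        ctx_hypo, _ = self.hypo_attn(x, hypo_enc, hypo_enc)
        merged = self.merge(torch.cat([ctx_audio, ctx_hypo], dim=-1))
        return x + self.ffn(merged)

if __name__ == "__main__":
    layer = DeliberationLayer()
    tgt = torch.randn(2, 10, 256)        # second-pass decoder states
    audio = torch.randn(2, 200, 256)     # encoded audio frames
    hypo = torch.randn(2, 30, 256)       # embedded first-pass hypothesis
    print(layer(tgt, audio, hypo).shape)  # torch.Size([2, 10, 256])
```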
|
Knowledge-driven Natural Language Understanding of English Text and its
Applications
|
Understanding the meaning of a text is a fundamental challenge of natural
language understanding (NLU) research. An ideal NLU system should process a
language in a way that is not exclusive to a single task or a dataset. Keeping
this in mind, we have introduced a novel knowledge driven semantic
representation approach for English text. By leveraging the VerbNet lexicon, we
are able to map syntax tree of the text to its commonsense meaning represented
using basic knowledge primitives. The general purpose knowledge represented
from our approach can be used to build any reasoning based NLU system that can
also provide justification. We applied this approach to construct two NLU
applications that we present here: SQuARE (Semantic-based Question Answering
and Reasoning Engine) and StaCACK (Stateful Conversational Agent using
Commonsense Knowledge). Both these systems work by "truly understanding" the
natural language text they process and both provide natural language
explanations for their responses while maintaining high accuracy.
| 2,021 |
Computation and Language
|
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language
Generation
|
Recent advances in deep learning techniques have enabled machines to generate
cohesive open-ended text when prompted with a sequence of words as context.
While these models now empower many downstream applications from conversation
bots to automatic storytelling, they have been shown to generate texts that
exhibit social biases. To systematically study and benchmark social biases in
open-ended language generation, we introduce the Bias in Open-Ended Language
Generation Dataset (BOLD), a large-scale dataset that consists of 23,679
English text generation prompts for bias benchmarking across five domains:
profession, gender, race, religion, and political ideology. We also propose new
automated metrics for toxicity, psycholinguistic norms, and text gender
polarity to measure social biases in open-ended text generation from multiple
angles. An examination of text generated from three popular language models
reveals that the majority of these models exhibit a larger social bias than
human-written Wikipedia text across all domains. With these results we
highlight the need to benchmark biases in open-ended language generation and
caution users of language generation models on downstream tasks to be cognizant
of these embedded prejudices.
| 2,021 |
Computation and Language
|
Compositionality Through Language Transmission, using Artificial Neural
Networks
|
We propose an architecture and process for using the Iterated Learning Model
("ILM") for artificial neural networks. We show that ILM does not lead to the
same clear compositionality as observed using DCGs, but does lead to a modest
improvement in compositionality, as measured by holdout accuracy and topologic
similarity. We show that ILM can lead to an anti-correlation between holdout
accuracy and topologic rho. We demonstrate that ILM can increase
compositionality when using non-symbolic high-dimensional images as input.
| 2,021 |
Computation and Language
|
ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification
|
Practical sequence classification tasks in natural language processing often
suffer from low training data availability for target classes. Recent works
towards mitigating this problem have focused on transfer learning using
embeddings pre-trained on often unrelated tasks, for instance, language
modeling. We adopt an alternative approach by transfer learning on an ensemble
of related tasks using prototypical networks under the meta-learning paradigm.
Using intent classification as a case study, we demonstrate that increasing
variability in training tasks can significantly improve classification
performance. Further, we apply data augmentation in conjunction with
meta-learning to reduce sampling bias. We make use of a conditional generator
for data augmentation that is trained directly using the meta-learning
objective and simultaneously with prototypical networks, hence ensuring that
data augmentation is customized to the task. We explore augmentation in the
sentence embedding space as well as prototypical embedding space. Combining
meta-learning with augmentation provides up to 6.49% and 8.53% relative
F1-score improvements over the best-performing systems in 5-shot and 10-shot
learning, respectively.
| 2,021 |
Computation and Language
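To make the prototypical-network component concrete, here is a minimal, hedged PyTorch sketch of episodic classification: class prototypes are the mean support embeddings and queries are scored by negative squared Euclidean distance. The embedding dimension and episode sizes are placeholders, and ProtoDA's conditional generator for data augmentation is not shown.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support_emb, support_labels, query_emb, query_labels):
    """Episode loss for prototypical networks.
    support_emb: (n_support, d), query_emb: (n_query, d)."""
    classes = support_labels.unique()
    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes])
    # Negative squared Euclidean distance acts as the logits.
    logits = -torch.cdist(query_emb, prototypes).pow(2)
    # Remap query labels to indices into `classes`.
    targets = torch.stack([(classes == y).nonzero(as_tuple=True)[0][0]
                           for y in query_labels])
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    d = 128
    support = torch.randn(5 * 3, d)                  # 3-way, 5-shot support set
    support_y = torch.arange(3).repeat_interleave(5)
    query = torch.randn(3 * 4, d)                    # 4 queries per class
    query_y = torch.arange(3).repeat_interleave(4)
    print(prototypical_loss(support, support_y, query, query_y).item())
```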
|
Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning
|
Neural Module Networks (NMNs) have been quite successful in incorporating
explicit reasoning as learnable modules in various question answering tasks,
including the most generic form of numerical reasoning over text in Machine
Reading Comprehension (MRC). However, to achieve this, contemporary NMNs need
strong supervision in executing the query as a specialized program over
reasoning modules and fail to generalize to more open-ended settings without
such supervision. Hence we propose Weakly-Supervised Neuro-Symbolic Module
Network (WNSMN) trained with answers as the sole supervision for numerical
reasoning based MRC. It learns to execute a noisy heuristic program obtained
from the dependency parsing of the query, as discrete actions over both neural
and symbolic reasoning modules and trains it end-to-end in a reinforcement
learning framework with discrete reward from answer matching. On the
numerical-answer subset of DROP, WNSMN outperforms NMN by 32% and the
reasoning-free language model GenBERT by 8% in exact match accuracy when
trained under comparable weak supervised settings. This showcases the
effectiveness and generalizability of modular networks that can handle explicit
discrete reasoning over noisy programs in an end-to-end manner.
| 2,021 |
Computation and Language
|
DRAG: Director-Generator Language Modelling Framework for Non-Parallel
Author Stylized Rewriting
|
Author stylized rewriting is the task of rewriting an input text in a
particular author's style. Recent works in this area have leveraged
Transformer-based language models in a denoising autoencoder setup to generate
author stylized text without relying on a parallel corpus of data. However,
these approaches are limited by the lack of explicit control of target
attributes and being entirely data-driven. In this paper, we propose a
Director-Generator framework to rewrite content in the target author's style,
specifically focusing on certain target attributes. We show that our proposed
framework works well even with a limited-sized target author corpus. Our
experiments on corpora consisting of relatively small-sized text authored by
three distinct authors show significant improvements upon existing works to
rewrite input texts in target author's style. Our quantitative and qualitative
analyses further show that our model has better meaning retention and results
in more fluent generations.
| 2,021 |
Computation and Language
|
Does Typological Blinding Impede Cross-Lingual Sharing?
|
Bridging the performance gap between high- and low-resource languages has
been the focus of much previous work. Typological features from databases such
as the World Atlas of Language Structures (WALS) are a prime candidate for
this, as such data exists even for very low-resource languages. However,
previous work has only found minor benefits from using typological information.
Our hypothesis is that a model trained in a cross-lingual setting will pick up
on typological cues from the input data, thus overshadowing the utility of
explicitly using such features. We verify this hypothesis by blinding a model
to typological information, and investigate how cross-lingual sharing and
performance is impacted. Our model is based on a cross-lingual architecture in
which the latent weights governing the sharing between languages are learnt
during training. We show that (i) preventing this model from exploiting
typology severely reduces performance, while a control experiment reaffirms
that (ii) encouraging sharing according to typology somewhat improves
performance.
| 2,021 |
Computation and Language
|
Explaining Natural Language Processing Classifiers with Occlusion and
Language Modeling
|
Deep neural networks are powerful statistical learners. However, their
predictions do not come with an explanation of their process. To analyze these
models, explanation methods are being developed. We present a novel explanation
method, called OLM, for natural language processing classifiers. This method
combines occlusion and language modeling, which are techniques central to
explainability and NLP, respectively. OLM gives explanations that are
theoretically sound and easy to understand.
We make several contributions to the theory of explanation methods. Axioms
for explanation methods are an interesting theoretical concept to explore their
basics and deduce methods. We introduce a new axiom, give its intuition and
show it contradicts another existing axiom. Additionally, we point out
theoretical difficulties of existing gradient-based and some occlusion-based
explanation methods in natural language processing. We provide an extensive
argument why evaluation of explanation methods is difficult. We compare OLM to
other explanation methods and underline its uniqueness experimentally. Finally,
we investigate corner cases of OLM and discuss its validity and possible
improvements.
| 2,021 |
Computation and Language
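As an illustration of combining occlusion with language modeling (a hedged sketch inspired by the description above, not the authors' OLM code), the snippet below masks one word, samples plausible replacements from a masked language model, and measures how the classifier's score shifts. The model names and the simple averaging scheme are assumptions; for simplicity it tracks the top-label score, whereas a fuller implementation would track the probability of the originally predicted class.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")
classifier = pipeline("sentiment-analysis")

def occlusion_lm_relevance(words, position, top_k=5):
    """Attribute a word's relevance as the drop in the classifier score
    when it is replaced by language-model-sampled alternatives."""
    original = classifier(" ".join(words))[0]["score"]
    masked = words.copy()
    masked[position] = fill_mask.tokenizer.mask_token
    candidates = fill_mask(" ".join(masked), top_k=top_k)
    # Average the classifier score over LM-plausible replacements.
    replaced_scores = []
    for cand in candidates:
        variant = words.copy()
        variant[position] = cand["token_str"].strip()
        replaced_scores.append(classifier(" ".join(variant))[0]["score"])
    return original - sum(replaced_scores) / len(replaced_scores)

if __name__ == "__main__":
    sentence = "the movie was absolutely wonderful".split()
    for i, w in enumerate(sentence):
        print(f"{w:>12s}: {occlusion_lm_relevance(sentence, i):+.3f}")
```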
|
LESA: Linguistic Encapsulation and Semantic Amalgamation Based
Generalised Claim Detection from Online Content
|
The conceptualization of a claim lies at the core of argument mining. The
segregation of claims is complex, owing to the divergence in textual syntax and
context across different distributions. Another pressing issue is the
unavailability of labeled unstructured text for experimentation. In this paper,
we propose LESA, a framework which aims to tackle the former issue by
assembling a source-independent generalized model that
captures syntactic features through part-of-speech and dependency embeddings,
as well as contextual features through a fine-tuned language model. We resolve
the latter issue by annotating a Twitter dataset which aims at providing a
testing ground on a large unstructured dataset. Experimental results show that
LESA improves upon the state-of-the-art performance across six benchmark claim
datasets by an average of 3 claim-F1 points for in-domain experiments and by 2
claim-F1 points for general-domain experiments. On our dataset too, LESA
outperforms existing baselines by 1 claim-F1 point on the in-domain experiments
and 2 claim-F1 points on the general-domain experiments. We also release
comprehensive data annotation guidelines compiled during the annotation phase
(which was missing in the current literature).
| 2,021 |
Computation and Language
|
The Role of Syntactic Planning in Compositional Image Captioning
|
Image captioning has focused on generalizing to images drawn from the same
distribution as the training set, and not to the more challenging problem of
generalizing to different distributions of images. Recently, Nikolaus et al.
(2019) introduced a dataset to assess compositional generalization in image
captioning, where models are evaluated on their ability to describe images with
unseen adjective-noun and noun-verb compositions. In this work, we investigate
different methods to improve compositional generalization by planning the
syntactic structure of a caption. Our experiments show that jointly modeling
tokens and syntactic tags enhances generalization in both RNN- and
Transformer-based models, while also improving performance on standard metrics.
| 2,021 |
Computation and Language
|
Identifying COVID-19 Fake News in Social Media
|
The evolution of social media platforms has empowered everyone to access
information easily. Social media users can readily share information with the
rest of the world, which may sometimes encourage the spread of fake news and
result in undesirable consequences. In this work, we train models which can
identify health news related to the COVID-19 pandemic as real or fake. Our
models achieve a high F1-score of 98.64% and place second on the leaderboard,
trailing the first position by a very narrow margin of 0.05 percentage points.
| 2,021 |
Computation and Language
|
Us vs. Them: A Dataset of Populist Attitudes, News Bias and Emotions
|
Computational modelling of political discourse tasks has become an
increasingly important area of research in natural language processing.
Populist rhetoric has risen across the political sphere in recent years;
however, computational approaches to it have been scarce due to its complex
nature. In this paper, we present the new $\textit{Us vs. Them}$ dataset,
consisting of 6861 Reddit comments annotated for populist attitudes and the
first large-scale computational models of this phenomenon. We investigate the
relationship between populist mindsets and social groups, as well as a range of
emotions typically associated with these. We set a baseline for two tasks
related to populist attitudes and present a set of multi-task learning models
that leverage and demonstrate the importance of emotion and group
identification as auxiliary tasks.
| 2,022 |
Computation and Language
|
Attention Guided Dialogue State Tracking with Sparse Supervision
|
Existing approaches to Dialogue State Tracking (DST) rely on turn level
dialogue state annotations, which are expensive to acquire in large scale. In
call centers, for tasks like managing bookings or subscriptions, the user goal
can be associated with actions (e.g.~API calls) issued by customer service
agents. These action logs are available in large volumes and can be utilized
for learning dialogue states. However, unlike turn-level annotations, such
logged actions are only available sparsely across the dialogue, providing only
a form of weak supervision for DST models. To efficiently learn DST with sparse
labels, we extend a state-of-the-art encoder-decoder model. The model learns a
slot-aware representation of dialogue history, which focuses on relevant turns
to guide the decoder. We present results on two public multi-domain DST
datasets (MultiWOZ and Schema Guided Dialogue) in both settings i.e. training
with turn-level and with sparse supervision. The proposed approach improves
over baseline in both settings. More importantly, our model trained with sparse
supervision is competitive in performance to fully supervised baselines, while
being more data and cost efficient.
| 2,021 |
Computation and Language
|
Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration
|
Standard models for syntactic dependency parsing take words to be the
elementary units that enter into dependency relations. In this paper, we
investigate whether there are any benefits from enriching these models with the
more abstract notion of nucleus proposed by Tesni\`{e}re. We do this by showing
how the concept of nucleus can be defined in the framework of Universal
Dependencies and how we can use composition functions to make a
transition-based dependency parser aware of this concept. Experiments on 12
languages show that nucleus composition gives small but significant
improvements in parsing accuracy. Further analysis reveals that the improvement
mainly concerns a small number of dependency relations, including nominal
modifiers, relations of coordination, main predicates, and direct objects.
| 2,021 |
Computation and Language
|
Semi-automatic Generation of Multilingual Datasets for Stance Detection
in Twitter
|
Popular social media networks provide the perfect environment to study the
opinions and attitudes expressed by users. While interactions in social media
such as Twitter occur in many natural languages, research on stance detection
(the position or attitude expressed with respect to a specific topic) within
the Natural Language Processing field has largely been done for English.
Although some efforts have recently been made to develop annotated data in
other languages, there is a telling lack of resources to facilitate
multilingual and crosslingual research on stance detection. This is partially
due to the fact that manually annotating a corpus of social media texts is a
difficult, slow and costly process. Furthermore, as stance is a highly domain-
and topic-specific phenomenon, the need for annotated data is especially
demanding. As a result, most of the manually labeled resources are hindered by
their relatively small size and skewed class distribution. This paper presents
a method to obtain multilingual datasets for stance detection in Twitter.
Instead of manually annotating on a per tweet basis, we leverage user-based
information to semi-automatically label large amounts of tweets. Empirical
monolingual and cross-lingual experimentation and qualitative analysis show
that our method helps to overcome the aforementioned difficulties to build
large, balanced and multilingual labeled corpora. We believe that our method
can be easily adapted to generate labeled social media data for other
Natural Language Processing tasks and domains.
| 2,021 |
Computation and Language
|
BERTa\'u: Ita\'u BERT for digital customer service
|
In the last few years, three major topics received increased interest: deep
learning, NLP and conversational agents. Bringing these three topics together
to create an amazing digital customer experience, deploy it in production, and
solve real-world problems is something innovative and
disruptive. We introduce a new Portuguese financial domain language
representation model called BERTa\'u. BERTa\'u is an uncased BERT-base trained
from scratch with data from the Ita\'u virtual assistant chatbot solution. Our
novel contribution is that BERTa\'u pretrained language model requires less
data, reached state-of-the-art performance in three NLP tasks, and generates a
smaller and lighter model that makes the deployment feasible. We developed
three tasks to validate our model: information retrieval with Frequently Asked
Questions (FAQ) from Ita\'u bank, sentiment analysis from our virtual assistant
data, and an NER solution. All proposed tasks are real-world solutions in
production in our environment, and the usage of a specialist model proved to be
effective when compared to Google BERT multilingual and the DPRQuestionEncoder
from Facebook, available on Hugging Face. BERTa\'u improves the FAQ retrieval
MRR metric by 22%, the sentiment analysis F1 score by 2.1%, and the NER F1
score by 4.4%, and can also represent the same sequence in up to 66% fewer
tokens when compared to off-the-shelf models.
| 2,021 |
Computation and Language
|
A transformer based approach for fighting COVID-19 fake news
|
The rapid outbreak of COVID-19 has caused humanity to come to a stand-still
and brought with it a plethora of other problems. COVID-19 is the first
pandemic in history during which humanity is at its most technologically
advanced and relies heavily on social media platforms for connectivity and
other benefits. Unfortunately, fake news and misinformation regarding this
virus are also reaching people and causing massive problems, so fighting this
infodemic has become a significant challenge. We present our solution for the
"Constraint@AAAI2021 - COVID19 Fake News Detection in English" challenge in
this work. After extensive experimentation with numerous architectures and
techniques, we use eight different transformer-based pre-trained models with
additional layers to construct a stacking ensemble classifier and fine-tune
them for our purpose. We achieved 0.979906542 accuracy, 0.979913119 precision,
0.979906542 recall, and 0.979907901 f1-score on the test dataset of the
competition.
| 2,021 |
Computation and Language
|
Enhancing Sequence-to-Sequence Neural Lemmatization with External
Resources
|
We propose a novel hybrid approach to lemmatization that enhances the seq2seq
neural model with additional lemmas extracted from an external lexicon or a
rule-based system. During training, the enhanced lemmatizer learns both to
generate lemmas via a sequential decoder and copy the lemma characters from the
external candidates supplied during run-time. Our lemmatizer enhanced with
candidates extracted from the Apertium morphological analyzer achieves
statistically significant improvements compared to baseline models not
utilizing additional lemma information, reaching an average accuracy of 97.25%
on a set of 23 UD languages, which is 0.55% higher than that obtained with the
Stanford Stanza model on the same set of languages. We also compare with other
methods of integrating external data into lemmatization and show that our
enhanced system performs considerably better than a simple lexicon extension
method based on the Stanza system, and it achieves complementary improvements
w.r.t. the data augmentation method.
| 2,022 |
Computation and Language
|
A Neural Few-Shot Text Classification Reality Check
|
Modern classification models tend to struggle when the amount of annotated
data is scarce. To overcome this issue, several neural few-shot classification
models have emerged, yielding significant progress over time, both in Computer
Vision and Natural Language Processing. In the latter, such models used to rely
on fixed word embeddings before the advent of transformers. Additionally, some
models used in Computer Vision are yet to be tested in NLP applications. In
this paper, we compare all these models, first adapting those made in the field
of image processing to NLP, and second providing them access to transformers.
We then test these models equipped with the same transformer-based encoder on
the intent detection task, known for having a large number of classes. Our
results reveal that while methods perform almost equally on the ARSC dataset,
this is not the case for the Intent Detection task, where the most recent and
supposedly best competitors perform worse than older and simpler ones (while
all are given access to transformers). We also show that a simple baseline is
surprisingly strong. All the newly developed models, as well as the evaluation
framework, are made publicly available.
| 2,021 |
Computation and Language
|
Modeling Context in Answer Sentence Selection Systems on a Latency
Budget
|
Answer Sentence Selection (AS2) is an efficient approach for the design of
open-domain Question Answering (QA) systems. In order to achieve low latency,
traditional AS2 models score question-answer pairs individually, ignoring any
information from the document each potential answer was extracted from. In
contrast, more computationally expensive models designed for machine reading
comprehension tasks typically receive one or more passages as input, which
often results in better accuracy. In this work, we present an approach to
efficiently incorporate contextual information in AS2 models. For each answer
candidate, we first use unsupervised similarity techniques to extract relevant
sentences from its source document, which we then feed into an efficient
transformer architecture fine-tuned for AS2. Our best approach, which leverages
a multi-way attention architecture to efficiently encode context, improves 6%
to 11% over the non-contextual state of the art in AS2 with minimal impact on system
latency. All experiments in this work were conducted in English.
| 2,021 |
Computation and Language
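A hedged sketch of the unsupervised context-selection step described above (not the authors' exact pipeline): rank the sentences of the source document by TF-IDF cosine similarity to the question-answer pair and keep the top few as context for a transformer-based AS2 reranker. The value of k and the choice of TF-IDF similarity are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_context(question: str, candidate: str, doc_sentences, k: int = 2):
    """Pick the k document sentences most similar to the QA pair."""
    query = question + " " + candidate
    vectorizer = TfidfVectorizer().fit(doc_sentences + [query])
    doc_vecs = vectorizer.transform(doc_sentences)
    sims = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    ranked = sorted(range(len(doc_sentences)), key=lambda i: sims[i], reverse=True)
    return [doc_sentences[i] for i in ranked[:k]]

if __name__ == "__main__":
    doc = [
        "The Eiffel Tower was completed in 1889.",
        "It was built for the World's Fair held in Paris.",
        "The tower is repainted every seven years.",
    ]
    ctx = select_context("When was the Eiffel Tower built?",
                         "The Eiffel Tower was completed in 1889.", doc)
    # The selected sentences would be appended to the question-answer pair
    # before feeding an efficient AS2 transformer.
    print(ctx)
```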
|
LOME: Large Ontology Multilingual Extraction
|
We present LOME, a system for performing multilingual information extraction.
Given a text document as input, our core system identifies spans of textual
entity and event mentions with a FrameNet (Baker et al., 1998) parser. It
subsequently performs coreference resolution, fine-grained entity typing, and
temporal relation prediction between events. By doing so, the system constructs
an event and entity focused knowledge graph. We can further apply third-party
modules for other types of annotation, like relation extraction. Our
(multilingual) first-party modules either outperform or are competitive with
the (monolingual) state-of-the-art. We achieve this through the use of
multilingual encoders like XLM-R (Conneau et al., 2020) and leveraging
multilingual training data. LOME is available as a Docker container on Docker
Hub. In addition, a lightweight version of the system is accessible as a web
demo.
| 2,021 |
Computation and Language
|
Combining pre-trained language models and structured knowledge
|
In recent years, transformer-based language models have achieved state of the
art performance in various NLP benchmarks. These models are able to extract
mostly distributional information with some semantics from unstructured text,
however it has proven challenging to integrate structured information, such as
knowledge graphs into these models. We examine a variety of approaches to
integrate structured knowledge into current language models and determine
challenges, and possible opportunities to leverage both structured and
unstructured information sources. From our survey, we find that there are still
opportunities in exploiting adapter-based injections and that it may be
possible to further combine several of the explored approaches into one system.
| 2,021 |
Computation and Language
|
Few-Shot Domain Adaptation for Grammatical Error Correction via
Meta-Learning
|
Most existing Grammatical Error Correction (GEC) methods based on
sequence-to-sequence mainly focus on how to generate more pseudo data to obtain
better performance. Little work addresses few-shot GEC domain adaptation. In this
paper, we treat different GEC domains as different GEC tasks and propose to
extend meta-learning to few-shot GEC domain adaptation without using any pseudo
data. We exploit a set of data-rich source domains to learn the initialization
of model parameters that facilitates fast adaptation on new resource-poor
target domains. We adapt GEC model to the first language (L1) of the second
language learner. To evaluate the proposed method, we use nine L1s as source
domains and five L1s as target domains. Experiment results on the L1 GEC domain
adaptation dataset demonstrate that the proposed approach outperforms the
multi-task transfer learning baseline by 0.50 $F_{0.5}$ score on average and
enables us to effectively adapt to a new L1 domain with only 200 parallel
sentences.
| 2,021 |
Computation and Language
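The abstract describes learning an initialization from data-rich source L1 domains that adapts quickly to a new L1. As a hedged illustration (the paper extends meta-learning, though not necessarily this exact variant), below is a first-order, Reptile-style meta-training loop over domain tasks in PyTorch; the model, domain-loader interface, toy data, and hyperparameters are all placeholders.

```python
import copy
import random
import torch
import torch.nn.functional as F

def reptile_meta_train(model, domain_loaders, loss_fn,
                       meta_steps=200, inner_steps=5,
                       inner_lr=1e-2, meta_lr=0.1):
    """First-order meta-learning: repeatedly adapt a clone to one source
    domain, then move the meta-parameters toward the adapted weights."""
    for _ in range(meta_steps):
        task = random.choice(domain_loaders)     # sample a source domain
        adapted = copy.deepcopy(model)
        opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for batch, target in task(inner_steps):  # a few inner updates
            opt.zero_grad()
            loss_fn(adapted(batch), target).backward()
            opt.step()
        # Meta-update: theta <- theta + meta_lr * (theta_adapted - theta)
        with torch.no_grad():
            for p, p_adapted in zip(model.parameters(), adapted.parameters()):
                p.add_(meta_lr * (p_adapted - p))
    return model

if __name__ == "__main__":
    # Toy demonstration with synthetic regression "domains".
    def make_domain(slope):
        def loader(n_batches):
            for _ in range(n_batches):
                x = torch.randn(16, 1)
                yield x, slope * x
        return loader

    model = torch.nn.Linear(1, 1)
    domains = [make_domain(s) for s in (1.0, 2.0, 3.0)]
    reptile_meta_train(model, domains, F.mse_loss)
    print(model.weight.data)   # initialization sits between the domains
```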
|
Synthesizing Monolingual Data for Neural Machine Translation
|
In neural machine translation (NMT), monolingual data in the target language
are usually exploited through the so-called "back-translation" method to
synthesize additional training parallel data. The synthetic data have been
shown helpful to train better NMT, especially for low-resource language pairs
and domains. Nonetheless, large monolingual data in the target domains or
languages are not always available to generate large synthetic parallel data.
In this work, we propose a new method to generate large synthetic parallel data
leveraging very small monolingual data in a specific domain. We fine-tune a
pre-trained GPT-2 model on such small in-domain monolingual data and use the
resulting model to generate a large amount of synthetic in-domain monolingual
data. Then, we perform back-translation, or forward translation, to generate
synthetic in-domain parallel data. Our preliminary experiments on three
language pairs and five domains show the effectiveness of our method to
generate fully synthetic but useful in-domain parallel data for improving NMT
in all configurations. We also show promising results in extreme adaptation for
personalized NMT.
| 2,021 |
Computation and Language
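A hedged sketch of the generation step described above, assuming a GPT-2 checkpoint has already been fine-tuned on the small in-domain corpus; the checkpoint path, prompt, and sampling settings are illustrative only, and the subsequent back-translation or forward-translation step is not shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed to be a GPT-2 checkpoint fine-tuned on a small amount of
# in-domain monolingual data (placeholder path).
model_name = "path/to/gpt2-finetuned-in-domain"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate_monolingual(prompt: str, n: int = 8, max_new_tokens: int = 40):
    """Sample synthetic in-domain sentences to be back-translated later."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

if __name__ == "__main__":
    for sentence in generate_monolingual("The patient was admitted with"):
        print(sentence)
```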
|
Does injecting linguistic structure into language models lead to better
alignment with brain recordings?
|
Neuroscientists evaluate deep neural networks for natural language processing
as possible candidate models for how language is processed in the brain. These
models are often trained without explicit linguistic supervision, but have been
shown to learn some linguistic structure in the absence of such supervision
(Manning et al., 2020), potentially questioning the relevance of symbolic
linguistic theories in modeling such cognitive processes (Warstadt and Bowman,
2020). We evaluate across two fMRI datasets whether language models align
better with brain recordings, if their attention is biased by annotations from
syntactic or semantic formalisms. Using structure from dependency or minimal
recursion semantic annotations, we find alignments improve significantly for
one of the datasets. For another dataset, we see more mixed results. We present
an extensive analysis of these results. Our proposed approach enables the
evaluation of more targeted hypotheses about the composition of meaning in the
brain, expanding the range of possible scientific inferences a neuroscientist
could make, and opens up new opportunities for cross-pollination between
computational neuroscience and linguistics.
| 2,021 |
Computation and Language
|
CD2CR: Co-reference Resolution Across Documents and Domains
|
Cross-document co-reference resolution (CDCR) is the task of identifying and
linking mentions to entities and concepts across many text documents. Current
state-of-the-art models for this task assume that all documents are of the same
type (e.g. news articles) or fall under the same theme. However, it is also
desirable to perform CDCR across different domains (type or theme). A
particular use case we focus on in this paper is the resolution of entities
mentioned across scientific work and newspaper articles that discuss them.
Identifying the same entities and corresponding concepts in both scientific
articles and news can help scientists understand how their work is represented
in mainstream media. We propose a new task and English language dataset for
cross-document cross-domain co-reference resolution (CD$^2$CR). The task aims
to identify links between entities across heterogeneous document types. We show
that in this cross-domain, cross-document setting, existing CDCR models do not
perform well and we provide a baseline model that outperforms current
state-of-the-art CDCR models on CD$^2$CR. Our data set, annotation tool and
guidelines as well as our model for cross-document cross-domain co-reference
are all supplied as open access open source resources.
| 2,021 |
Computation and Language
|
Enhancing the Transformer Decoder with Transition-based Syntax
|
Notwithstanding recent advances, syntactic generalization remains a challenge
for text decoders. While some studies showed gains from incorporating
source-side symbolic syntactic and semantic structure into text generation
Transformers, very little work addressed the decoding of such structure. We
propose a general approach for tree decoding using a transition-based approach.
Examining the challenging test case of incorporating Universal Dependencies
syntax into machine translation, we present substantial improvements on test
sets that focus on syntactic generalization, while presenting improved or
comparable performance on standard MT benchmarks. Further qualitative analysis
addresses cases where syntactic generalization in the vanilla Transformer
decoder is inadequate and demonstrates the advantages afforded by integrating
syntactic information.
| 2,022 |
Computation and Language
|
NLPBK at VLSP-2020 shared task: Compose transformer pretrained models
for Reliable Intelligence Identification on Social network
|
This paper describes our method for tuning a transformer-based pretrained
model to adapt it to the Reliable Intelligence Identification problem on
Vietnamese SNSs. We also propose a model that combines BERT-base pretrained
models with metadata features, such as the number of comments, number of
likes, images of SNS documents, etc., to improve results for the VLSP shared
task: Reliable Intelligence Identification on Vietnamese SNSs. With appropriate
training techniques, our model achieves 0.9392 ROC-AUC on the public test set,
and the final version ranks second with 0.9513 ROC-AUC on the private test
set.
| 2,021 |
Computation and Language
|
Challenges in Automated Debiasing for Toxic Language Detection
|
Biased associations have been a challenge in the development of classifiers
for detecting toxic language, hindering both fairness and accuracy. As
potential solutions, we investigate recently introduced debiasing methods for
text classification datasets and models, as applied to toxic language
detection. Our focus is on lexical (e.g., swear words, slurs, identity
mentions) and dialectal markers (specifically African American English). Our
comprehensive experiments establish that existing methods are limited in their
ability to prevent biased behavior in current toxicity detectors. We then
propose an automatic, dialect-aware data correction method, as a
proof-of-concept. Despite the use of synthetic labels, this method reduces
dialectal associations with toxicity. Overall, our findings show that debiasing
a model trained on biased toxic language data is not as effective as simply
relabeling the data to remove existing biases.
| 2,021 |
Computation and Language
|
Can We Automate Scientific Reviewing?
|
The rapid development of science and technology has been accompanied by an
exponential growth in peer-reviewed scientific publications. At the same time,
the review of each paper is a laborious process that must be carried out by
subject matter experts. Thus, providing high-quality reviews of this growing
number of papers is a significant challenge. In this work, we ask the question
"can we automate scientific reviewing?", discussing the possibility of using
state-of-the-art natural language processing (NLP) models to generate
first-pass peer reviews for scientific papers. Arguably the most difficult part
of this is defining what a "good" review is in the first place, so we first
discuss possible evaluation measures for such reviews. We then collect a
dataset of papers in the machine learning domain, annotate them with different
aspects of content covered in each review, and train targeted summarization
models that take in papers to generate reviews. Comprehensive experimental
results show that system-generated reviews tend to touch upon more aspects of
the paper than human-written reviews, but the generated text can suffer from
lower constructiveness for all aspects except the explanation of the core ideas
of the papers, which are largely factually correct. We finally summarize eight
challenges in the pursuit of a good review generation system together with
potential solutions, which, hopefully, will inspire more future research on
this subject. We make all code and the dataset publicly available:
https://github.com/neulab/ReviewAdvisor, as well as a ReviewAdvisor system:
http://review.nlpedia.ai/.
| 2,021 |
Computation and Language
|
Taxonomic survey of Hindi Language NLP systems
|
Natural Language Processing (NLP) represents the task of automatic handling
of natural human language by machines. There is a large spectrum of possible
NLP applications that help automate tasks such as translating text from one
language to another, retrieving and summarizing data from very large
repositories, filtering spam email, identifying fake news in digital media,
determining the sentiment and feedback of people, finding political opinions
and views of people on various government policies, and providing effective
medical assistance based on a patient's history records. Hindi is the official
language of India, with nearly 691 million users in India and 366 million in
the rest of the world.
At present, a number of government and private sector projects and researchers
in India and abroad, are working towards developing NLP applications and
resources for Indian languages. This survey gives a report of the resources and
applications available for Hindi language NLP.
| 2,021 |
Computation and Language
|
Learning From How Humans Correct
|
In industrial NLP applications, our manually labeled data contain a certain
amount of noise. We present a simple method to find the noisy data and relabel
them manually, while collecting the correction information. We then present a
novel method to incorporate this human correction information into a deep
learning model. Humans know how to correct noisy data, so the correction
information can be injected into the deep learning model. We experiment on our
own text classification dataset, which is manually labeled, because we need to
relabel the noisy data in our dataset for our industrial application. The
experimental results show that our learn-on-correction method improves the
classification accuracy from 91.7% to 92.5% on the test dataset. The 91.7%
accuracy is obtained by training on the corrected dataset, which improves the
baseline from 83.3% to 91.7% on the test dataset. The accuracy under human
evaluation exceeds 97%.
| 2,024 |
Computation and Language
|
ShufText: A Simple Black Box Approach to Evaluate the Fragility of Text
Classification Models
|
Text classification is the most basic natural language processing task. It
has a wide range of applications ranging from sentiment analysis to topic
classification. Recently, deep learning approaches based on CNN, LSTM, and
Transformers have been the de facto approach for text classification. In this
work, we highlight a common issue associated with these approaches. We show
that these systems are over-reliant on the important words present in the text
that are useful for classification. With limited training data and
discriminative training strategy, these approaches tend to ignore the semantic
meaning of the sentence and rather just focus on keywords or important n-grams.
We propose a simple black box technique, ShufText, to expose the shortcomings
of the model and identify its over-reliance on keywords. This
involves randomly shuffling the words in a sentence and evaluating the
classification accuracy. We see that on common text classification datasets
there is very little effect of shuffling and with high probability these models
predict the original class. We also evaluate the effect of language model
pretraining on these models and try to answer questions around model robustness
to out of domain sentences. We show that simple models based on CNN or LSTM as
well as complex models like BERT are questionable in terms of their syntactic
and semantic understanding.
| 2,022 |
Computation and Language
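A minimal sketch of the shuffling check described above (the classifier is a placeholder toy; ShufText's exact protocol and datasets are not reproduced here): shuffle the words of each test sentence and measure how often the predicted class stays the same.

```python
import random

def shuffled(sentence: str, seed: int = 0) -> str:
    """Destroy word order while keeping the bag of words intact."""
    words = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

def prediction_stability(classify, sentences):
    """Fraction of sentences whose predicted label is unchanged after
    shuffling; high values suggest reliance on keywords, not syntax."""
    unchanged = sum(classify(s) == classify(shuffled(s)) for s in sentences)
    return unchanged / len(sentences)

if __name__ == "__main__":
    # Toy keyword-based "model" standing in for a trained classifier.
    def classify(text):
        return "positive" if "great" in text.lower() else "negative"

    data = ["The plot was great and the cast delivered",
            "A dull film with no redeeming qualities"]
    print(prediction_stability(classify, data))  # 1.0 -> fully order-insensitive
```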
|
Triple M: A Practical Text-to-speech Synthesis System With
Multi-guidance Attention And Multi-band Multi-time LPCNet
|
In this work, a robust and efficient text-to-speech (TTS) synthesis system
named Triple M is proposed for large-scale online application. The key
components of Triple M are: 1) A sequence-to-sequence model adopts a novel
multi-guidance attention to transfer complementary advantages from guiding
attention mechanisms to the basic attention mechanism without in-domain
performance loss and online service modification. Compared with single
attention mechanism, multi-guidance attention not only brings better
naturalness to long sentence synthesis, but also reduces the word error rate by
26.8%. 2) A new efficient multi-band multi-time vocoder framework, which
reduces the computational complexity from 2.8 to 1.0 GFLOP and speeds up LPCNet
by 2.75x on a single CPU.
| 2,021 |
Computation and Language
|
Machine Translationese: Effects of Algorithmic Bias on Linguistic
Complexity in Machine Translation
|
Recent studies in the field of Machine Translation (MT) and Natural Language
Processing (NLP) have shown that existing models amplify biases observed in the
training data. The amplification of biases in language technology has mainly
been examined with respect to specific phenomena, such as gender bias. In this
work, we go beyond the study of gender in MT and investigate how bias
amplification might affect language in a broader sense. We hypothesize that the
'algorithmic bias', i.e. an exacerbation of frequently observed patterns in
combination with a loss of less frequent ones, not only exacerbates societal
biases present in current datasets but could also lead to an artificially
impoverished language: 'machine translationese'. We assess the linguistic
richness (on a lexical and morphological level) of translations created by
different data-driven MT paradigms - phrase-based statistical (PB-SMT) and
neural MT (NMT). Our experiments show that there is a loss of lexical and
morphological richness in the translations produced by all investigated MT
paradigms for two language pairs (EN<=>FR and EN<=>ES).
| 2,021 |
Computation and Language
|
Fake it Till You Make it: Self-Supervised Semantic Shifts for
Monolingual Word Embedding Tasks
|
The use of language is subject to variation over time as well as across
social groups and knowledge domains, leading to differences even in the
monolingual scenario. Such variation in word usage is often called lexical
semantic change (LSC). The goal of LSC is to characterize and quantify language
variations with respect to word meaning, to measure how distinct two language
sources are (that is, people or language models). Because there is hardly any
data available for such a task, most solutions involve unsupervised methods to
align two embeddings and predict semantic change with respect to a distance
measure. To that end, we propose a self-supervised approach to model lexical
semantic change by generating training samples by introducing perturbations of
word vectors in the input corpora. We show that our method can be used for the
detection of semantic change with any alignment method. Furthermore, it can be
used to choose the landmark words to use in alignment and can lead to
substantial improvements over the existing techniques for alignment.
We illustrate the utility of our techniques using experimental results on
three different datasets, involving words with the same or different meanings.
Our methods not only provide significant improvements but also can lead to
novel findings for the LSC problem.
| 2,021 |
Computation and Language
|
If you've got it, flaunt it: Making the most of fine-grained sentiment
annotations
|
Fine-grained sentiment analysis attempts to extract sentiment holders,
targets and polar expressions and resolve the relationship between them, but
progress has been hampered by the difficulty of annotation. Targeted sentiment
analysis, on the other hand, is a more narrow task, focusing on extracting
sentiment targets and classifying their polarity. In this paper, we explore
whether incorporating holder and expression information can improve target
extraction and classification and perform experiments on eight English
datasets. We conclude that jointly predicting target and polarity BIO labels
improves target extraction, and that augmenting the input text with gold
expressions generally improves targeted polarity classification. This
highlights the potential importance of annotating expressions for fine-grained
sentiment datasets. At the same time, our results show that performance of
current models for predicting polar expressions is poor, hampering the benefit
of this information in practice.
| 2,021 |
Computation and Language
|
Contextualized Rewriting for Text Summarization
|
Extractive summarization suffers from irrelevance, redundancy and
incoherence. Existing work shows that abstractive rewriting for extractive
summaries can improve the conciseness and readability. These rewriting systems
consider extracted summaries as the only input, which is relatively focused but
can lose important background knowledge. In this paper, we investigate
contextualized rewriting, which ingests the entire original document. We
formalize contextualized rewriting as a seq2seq problem with group alignments,
introducing group tags as a solution to model the alignments, identifying
extracted summaries through content-based addressing. Results show that our
approach significantly outperforms non-contextualized rewriting systems without
requiring reinforcement learning, achieving strong improvements on ROUGE scores
upon multiple extractive summarizers.
| 2,021 |
Computation and Language
|
An Unsupervised Language-Independent Entity Disambiguation Method and
its Evaluation on the English and Persian Languages
|
Entity Linking is one of the essential tasks of information extraction and
natural language understanding. Entity linking mainly consists of two tasks:
recognition and disambiguation of named entities. Most studies address these
two tasks separately or focus only on one of them. Moreover, most of the
state-of-the-art entity linking algorithms are either supervised, and thus
perform poorly in the absence of annotated corpora, or language-dependent,
which makes them inappropriate for multi-lingual applications. In this paper, we
introduce an Unsupervised Language-Independent Entity Disambiguation (ULIED),
which utilizes a novel approach to disambiguate and link named entities.
Evaluation of ULIED on different English entity linking datasets as well as the
only available Persian dataset illustrates that ULIED in most of the cases
outperforms the state-of-the-art unsupervised multi-lingual approaches.
| 2,021 |
Computation and Language
|
BNLP: Natural language processing toolkit for Bengali language
|
BNLP is an open source language processing toolkit for the Bengali language,
providing tokenization, word embedding, POS tagging and NER tagging
facilities. BNLP offers pre-trained models with high accuracy for model-based
tokenization, embedding, POS tagging and NER tagging tasks for Bengali. The
BNLP pre-trained models achieve significant results on Bengali text
tokenization, word embedding, POS tagging and NER tagging tasks. BNLP is widely
used in the Bengali research community, with 16K downloads, 119 stars and 31
forks. BNLP is available at https://github.com/sagorbrur/bnlp.
| 2,021 |
Computation and Language
|
An Empirical Study on the Generalization Power of Neural Representations
Learned via Visual Guessing Games
|
Guessing games are a prototypical instance of the "learning by interacting"
paradigm. This work investigates how well an artificial agent can benefit from
playing guessing games when later asked to perform on novel NLP downstream
tasks such as Visual Question Answering (VQA). We propose two ways to exploit
playing guessing games: 1) a supervised learning scenario in which the agent
learns to mimic successful guessing games and 2) a novel way for an agent to
play by itself, called Self-play via Iterated Experience Learning (SPIEL).
We evaluate the ability of both procedures to generalize: an in-domain
evaluation shows an increased accuracy (+7.79) compared with competitors on the
evaluation suite CompGuessWhat?!; a transfer evaluation shows improved
performance for VQA on the TDIUC dataset in terms of harmonic average accuracy
(+5.31) thanks to more fine-grained object representations learned via SPIEL.
| 2,021 |
Computation and Language
|
Introduction of a novel word embedding approach based on technology
labels extracted from patent data
|
Diversity in patent language is growing and makes finding synonyms for
conducting patent searches more and more challenging. In addition to that, most
approaches for dealing with diverse patent language are based on manual search
and human intuition. In this paper, a word embedding approach using statistical
analysis of human labeled data to produce accurate and language independent
word vectors for technical terms is introduced. This paper focuses on the
explanation of the idea behind the statistical analysis and shows first
qualitative results. The resulting algorithm is a development of the former
EQMania UG (eqmania.com) and can be tested under eqalice.com until April 2021.
| 2,021 |
Computation and Language
|
Multilingual Email Zoning
|
The segmentation of emails into functional zones (also dubbed email zoning)
is a relevant preprocessing step for most NLP tasks that deal with emails.
However, despite the multilingual character of emails and their applications,
previous literature regarding email zoning corpora and systems was developed
essentially for English.
In this paper, we analyse the existing email zoning corpora and propose a new
multilingual benchmark composed of 625 emails in Portuguese, Spanish and
French. Moreover, we introduce OKAPI, the first multilingual email segmentation
model based on a language agnostic sentence encoder. Besides generalizing well
for unseen languages, our model is competitive with current English benchmarks,
and reached new state-of-the-art performances for domain adaptation tasks in
English.
| 2,021 |
Computation and Language
|
Adversarial Contrastive Pre-training for Protein Sequences
|
Recent developments in Natural Language Processing (NLP) demonstrate that
large-scale, self-supervised pre-training can be extremely beneficial for
downstream tasks. These ideas have been adapted to other domains, including the
analysis of the amino acid sequences of proteins. However, to date most
attempts on protein sequences rely on direct masked language model style
pre-training. In this work, we design a new, adversarial pre-training method
for proteins, extending and specializing similar advances in NLP. We show
compelling results in comparison to traditional MLM pre-training, though
further development is needed to ensure the gains are worth the significant
computational cost.
| 2,021 |
Computation and Language
|
Mixup Regularized Adversarial Networks for Multi-Domain Text
Classification
|
Using the shared-private paradigm and adversarial training has significantly
improved the performances of multi-domain text classification (MDTC) models.
However, there are two issues for the existing methods. First, instances from
the multiple domains are not sufficient for domain-invariant feature
extraction. Second, aligning on the marginal distributions may lead to fatal
mismatching. In this paper, we propose a mixup regularized adversarial network
(MRAN) to address these two issues. More specifically, the domain and category
mixup regularizations are introduced to enrich the intrinsic features in the
shared latent space and enforce consistent predictions in-between training
instances such that the learned features can be more domain-invariant and
discriminative. We conduct experiments on two benchmarks: The Amazon review
dataset and the FDU-MTL dataset. Our approach on these two datasets yields
average accuracies of 87.64% and 89.0% respectively, outperforming all
relevant baselines.
| 2,021 |
Computation and Language
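To make the mixup regularization concrete, here is a small, hedged PyTorch sketch of convexly combining sentence representations and one-hot labels with a Beta-sampled coefficient. The alpha value and the feature space being mixed are assumptions, and the shared-private adversarial alignment of MRAN is not shown.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup(features, labels, num_classes, alpha=0.2):
    """Convex-combine pairs of examples and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(features.size(0))
    mixed_x = lam * features + (1.0 - lam) * features[index]
    one_hot = F.one_hot(labels, num_classes).float()
    mixed_y = lam * one_hot + (1.0 - lam) * one_hot[index]
    return mixed_x, mixed_y

if __name__ == "__main__":
    x = torch.randn(8, 256)              # e.g. shared-encoder sentence features
    y = torch.randint(0, 2, (8,))        # binary sentiment labels
    mx, my = mixup(x, y, num_classes=2)
    # Train against the soft targets, e.g.:
    # loss = -(my * torch.log_softmax(classifier(mx), dim=-1)).sum(-1).mean()
    print(mx.shape, my.shape)
```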
|