Titles
stringlengths 6
220
| Abstracts
stringlengths 37
3.26k
| Years
int64 1.99k
2.02k
| Categories
stringclasses 1
value |
---|---|---|---|
Fake News Detection: a comparison between available Deep Learning
techniques in vector space
|
Fake News Detection is an essential problem in the field of Natural Language
Processing. The benefits of an effective solution in this area are manifold for
the goodwill of society. On a surface level, it broadly matches with the
general problem of text classification. Researchers have proposed various
approaches to tackle fake news using simple as well as some complex techniques.
In this paper, we try to make a comparison between the present Deep Learning
techniques by representing the news instances in some vector space using a
combination of common mathematical operations with available vector space
representations. We do a number of experiments using various combinations and
permutations. Finally, we conclude with a sound analysis of the results and
evaluate the reasons for such results.
| 2,020 |
Computation and Language
|
Regular Expressions for Fast-response COVID-19 Text Classification
|
Text classifiers are at the core of many NLP applications and use a variety
of algorithmic approaches and software. This paper introduces infrastructure
and methodologies for text classifiers based on large-scale regular
expressions. In particular, we describe how Facebook determines if a given
piece of text - anything from a hashtag to a post - belongs to a narrow topic
such as COVID-19. To fully define a topic and evaluate classifier performance
we employ human-guided iterations of keyword discovery, but do not require
labeled data. For COVID-19, we build two sets of regular expressions: (1) for
66 languages, with 99% precision and recall >50%, (2) for the 11 most common
languages, with precision >90% and recall >90%. Regular expressions enable
low-latency queries from multiple platforms. Response to challenges like
COVID-19 is fast and so are revisions. Comparisons to a DNN classifier show
explainable results, higher precision and recall, and less overfitting. Our
learnings can be applied to other narrow-topic classifiers.
| 2,021 |
Computation and Language
|
Going Full-TILT Boogie on Document Understanding with Text-Image-Layout
Transformer
|
We address the challenging problem of Natural Language Comprehension beyond
plain-text documents by introducing the TILT neural network architecture which
simultaneously learns layout information, visual features, and textual
semantics. Contrary to previous approaches, we rely on a decoder capable of
unifying a variety of problems involving natural language. The layout is
represented as an attention bias and complemented with contextualized visual
information, while the core of our model is a pretrained encoder-decoder
Transformer. Our novel approach achieves state-of-the-art results in extracting
information from documents and answering questions which demand layout
understanding (DocVQA, CORD, SROIE). At the same time, we simplify the process
by employing an end-to-end model.
| 2,021 |
Computation and Language
|
A Systematic Review of Natural Language Processing Applied to Radiology
Reports
|
NLP has a significant role in advancing healthcare and has been found to be
key in extracting structured information from radiology reports. Understanding
recent developments in NLP application to radiology is of significance but
recent reviews on this are limited. This study systematically assesses recent
literature in NLP applied to radiology reports. Our automated literature search
yields 4,799 results using automated filtering, metadata enriching steps and
citation search combined with manual review. Our analysis is based on 21
variables including radiology characteristics, NLP methodology, performance,
study, and clinical application characteristics. We present a comprehensive
analysis of the 164 publications retrieved with each categorised into one of 6
clinical application categories. Deep learning use increases but conventional
machine learning approaches are still prevalent. Deep learning remains
challenged when data is scarce and there is little evidence of adoption into
clinical practice. Despite 17% of studies reporting greater than 0.85 F1
scores, it is hard to comparatively evaluate these approaches given that most
of them use different datasets. Only 14 studies made their data and 15 their
code available with 10 externally validating results. Automated understanding
of clinical narratives of the radiology reports has the potential to enhance
the healthcare process but reproducibility and explainability of models are
important if the domain is to move applications into clinical use. More could
be done to share code enabling validation of methods on different institutional
data and to reduce heterogeneity in reporting of study properties allowing
inter-study comparisons. Our results have significance for researchers
providing a systematic synthesis of existing work to build on, identify gaps,
opportunities for collaboration and avoid duplication.
| 2,021 |
Computation and Language
|
Within-Document Event Coreference with BERT-Based Contextualized
Representations
|
Event coreference continues to be a challenging problem in information
extraction. With the absence of any external knowledge bases for events,
coreference becomes a clustering task that relies on effective representations
of the context in which event mentions appear. Recent advances in
contextualized language representations have proven successful in many tasks,
however, their use in event linking been limited. Here we present a three part
approach that (1) uses representations derived from a pretrained BERT model to
(2) train a neural classifier to (3) drive a simple clustering algorithm to
create coreference chains. We achieve state of the art results with this model
on two standard datasets for within-document event coreference task and
establish a new standard on a third newer dataset.
| 2,021 |
Computation and Language
|
MUDES: Multilingual Detection of Offensive Spans
|
The interest in offensive content identification in social media has grown
substantially in recent years. Previous work has dealt mostly with post level
annotations. However, identifying offensive spans is useful in many ways. To
help coping with this important challenge, we present MUDES, a multilingual
system to detect offensive spans in texts. MUDES features pre-trained models, a
Python API for developers, and a user-friendly web-based interface. A detailed
description of MUDES' components is presented in this paper.
| 2,021 |
Computation and Language
|
Fixing Errors of the Google Voice Recognizer through Phonetic Distance
Metrics
|
Speech recognition systems for the Spanish language, such as Google's,
produce errors quite frequently when used in applications of a specific domain.
These errors mostly occur when recognizing words new to the recognizer's
language model or ad hoc to the domain. This article presents an algorithm that
uses Levenshtein distance on phonemes to reduce the speech recognizer's errors.
The preliminary results show that it is possible to correct the recognizer's
errors significantly by using this metric and using a dictionary of specific
phrases from the domain of the application. Despite being designed for
particular domains, the algorithm proposed here is of general application. The
phrases that must be recognized can be explicitly defined for each application,
without the algorithm having to be modified. It is enough to indicate to the
algorithm the set of sentences on which it must work. The algorithm's
complexity is $O(tn)$, where $t$ is the number of words in the transcript to be
corrected, and $n$ is the number of phrases specific to the domain.
| 2,019 |
Computation and Language
|
WebRED: Effective Pretraining And Finetuning For Relation Extraction On
The Web
|
Relation extraction is used to populate knowledge bases that are important to
many applications. Prior datasets used to train relation extraction models
either suffer from noisy labels due to distant supervision, are limited to
certain domains or are too small to train high-capacity models. This constrains
downstream applications of relation extraction. We therefore introduce: WebRED
(Web Relation Extraction Dataset), a strongly-supervised human annotated
dataset for extracting relationships from a variety of text found on the World
Wide Web, consisting of ~110K examples. We also describe the methods we used to
collect ~200M examples as pre-training data for this task. We show that
combining pre-training on a large weakly supervised dataset with fine-tuning on
a small strongly-supervised dataset leads to better relation extraction
performance. We provide baselines for this new dataset and present a case for
the importance of human annotation in improving the performance of relation
extraction from text found on the web.
| 2,021 |
Computation and Language
|
Calibrate Before Use: Improving Few-Shot Performance of Language Models
|
GPT-3 can perform numerous tasks when provided a natural language prompt that
contains a few training examples. We show that this type of few-shot learning
can be unstable: the choice of prompt format, training examples, and even the
order of the training examples can cause accuracy to vary from near chance to
near state-of-the-art. We demonstrate that this instability arises from the
bias of language models towards predicting certain answers, e.g., those that
are placed near the end of the prompt or are common in the pre-training data.
To mitigate this, we first estimate the model's bias towards each answer by
asking for its prediction when given the training prompt and a content-free
test input such as "N/A". We then fit calibration parameters that cause the
prediction for this input to be uniform across answers. On a diverse set of
tasks, this contextual calibration procedure substantially improves GPT-3 and
GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across
different choices of the prompt.
| 2,021 |
Computation and Language
|
Back Translation Survey for Improving Text Augmentation
|
Natural Language Processing (NLP) relies heavily on training data.
Transformers, as they have gotten bigger, have required massive amounts of
training data. To satisfy this requirement, text augmentation should be looked
at as a way to expand your current dataset and to generalize your models. One
text augmentation we will look at is translation augmentation. We take an
English sentence and translate it to another language before translating it
back to English. In this paper, we look at the effect of 108 different language
back translations on various metrics and text embeddings.
| 2,022 |
Computation and Language
|
Learning Dynamic BERT via Trainable Gate Variables and a Bi-modal
Regularizer
|
The BERT model has shown significant success on various natural language
processing tasks. However, due to the heavy model size and high computational
cost, the model suffers from high latency, which is fatal to its deployments on
resource-limited devices. To tackle this problem, we propose a dynamic
inference method on BERT via trainable gate variables applied on input tokens
and a regularizer that has a bi-modal property. Our method shows reduced
computational cost on the GLUE dataset with a minimal performance drop.
Moreover, the model adjusts with a trade-off between performance and
computational cost with the user-specified hyperparameter.
| 2,021 |
Computation and Language
|
Dialect Identification in Nuanced Arabic Tweets Using Farasa
Segmentation and AraBERT
|
This paper presents our approach to address the EACL WANLP-2021 Shared Task
1: Nuanced Arabic Dialect Identification (NADI). The task is aimed at
developing a system that identifies the geographical location(country/province)
from where an Arabic tweet in the form of modern standard Arabic or dialect
comes from. We solve the task in two parts. The first part involves
pre-processing the provided dataset by cleaning, adding and segmenting various
parts of the text. This is followed by carrying out experiments with different
versions of two Transformer based models, AraBERT and AraELECTRA. Our final
approach achieved macro F1-scores of 0.216, 0.235, 0.054, and 0.043 in the four
subtasks, and we were ranked second in MSA identification subtasks and fourth
in DA identification subtasks.
| 2,021 |
Computation and Language
|
Progressive Transformer-Based Generation of Radiology Reports
|
Inspired by Curriculum Learning, we propose a consecutive (i.e.,
image-to-text-to-text) generation framework where we divide the problem of
radiology report generation into two steps. Contrary to generating the full
radiology report from the image at once, the model generates global concepts
from the image in the first step and then reforms them into finer and coherent
texts using a transformer architecture. We follow the transformer-based
sequence-to-sequence paradigm at each step. We improve upon the
state-of-the-art on two benchmark datasets.
| 2,021 |
Computation and Language
|
An Empirical Study on Measuring the Similarity of Sentential Arguments
with Language Model Domain Adaptation
|
Measuring the similarity between two different sentential arguments is an
important task in argument mining. However, one of the challenges in this field
is that the dataset must be annotated using expertise in a variety of topics,
making supervised learning with labeled data expensive. In this paper, we
investigated whether this problem could be alleviated through transfer
learning. We first adapted a pretrained language model to a domain of interest
using self-supervised learning. Then, we fine-tuned the model to a task of
measuring the similarity between sentences taken from different domains. Our
approach improves a correlation with human-annotated similarity scores compared
to competitive baseline models on the Argument Facet Similarity dataset in an
unsupervised setting. Moreover, we achieve comparable performance to a fully
supervised baseline model by using only about 60% of the labeled data samples.
We believe that our work suggests the possibility of a generalized argument
clustering model for various argumentative topics.
| 2,021 |
Computation and Language
|
KBCNMUJAL@HASOC-Dravidian-CodeMix-FIRE2020: Using Machine Learning for
Detection of Hate Speech and Offensive Code-Mixed Social Media text
|
This paper describes the system submitted by our team, KBCNMUJAL, for Task 2
of the shared task Hate Speech and Offensive Content Identification in
Indo-European Languages (HASOC), at Forum for Information Retrieval Evaluation,
December 16-20, 2020, Hyderabad, India. The datasets of two Dravidian languages
Viz. Malayalam and Tamil of size 4000 observations, each were shared by the
HASOC organizers. These datasets are used to train the machine using different
machine learning algorithms, based on classification and regression models. The
datasets consist of tweets or YouTube comments with two class labels offensive
and not offensive. The machine is trained to classify such social media
messages in these two categories. Appropriate n-gram feature sets are extracted
to learn the specific characteristics of the Hate Speech text messages. These
feature models are based on TFIDF weights of n-gram. The referred work and
respective experiments show that the features such as word, character and
combined model of word and character n-grams could be used to identify the term
patterns of offensive text contents. As a part of the HASOC shared task, the
test data sets are made available by the HASOC track organizers. The best
performing classification models developed for both languages are applied on
test datasets. The model which gives the highest accuracy result on training
dataset for Malayalam language was experimented to predict the categories of
respective test data. This system has obtained an F1 score of 0.77. Similarly
the best performing model for Tamil language has obtained an F1 score of 0.87.
This work has received 2nd and 3rd rank in this shared Task 2 for Malayalam and
Tamil language respectively. The proposed system is named HASOC_kbcnmujal.
| 2,021 |
Computation and Language
|
Alternate Endings: Improving Prosody for Incremental Neural TTS with
Predicted Future Text Input
|
The prosody of a spoken word is determined by its surrounding context. In
incremental text-to-speech synthesis, where the synthesizer produces an output
before it has access to the complete input, the full context is often unknown
which can result in a loss of naturalness in the synthesized speech. In this
paper, we investigate whether the use of predicted future text can attenuate
this loss. We compare several test conditions of next future word: (a) unknown
(zero-word), (b) language model predicted, (c) randomly predicted and (d)
ground-truth. We measure the prosodic features (pitch, energy and duration) and
find that predicted text provides significant improvements over a zero-word
lookahead, but only slight gains over random-word lookahead. We confirm these
results with a perceptive test.
| 2,021 |
Computation and Language
|
Back to Prior Knowledge: Joint Event Causality Extraction via
Convolutional Semantic Infusion
|
Joint event and causality extraction is a challenging yet essential task in
information retrieval and data mining. Recently, pre-trained language models
(e.g., BERT) yield state-of-the-art results and dominate in a variety of NLP
tasks. However, these models are incapable of imposing external knowledge in
domain-specific extraction. Considering the prior knowledge of frequent n-grams
that represent cause/effect events may benefit both event and causality
extraction, in this paper, we propose convolutional knowledge infusion for
frequent n-grams with different windows of length within a joint extraction
framework. Knowledge infusion during convolutional filter initialization not
only helps the model capture both intra-event (i.e., features in an event
cluster) and inter-event (i.e., associations across event clusters) features
but also boosts training convergence. Experimental results on the benchmark
datasets show that our model significantly outperforms the strong BERT+CSNN
baseline.
| 2,021 |
Computation and Language
|
Towards Emotion Recognition in Hindi-English Code-Mixed Data: A
Transformer Based Approach
|
In the last few years, emotion detection in social-media text has become a
popular problem due to its wide ranging application in better understanding the
consumers, in psychology, in aiding human interaction with computers, designing
smart systems etc. Because of the availability of huge amounts of data from
social-media, which is regularly used for expressing sentiments and opinions,
this problem has garnered great attention. In this paper, we present a Hinglish
dataset labelled for emotion detection. We highlight a deep learning based
approach for detecting emotions in Hindi-English code mixed tweets, using
bilingual word embeddings derived from FastText and Word2Vec approaches, as
well as transformer based models. We experiment with various deep learning
models, including CNNs, LSTMs, Bi-directional LSTMs (with and without
attention), along with transformers like BERT, RoBERTa, and ALBERT. The
transformer based BERT model outperforms all other models giving the best
performance with an accuracy of 71.43%.
| 2,021 |
Computation and Language
|
Analyzing Curriculum Learning for Sentiment Analysis along Task
Difficulty, Pacing and Visualization Axes
|
While Curriculum Learning (CL) has recently gained traction in Natural
language Processing Tasks, it is still not adequately analyzed. Previous works
only show their effectiveness but fail short to explain and interpret the
internal workings fully. In this paper, we analyze curriculum learning in
sentiment analysis along multiple axes. Some of these axes have been proposed
by earlier works that need more in-depth study. Such analysis requires
understanding where curriculum learning works and where it does not. Our axes
of analysis include Task difficulty on CL, comparing CL pacing techniques, and
qualitative analysis by visualizing the movement of attention scores in the
model as curriculum phases progress. We find that curriculum learning works
best for difficult tasks and may even lead to a decrement in performance for
tasks with higher performance without curriculum learning. We see that One-Pass
curriculum strategies suffer from catastrophic forgetting and attention
movement visualization within curriculum pacing. This shows that curriculum
learning breaks down the challenging main task into easier sub-tasks solved
sequentially.
| 2,021 |
Computation and Language
|
Using Transformer based Ensemble Learning to classify Scientific
Articles
|
Many time reviewers fail to appreciate novel ideas of a researcher and
provide generic feedback. Thus, proper assignment of reviewers based on their
area of expertise is necessary. Moreover, reading each and every paper from
end-to-end for assigning it to a reviewer is a tedious task. In this paper, we
describe a system which our team FideLIPI submitted in the shared task of
SDPRA-2021 [14]. It comprises four independent sub-systems capable of
classifying abstracts of scientific literature to one of the given seven
classes. The first one is a RoBERTa [10] based model built over these
abstracts. Adding topic models / Latent dirichlet allocation (LDA) [2] based
features to the first model results in the second sub-system. The third one is
a sentence level RoBERTa [10] model. The fourth one is a Logistic Regression
model built using Term Frequency Inverse Document Frequency (TF-IDF) features.
We ensemble predictions of these four sub-systems using majority voting to
develop the final system which gives a F1 score of 0.93 on the test and
validation set. This outperforms the existing State Of The Art (SOTA) model
SciBERT's [1] in terms of F1 score on the validation set.Our codebase is
available at https://github.com/SDPRA-2021/shared-task/tree/main/FideLIPI
| 2,021 |
Computation and Language
|
Sentiment Analysis for YouTube Comments in Roman Urdu
|
Sentiment analysis is a vast area in the Machine learning domain. A lot of
work is done on datasets and their analysis of the English Language. In
Pakistan, a huge amount of data is in roman Urdu language, it is scattered all
over the social sites including Twitter, YouTube, Facebook and similar
applications. In this study the focus domain of dataset gathering is YouTube
comments. The Dataset contains the comments of people over different Pakistani
dramas and TV shows. The Dataset contains multi-class classification that is
grouped The comments into positive, negative and neutral sentiment. In this
Study comparative analysis is done for five supervised learning Algorithms
including linear regression, SVM, KNN, Multi layer Perceptron and Na\"ive Bayes
classifier. Accuracy, recall, precision and F-measure are used for measuring
performance. Results show that accuracy of SVM is 64 percent, which is better
than the rest of the list.
| 2,021 |
Computation and Language
|
Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for
Transformer-based Offensive language Detection
|
Social media often acts as breeding grounds for different forms of offensive
content. For low resource languages like Tamil, the situation is more complex
due to the poor performance of multilingual or language-specific models and
lack of proper benchmark datasets. Based on this shared task, Offensive
Language Identification in Dravidian Languages at EACL 2021, we present an
exhaustive exploration of different transformer models, We also provide a
genetic algorithm technique for ensembling different models. Our ensembled
models trained separately for each language secured the first position in
Tamil, the second position in Kannada, and the first position in Malayalam
sub-tasks. The models and codes are provided.
| 2,021 |
Computation and Language
|
Formal Language Theory Meets Modern NLP
|
NLP is deeply intertwined with the formal study of language, both
conceptually and historically. Arguably, this connection goes all the way back
to Chomsky's Syntactic Structures in 1957. It also still holds true today, with
a strand of recent works building formal analysis of modern neural networks
methods in terms of formal languages. In this document, I aim to explain
background about formal languages as they relate to this recent work. I will by
necessity ignore large parts of the rich history of this field, instead
focusing on concepts connecting to modern deep learning-based NLP.
| 2,021 |
Computation and Language
|
Multi-Domain Adaptation in Neural Machine Translation Through
Multidimensional Tagging
|
While NMT has achieved remarkable results in the last 5 years, production
systems come with strict quality requirements in arbitrarily niche domains that
are not always adequately covered by readily available parallel corpora. This
is typically addressed by training domain specific models, using fine-tuning
methods and some variation of back-translation on top of in-domain monolingual
corpora. However, industrial practitioners can rarely afford to focus on a
single domain. A far more typical scenario includes a set of closely related,
yet succinctly different sub-domains. At Booking.com, we need to translate
property descriptions, user reviews, as well as messages, (for example those
sent between a customer and an agent or property manager). An editor might need
to translate articles across a set of different topics. An e-commerce platform
would typically need to translate both the description of each item and the
user generated content related to them. To this end, we propose MDT: a novel
method to simultaneously fine-tune on several sub-domains by passing
multidimensional sentence-level information to the model during training and
inference. We show that MDT achieves results competitive to N specialist models
each fine-tuned on a single constituent domain, while effectively serving all N
sub-domains, therefore cutting development and maintenance costs by the same
factor. Besides BLEU (industry standard automatic evaluation metric known to
only weakly correlate with human judgement) we also report rigorous human
evaluation results for all models and sub-domains as well as specific examples
that better contextualise the performance of each model in terms of adequacy
and fluency. To facilitate further research, we plan to make the code available
upon acceptance.
| 2,021 |
Computation and Language
|
Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy
Evaluation Approach
|
Reliable automatic evaluation of dialogue systems under an interactive
environment has long been overdue. An ideal environment for evaluating dialog
systems, also known as the Turing test, needs to involve human interaction,
which is usually not affordable for large-scale experiments. Though researchers
have attempted to use metrics (e.g., perplexity, BLEU) in language generation
tasks or some model-based reinforcement learning methods (e.g., self-play
evaluation) for automatic evaluation, these methods only show a very weak
correlation with the actual human evaluation in practice. To bridge such a gap,
we propose a new framework named ENIGMA for estimating human evaluation scores
based on recent advances of off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore
does not involve human interaction with the target policy during the
evaluation, making automatic evaluations feasible. More importantly, ENIGMA is
model-free and agnostic to the behavior policies for collecting the experience
data (see details in Section 2), which significantly alleviates the technical
difficulties of modeling complex dialogue environments and human behaviors. Our
experiments show that ENIGMA significantly outperforms existing methods in
terms of correlation with human evaluation scores.
| 2,021 |
Computation and Language
|
Machine Translation Customization via Automatic Training Data Selection
from the Web
|
Machine translation (MT) systems, especially when designed for an industrial
setting, are trained with general parallel data derived from the Web. Thus,
their style is typically driven by word/structure distribution coming from the
average of many domains. In contrast, MT customers want translations to be
specialized to their domain, for which they are typically able to provide text
samples. We describe an approach for customizing MT systems on specific domains
by selecting data similar to the target customer data to train neural
translation models. We build document classifiers using monolingual target
data, e.g., provided by the customers to select parallel training data from Web
crawled data. Finally, we train MT models on our automatically selected data,
obtaining a system specialized to the target domain. We tested our approach on
the benchmark from WMT-18 Translation Task for News domains enabling
comparisons with state-of-the-art MT systems. The results show that our models
outperform the top systems while using less data and smaller models.
| 2,021 |
Computation and Language
|
CDA: a Cost Efficient Content-based Multilingual Web Document Aligner
|
We introduce a Content-based Document Alignment approach (CDA), an efficient
method to align multilingual web documents based on content in creating
parallel training data for machine translation (MT) systems operating at the
industrial level. CDA works in two steps: (i) projecting documents of a web
domain to a shared multilingual space; then (ii) aligning them based on the
similarity of their representations in such space. We leverage lexical
translation models to build vector representations using TF-IDF. CDA achieves
performance comparable with state-of-the-art systems in the WMT-16 Bilingual
Document Alignment Shared Task benchmark while operating in multilingual space.
Besides, we created two web-scale datasets to examine the robustness of CDA in
an industrial setting involving up to 28 languages and millions of documents.
The experiments show that CDA is robust, cost-effective, and is significantly
superior in (i) processing large and noisy web data and (ii) scaling to new and
low-resourced languages.
| 2,021 |
Computation and Language
|
Entity Structure Within and Throughout: Modeling Mention Dependencies
for Document-Level Relation Extraction
|
Entities, as the essential elements in relation extraction tasks, exhibit
certain structure. In this work, we formulate such structure as distinctive
dependencies between mention pairs. We then propose SSAN, which incorporates
these structural dependencies within the standard self-attention mechanism and
throughout the overall encoding stage. Specifically, we design two alternative
transformation modules inside each self-attention building block to produce
attentive biases so as to adaptively regularize its attention flow. Our
experiments demonstrate the usefulness of the proposed entity structure and the
effectiveness of SSAN. It significantly outperforms competitive baselines,
achieving new state-of-the-art results on three popular document-level relation
extraction datasets. We further provide ablation and visualization to show how
the entity structure guides the model for better relation extraction. Our code
is publicly available.
| 2,021 |
Computation and Language
|
Multilingual Answer Sentence Reranking via Automatically Translated Data
|
We present a study on the design of multilingual Answer Sentence Selection
(AS2) models, which are a core component of modern Question Answering (QA)
systems. The main idea is to transfer data, created from one resource rich
language, e.g., English, to other languages, less rich in terms of resources.
The main findings of this paper are: (i) the training data for AS2 translated
into a target language can be used to effectively fine-tune a Transformer-based
model for that language; (ii) one multilingual Transformer model it is enough
to rank answers in multiple languages; and (iii) mixed-language question/answer
pairs can be used to fine-tune models to select answers from any language,
where the input question is just in one language. This highly reduces the
complexity and technical requirement of a multilingual QA system. Our
experiments validate the findings above, showing a modest drop, at most 3%,
with respect to the state-of-the-art English model.
| 2,021 |
Computation and Language
|
An Attention Ensemble Approach for Efficient Text Classification of
Indian Languages
|
The recent surge of complex attention-based deep learning architectures has
led to extraordinary results in various downstream NLP tasks in the English
language. However, such research for resource-constrained and morphologically
rich Indian vernacular languages has been relatively limited. This paper
proffers team SPPU\_AKAH's solution for the TechDOfication 2020 subtask-1f:
which focuses on the coarse-grained technical domain identification of short
text documents in Marathi, a Devanagari script-based Indian language. Availing
the large dataset at hand, a hybrid CNN-BiLSTM attention ensemble model is
proposed that competently combines the intermediate sentence representations
generated by the convolutional neural network and the bidirectional long
short-term memory, leading to efficient text classification. Experimental
results show that the proposed model outperforms various baseline machine
learning and deep learning models in the given task, giving the best validation
accuracy of 89.57\% and f1-score of 0.8875. Furthermore, the solution resulted
in the best system submission for this subtask, giving a test accuracy of
64.26\% and f1-score of 0.6157, transcending the performances of other teams as
well as the baseline system given by the organizers of the shared task.
| 2,021 |
Computation and Language
|
Deep Structured Feature Networks for Table Detection and Tabular Data
Extraction from Scanned Financial Document Images
|
Automatic table detection in PDF documents has achieved a great success but
tabular data extraction are still challenging due to the integrity and noise
issues in detected table areas. The accurate data extraction is extremely
crucial in finance area. Inspired by this, the aim of this research is
proposing an automated table detection and tabular data extraction from
financial PDF documents. We proposed a method that consists of three main
processes, which are detecting table areas with a Faster R-CNN (Region-based
Convolutional Neural Network) model with Feature Pyramid Network (FPN) on each
page image, extracting contents and structures by a compounded layout
segmentation technique based on optical character recognition (OCR) and
formulating regular expression rules for table header separation. The tabular
data extraction feature is embedded with rule-based filtering and restructuring
functions that are highly scalable. We annotate a new Financial Documents
dataset with table regions for the experiment. The excellent table detection
performance of the detection model is obtained from our customized dataset. The
main contributions of this paper are proposing the Financial Documents dataset
with table-area annotations, the superior detection model and the rule-based
layout segmentation technique for the tabular data extraction from PDF files.
| 2,022 |
Computation and Language
|
Contextual Argument Component Classification for Class Discussions
|
Argument mining systems often consider contextual information, i.e.
information outside of an argumentative discourse unit, when trained to
accomplish tasks such as argument component identification, classification, and
relation extraction. However, prior work has not carefully analyzed the utility
of different contextual properties in context-aware models. In this work, we
show how two different types of contextual information, local discourse context
and speaker context, can be incorporated into a computational model for
classifying argument components in multi-party classroom discussions. We find
that both context types can improve performance, although the improvements are
dependent on context size and position.
| 2,020 |
Computation and Language
|
Discussion Tracker: Supporting Teacher Learning about Students'
Collaborative Argumentation in High School Classrooms
|
Teaching collaborative argumentation is an advanced skill that many K-12
teachers struggle to develop. To address this, we have developed Discussion
Tracker, a classroom discussion analytics system based on novel algorithms for
classifying argument moves, specificity, and collaboration. Results from a
classroom deployment indicate that teachers found the analytics useful, and
that the underlying classifiers perform with moderate to substantial agreement
with humans.
| 2,020 |
Computation and Language
|
NUBOT: Embedded Knowledge Graph With RASA Framework for Generating
Semantic Intents Responses in Roman Urdu
|
The understanding of the human language is quantified by identifying intents
and entities. Even though classification methods that rely on labeled
information are often used for the comprehension of language understanding, it
is incredibly time consuming and tedious process to generate high propensity
supervised datasets. In this paper, we present the generation of accurate
intents for the corresponding Roman Urdu unstructured data and integrate this
corpus in RASA NLU module for intent classification. We embed knowledge graph
with RASA Framework to maintain the dialog history for semantic based natural
language mechanism for chatbot communication. We compare results of our work
with existing linguistic systems combined with semantic technologies. Minimum
accuracy of intents generation is 64 percent of confidence and in the response
generation part minimum accuracy is 82.1 percent and maximum accuracy gain is
96.7 percent. All the scores refers to log precision, recall, and f1 measure
for each intents once summarized for all. Furthermore, it creates a confusion
matrix represents that which intents are ambiguously recognized by approach.
| 2,021 |
Computation and Language
|
Understanding and Enhancing the Use of Context for Machine Translation
|
To understand and infer meaning in language, neural models have to learn
complicated nuances. Discovering distinctive linguistic phenomena from data is
not an easy task. For instance, lexical ambiguity is a fundamental feature of
language which is challenging to learn. Even more prominently, inferring the
meaning of rare and unseen lexical units is difficult with neural networks.
Meaning is often determined from context. With context, languages allow meaning
to be conveyed even when the specific words used are not known by the reader.
To model this learning process, a system has to learn from a few instances in
context and be able to generalize well to unseen cases. The learning process is
hindered when training data is scarce for a task. Even with sufficient data,
learning patterns for the long tail of the lexical distribution is challenging.
In this thesis, we focus on understanding certain potentials of contexts in
neural models and design augmentation models to benefit from them. We focus on
machine translation as an important instance of the more general language
understanding problem. To translate from a source language to a target
language, a neural model has to understand the meaning of constituents in the
provided context and generate constituents with the same meanings in the target
language. This task accentuates the value of capturing nuances of language and
the necessity of generalization from few observations. The main problem we
study in this thesis is what neural machine translation models learn from data
and how we can devise more focused contexts to enhance this learning. Looking
more in-depth into the role of context and the impact of data on learning
models is essential to advance the NLP field. Moreover, it helps highlight the
vulnerabilities of current neural networks and provides insights into designing
more robust models.
| 2,021 |
Computation and Language
|
Automatic Code Generation using Pre-Trained Language Models
|
Recent advancements in natural language processing \cite{gpt2} \cite{BERT}
have led to near-human performance in multiple natural language tasks. In this
paper, we seek to understand whether similar techniques can be applied to a
highly structured environment with strict syntax rules. Specifically, we
propose an end-to-end machine learning model for code generation in the Python
language built on-top of pre-trained language models. We demonstrate that a
fine-tuned model can perform well in code generation tasks, achieving a BLEU
score of 0.22, an improvement of 46\% over a reasonable sequence-to-sequence
baseline. All results and related code used for training and data processing
are available on GitHub.
| 2,021 |
Computation and Language
|
Web-based Application for Detecting Indonesian Clickbait Headlines using
IndoBERT
|
With increasing usage of clickbaits in Indonesian Online News, newsworthy
articles sometimes get buried among clickbaity news. A reliable and lightweight
tool is needed to detect such clickbaits on-the-go. Leveraging state-of-the-art
natural language processing model BERT, a RESTful API based application is
developed. This study offloaded the computing resources needed to train the
model on the cloud server, while the client-side application only needs to send
a request to the API and the cloud server will handle the rest. This study
proposed the design and developed a web-based application to detect clickbait
in Indonesian using IndoBERT as a language model. The application usage is
discussed and available for public use with a performance of mean ROC-AUC of
89%.
| 2,021 |
Computation and Language
|
Pre-Training BERT on Arabic Tweets: Practical Considerations
|
Pretraining Bidirectional Encoder Representations from Transformers (BERT)
for downstream NLP tasks is a non-trival task. We pretrained 5 BERT models that
differ in the size of their training sets, mixture of formal and informal
Arabic, and linguistic preprocessing. All are intended to support Arabic
dialects and social media. The experiments highlight the centrality of data
diversity and the efficacy of linguistically aware segmentation. They also
highlight that more data or more training step do not necessitate better
models. Our new models achieve new state-of-the-art results on several
downstream tasks. The resulting models are released to the community under the
name QARiB.
| 2,021 |
Computation and Language
|
Pruning the Index Contents for Memory Efficient Open-Domain QA
|
This work presents a novel pipeline that demonstrates what is achievable with
a combined effort of state-of-the-art approaches. Specifically, it proposes the
novel R2-D2 (Rank twice, reaD twice) pipeline composed of retriever, passage
reranker, extractive reader, generative reader and a simple way to combine
them. Furthermore, previous work often comes with a massive index of external
documents that scales in the order of tens of GiB. This work presents a simple
approach for pruning the contents of a massive index such that the open-domain
QA system altogether with index, OS, and library components fits into 6GiB
docker image while retaining only 8% of original index contents and losing only
3% EM accuracy.
| 2,021 |
Computation and Language
|
Multi-View Feature Representation for Dialogue Generation with
Bidirectional Distillation
|
Neural dialogue models suffer from low-quality responses when interacted in
practice, demonstrating difficulty in generalization beyond training data.
Recently, knowledge distillation has been used to successfully regularize the
student by transferring knowledge from the teacher. However, the teacher and
the student are trained on the same dataset and tend to learn similar feature
representations, whereas the most general knowledge should be found through
differences. The finding of general knowledge is further hindered by the
unidirectional distillation, as the student should obey the teacher and may
discard some knowledge that is truly general but refuted by the teacher. To
this end, we propose a novel training framework, where the learning of general
knowledge is more in line with the idea of reaching consensus, i.e., finding
common knowledge that is beneficial to different yet all datasets through
diversified learning partners. Concretely, the training task is divided into a
group of subtasks with the same number of students. Each student assigned to
one subtask not only is optimized on the allocated subtask but also imitates
multi-view feature representation aggregated from other students (i.e., student
peers), which induces students to capture common knowledge among different
subtasks and alleviates the over-fitting of students on the allocated subtasks.
To further enhance generalization, we extend the unidirectional distillation to
the bidirectional distillation that encourages the student and its student
peers to co-evolve by exchanging complementary knowledge with each other.
Empirical results and analysis demonstrate that our training framework
effectively improves the model generalization without sacrificing training
efficiency.
| 2,021 |
Computation and Language
|
ReINTEL Challenge 2020: Exploiting Transfer Learning Models for Reliable
Intelligence Identification on Vietnamese Social Network Sites
|
This paper presents the system that we propose for the Reliable Intelligence
Indentification on Vietnamese Social Network Sites (ReINTEL) task of the
Vietnamese Language and Speech Processing 2020 (VLSP 2020) Shared Task. In this
task, the VLSP 2020 provides a dataset with approximately 6,000 trainning
news/posts annotated with reliable or unreliable labels, and a test set
consists of 2,000 examples without labels. In this paper, we conduct
experiments on different transfer learning models, which are bert4news and
PhoBERT fine-tuned to predict whether the news is reliable or not. In our
experiments, we achieve the AUC score of 94.52% on the private test set from
ReINTEL's organizers.
| 2,021 |
Computation and Language
|
Evaluating Contextualized Language Models for Hungarian
|
We present an extended comparison of contextualized language models for
Hungarian. We compare huBERT, a Hungarian model against 4 multilingual models
including the multilingual BERT model. We evaluate these models through three
tasks, morphological probing, POS tagging and NER. We find that huBERT works
better than the other models, often by a large margin, particularly near the
global optimum (typically at the middle layers). We also find that huBERT tends
to generate fewer subwords for one word and that using the last subword for
token-level tasks is generally a better choice than using the first one.
| 2,021 |
Computation and Language
|
Subword Pooling Makes a Difference
|
Contextual word-representations became a standard in modern natural language
processing systems. These models use subword tokenization to handle large
vocabularies and unknown words. Word-level usage of such systems requires a way
of pooling multiple subwords that correspond to a single word. In this paper we
investigate how the choice of subword pooling affects the downstream
performance on three tasks: morphological probing, POS tagging and NER, in 9
typologically diverse languages. We compare these in two massively multilingual
models, mBERT and XLM-RoBERTa. For morphological tasks, the widely used `choose
the first subword' is the worst strategy and the best results are obtained by
using attention over the subwords. For POS tagging both of these strategies
perform poorly and the best choice is to use a small LSTM over the subwords.
The same strategy works best for NER and we show that mBERT is better than
XLM-RoBERTa in all 9 languages. We publicly release all code, data and the full
result tables at \url{https://github.com/juditacs/subword-choice}.
| 2,021 |
Computation and Language
|
Joint Intent Detection And Slot Filling Based on Continual Learning
Model
|
Slot filling and intent detection have become a significant theme in the
field of natural language understanding. Even though slot filling is
intensively associated with intent detection, the characteristics of the
information required for both tasks are different while most of those
approaches may not fully aware of this problem. In addition, balancing the
accuracy of two tasks effectively is an inevitable problem for the joint
learning model. In this paper, a Continual Learning Interrelated Model (CLIM)
is proposed to consider semantic information with different characteristics and
balance the accuracy between intent detection and slot filling effectively. The
experimental results show that CLIM achieves state-of-the-art performace on
slot filling and intent detection on ATIS and Snips.
| 2,021 |
Computation and Language
|
Using Prior Knowledge to Guide BERT's Attention in Semantic Textual
Matching Tasks
|
We study the problem of incorporating prior knowledge into a deep
Transformer-based model,i.e.,Bidirectional Encoder Representations from
Transformers (BERT), to enhance its performance on semantic textual matching
tasks. By probing and analyzing what BERT has already known when solving this
task, we obtain better understanding of what task-specific knowledge BERT needs
the most and where it is most needed. The analysis further motivates us to take
a different approach than most existing works. Instead of using prior knowledge
to create a new training task for fine-tuning BERT, we directly inject
knowledge into BERT's multi-head attention mechanism. This leads us to a simple
yet effective approach that enjoys fast training stage as it saves the model
from training on additional data or tasks other than the main task. Extensive
experiments demonstrate that the proposed knowledge-enhanced BERT is able to
consistently improve semantic textual matching performance over the original
BERT model, and the performance benefit is most salient when training data is
scarce.
| 2,021 |
Computation and Language
|
A Relational Tsetlin Machine with Applications to Natural Language
Understanding
|
TMs are a pattern recognition approach that uses finite state machines for
learning and propositional logic to represent patterns. In addition to being
natively interpretable, they have provided competitive accuracy for various
tasks. In this paper, we increase the computing power of TMs by proposing a
first-order logic-based framework with Herbrand semantics. The resulting TM is
relational and can take advantage of logical structures appearing in natural
language, to learn rules that represent how actions and consequences are
related in the real world. The outcome is a logic program of Horn clauses,
bringing in a structured view of unstructured data. In closed-domain
question-answering, the first-order representation produces 10x more compact
KBs, along with an increase in answering accuracy from 94.83% to 99.48%. The
approach is further robust towards erroneous, missing, and superfluous
information, distilling the aspects of a text that are important for real-world
understanding.
| 2,021 |
Computation and Language
|
Few Shot Learning for Information Verification
|
Information verification is quite a challenging task, this is because many
times verifying a claim can require picking pieces of information from multiple
pieces of evidence which can have a hierarchy of complex semantic relations.
Previously a lot of researchers have mainly focused on simply concatenating
multiple evidence sentences to accept or reject claims. These approaches are
limited as evidence can contain hierarchical information and dependencies. In
this research, we aim to verify facts based on evidence selected from a list of
articles taken from Wikipedia. Pretrained language models such as XLNET are
used to generate meaningful representations and graph-based attention and
convolutions are used in such a way that the system requires little additional
training to learn to verify facts.
| 2,021 |
Computation and Language
|
Co-occurrences using Fasttext embeddings for word similarity tasks in
Urdu
|
Urdu is a widely spoken language in South Asia. Though immoderate literature
exists for the Urdu language still the data isn't enough to naturally process
the language by NLP techniques. Very efficient language models exist for the
English language, a high resource language, but Urdu and other under-resourced
languages have been neglected for a long time. To create efficient language
models for these languages we must have good word embedding models. For Urdu,
we can only find word embeddings trained and developed using the skip-gram
model. In this paper, we have built a corpus for Urdu by scraping and
integrating data from various sources and compiled a vocabulary for the Urdu
language. We also modify fasttext embeddings and N-Grams models to enable
training them on our built corpus. We have used these trained embeddings for a
word similarity task and compared the results with existing techniques.
| 2,021 |
Computation and Language
|
Bilingual Language Modeling, A transfer learning technique for Roman
Urdu
|
Pretrained language models are now of widespread use in Natural Language
Processing. Despite their success, applying them to Low Resource languages is
still a huge challenge. Although Multilingual models hold great promise,
applying them to specific low-resource languages e.g. Roman Urdu can be
excessive. In this paper, we show how the code-switching property of languages
may be used to perform cross-lingual transfer learning from a corresponding
high resource language. We also show how this transfer learning technique
termed Bilingual Language Modeling can be used to produce better performing
models for Roman Urdu. To enable training and experimentation, we also present
a collection of novel corpora for Roman Urdu extracted from various sources and
social networking sites, e.g. Twitter. We train Monolingual, Multilingual, and
Bilingual models of Roman Urdu - the proposed bilingual model achieves 23%
accuracy compared to the 2% and 11% of the monolingual and multilingual models
respectively in the Masked Language Modeling (MLM) task.
| 2,021 |
Computation and Language
|
Better Call the Plumber: Orchestrating Dynamic Information Extraction
Pipelines
|
In the last decade, a large number of Knowledge Graph (KG) information
extraction approaches were proposed. Albeit effective, these efforts are
disjoint, and their collective strengths and weaknesses in effective KG
information extraction (IE) have not been studied in the literature. We propose
Plumber, the first framework that brings together the research community's
disjoint IE efforts. The Plumber architecture comprises 33 reusable components
for various KG information extraction subtasks, such as coreference resolution,
entity linking, and relation extraction. Using these components,Plumber
dynamically generates suitable information extraction pipelines and offers
overall 264 distinct pipelines.We study the optimization problem of choosing
suitable pipelines based on input sentences. To do so, we train a
transformer-based classification model that extracts contextual embeddings from
the input and finds an appropriate pipeline. We study the efficacy of Plumber
for extracting the KG triples using standard datasets over two KGs: DBpedia,
and Open Research Knowledge Graph (ORKG). Our results demonstrate the
effectiveness of Plumber in dynamically generating KG information extraction
pipelines,outperforming all baselines agnostics of the underlying KG.
Furthermore,we provide an analysis of collective failure cases, study the
similarities and synergies among integrated components, and discuss their
limitations.
| 2,021 |
Computation and Language
|
Towards Personalised and Document-level Machine Translation of Dialogue
|
State-of-the-art (SOTA) neural machine translation (NMT) systems translate
texts at sentence level, ignoring context: intra-textual information, like the
previous sentence, and extra-textual information, like the gender of the
speaker. Because of that, some sentences are translated incorrectly.
Personalised NMT (PersNMT) and document-level NMT (DocNMT) incorporate this
information into the translation process. Both fields are relatively new and
previous work within them is limited. Moreover, there are no readily available
robust evaluation metrics for them, which makes it difficult to develop better
systems, as well as track global progress and compare different methods. This
thesis proposal focuses on PersNMT and DocNMT for the domain of dialogue
extracted from TV subtitles in five languages: English, Brazilian Portuguese,
German, French and Polish. Three main challenges are addressed: (1)
incorporating extra-textual information directly into NMT systems; (2)
improving the machine translation of cohesion devices; (3) reliable evaluation
for PersNMT and DocNMT.
| 2,021 |
Computation and Language
|
Word frequency-rank relationship in tagged texts
|
We analyze the frequency-rank relationship in sub-vocabularies corresponding
to three different grammatical classes (nouns, verbs, and others) in a
collection of literary works in English, whose words have been automatically
tagged according to their grammatical role. Comparing with a null hypothesis
which assumes that words belonging to each class are uniformly distributed
across the frequency-ranked vocabulary of the whole work, we disclose
statistically significant differences between the three classes. This results
point to the fact that frequency-rank relationships may reflect linguistic
features associated with grammatical function.
| 2,021 |
Computation and Language
|
An open access NLP dataset for Arabic dialects : Data collection,
labeling, and model construction
|
Natural Language Processing (NLP) is today a very active field of research
and innovation. Many applications need however big sets of data for supervised
learning, suitably labelled for the training purpose. This includes
applications for the Arabic language and its national dialects. However, such
open access labeled data sets in Arabic and its dialects are lacking in the
Data Science ecosystem and this lack can be a burden to innovation and research
in this field. In this work, we present an open data set of social data content
in several Arabic dialects. This data was collected from the Twitter social
network and consists on +50K twits in five (5) national dialects. Furthermore,
this data was labeled for several applications, namely dialect detection, topic
detection and sentiment analysis. We publish this data as an open access data
to encourage innovation and encourage other works in the field of NLP for
Arabic dialects and social media. A selection of models were built using this
data set and are presented in this paper along with their performances.
| 2,021 |
Computation and Language
|
InsNet: An Efficient, Flexible, and Performant Insertion-based Text
Generation Model
|
We propose InsNet, an expressive insertion-based text generator with
efficient training and flexible decoding (parallel or sequential). Unlike most
existing insertion-based text generation works that require re-encoding of the
context after each insertion operation and thus are inefficient to train,
InsNet only requires one pass of context encoding for the entire sequence
during training by introducing a novel insertion-oriented position encoding and
a light-weighted slot representation strategy to enable computation sharing.
Furthermore, we propose an algorithm InsNet-Dinic to better determine the
parallelization of insertion operations that provides a controllable switch
between parallel and sequential decoding, making it flexible to handle more
parallelizable tasks such as machine translation with efficient decoding, or
less parallelizable tasks such as open-domain text generation to guarantee
high-quality outputs. Experiments on two lexically constrained text generation
datasets and three machine translation datasets demonstrate InsNet's advantages
over previous insertion-based methods in terms of training speed, inference
efficiency, and generation quality.
| 2,022 |
Computation and Language
|
Multimodal Punctuation Prediction with Contextual Dropout
|
Automatic speech recognition (ASR) is widely used in consumer electronics.
ASR greatly improves the utility and accessibility of technology, but usually
the output is only word sequences without punctuation. This can result in
ambiguity in inferring user-intent. We first present a transformer-based
approach for punctuation prediction that achieves 8% improvement on the IWSLT
2012 TED Task, beating the previous state of the art [1]. We next describe our
multimodal model that learns from both text and audio, which achieves 8%
improvement over the text-only algorithm on an internal dataset for which we
have both the audio and transcriptions. Finally, we present an approach to
learning a model using contextual dropout that allows us to handle variable
amounts of future context at test time.
| 2,021 |
Computation and Language
|
Jointly Learning Clinical Entities and Relations with Contextual
Language Models and Explicit Context
|
We hypothesize that explicit integration of contextual information into an
Multi-task Learning framework would emphasize the significance of context for
boosting performance in jointly learning Named Entity Recognition (NER) and
Relation Extraction (RE). Our work proves this hypothesis by segmenting
entities from their surrounding context and by building contextual
representations using each independent segment. This relation representation
allows for a joint NER/RE system that achieves near state-of-the-art (SOTA)
performance on both NER and RE tasks while beating the SOTA RE system at
end-to-end NER & RE with a 49.07 F1.
| 2,021 |
Computation and Language
|
Performance of Automatic De-identification Across Different Note Types
|
Free-text clinical notes detail all aspects of patient care and have great
potential to facilitate quality improvement and assurance initiatives as well
as advance clinical research. However, concerns about patient privacy and
confidentiality limit the use of clinical notes for research. As a result, the
information documented in these notes remains unavailable for most researchers.
De-identification (de-id), i.e., locating and removing personally identifying
protected health information (PHI), is one way of improving access to clinical
narratives. However, there are limited off-the-shelf de-identification systems
able to consistently detect PHI across different data sources and medical
specialties. In this abstract, we present the performance of a state-of-the art
de-id system called NeuroNER1 on a diverse set of notes from University of
Washington (UW) when the models are trained on data from an external
institution (Partners Healthcare) vs. from the same institution (UW). We
present results at the level of PHI and note types.
| 2,021 |
Computation and Language
|
IFoodCloud: A Platform for Real-time Sentiment Analysis of Public
Opinion about Food Safety in China
|
The Internet contains a wealth of public opinion on food safety, including
views on food adulteration, food-borne diseases, agricultural pollution,
irregular food distribution, and food production issues. In order to
systematically collect and analyse public opinion on food safety, we developed
IFoodCloud, a platform for the real-time sentiment analysis of public opinion
on food safety in China. It collects data from more than 3,100 public sources
that can be used to explore public opinion trends, public sentiment, and
regional attention differences of food safety incidents. At the same time, we
constructed a sentiment classification model using multiple lexicon-based and
deep learning-based algorithms integrated with IFoodCloud that provide an
unprecedented rapid means of understanding the public sentiment toward specific
food safety incidents. Our best model's F1-score achieved 0.9737. Further,
three real-world cases are presented to demonstrate the application and
robustness. IFoodCloud could be considered a valuable tool for promote
scientisation of food safety supervision and risk communication.
| 2,021 |
Computation and Language
|
Highly Fast Text Segmentation With Pairwise Markov Chains
|
Natural Language Processing (NLP) models' current trend consists of using
increasingly more extra-data to build the best models as possible. It implies
more expensive computational costs and training time, difficulties for
deployment, and worries about these models' carbon footprint reveal a critical
problem in the future. Against this trend, our goal is to develop NLP models
requiring no extra-data and minimizing training time. To do so, in this paper,
we explore Markov chain models, Hidden Markov Chain (HMC) and Pairwise Markov
Chain (PMC), for NLP segmentation tasks. We apply these models for three
classic applications: POS Tagging, Named-Entity-Recognition, and Chunking. We
develop an original method to adapt these models for text segmentation's
specific challenges to obtain relevant performances with very short training
and execution times. PMC achieves equivalent results to those obtained by
Conditional Random Fields (CRF), one of the most applied models for these tasks
when no extra-data are used. Moreover, PMC has training times 30 times shorter
than the CRF ones, which validates this model given our objectives.
| 2,021 |
Computation and Language
|
Introducing the Hidden Neural Markov Chain framework
|
Nowadays, neural network models achieve state-of-the-art results in many
areas as computer vision or speech processing. For sequential data, especially
for Natural Language Processing (NLP) tasks, Recurrent Neural Networks (RNNs)
and their extensions, the Long Short Term Memory (LSTM) network and the Gated
Recurrent Unit (GRU), are among the most used models, having a "term-to-term"
sequence processing. However, if many works create extensions and improvements
of the RNN, few have focused on developing other ways for sequential data
processing with neural networks in a "term-to-term" way. This paper proposes
the original Hidden Neural Markov Chain (HNMC) framework, a new family of
sequential neural models. They are not based on the RNN but on the Hidden
Markov Model (HMM), a probabilistic graphical model. This neural extension is
possible thanks to the recent Entropic Forward-Backward algorithm for HMM
restoration. We propose three different models: the classic HNMC, the HNMC2,
and the HNMC-CN. After describing our models' whole construction, we compare
them with classic RNN and Bidirectional RNN (BiRNN) models for some sequence
labeling tasks: Chunking, Part-Of-Speech Tagging, and Named Entity Recognition.
For every experiment, whatever the architecture or the embedding method used,
one of our proposed models has the best results. It shows this new neural
sequential framework's potential, which can open the way to new models, and
might eventually compete with the prevalent BiLSTM and BiGRU.
| 2,021 |
Computation and Language
|
Semantic Parsing to Manipulate Relational Database For a Management
System
|
Chatbots and AI assistants have claimed their importance in today life. The
main reason behind adopting this technology is to connect with the user,
understand their requirements, and fulfill them. This has been achieved but at
the cost of heavy training data and complex learning models. This work is
carried out proposes a simple algorithm, a model which can be implemented in
different fields each with its own work scope. The proposed model converts
human language text to computer-understandable SQL queries. The model requires
data only related to the specific field, saving data space. This model performs
linear computation hence solving the computational complexity. This work also
defines the stages where a new methodology is implemented and what previous
method was adopted to fulfill the requirement at that stage. Two datasets
available online will be used in this work, the ATIS dataset, and WikiSQL. This
work compares the computation time among the 2 datasets and also compares the
accuracy of both. This paper works over basic Natural language processing tasks
like semantic parsing, NER, parts of speech and tends to achieve results
through these simple methods.
| 2,021 |
Computation and Language
|
JST-RR Model: Joint Modeling of Ratings and Reviews in Sentiment-Topic
Prediction
|
Analysis of online reviews has attracted great attention with broad
applications. Often times, the textual reviews are coupled with the numerical
ratings in the data. In this work, we propose a probabilistic model to
accommodate both textual reviews and overall ratings with consideration of
their intrinsic connection for a joint sentiment-topic prediction. The key of
the proposed method is to develop a unified generative model where the topic
modeling is constructed based on review texts and the sentiment prediction is
obtained by combining review texts and overall ratings. The inference of model
parameters are obtained by an efficient Gibbs sampling procedure. The proposed
method can enhance the prediction accuracy of review data and achieve an
effective detection of interpretable topics and sentiments. The merits of the
proposed method are elaborated by the case study from Amazon datasets and
simulation studies.
| 2,021 |
Computation and Language
|
Position Information in Transformers: An Overview
|
Transformers are arguably the main workhorse in recent Natural Language
Processing research. By definition a Transformer is invariant with respect to
reordering of the input. However, language is inherently sequential and word
order is essential to the semantics and syntax of an utterance. In this
article, we provide an overview and theoretical comparison of existing methods
to incorporate position information into Transformer models. The objectives of
this survey are to (1) showcase that position information in Transformer is a
vibrant and extensive research area; (2) enable the reader to compare existing
methods by providing a unified notation and systematization of different
approaches along important model dimensions; (3) indicate what characteristics
of an application should be taken into account when selecting a position
encoding; (4) provide stimuli for future research.
| 2,021 |
Computation and Language
|
User Factor Adaptation for User Embedding via Multitask Learning
|
Language varies across users and their interested fields in social media
data: words authored by a user across his/her interests may have different
meanings (e.g., cool) or sentiments (e.g., fast). However, most of the existing
methods to train user embeddings ignore the variations across user interests,
such as product and movie categories (e.g., drama vs. action). In this study,
we treat the user interest as domains and empirically examine how the user
language can vary across the user factor in three English social media
datasets. We then propose a user embedding model to account for the language
variability of user interests via a multitask learning framework. The model
learns user language and its variations without human supervision. While
existing work mainly evaluated the user embedding by extrinsic tasks, we
propose an intrinsic evaluation via clustering and evaluate user embeddings by
an extrinsic task, text classification. The experiments on the three
English-language social media datasets show that our proposed approach can
generally outperform baselines via adapting the user factor.
| 2,021 |
Computation and Language
|
Generating Human Readable Transcript for Automatic Speech Recognition
with Pre-trained Language Model
|
Modern Automatic Speech Recognition (ASR) systems can achieve high
performance in terms of recognition accuracy. However, a perfectly accurate
transcript still can be challenging to read due to disfluency, filter words,
and other errata common in spoken communication. Many downstream tasks and
human readers rely on the output of the ASR system; therefore, errors
introduced by the speaker and ASR system alike will be propagated to the next
task in the pipeline. In this work, we propose an ASR post-processing model
that aims to transform the incorrect and noisy ASR output into a readable text
for humans and downstream tasks. We leverage the Metadata Extraction (MDE)
corpus to construct a task-specific dataset for our study. Since the dataset is
small, we propose a novel data augmentation method and use a two-stage training
strategy to fine-tune the RoBERTa pre-trained model. On the constructed test
set, our model outperforms a production two-step pipeline-based post-processing
method by a large margin of 13.26 on readability-aware WER (RA-WER) and 17.53
on BLEU metrics. Human evaluation also demonstrates that our method can
generate more human-readable transcripts than the baseline method.
| 2,021 |
Computation and Language
|
Domain Adaptation in Dialogue Systems using Transfer and Meta-Learning
|
Current generative-based dialogue systems are data-hungry and fail to adapt
to new unseen domains when only a small amount of target data is available.
Additionally, in real-world applications, most domains are underrepresented, so
there is a need to create a system capable of generalizing to these domains
using minimal data. In this paper, we propose a method that adapts to unseen
domains by combining both transfer and meta-learning (DATML). DATML improves
the previous state-of-the-art dialogue model, DiKTNet, by introducing a
different learning technique: meta-learning. We use Reptile, a first-order
optimization-based meta-learning algorithm as our improved training method. We
evaluated our model on the MultiWOZ dataset and outperformed DiKTNet in both
BLEU and Entity F1 scores when the same amount of data is available.
| 2,021 |
Computation and Language
|
Creating a Universal Dependencies Treebank of Spoken Frisian-Dutch
Code-switched Data
|
This paper explores the difficulties of annotating transcribed spoken
Dutch-Frisian code-switch utterances into Universal Dependencies. We make use
of data from the FAME! corpus, which consists of transcriptions and audio data.
Besides the usual annotation difficulties, this dataset is extra challenging
because of Frisian being low-resource, the informal nature of the data,
code-switching and non-standard sentence segmentation. As a starting point, two
annotators annotated 150 random utterances in three stages of 50 utterances.
After each stage, disagreements where discussed and resolved. An increase of
7.8 UAS and 10.5 LAS points was achieved between the first and third round.
This paper will focus on the issues that arise when annotating a transcribed
speech corpus. To resolve these issues several solutions are proposed.
| 2,021 |
Computation and Language
|
Cognitively Aided Zero-Shot Automatic Essay Grading
|
Automatic essay grading (AEG) is a process in which machines assign a grade
to an essay written in response to a topic, called the prompt. Zero-shot AEG is
when we train a system to grade essays written to a new prompt which was not
present in our training data. In this paper, we describe a solution to the
problem of zero-shot automatic essay grading, using cognitive information, in
the form of gaze behaviour. Our experiments show that using gaze behaviour
helps in improving the performance of AEG systems, especially when we provide a
new essay written in response to a new prompt for scoring, by an average of
almost 5 percentage points of QWK.
| 2,021 |
Computation and Language
|
Factorization of Fact-Checks for Low Resource Indian Languages
|
The advancement in technology and accessibility of internet to each
individual is revolutionizing the real time information. The liberty to express
your thoughts without passing through any credibility check is leading to
dissemination of fake content in the ecosystem. It can have disastrous effects
on both individuals and society as a whole. The amplification of fake news is
becoming rampant in India too. Debunked information often gets republished with
a replacement description, claiming it to depict some different incidence. To
curb such fabricated stories, it is necessary to investigate such deduplicates
and false claims made in public. The majority of studies on automatic
fact-checking and fake news detection is restricted to English only. But for a
country like India where only 10% of the literate population speak English,
role of regional languages in spreading falsity cannot be undermined. In this
paper, we introduce FactDRIL: the first large scale multilingual Fact-checking
Dataset for Regional Indian Languages. We collect an exhaustive dataset across
7 months covering 11 low-resource languages. Our propose dataset consists of
9,058 samples belonging to English, 5,155 samples to Hindi and remaining 8,222
samples are distributed across various regional languages, i.e. Bangla,
Marathi, Malayalam, Telugu, Tamil, Oriya, Assamese, Punjabi, Urdu, Sinhala and
Burmese. We also present the detailed characterization of three M's
(multi-lingual, multi-media, multi-domain) in the FactDRIL accompanied with the
complete list of other varied attributes making it a unique dataset to study.
Lastly, we present some potential use cases of the dataset. We expect this
dataset will be a valuable resource and serve as a starting point to fight
proliferation of fake news in low resource languages.
| 2,021 |
Computation and Language
|
RUBERT: A Bilingual Roman Urdu BERT Using Cross Lingual Transfer
Learning
|
In recent studies, it has been shown that Multilingual language models
underperform their monolingual counterparts. It is also a well-known fact that
training and maintaining monolingual models for each language is a costly and
time-consuming process. Roman Urdu is a resource-starved language used
popularly on social media platforms and chat apps. In this research, we propose
a novel dataset of scraped tweets containing 54M tokens and 3M sentences.
Additionally, we also propose RUBERT a bilingual Roman Urdu model created by
additional pretraining of English BERT. We compare its performance with a
monolingual Roman Urdu BERT trained from scratch and a multilingual Roman Urdu
BERT created by additional pretraining of Multilingual BERT. We show through
our experiments that additional pretraining of the English BERT produces the
most notable performance improvement.
| 2,021 |
Computation and Language
|
Exploiting Multimodal Reinforcement Learning for Simultaneous Machine
Translation
|
This paper addresses the problem of simultaneous machine translation (SiMT)
by exploring two main concepts: (a) adaptive policies to learn a good trade-off
between high translation quality and low latency; and (b) visual information to
support this process by providing additional (visual) contextual information
which may be available before the textual input is produced. For that, we
propose a multimodal approach to simultaneous machine translation using
reinforcement learning, with strategies to integrate visual and textual
information in both the agent and the environment. We provide an exploration on
how different types of visual information and integration strategies affect the
quality and latency of simultaneous translation models, and demonstrate that
visual cues lead to higher quality while keeping the latency low.
| 2,021 |
Computation and Language
|
MixUp Training Leads to Reduced Overfitting and Improved Calibration for
the Transformer Architecture
|
MixUp is a computer vision data augmentation technique that uses convex
interpolations of input data and their labels to enhance model generalization
during training. However, the application of MixUp to the natural language
understanding (NLU) domain has been limited, due to the difficulty of
interpolating text directly in the input space. In this study, we propose MixUp
methods at the Input, Manifold, and sentence embedding levels for the
transformer architecture, and apply them to finetune the BERT model for a
diverse set of NLU tasks. We find that MixUp can improve model performance, as
well as reduce test loss and model calibration error by up to 50%.
| 2,021 |
Computation and Language
|
Exploring Supervised and Unsupervised Rewards in Machine Translation
|
Reinforcement Learning (RL) is a powerful framework to address the
discrepancy between loss functions used during training and the final
evaluation metrics to be used at test time. When applied to neural Machine
Translation (MT), it minimises the mismatch between the cross-entropy loss and
non-differentiable evaluation metrics like BLEU. However, the suitability of
these metrics as reward function at training time is questionable: they tend to
be sparse and biased towards the specific words used in the reference texts. We
propose to address this problem by making models less reliant on such metrics
in two ways: (a) with an entropy-regularised RL method that does not only
maximise a reward function but also explore the action space to avoid peaky
distributions; (b) with a novel RL method that explores a dynamic unsupervised
reward function to balance between exploration and exploitation. We base our
proposals on the Soft Actor-Critic (SAC) framework, adapting the off-policy
maximum entropy model for language generation applications such as MT. We
demonstrate that SAC with BLEU reward tends to overfit less to the training
data and performs better on out-of-domain data. We also show that our dynamic
unsupervised reward can lead to better translation of ambiguous words.
| 2,021 |
Computation and Language
|
Minimally-Supervised Structure-Rich Text Categorization via Learning on
Text-Rich Networks
|
Text categorization is an essential task in Web content analysis. Considering
the ever-evolving Web data and new emerging categories, instead of the
laborious supervised setting, in this paper, we focus on the
minimally-supervised setting that aims to categorize documents effectively,
with a couple of seed documents annotated per category. We recognize that texts
collected from the Web are often structure-rich, i.e., accompanied by various
metadata. One can easily organize the corpus into a text-rich network, joining
raw text documents with document attributes, high-quality phrases, label
surface names as nodes, and their associations as edges. Such a network
provides a holistic view of the corpus' heterogeneous data sources and enables
a joint optimization for network-based analysis and deep textual model
training. We therefore propose a novel framework for minimally supervised
categorization by learning from the text-rich network. Specifically, we jointly
train two modules with different inductive biases -- a text analysis module for
text understanding and a network learning module for class-discriminative,
scalable network learning. Each module generates pseudo training labels from
the unlabeled document set, and both modules mutually enhance each other by
co-training using pooled pseudo labels. We test our model on two real-world
datasets. On the challenging e-commerce product categorization dataset with 683
categories, our experiments show that given only three seed documents per
category, our framework can achieve an accuracy of about 92%, significantly
outperforming all compared methods; our accuracy is only less than 2% away from
the supervised BERT model trained on about 50K labeled documents.
| 2,021 |
Computation and Language
|
Automated Quality Assessment of Cognitive Behavioral Therapy Sessions
Through Highly Contextualized Language Representations
|
During a psychotherapy session, the counselor typically adopts techniques
which are codified along specific dimensions (e.g., 'displays warmth and
confidence', or 'attempts to set up collaboration') to facilitate the
evaluation of the session. Those constructs, traditionally scored by trained
human raters, reflect the complex nature of psychotherapy and highly depend on
the context of the interaction. Recent advances in deep contextualized language
models offer an avenue for accurate in-domain linguistic representations which
can lead to robust recognition and scoring of such psychotherapy-relevant
behavioral constructs, and support quality assurance and supervision. In this
work, we propose a BERT-based model for automatic behavioral scoring of a
specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT),
where prior work is limited to frequency-based language features and/or short
text excerpts which do not capture the unique elements involved in a
spontaneous long conversational interaction. The model focuses on the
classification of therapy sessions with respect to the overall score achieved
on the widely-used Cognitive Therapy Rating Scale (CTRS), but is trained in a
multi-task manner in order to achieve higher interpretability. BERT-based
representations are further augmented with available therapy metadata,
providing relevant non-linguistic context and leading to consistent performance
improvements. We train and evaluate our models on a set of 1,118 real-world
therapy sessions, recorded and automatically transcribed. Our best model
achieves an F1 score equal to 72.61% on the binary classification task of low
vs. high total CTRS.
| 2,021 |
Computation and Language
|
Enhancing Model Robustness By Incorporating Adversarial Knowledge Into
Semantic Representation
|
Despite that deep neural networks (DNNs) have achieved enormous success in
many domains like natural language processing (NLP), they have also been proven
to be vulnerable to maliciously generated adversarial examples. Such inherent
vulnerability has threatened various real-world deployed DNNs-based
applications. To strength the model robustness, several countermeasures have
been proposed in the English NLP domain and obtained satisfactory performance.
However, due to the unique language properties of Chinese, it is not trivial to
extend existing defenses to the Chinese domain. Therefore, we propose AdvGraph,
a novel defense which enhances the robustness of Chinese-based NLP models by
incorporating adversarial knowledge into the semantic representation of the
input. Extensive experiments on two real-world tasks show that AdvGraph
exhibits better performance compared with previous work: (i) effective - it
significantly strengthens the model robustness even under the adaptive attacks
setting without negative impact on model performance over legitimate input;
(ii) generic - its key component, i.e., the representation of connotative
adversarial knowledge is task-agnostic, which can be reused in any
Chinese-based NLP models without retraining; and (iii) efficient - it is a
light-weight defense with sub-linear computational complexity, which can
guarantee the efficiency required in practical scenarios.
| 2,021 |
Computation and Language
|
A Novel Deep Learning Method for Textual Sentiment Analysis
|
Sentiment analysis is known as one of the most crucial tasks in the field of
natural language processing and Convolutional Neural Network (CNN) is one of
those prominent models that is commonly used for this aim. Although
convolutional neural networks have obtained remarkable results in recent years,
they are still confronted with some limitations. Firstly, they consider that
all words in a sentence have equal contributions in the sentence meaning
representation and are not able to extract informative words. Secondly, they
require a large number of training data to obtain considerable results while
they have many parameters that must be accurately adjusted. To this end, a
convolutional neural network integrated with a hierarchical attention layer is
proposed which is able to extract informative words and assign them higher
weight. Moreover, the effect of transfer learning that transfers knowledge
learned in the source domain to the target domain with the aim of improving the
performance is also explored. Based on the empirical results, the proposed
model not only has higher classification accuracy and can extract informative
words but also applying incremental transfer learning can significantly enhance
the classification performance.
| 2,021 |
Computation and Language
|
Paraphrases do not explain word analogies
|
Many types of distributional word embeddings (weakly) encode linguistic
regularities as directions (the difference between "jump" and "jumped" will be
in a similar direction to that of "walk" and "walked," and so on). Several
attempts have been made to explain this fact. We respond to Allen and
Hospedales' recent (ICML, 2019) theoretical explanation, which claims that
word2vec and GloVe will encode linguistic regularities whenever a specific
relation of paraphrase holds between the four words involved in the regularity.
We demonstrate that the explanation does not go through: the paraphrase
relations needed under this explanation do not hold empirically.
| 2,021 |
Computation and Language
|
The Sensitivity of Word Embeddings-based Author Detection Models to
Semantic-preserving Adversarial Perturbations
|
Authorship analysis is an important subject in the field of natural language
processing. It allows the detection of the most likely writer of articles,
news, books, or messages. This technique has multiple uses in tasks related to
authorship attribution, detection of plagiarism, style analysis, sources of
misinformation, etc. The focus of this paper is to explore the limitations and
sensitiveness of established approaches to adversarial manipulations of inputs.
To this end, and using those established techniques, we first developed an
experimental frame-work for author detection and input perturbations. Next, we
experimentally evaluated the performance of the authorship detection model to a
collection of semantic-preserving adversarial perturbations of input
narratives. Finally, we compare and analyze the effects of different
perturbation strategies, input and model configurations, and the effects of
these on the author detection model.
| 2,021 |
Computation and Language
|
Teach Me to Explain: A Review of Datasets for Explainable Natural
Language Processing
|
Explainable NLP (ExNLP) has increasingly focused on collecting
human-annotated textual explanations. These explanations are used downstream in
three ways: as data augmentation to improve performance on a predictive task,
as supervision to train models to produce explanations for their predictions,
and as a ground-truth to evaluate model-generated explanations. In this review,
we identify 65 datasets with three predominant classes of textual explanations
(highlights, free-text, and structured), organize the literature on annotating
each type, identify strengths and shortcomings of existing collection
methodologies, and give recommendations for collecting ExNLP datasets in the
future.
| 2,021 |
Computation and Language
|
SocialNLP EmotionGIF 2020 Challenge Overview: Predicting Reaction GIF
Categories on Social Media
|
We present an overview of the EmotionGIF2020 Challenge, held at the 8th
International Workshop on Natural Language Processing for Social Media
(SocialNLP), in conjunction with ACL 2020. The challenge required predicting
affective reactions to online texts, and included the EmotionGIF dataset, with
tweets labeled for the reaction categories. The novel dataset included 40K
tweets with their reaction GIFs. Due to the special circumstances of year 2020,
two rounds of the competition were conducted. A total of 84 teams registered
for the task. Of these, 25 teams success-fully submitted entries to the
evaluation phase in the first round, while 13 teams participated successfully
in the second round. Of the top participants, five teams presented a technical
report and shared their code. The top score of the winning team using the
Recall@K metric was 62.47%.
| 2,021 |
Computation and Language
|
Hopeful_Men@LT-EDI-EACL2021: Hope Speech Detection Using Indic
Transliteration and Transformers
|
This paper aims to describe the approach we used to detect hope speech in the
HopeEDI dataset. We experimented with two approaches. In the first approach, we
used contextual embeddings to train classifiers using logistic regression,
random forest, SVM, and LSTM based models.The second approach involved using a
majority voting ensemble of 11 models which were obtained by fine-tuning
pre-trained transformer models (BERT, ALBERT, RoBERTa, IndicBERT) after adding
an output layer. We found that the second approach was superior for English,
Tamil and Malayalam. Our solution got a weighted F1 score of 0.93, 0.75 and
0.49 for English,Malayalam and Tamil respectively. Our solution ranked first in
English, eighth in Malayalam and eleventh in Tamil.
| 2,021 |
Computation and Language
|
Automatic Meter Classification of Kurdish Poems
|
Most of the classic texts in Kurdish literature are poems. Knowing the meter
of the poems is helpful for correct reading, a better understanding of the
meaning, and avoidance of ambiguity. This paper presents a rule-based method
for automatic classification of the poem meter for the Central Kurdish
language. The metrical system of Kurdish poetry is divided into three classes
of quantitative, syllabic, and free verses. As the vowel length is not phonemic
in the language, there are uncertainties in syllable weight and meter
identification. The proposed method generates all the possible situations and
then, by considering all lines of the input poem and the common meter patterns
of Kurdish poetry, identifies the most probable meter type and pattern of the
input poem. Evaluation of the method on a dataset from VejinBooks Kurdish
corpus resulted in 97.3% of precision in meter type and 96.2% of precision in
pattern identification.
| 2,021 |
Computation and Language
|
OneStop QAMaker: Extract Question-Answer Pairs from Text in a One-Stop
Approach
|
Large-scale question-answer (QA) pairs are critical for advancing research
areas like machine reading comprehension and question answering. To construct
QA pairs from documents requires determining how to ask a question and what is
the corresponding answer. Existing methods for QA pair generation usually
follow a pipeline approach. Namely, they first choose the most likely candidate
answer span and then generate the answer-specific question. This pipeline
approach, however, is undesired in mining the most appropriate QA pairs from
documents since it ignores the connection between question generation and
answer extraction, which may lead to incompatible QA pair generation, i.e., the
selected answer span is inappropriate for question generation. However, for
human annotators, we take the whole QA pair into account and consider the
compatibility between question and answer. Inspired by such motivation, instead
of the conventional pipeline approach, we propose a model named OneStop
generate QA pairs from documents in a one-stop approach. Specifically,
questions and their corresponding answer span is extracted simultaneously and
the process of question generation and answer extraction mutually affect each
other. Additionally, OneStop is much more efficient to be trained and deployed
in industrial scenarios since it involves only one model to solve the complex
QA generation task. We conduct comprehensive experiments on three large-scale
machine reading comprehension datasets: SQuAD, NewsQA, and DuReader. The
experimental results demonstrate that our OneStop model outperforms the
baselines significantly regarding the quality of generated questions, quality
of generated question-answer pairs, and model efficiency.
| 2,021 |
Computation and Language
|
Augmenting Part-of-speech Tagging with Syntactic Information for
Vietnamese and Chinese
|
Word segmentation and part-of-speech tagging are two critical preliminary
steps for downstream tasks in Vietnamese natural language processing. In
reality, people tend to consider also the phrase boundary when performing word
segmentation and part of speech tagging rather than solely process word by word
from left to right. In this paper, we implement this idea to improve word
segmentation and part of speech tagging the Vietnamese language by employing a
simplified constituency parser. Our neural model for joint word segmentation
and part-of-speech tagging has the architecture of the syllable-based CRF
constituency parser. To reduce the complexity of parsing, we replace all
constituent labels with a single label indicating for phrases. This model can
be augmented with predicted word boundary and part-of-speech tags by other
tools. Because Vietnamese and Chinese have some similar linguistic phenomena,
we evaluated the proposed model and its augmented versions on three Vietnamese
benchmark datasets and six Chinese benchmark datasets. Our experimental results
show that the proposed model achieves higher performances than previous works
for both languages.
| 2,021 |
Computation and Language
|
Sentiment Analysis of Code-Mixed Social Media Text (Hinglish)
|
This paper discusses the results obtained for different techniques applied
for performing the sentiment analysis of social media (Twitter) code-mixed text
written in Hinglish. The various stages involved in performing the sentiment
analysis were data consolidation, data cleaning, data transformation and
modelling. Various data cleaning techniques were applied, data was cleaned in
five iterations and the results of experiments conducted were noted after each
iteration. Data was transformed using count vectorizer, one hot vectorizer,
tf-idf vectorizer, doc2vec, word2vec and fasttext embeddings. The models were
created using various machine learning algorithms such as SVM, KNN, Decision
Trees, Random Forests, Naive Bayes, Logistic Regression, and ensemble voting
classifiers. The data was obtained from a task on Codalab competition website
which was listed as Task:9 on the Semeval-2020 competition website. The models
created were evaluated using the F1-score (macro). The best F1-score of 69.07
was achieved using ensemble voting classifier.
| 2,021 |
Computation and Language
|
From Universal Language Model to Downstream Task: Improving
RoBERTa-Based Vietnamese Hate Speech Detection
|
Natural language processing is a fast-growing field of artificial
intelligence. Since the Transformer was introduced by Google in 2017, a large
number of language models such as BERT, GPT, and ELMo have been inspired by
this architecture. These models were trained on huge datasets and achieved
state-of-the-art results on natural language understanding. However,
fine-tuning a pre-trained language model on much smaller datasets for
downstream tasks requires a carefully-designed pipeline to mitigate problems of
the datasets such as lack of training data and imbalanced data. In this paper,
we propose a pipeline to adapt the general-purpose RoBERTa language model to a
specific text classification task: Vietnamese Hate Speech Detection. We first
tune the PhoBERT on our dataset by re-training the model on the Masked Language
Model task; then, we employ its encoder for text classification. In order to
preserve pre-trained weights while learning new feature representations, we
further utilize different training techniques: layer freezing, block-wise
learning rate, and label smoothing. Our experiments proved that our proposed
pipeline boosts the performance significantly, achieving a new state-of-the-art
on Vietnamese Hate Speech Detection campaign with 0.7221 F1 score.
| 2,020 |
Computation and Language
|
Multichannel LSTM-CNN for Telugu Technical Domain Identification
|
With the instantaneous growth of text information, retrieving domain-oriented
information from the text data has a broad range of applications in Information
Retrieval and Natural language Processing. Thematic keywords give a compressed
representation of the text. Usually, Domain Identification plays a significant
role in Machine Translation, Text Summarization, Question Answering,
Information Extraction, and Sentiment Analysis. In this paper, we proposed the
Multichannel LSTM-CNN methodology for Technical Domain Identification for
Telugu. This architecture was used and evaluated in the context of the ICON
shared task TechDOfication 2020 (task h), and our system got 69.9% of the F1
score on the test dataset and 90.01% on the validation set.
| 2,021 |
Computation and Language
|
PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen
Domains
|
Natural Language Processing algorithms have made incredible progress, but
they still struggle when applied to out-of-distribution examples. We address a
challenging and underexplored version of this domain adaptation problem, where
an algorithm is trained on several source domains, and then applied to examples
from unseen domains that are unknown at training time. Particularly, no
examples, labeled or unlabeled, or any other knowledge about the target domain
are available to the algorithm at training time. We present PADA: An
example-based autoregressive Prompt learning algorithm for on-the-fly
Any-Domain Adaptation, based on the T5 language model. Given a test example,
PADA first generates a unique prompt for it and then, conditioned on this
prompt, labels the example with respect to the NLP prediction task. PADA is
trained to generate a prompt which is a token sequence of unrestricted length,
consisting of Domain Related Features (DRFs) that characterize each of the
source domains. Intuitively, the generated prompt is a unique signature that
maps the test example to a semantic space spanned by the source domains. In
experiments with 3 tasks (text classification and sequence tagging), for a
total of 14 multi-source adaptation scenarios, PADA substantially outperforms
strong baselines.
| 2,022 |
Computation and Language
|
Multi-Task Attentive Residual Networks for Argument Mining
|
We explore the use of residual networks and neural attention for multiple
argument mining tasks. We propose a residual architecture that exploits
attention, multi-task learning, and makes use of ensemble, without any
assumption on document or argument structure. We present an extensive
experimental evaluation on five different corpora of user-generated comments,
scientific publications, and persuasive essays. Our results show that our
approach is a strong competitor against state-of-the-art architectures with a
higher computational footprint or corpus-specific design, representing an
interesting compromise between generality, performance accuracy and reduced
model size.
| 2,023 |
Computation and Language
|
NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based
Token Classification and Span Prediction Techniques
|
Toxicity detection of text has been a popular NLP task in the recent years.
In SemEval-2021 Task-5 Toxic Spans Detection, the focus is on detecting toxic
spans within passages. Most state-of-the-art span detection approaches employ
various techniques, each of which can be broadly classified into Token
Classification or Span Prediction approaches. In our paper, we explore simple
versions of both of these approaches and their performance on the task.
Specifically, we use BERT-based models -- BERT, RoBERTa, and SpanBERT for both
approaches. We also combine these approaches and modify them to bring
improvements for Toxic Spans prediction. To this end, we investigate results on
four hybrid approaches -- Multi-Span, Span+Token, LSTM-CRF, and a combination
of predicted offsets using union/intersection. Additionally, we perform a
thorough ablative analysis and analyze our observed results. Our best
submission -- a combination of SpanBERT Span Predictor and RoBERTa Token
Classifier predictions -- achieves an F1 score of 0.6753 on the test set. Our
best post-eval F1 score is 0.6895 on intersection of predicted offsets from
top-3 RoBERTa Token Classification checkpoints. These approaches improve the
performance by 3% on average than those of the shared baseline models -- RNNSL
and SpaCy NER.
| 2,021 |
Computation and Language
|
LRG at SemEval-2021 Task 4: Improving Reading Comprehension with
Abstract Words using Augmentation, Linguistic Features and Voting
|
In this article, we present our methodologies for SemEval-2021 Task-4:
Reading Comprehension of Abstract Meaning. Given a fill-in-the-blank-type
question and a corresponding context, the task is to predict the most suitable
word from a list of 5 options. There are three sub-tasks within this task:
Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection
(subtask-III). We use encoders of transformers-based models pre-trained on the
masked language modelling (MLM) task to build our Fill-in-the-blank (FitB)
models. Moreover, to model imperceptibility, we define certain linguistic
features, and to model non-specificity, we leverage information from hypernyms
and hyponyms provided by a lexical database. Specifically, for non-specificity,
we try out augmentation techniques, and other statistical techniques. We also
propose variants, namely Chunk Voting and Max Context, to take care of input
length restrictions for BERT, etc. Additionally, we perform a thorough ablation
study, and use Integrated Gradients to explain our predictions on a few
samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the
test sets for subtask-I and subtask-II, respectively. For subtask-III, we
achieve accuracies of 65.64% and 62.27%.
| 2,021 |
Computation and Language
|
Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding
Learning
|
Word embedding learning methods require a large number of occurrences of a
word to accurately learn its embedding. However, out-of-vocabulary (OOV) words
which do not appear in the training corpus emerge frequently in the smaller
downstream data. Recent work formulated OOV embedding learning as a few-shot
regression problem and demonstrated that meta-learning can improve results
obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is
known to be unstable and perform worse when a large number of gradient steps
are used for parameter updates. In this work, we propose the use of Leap, a
meta-learning algorithm which leverages the entire trajectory of the learning
process instead of just the beginning and the end points, and thus ameliorates
these two issues. In our experiments on a benchmark OOV embedding learning
dataset and in an extrinsic evaluation, Leap performs comparably or better than
MAML. We go on to examine which contexts are most beneficial to learn an OOV
embedding from, and propose that the choice of contexts may matter more than
the meta-learning employed.
| 2,021 |
Computation and Language
|
Re-Evaluating GermEval17 Using German Pre-Trained Language Models
|
The lack of a commonly used benchmark data set (collection) such as
(Super-)GLUE (Wang et al., 2018, 2019) for the evaluation of non-English
pre-trained language models is a severe shortcoming of current English-centric
NLP-research. It concentrates a large part of the research on English,
neglecting the uncertainty when transferring conclusions found for the English
language to other languages. We evaluate the performance of the German and
multilingual BERT-based models currently available via the huggingface
transformers library on the four tasks of the GermEval17 workshop. We compare
them to pre-BERT architectures (Wojatzki et al., 2017; Schmitt et al., 2018;
Attia et al., 2018) as well as to an ELMo-based architecture (Biesialska et
al., 2020) and a BERT-based approach (Guhr et al., 2020). The observed
improvements are put in relation to those for similar tasks and similar models
(pre-BERT vs. BERT-based) for the English language in order to draw tentative
conclusions about whether the observed improvements are transferable to German
or potentially other related languages.
| 2,021 |
Computation and Language
|
Creolizing the Web
|
The evolution of language has been a hotly debated subject with contradicting
hypotheses and unreliable claims. Drawing from signalling games, dynamic
population mechanics, machine learning and algebraic topology, we present a
method for detecting evolutionary patterns in a sociological model of language
evolution. We develop a minimalistic model that provides a rigorous base for
any generalized evolutionary model for language based on communication between
individuals. We also discuss theoretical guarantees of this model, ranging from
stability of language representations to fast convergence of language by
temporal communication and language drift in an interactive setting. Further we
present empirical results and their interpretations on a real world dataset
from \rdt to identify communities and echo chambers for opinions, thus placing
obstructions to reliable communication among communities.
| 2,021 |
Computation and Language
|
Task-Specific Pre-Training and Cross Lingual Transfer for Code-Switched
Data
|
Using task-specific pre-training and leveraging cross-lingual transfer are
two of the most popular ways to handle code-switched data. In this paper, we
aim to compare the effects of both for the task of sentiment analysis. We work
with two Dravidian Code-Switched languages - Tamil-Engish and Malayalam-English
and four different BERT based models. We compare the effects of task-specific
pre-training and cross-lingual transfer and find that task-specific
pre-training results in superior zero-shot and supervised performance when
compared to performance achieved by leveraging cross-lingual transfer from
multilingual BERT models.
| 2,021 |
Computation and Language
|
Probing Classifiers: Promises, Shortcomings, and Advances
|
Probing classifiers have emerged as one of the prominent methodologies for
interpreting and analyzing deep neural network models of natural language
processing. The basic idea is simple -- a classifier is trained to predict some
linguistic property from a model's representations -- and has been used to
examine a wide variety of models and properties. However, recent studies have
demonstrated various methodological limitations of this approach. This article
critically reviews the probing classifiers framework, highlighting their
promises, shortcomings, and advances.
| 2,021 |
Computation and Language
|
When Attention Meets Fast Recurrence: Training Language Models with
Reduced Compute
|
Large language models have become increasingly difficult to train because of
the growing computation time and cost. In this work, we present SRU++, a
highly-efficient architecture that combines fast recurrence and attention for
sequence modeling. SRU++ exhibits strong modeling capacity and training
efficiency. On standard language modeling tasks such as Enwik8, Wiki-103 and
Billion Word datasets, our model obtains better bits-per-character and
perplexity while using 3x-10x less training cost compared to top-performing
Transformer models. For instance, our model achieves a state-of-the-art result
on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We
further demonstrate that SRU++ requires minimal attention for near
state-of-the-art performance. Our results suggest jointly leveraging fast
recurrence with little attention as a promising direction for accelerating
model training and inference.
| 2,021 |
Computation and Language
|
References in Wikipedia: The Editors' Perspective
|
References are an essential part of Wikipedia. Each statement in Wikipedia
should be referenced. In this paper, we explore the creation and collection of
references for new Wikipedia articles from an editors' perspective. We map out
the workflow of editors when creating a new article, emphasising how they
select references.
| 2,021 |
Computation and Language
|
A Large-Scale, Automated Study of Language Surrounding Artificial
Intelligence
|
This work presents a large-scale analysis of artificial intelligence (AI) and
machine learning (ML) references within news articles and scientific
publications between 2011 and 2019. We implement word association measurements
that automatically identify shifts in language co-occurring with AI/ML and
quantify the strength of these word associations. Our results highlight the
evolution of perceptions and definitions around AI/ML and detect emerging
application areas, models, and systems (e.g., blockchain and cybersecurity).
Recent small-scale, manual studies have explored AI/ML discourse within the
general public, the policymaker community, and researcher community, but are
limited in their scalability and longevity. Our methods provide new views into
public perceptions and subject-area expert discussions of AI/ML and greatly
exceed the explanative power of prior work.
| 2,021 |
Computation and Language
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.