Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64) | Categories (1 class) |
---|---|---|---|
Stanza: A Python Natural Language Processing Toolkit for Many Human
Languages | We introduce Stanza, an open-source Python natural language processing
toolkit supporting 66 human languages. Compared to existing widely used
toolkits, Stanza features a language-agnostic fully neural pipeline for text
analysis, including tokenization, multi-word token expansion, lemmatization,
part-of-speech and morphological feature tagging, dependency parsing, and named
entity recognition. We have trained Stanza on a total of 112 datasets,
including the Universal Dependencies treebanks and other multilingual corpora,
and show that the same neural architecture generalizes well and achieves
competitive performance on all languages tested. Additionally, Stanza includes
a native Python interface to the widely used Java Stanford CoreNLP software,
which further extends its functionality to cover other tasks such as
coreference resolution and relation extraction. Source code, documentation, and
pretrained models for 66 languages are available at
https://stanfordnlp.github.io/stanza.
| 2,020 | Computation and Language |
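The Stanza entry above describes a fully neural pipeline exposed through a Python API. Below is a minimal usage sketch following the project's public documentation; it assumes `stanza` is installed (`pip install stanza`) and that the English models have been downloaded.

```python
# Minimal sketch of the Stanza pipeline described above (assumes `pip install stanza`).
import stanza

stanza.download("en")  # fetch pretrained English models once
nlp = stanza.Pipeline(lang="en", processors="tokenize,mwt,pos,lemma,depparse,ner")

doc = nlp("Stanza was released by the Stanford NLP Group in 2020.")
for sentence in doc.sentences:
    for word in sentence.words:
        # surface form, lemma, universal POS tag, and dependency head/relation
        print(word.text, word.lemma, word.upos, word.head, word.deprel)
    for entity in sentence.ents:
        print(entity.text, entity.type)
```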
A Survey on Contextual Embeddings | Contextual embeddings, such as ELMo and BERT, move beyond global word
representations like Word2Vec and achieve ground-breaking performance on a wide
range of natural language processing tasks. Contextual embeddings assign each
word a representation based on its context, thereby capturing uses of words
across varied contexts and encoding knowledge that transfers across languages.
In this survey, we review existing contextual embedding models, cross-lingual
polyglot pre-training, the application of contextual embeddings in downstream
tasks, model compression, and model analyses.
| 2,020 | Computation and Language |
A Formal Analysis of Multimodal Referring Strategies Under Common Ground | In this paper, we present an analysis of computationally generated
mixed-modality definite referring expressions using combinations of gesture and
linguistic descriptions. In doing so, we expose some striking formal semantic
properties of the interactions between gesture and language, conditioned on the
introduction of content into the common ground between the (computational)
speaker and (human) viewer, and demonstrate how these formal features can
contribute to training better models to predict viewer judgment of referring
expressions, and potentially to the generation of more natural and informative
referring expressions.
| 2,020 | Computation and Language |
Parallel sequence tagging for concept recognition | Background: Named Entity Recognition (NER) and Normalisation (NEN) are core
components of any text-mining system for biomedical texts. In a traditional
concept-recognition pipeline, these tasks are combined in a serial way, which
is inherently prone to error propagation from NER to NEN. We propose a parallel
architecture, where both NER and NEN are modeled as a sequence-labeling task,
operating directly on the source text. We examine different harmonisation
strategies for merging the predictions of the two classifiers into a single
output sequence. Results: We test our approach on the recent Version 4 of the
CRAFT corpus. In all 20 annotation sets of the concept-annotation task, our
system outperforms the pipeline system reported as a baseline in the CRAFT
shared task 2019. Conclusions: Our analysis shows that the strengths of the two
classifiers can be combined in a fruitful way. However, prediction
harmonisation requires individual calibration on a development set for each
annotation set. This allows achieving a good trade-off between established
knowledge (training set) and novel information (unseen concepts). Availability
and Implementation: Source code freely available for download at
https://github.com/OntoGene/craft-st. Supplementary data are available at arXiv
online.
| 2,020 | Computation and Language |
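The parallel-tagging entry above merges the outputs of an NER span tagger and an NEN concept tagger into a single sequence. The sketch below shows one simple harmonisation rule (trust the span tagger for boundaries, the concept tagger for IDs); the paper's actual strategies and their per-set calibration differ, and the tag values here are purely illustrative.

```python
# Illustrative harmonisation of two parallel sequence taggers (not the paper's exact rules):
# `ner_tags` mark spans (B/I/O), `nen_tags` carry a concept ID or "O" per token.
def harmonise(ner_tags, nen_tags):
    merged = []
    for span_tag, concept in zip(ner_tags, nen_tags):
        if span_tag == "O":
            merged.append("O")                # no entity predicted at this token
        elif concept != "O":
            merged.append(concept)            # both classifiers agree: keep the concept ID
        else:
            merged.append("UNKNOWN_CONCEPT")  # span predicted without a concept prediction
    return merged

print(harmonise(["B", "I", "O"], ["CHEBI:15377", "CHEBI:15377", "O"]))
```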
Developing a Multilingual Annotated Corpus of Misogyny and Aggression | In this paper, we discuss the development of a multilingual annotated corpus
of misogyny and aggression in Indian English, Hindi, and Indian Bangla as part
of a project on studying and automatically identifying misogyny and communalism
on social media (the ComMA Project). The dataset is collected from comments on
YouTube videos and currently contains a total of over 20,000 comments. The
comments are annotated at two levels - aggression (overtly aggressive, covertly
aggressive, and non-aggressive) and misogyny (gendered and non-gendered). We
describe the process of data collection, the tagset used for annotation, and
issues and challenges faced during the process of annotation. Finally, we
discuss the results of the baseline experiments conducted to develop a
classifier for misogyny in the three languages.
| 2,020 | Computation and Language |
LAXARY: A Trustworthy Explainable Twitter Analysis Model for
Post-Traumatic Stress Disorder Assessment | Veteran mental health is a significant national problem, as a large number of
veterans are returning from the recent war in Iraq and the continued military
presence in Afghanistan. While significant existing work has investigated
Twitter post-based Post-Traumatic Stress Disorder (PTSD) assessment using
black-box machine learning techniques, these frameworks cannot be trusted by
clinicians due to their lack of clinical explainability. To obtain the trust of
clinicians, we explore a central question: can Twitter posts provide enough
information to fill out the clinical PTSD assessment surveys that clinicians have
traditionally trusted? To answer this question, we propose
the LAXARY (Linguistic Analysis-based Explainable Inquiry) model, a novel
Explainable Artificial Intelligence (XAI) model that detects and represents the PTSD
assessment of Twitter users using a modified Linguistic Inquiry and Word Count
(LIWC) analysis. First, we employ clinically validated survey tools to
collect clinical PTSD assessment data from real Twitter users and develop a
PTSD Linguistic Dictionary from the PTSD assessment survey results. Then, we
use the PTSD Linguistic Dictionary together with a machine learning model to fill out
the survey tools and detect the PTSD status and intensity of the
corresponding Twitter users. Our experimental evaluation on 210 clinically
validated veteran Twitter users yields promising accuracy for both PTSD
classification and intensity estimation. We also evaluate the reliability and
validity of our PTSD Linguistic Dictionary.
| 2,020 | Computation and Language |
A Label Proportions Estimation Technique for Adversarial Domain
Adaptation in Text Classification | Many text classification tasks are domain-dependent, and various domain
adaptation approaches have been proposed to predict unlabeled data in a new
domain. Domain-adversarial neural networks (DANN) and their variants have been
used widely recently and have achieved promising results for this problem.
However, most of these approaches assume that the label proportions of the
source and target domains are similar, which rarely holds in most real-world
scenarios. Sometimes the label shift can be large and the DANN fails to learn
domain-invariant features. In this study, we focus on unsupervised domain
adaptation of text classification with label shift and introduce a domain
adversarial network with label proportions estimation (DAN-LPE) framework. The
DAN-LPE simultaneously trains a domain adversarial net and processes label
proportions estimation by the confusion of the source domain and the
predictions of the target domain. Experiments show the DAN-LPE achieves a good
estimate of the target label distributions and reduces the label shift to
improve the classification performance.
| 2,020 | Computation and Language |
HELFI: a Hebrew-Greek-Finnish Parallel Bible Corpus with Cross-Lingual
Morpheme Alignment | Twenty-five years ago, morphologically aligned Hebrew-Finnish and
Greek-Finnish bitexts (texts accompanied by a translation) were constructed
manually in order to create an analytical concordance (Luoto et al., 1997) for
a Finnish Bible translation. The creators of the bitexts recently secured the
publisher's permission to release its fine-grained alignment, but the alignment
was still dependent on proprietary, third-party resources such as a copyrighted
text edition and proprietary morphological analyses of the source texts. In
this paper, we describe a nontrivial editorial process starting from the
creation of the original one-purpose database and ending with its
reconstruction using only freely available text editions and annotations. This
process produced an openly available dataset that contains (i) the source texts
and their translations, (ii) the morphological analyses, (iii) the
cross-lingual morpheme alignments.
| 2,020 | Computation and Language |
Offensive Language Identification in Greek | As offensive language has become a rising issue for online communities and
social media platforms, researchers have been investigating ways of coping with
abusive content and developing systems to detect its different types:
cyberbullying, hate speech, aggression, etc. With a few notable exceptions,
most research on this topic so far has dealt with English. This is mostly due
to the availability of language resources for English. To address this
shortcoming, this paper presents the first Greek annotated dataset for
offensive language identification: the Offensive Greek Tweet Dataset (OGTD).
OGTD is a manually annotated dataset containing 4,779 posts from Twitter
annotated as offensive and not offensive. Along with a detailed description of
the dataset, we evaluate several computational models trained and tested on
this data.
| 2,020 | Computation and Language |
Recent Advances and Challenges in Task-oriented Dialog System | Due to the significance and value in human-computer interaction and natural
language processing, task-oriented dialog systems are attracting more and more
attention in both academic and industrial communities. In this paper, we survey
recent advances and challenges in task-oriented dialog systems. We also discuss
three critical topics for task-oriented dialog systems: (1) improving data
efficiency to facilitate dialog modeling in low-resource settings, (2) modeling
multi-turn dynamics for dialog policy learning to achieve better
task-completion performance, and (3) integrating domain ontology knowledge into
the dialog model. In addition, we review recent progress in dialog evaluation
and some widely used corpora. We believe that this survey, though incomplete,
can shed light on future research in task-oriented dialog systems.
| 2,020 | Computation and Language |
Multi-label natural language processing to identify diagnosis and
procedure codes from MIMIC-III inpatient notes | In the United States, 25% of hospital spending, more than 200 billion
dollars, goes to administrative costs, which include services for medical
coding and billing. With the increasing number of patient records, manual
code assignment is overwhelming, time-consuming, and error-prone, causing
billing errors. Natural language processing can automate
the extraction of codes/labels from unstructured clinical notes, which can aid
human coders to save time, increase productivity, and verify medical coding
errors. Our objective is to identify appropriate diagnosis and procedure codes
from clinical notes by performing multi-label classification. We used
de-identified data of critical care patients from the MIMIC-III database and
subset the data to select the ten (top-10) and fifty (top-50) most common
diagnoses and procedures, which cover 47.45% and 74.12% of all admissions,
respectively. We implemented state-of-the-art Bidirectional Encoder
Representations from Transformers (BERT) to fine-tune the language model on 80%
of the data and validated on the remaining 20%. The model achieved an overall
accuracy of 87.08%, an F1 score of 85.82%, and an AUC of 91.76% for top-10
codes. For the top-50 codes, our model achieved an overall accuracy of 93.76%,
an F1 score of 92.24%, and an AUC of 91%. Compared to previously published
research, our model performs better at predicting codes from clinical text. We
discuss approaches to generalize the knowledge discovery process of our
MIMIC-BERT to other clinical notes. This can help human coders to save time,
prevent backlogs, and additional costs due to coding errors.
| 2,020 | Computation and Language |
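The MIMIC-III entry above frames ICD code assignment as multi-label classification with a fine-tuned BERT. Below is a hedged sketch of such a setup, a BERT encoder with one sigmoid output per code trained with binary cross-entropy; the model name, label count, and classification head are assumptions, not the authors' exact configuration.

```python
# Sketch of multi-label code prediction from clinical text (illustrative configuration).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiLabelCoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_codes=10):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_codes)

    def forward(self, **enc):
        cls_vector = self.encoder(**enc).last_hidden_state[:, 0]  # [CLS] representation
        return self.classifier(cls_vector)                        # one logit per code

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiLabelCoder()
enc = tokenizer(["chest pain and shortness of breath ..."], return_tensors="pt",
                truncation=True, padding=True)
logits = model(**enc)
targets = torch.zeros_like(logits)          # multi-hot vector of gold codes
loss = nn.BCEWithLogitsLoss()(logits, targets)
```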
XPersona: Evaluating Multilingual Personalized Chatbot | Personalized dialogue systems are an essential step toward better
human-machine interaction. Existing personalized dialogue agents rely on
properly designed conversational datasets, which are mostly monolingual (e.g.,
English); this greatly limits the usage of conversational agents in other
languages. In this paper, we propose a multi-lingual extension of Persona-Chat,
namely XPersona. Our dataset includes persona conversations in six different
languages other than English for building and evaluating multilingual
personalized agents. We experiment with both multilingual and cross-lingual
trained baselines, and evaluate them against monolingual and
translation-pipeline models using both automatic and human evaluation.
Experimental results show that the multilingual trained models outperform the
translation-pipeline and that they are on par with the monolingual models, with
the advantage of having a single model across multiple languages. On the other
hand, the state-of-the-art cross-lingual trained models achieve inferior
performance to the other models, showing that cross-lingual conversation
modeling is a challenging task. We hope that our dataset and baselines will
accelerate research in multilingual dialogue systems.
| 2,020 | Computation and Language |
Adapting Deep Learning Methods for Mental Health Prediction on Social
Media | Mental health poses a significant challenge for an individual's well-being.
Text analysis of rich resources, like social media, can contribute to deeper
understanding of illnesses and provide means for their early detection. We
tackle the challenge of detecting social media users' mental status through deep
learning-based models, moving away from traditional approaches to the task. In
a binary classification task on predicting if a user suffers from one of nine
different disorders, a hierarchical attention network outperforms previously
set benchmarks for four of the disorders. Furthermore, we explore the
limitations of our model and analyze phrases relevant for classification by
inspecting the model's word-level attention weights.
| 2,019 | Computation and Language |
PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic
Emotions in German and English Poetry | Most approaches to emotion analysis of social media, literature, news, and
other domains focus exclusively on basic emotion categories as defined by Ekman
or Plutchik. However, art (such as literature) enables engagement in a broader
range of more complex and subtle emotions. These have been shown to also
include mixed emotional responses. We consider emotions in poetry as they are
elicited in the reader, rather than what is expressed in the text or intended
by the author. Thus, we conceptualize a set of aesthetic emotions that are
predictive of aesthetic appreciation in the reader, and allow the annotation of
multiple labels per line to capture mixed emotions within their context. We
evaluate this novel setting in an annotation experiment both with carefully
trained experts and via crowdsourcing. Our annotation with experts leads to an
acceptable agreement of kappa = .70, resulting in a consistent dataset for
future large scale analysis. Finally, we conduct first emotion classification
experiments based on BERT, showing that identifying aesthetic emotions is
challenging in our data, with up to .52 F1-micro on the German subset. Data and
resources are available at https://github.com/tnhaider/poetry-emotion
| 2,020 | Computation and Language |
A Benchmarking Study of Embedding-based Entity Alignment for Knowledge
Graphs | Entity alignment seeks to find entities in different knowledge graphs (KGs)
that refer to the same real-world object. Recent advancement in KG embedding
impels the advent of embedding-based entity alignment, which encodes entities
in a continuous embedding space and measures entity similarities based on the
learned embeddings. In this paper, we conduct a comprehensive experimental
study of this emerging field. We survey 23 recent embedding-based entity
alignment approaches and categorize them based on their techniques and
characteristics. We also propose a new KG sampling algorithm, with which we
generate a set of dedicated benchmark datasets with various heterogeneity and
distributions for a realistic evaluation. We develop an open-source library
including 12 representative embedding-based entity alignment approaches, and
extensively evaluate these approaches, to understand their strengths and
limitations. Additionally, for several directions that have not been explored
in current approaches, we perform exploratory experiments and report our
preliminary findings for future studies. The benchmark datasets, open-source
library and experimental results are all accessible online and will be duly
maintained.
| 2,020 | Computation and Language |
PowerNorm: Rethinking Batch Normalization in Transformers | The standard normalization method for neural network (NN) models used in
Natural Language Processing (NLP) is layer normalization (LN). This is
different from batch normalization (BN), which is widely adopted in Computer
Vision. The preferred use of LN in NLP is principally due to the empirical
observation that a (naive/vanilla) use of BN leads to significant performance
degradation for NLP tasks; however, a thorough understanding of the underlying
reasons for this is not always evident. In this paper, we perform a systematic
study of NLP transformer models to understand why BN has a poor performance, as
compared to LN. We find that the statistics of NLP data across the batch
dimension exhibit large fluctuations throughout training. This results in
instability, if BN is naively implemented. To address this, we propose Power
Normalization (PN), a novel normalization scheme that resolves this issue by
(i) relaxing zero-mean normalization in BN, (ii) incorporating a running
quadratic mean instead of per batch statistics to stabilize fluctuations, and
(iii) using an approximate backpropagation for incorporating the running
statistics in the forward pass. We show theoretically, under mild assumptions,
that PN leads to a smaller Lipschitz constant for the loss, compared with BN.
Furthermore, we prove that the approximate backpropagation scheme leads to
bounded gradients. We extensively test PN for transformers on a range of NLP
tasks, and we show that it significantly outperforms both LN and BN. In
particular, PN outperforms LN by 0.4/0.6 BLEU on IWSLT14/WMT14 and 5.6/3.0 PPL
on PTB/WikiText-103. We make our code publicly available at
\url{https://github.com/sIncerass/powernorm}.
| 2,020 | Computation and Language |
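A simplified, forward-only sketch of the Power Normalization idea summarized above: the zero-mean step of BN is relaxed and activations are scaled by a running quadratic mean rather than per-batch statistics. The approximate backward pass used in the actual release (linked above) is omitted, so this is an illustration under stated assumptions, not the authors' implementation.

```python
# Forward-only sketch of a PowerNorm-style layer (approximate backprop omitted).
import torch
import torch.nn as nn

class SimplePowerNorm(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # (ii) running quadratic mean (second moment) replaces per-batch statistics
        self.register_buffer("running_quad", torch.ones(num_features))

    def forward(self, x):  # x: (..., num_features)
        if self.training:
            quad = x.pow(2).mean(dim=tuple(range(x.dim() - 1)))
            self.running_quad = (1 - self.momentum) * self.running_quad \
                + self.momentum * quad.detach()
        else:
            quad = self.running_quad
        # (i) the zero-mean step of BN is relaxed: no mean subtraction
        x_hat = x / torch.sqrt(quad + self.eps)
        return self.weight * x_hat + self.bias
```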
Calibration of Pre-trained Transformers | Pre-trained Transformers are now ubiquitous in natural language processing,
but despite their high end-task performance, little is known empirically about
whether they are calibrated. Specifically, do these models' posterior
probabilities provide an accurate empirical measure of how likely the model is
to be correct on a given example? We focus on BERT and RoBERTa in this work,
and analyze their calibration across three tasks: natural language inference,
paraphrase detection, and commonsense reasoning. For each task, we consider
in-domain as well as challenging out-of-domain settings, where models face more
examples they should be uncertain about. We show that: (1) when used
out-of-the-box, pre-trained models are calibrated in-domain, and compared to
baselines, their calibration error out-of-domain can be as much as 3.5x lower;
(2) temperature scaling is effective at further reducing calibration error
in-domain, and using label smoothing to deliberately increase empirical
uncertainty helps calibrate posteriors out-of-domain.
| 2,020 | Computation and Language |
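The calibration entry above uses temperature scaling as a post-hoc fix. A minimal sketch, assuming validation logits and gold labels are already collected as tensors: a single scalar temperature is fit by minimizing the negative log-likelihood and then divides the logits at prediction time.

```python
# Minimal temperature-scaling sketch (variable names are illustrative).
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """logits: (N, C) validation logits; labels: (N,) gold class ids. Returns T > 0."""
    log_t = torch.zeros(1, requires_grad=True)       # T = exp(log_t) stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# usage: T = fit_temperature(val_logits, val_labels)
#        calibrated_probs = (test_logits / T).softmax(dim=-1)
```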
Selective Attention Encoders by Syntactic Graph Convolutional Networks
for Document Summarization | Abstractive text summarization is a challenging task: one needs to design
a mechanism to effectively extract salient information from the source text and
then generate a summary. Parsing the source text yields critical
syntactic and semantic structures, which are useful for generating a more accurate
summary. However, modeling a parse tree for text summarization is not trivial
due to its non-linear structure, and it is even harder to deal with a document that
includes multiple sentences and their parse trees. In this paper, we propose
to use a graph to connect the parse trees of the sentences in a document
and utilize stacked graph convolutional networks (GCNs) to learn the
syntactic representation of a document. A selective attention mechanism is
used to extract salient information in both semantic and structural aspects and to
generate an abstractive summary. We evaluate our approach on the CNN/Daily Mail
text summarization dataset. The experimental results show that the proposed
GCNs based selective attention approach outperforms the baselines and achieves
the state-of-the-art performance on the dataset.
| 2,020 | Computation and Language |
Gender Representation in Open Source Speech Resources | With the rise of artificial intelligence (AI) and the growing use of
deep-learning architectures, the question of ethics, transparency and fairness
of AI systems has become a central concern within the research community. We
address transparency and fairness in spoken language systems by proposing a
study about gender representation in speech resources available through the
Open Speech and Language Resource platform. We show that finding gender
information in open source corpora is not straightforward and that gender
balance depends on other corpus characteristics (elicited/non-elicited speech,
low/high resource language, speech task targeted). The paper ends with
recommendations about metadata and gender information for researchers in order
to assure better transparency of the speech systems built using such corpora.
| 2,020 | Computation and Language |
Pre-trained Models for Natural Language Processing: A Survey | Recently, the emergence of pre-trained models (PTMs) has brought natural
language processing (NLP) to a new era. In this survey, we provide a
comprehensive review of PTMs for NLP. We first briefly introduce language
representation learning and its research progress. Then we systematically
categorize existing PTMs based on a taxonomy with four perspectives. Next, we
describe how to adapt the knowledge of PTMs to the downstream tasks. Finally,
we outline some potential directions of PTMs for future research. This survey
is intended to be a hands-on guide for understanding, using, and developing
PTMs for various NLP tasks.
| 2,020 | Computation and Language |
Unsupervised Pidgin Text Generation By Pivoting English Data and
Self-Training | West African Pidgin English is a language widely spoken in
West Africa, with at least 75 million speakers. Nevertheless, proper
machine translation systems and relevant NLP datasets for pidgin English are
virtually absent. In this work, we develop techniques targeted at bridging the
gap between Pidgin English and English in the context of natural language
generation. As a proof of concept, we explore the proposed techniques in the
area of data-to-text generation. By building upon the previously released
monolingual Pidgin English text and parallel English data-to-text corpus, we
hope to build a system that can automatically generate Pidgin English
descriptions from structured data. We first train a data-to-English text
generation system, before employing techniques in unsupervised neural machine
translation and self-training to establish the Pidgin-to-English cross-lingual
alignment. The human evaluation performed on the generated Pidgin texts shows
that, though still far from being practically usable, the pivoting +
self-training technique improves both Pidgin text fluency and relevance.
| 2,021 | Computation and Language |
Distant Supervision and Noisy Label Learning for Low Resource Named
Entity Recognition: A Study on Hausa and Yorùbá | The lack of labeled training data has limited the development of natural
language processing tools, such as named entity recognition, for many languages
spoken in developing countries. Techniques such as distant and weak supervision
can be used to create labeled data in a (semi-) automatic way. Additionally, to
alleviate some of the negative effects of the errors in automatic annotation,
noise-handling methods can be integrated. Pretrained word embeddings are
another key component of most neural named entity classifiers. With the advent
of more complex contextual word embeddings, an interesting trade-off between
model size and performance arises. While these techniques have been shown to
work well in high-resource settings, we want to study how they perform in
low-resource scenarios. In this work, we perform named entity recognition for
Hausa and Yorùbá, two languages that are widely spoken in several
developing countries. We evaluate different embedding approaches and show that
distant supervision can be successfully leveraged in a realistic low-resource
scenario where it can more than double a classifier's performance.
| 2,020 | Computation and Language |
TTTTTackling WinoGrande Schemas | We applied the T5 sequence-to-sequence model to tackle the AI2 WinoGrande
Challenge by decomposing each example into two input text strings, each
containing a hypothesis, and using the probabilities assigned to the
"entailment" token as a score of the hypothesis. Our first (and only)
submission to the official leaderboard yielded 0.7673 AUC on March 13, 2020,
which is the best known result at this time and beats the previous state of the
art by over five points.
| 2,020 | Computation and Language |
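The WinoGrande entry above scores each candidate by the probability T5 assigns to the "entailment" token. The sketch below illustrates that scoring idea with the Hugging Face `transformers` T5 interface; the prompt format and model size are assumptions, not the authors' exact setup.

```python
# Illustrative T5 "entailment" scoring (prompt format and model size are assumptions).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def entailment_score(premise: str, hypothesis: str) -> float:
    """Sum of log-probabilities T5 assigns to the target string 'entailment'."""
    enc = tokenizer(f"mnli hypothesis: {hypothesis} premise: {premise}",
                    return_tensors="pt")
    target = tokenizer("entailment", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(**enc, labels=target).logits       # (1, target_len, vocab)
    log_probs = logits.log_softmax(dim=-1)
    positions = torch.arange(target.size(1))
    return log_probs[0, positions, target[0]].sum().item()

# For each WinoGrande example, score both filled-in options and keep the higher one.
```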
X-Stance: A Multilingual Multi-Target Dataset for Stance Detection | We extract a large-scale stance detection dataset from comments written by
candidates of elections in Switzerland. The dataset consists of German, French
and Italian text, allowing for a cross-lingual evaluation of stance detection.
It contains 67 000 comments on more than 150 political issues (targets). Unlike
stance detection models that have specific target issues, we use the dataset to
train a single model on all the issues. To make learning across targets
possible, we prepend to each instance a natural question that represents the
target (e.g. "Do you support X?"). Baseline results from multilingual BERT show
that zero-shot cross-lingual and cross-target transfer of stance detection is
moderately successful with this approach.
| 2,020 | Computation and Language |
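The X-Stance entry above enables cross-target learning by prepending a natural question that expresses the target. A tiny sketch of that input construction follows; the question template and separator token are assumptions.

```python
# Illustrative instance construction for target-conditioned stance detection.
def build_stance_input(target: str, comment: str) -> str:
    question = f"Do you support {target}?"   # natural question representing the target
    return f"{question} [SEP] {comment}"     # fed to a multilingual BERT-style encoder

print(build_stance_input("a higher retirement age",
                         "The current system is not sustainable."))
```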
An Analysis on the Learning Rules of the Skip-Gram Model | To improve the generalization of the representations for natural language
processing tasks, words are commonly represented using vectors, where distances
among the vectors are related to the similarity of the words. While word2vec,
the state-of-the-art implementation of the skip-gram model, is widely used and
improves the performance of many natural language processing tasks, its
mechanism is not yet well understood.
In this work, we derive the learning rules for the skip-gram model and
establish their close relationship to competitive learning. In addition, we
provide the global optimal solution constraints for the skip-gram model and
validate them by experimental results.
| 2,019 | Computation and Language |
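For reference, the skip-gram objective analyzed in the entry above can be written as below (softmax formulation). The gradient shows the competitive-learning flavor the paper discusses: the observed context vector is pulled toward the input vector while all context vectors are pushed away in expectation. This is the standard textbook form, not the paper's own derivation.

```latex
% Standard skip-gram objective and the resulting update for a (word, context) pair (w, c).
\[
  J = \sum_{(w,c) \in D} \log p(c \mid w),
  \qquad
  p(c \mid w) = \frac{\exp(\mathbf{u}_c^{\top} \mathbf{v}_w)}{\sum_{c'} \exp(\mathbf{u}_{c'}^{\top} \mathbf{v}_w)}
\]
\[
  \frac{\partial \log p(c \mid w)}{\partial \mathbf{v}_w}
  = \mathbf{u}_c - \sum_{c'} p(c' \mid w)\, \mathbf{u}_{c'},
  \qquad
  \mathbf{v}_w \leftarrow \mathbf{v}_w + \eta \Big( \mathbf{u}_c - \mathbb{E}_{c' \sim p(\cdot \mid w)}\big[\mathbf{u}_{c'}\big] \Big)
\]
```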
Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics
for Text Collections | Summarizing data samples by quantitative measures has a long history, with
descriptive statistics being a case in point. However, as natural language
processing methods flourish, there are still insufficient characteristic
metrics to describe a collection of texts in terms of the words, sentences, or
paragraphs they comprise. In this work, we propose metrics of diversity,
density, and homogeneity that quantitatively measure the dispersion, sparsity,
and uniformity of a text collection. We conduct a series of simulations to
verify that each metric holds desired properties and resonates with human
intuitions. Experiments on real-world datasets demonstrate that the proposed
characteristic metrics are highly correlated with text classification
performance of a renowned model, BERT, which could inspire future applications.
| 2,020 | Computation and Language |
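The entry above proposes diversity, density, and homogeneity metrics for text collections. As a purely illustrative sketch (the paper's exact definitions are not reproduced here), dispersion and uniformity can be read off sentence embeddings as follows.

```python
# Illustrative dispersion/uniformity measures over text embeddings
# (not the paper's exact metric definitions).
import numpy as np

def diversity(embeddings: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between embedded texts (dispersion)."""
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    distances = np.sqrt((diffs ** 2).sum(-1))
    n = len(embeddings)
    return distances.sum() / (n * (n - 1))

def homogeneity(embeddings: np.ndarray) -> float:
    """Mean cosine similarity of each text to the collection centroid (uniformity)."""
    centroid = embeddings.mean(0)
    numerator = embeddings @ centroid
    denominator = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(centroid)
    return float((numerator / denominator).mean())
```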
Enhancing Factual Consistency of Abstractive Summarization | Automatic abstractive summaries are found to often distort or fabricate facts
in the source article. This inconsistency between the summary and the original text has
seriously limited the applicability of such summaries. We propose a fact-aware summarization
model FASum to extract and integrate factual relations into the summary
generation process via graph attention. We then design a factual corrector
model FC to automatically correct factual errors from summaries generated by
existing systems. Empirical results show that the fact-aware summarization can
produce abstractive summaries with higher factual consistency compared with
existing systems, and the correction model improves the factual consistency of
given summaries via modifying only a few keywords.
| 2,021 | Computation and Language |
Temporal Embeddings and Transformer Models for Narrative Text
Understanding | We present two deep learning approaches to narrative text understanding for
character relationship modelling. The temporal evolution of these relations is
described by dynamic word embeddings, that are designed to learn semantic
changes over time. An empirical analysis of the corresponding character
trajectories shows that such approaches are effective in depicting dynamic
evolution. A supervised learning approach based on the state-of-the-art
transformer model BERT is used instead to detect static relations between
characters. The empirical validation shows that such events (e.g., two
characters belonging to the same family) might be spotted with good accuracy,
even when using automatically annotated data. This provides a deeper
understanding of narrative plots based on the identification of key facts.
Standard clustering techniques are finally used for character de-aliasing, a
necessary pre-processing step for both approaches. Overall, deep learning
models appear to be suitable for narrative text understanding, while also
providing a challenging and unexploited benchmark for general natural language
understanding.
| 2,020 | Computation and Language |
Beheshti-NER: Persian Named Entity Recognition Using BERT | Named entity recognition is a natural language processing task to recognize
and extract spans of text associated with named entities and classify them into
semantic categories.
Google BERT is a deep bidirectional language model, pre-trained on large
corpora, that can be fine-tuned to solve many NLP tasks such as question
answering, named entity recognition, and part-of-speech tagging. In this
paper, we use the pre-trained deep bidirectional network, BERT, to build a model
for named entity recognition in Persian.
We also compare the results of our model with the previous state-of-the-art
results on Persian NER. Our evaluation metric is the CoNLL 2003 score at the
word and phrase levels. This model achieved second place in the NSURL-2019
Task 7 competition, which was associated with NER for the Persian language. Our
results in this competition are 83.5 and 88.4 F1 (CoNLL score) for phrase-level
and word-level evaluation, respectively.
| 2,020 | Computation and Language |
Utilizing Language Relatedness to improve Machine Translation: A Case
Study on Languages of the Indian Subcontinent | In this work, we present an extensive study of statistical machine
translation involving languages of the Indian subcontinent. These languages are
related by genetic and contact relationships. We describe the similarities
between Indic languages arising from these relationships. We explore how
lexical and orthographic similarity among these languages can be utilized to
improve translation quality between Indic languages when limited parallel
corpora are available. We also explore how the structural correspondence between
Indic languages can be utilized to re-use linguistic resources for English to
Indic language translation. Our observations span 90 language pairs from 9
Indic languages and English. To the best of our knowledge, this is the first
large-scale study specifically devoted to utilizing language relatedness to
improve translation between related languages.
| 2,020 | Computation and Language |
Techniques for Vocabulary Expansion in Hybrid Speech Recognition Systems | The problem of out-of-vocabulary (OOV) words is typical for any speech
recognition system: hybrid systems are usually constructed to recognize a fixed
set of words and can rarely include all the words that will be encountered
during deployment of the system. One popular approach to covering OOVs is
to use subword units rather than words. Such a system can potentially recognize
any previously unseen word if the word can be constructed from the available subword
units, but non-existent words can also be recognized. The other popular
approach is to modify the HMM part of the system so that it can be easily and
effectively expanded with a custom set of words we want to add to the system. In
this paper we explore different existing methods of this kind at both the graph
construction and search method levels. We also present novel vocabulary
expansion techniques which solve some common internal subroutine problems
in recognition graph processing.
| 2,020 | Computation and Language |
NSURL-2019 Task 7: Named Entity Recognition (NER) in Farsi | NSURL-2019 Task 7 focuses on Named Entity Recognition (NER) in Farsi. This
task was chosen to compare different approaches to find phrases that specify
Named Entities in Farsi texts, and to establish a standard testbed for future
research on this task in Farsi. This paper describes the process of making
training and test data, a list of participating teams (6 teams), and evaluation
results of their systems. The best system obtained 85.4% of F1 score based on
phrase-level evaluation on seven classes of NEs including person, organization,
location, date, time, money and percent.
| 2,020 | Computation and Language |
TNT-KID: Transformer-based Neural Tagger for Keyword Identification | With growing amounts of available textual data, development of algorithms
capable of automatic analysis, categorization and summarization of these data
has become a necessity. In this research we present a novel algorithm for
keyword identification, i.e., the extraction of single- or multi-word phrases
representing key aspects of a given document, called Transformer-based Neural
Tagger for Keyword IDentification (TNT-KID). By adapting the transformer
architecture for a specific task at hand and leveraging language model
pretraining on a domain specific corpus, the model is capable of overcoming
deficiencies of both supervised and unsupervised state-of-the-art approaches to
keyword extraction by offering competitive and robust performance on a variety
of different datasets while requiring only a fraction of manually labeled data
required by the best performing systems. This study also offers thorough error
analysis with valuable insights into the inner workings of the model and an
ablation study measuring the influence of specific components of the keyword
identification workflow on the overall performance.
| 2,021 | Computation and Language |
Parallel Intent and Slot Prediction using MLB Fusion | Intent and Slot Identification are two important tasks in Spoken Language
Understanding (SLU). For a natural language utterance, there is a high
correlation between these two tasks. A lot of work has been done on each of
these using Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN)
and Attention based models. Most of the past work used two separate models for
intent and slot prediction. Some of them also used sequence-to-sequence type
models where slots are predicted after evaluating the utterance-level intent.
In this work, we propose a parallel Intent and Slot Prediction technique where
separate Bidirectional Gated Recurrent Units (GRU) are used for each task. We
posit the usage of MLB (Multimodal Low-rank Bilinear Attention Network) fusion
for improvement in performance of intent and slot learning. To the best of our
knowledge, this is the first attempt of using such a technique on text based
problems. Also, our proposed methods outperform the existing state-of-the-art
results for both intent and slot prediction on two benchmark datasets.
| 2,020 | Computation and Language |
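The entry above fuses intent and slot representations with MLB (multimodal low-rank bilinear) fusion. Below is a hedged sketch of the low-rank bilinear fusion operation itself, with illustrative dimensions; the paper's full attention-network variant over GRU states is more involved.

```python
# Low-rank bilinear fusion sketch: project both inputs to a joint rank-r space,
# combine with an element-wise product, and project to the output space.
import torch
import torch.nn as nn

class MLBFusion(nn.Module):
    def __init__(self, dim_a, dim_b, rank, dim_out):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, rank)
        self.proj_b = nn.Linear(dim_b, rank)
        self.out = nn.Linear(rank, dim_out)

    def forward(self, a, b):
        joint = torch.tanh(self.proj_a(a)) * torch.tanh(self.proj_b(b))
        return self.out(joint)

fusion = MLBFusion(dim_a=256, dim_b=256, rank=128, dim_out=64)
fused = fusion(torch.randn(8, 256), torch.randn(8, 256))   # (8, 64)
```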
Language Technology Programme for Icelandic 2019-2023 | In this paper, we describe a new national language technology programme for
Icelandic. The programme, which spans a period of five years, aims at making
Icelandic usable in communication and interactions in the digital world, by
developing accessible, open-source language resources and software. The
research and development work within the programme is carried out by a
consortium of universities, institutions, and private companies, with a strong
emphasis on cooperation between academia and industries. Five core projects
will be the main content of the programme: language resources, speech
recognition, speech synthesis, machine translation, and spell and grammar
checking. We also describe other national language technology programmes and
give an overview over the history of language technology in Iceland.
| 2,020 | Computation and Language |
FedNER: Privacy-preserving Medical Named Entity Recognition with
Federated Learning | Medical named entity recognition (NER) has wide applications in intelligent
healthcare. Sufficient labeled data is critical for training an accurate medical
NER model. However, the labeled data in a single medical platform is usually
limited. Although labeled datasets may exist in many different medical
platforms, they cannot be directly shared since medical data is highly
privacy-sensitive. In this paper, we propose a privacy-preserving medical NER
method based on federated learning, which can leverage the labeled data in
different platforms to boost the training of the medical NER model and remove the
need of exchanging raw data among different platforms. Since the labeled data
in different platforms usually has some differences in entity type and
annotation criteria, instead of constraining different platforms to share the
same model, we decompose the medical NER model in each platform into a shared
module and a private module. The private module is used to capture the
characteristics of the local data in each platform, and is updated using local
labeled data. The shared module is learned across different medical platforms to
capture the shared NER knowledge. Its local gradients from different platforms
are aggregated to update the global shared module, which is further delivered
to each platform to update their local shared modules. Experiments on three
publicly available datasets validate the effectiveness of our method.
| 2,020 | Computation and Language |
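The FedNER entry above aggregates updates to a shared module across platforms while keeping private modules local. A minimal sketch of the server-side aggregation step is shown below; the paper describes aggregating local gradients, whereas this sketch averages shared-module parameters in the closely related FedAvg style, and the module and variable names are illustrative.

```python
# FedAvg-style averaging of each platform's shared module (illustrative simplification).
import copy
import torch

def aggregate_shared(shared_modules):
    """Average the parameters of the shared module across participating platforms."""
    global_state = copy.deepcopy(shared_modules[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [module.state_dict()[key].float() for module in shared_modules]
        ).mean(0)
    return global_state

# Each platform then loads the averaged state into its local shared module:
#   platform_model.shared.load_state_dict(global_state)
# while its private module stays local and is trained only on local labeled data.
```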
TArC: Incrementally and Semi-Automatically Collecting a Tunisian Arabish
Corpus | This article describes the constitution process of the first
morpho-syntactically annotated Tunisian Arabish Corpus (TArC). Arabish, also
known as Arabizi, is a spontaneous coding of Arabic dialects in Latin
characters and arithmographs (numbers used as letters). This code-system was
developed by Arabic-speaking users of social media in order to facilitate
writing in informal settings such as Computer-Mediated Communication (CMC) and
text messaging. There is variety in the realization of Arabish amongst
dialects, and each Arabish code-system is under-resourced, in the same way as
most of the Arabic dialects. In the last few years, the focus on Arabic
dialects in the NLP field has considerably increased. Taking this into
consideration, TArC will be a useful support for different types of analyses,
computational and linguistic, as well as for NLP tools training. In this
article we will describe preliminary work on the TArC semi-automatic
construction process and some of the first analyses we developed on TArC. In
addition, in order to provide a complete overview of the challenges faced
during the building process, we will present the main Tunisian dialect
characteristics and their encoding in Tunisian Arabish.
| 2,020 | Computation and Language |
A Framework for Generating Explanations from Temporal Personal Health
Data | Whereas it has become easier for individuals to track their personal health
data (e.g., heart rate, step count, food log), there is still a wide chasm
between the collection of data and the generation of meaningful explanations to
help users better understand what their data means to them. With an increased
comprehension of their data, users will be able to act upon the newfound
information and work towards striving closer to their health goals. We aim to
bridge the gap between data collection and explanation generation by mining the
data for interesting behavioral findings that may provide hints about a user's
tendencies. Our focus is on improving the explainability of temporal personal
health data via a set of informative summary templates, or "protoforms." These
protoforms span both evaluation-based summaries that help users evaluate their
health goals and pattern-based summaries that explain their implicit behaviors.
In addition to individual users, the protoforms we use are also designed for
population-level summaries. We apply our approach to generate summaries (both
univariate and multivariate) from real user data and show that our system can
generate interesting and useful explanations.
| 2,021 | Computation and Language |
Probing Word Translations in the Transformer and Trading Decoder for
Encoder Layers | Due to its effectiveness and performance, the Transformer translation model
has attracted wide attention, most recently in terms of probing-based
approaches. Previous work focuses on using or probing source linguistic
features in the encoder. To date, the way word translation evolves in
Transformer layers has not yet been investigated. Naively, one might assume
that encoder layers capture source information while decoder layers translate.
In this work, we show that this is not quite the case: translation already
happens progressively in encoder layers and even in the input embeddings. More
surprisingly, we find that some of the lower decoder layers do not actually do
that much decoding. We show all of this in terms of a probing approach where we
project representations of the layer analyzed to the final trained and frozen
classifier level of the Transformer decoder to measure word translation
accuracy. Our findings motivate and explain a Transformer configuration change:
if translation already happens in the encoder layers, perhaps we can increase
the number of encoder layers, while decreasing the number of decoder layers,
boosting decoding speed, without loss in translation quality? Our experiments
show that this is indeed the case: we can increase speed by up to a factor of 2.3
with small gains in translation quality, while an 18-4 deep encoder
configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of
1.4.
| 2,021 | Computation and Language |
A Joint Approach to Compound Splitting and Idiomatic Compound Detection | Applications such as machine translation, speech recognition, and information
retrieval require efficient handling of noun compounds as they are one of the
possible sources for out-of-vocabulary (OOV) words. In-depth processing of noun
compounds requires not only splitting them into smaller components (or even
roots) but also the identification of instances that should remain unsplitted
as they are of idiomatic nature. We develop a two-fold deep learning-based
approach of noun compound splitting and idiomatic compound detection for the
German language that we train using a newly collected corpus of annotated
German compounds. Our neural noun compound splitter operates on a sub-word
level and outperforms the current state of the art by about 5%.
| 2,020 | Computation and Language |
Prior Knowledge Driven Label Embedding for Slot Filling in Natural
Language Understanding | Traditional slot filling in natural language understanding (NLU) predicts a
one-hot vector for each word. This form of label representation lacks semantic
correlation modelling, which leads to a severe data sparsity problem, especially
when adapting an NLU model to a new domain. To address this issue, a novel
label embedding based slot filling framework is proposed in this paper. Here,
distributed label embedding is constructed for each slot using prior knowledge.
Three encoding methods are investigated to incorporate different kinds of prior
knowledge about slots: atomic concepts, slot descriptions, and slot exemplars.
The proposed label embeddings tend to share text patterns and reuse data across
different slot labels. This makes them useful for adaptive NLU with limited data.
Also, since the label embedding is independent of the NLU model, it is compatible with
almost all deep learning based slot filling models. The proposed approaches are
evaluated on three datasets. Experiments on single domain and domain adaptation
tasks show that label embedding achieves significant performance improvement
over traditional one-hot label representation as well as advanced zero-shot
approaches.
| 2,020 | Computation and Language |
SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive
Connection | While the self-attention mechanism has been used in a wide variety of
tasks, it has the unfortunate property of a quadratic cost with respect to the
input length, which makes it difficult to deal with long inputs. In this paper,
we present a method for accelerating and structuring self-attentions: Sparse
Adaptive Connection (SAC). In SAC, we regard the input sequence as a graph and
attention operations are performed between linked nodes. In contrast with
previous self-attention models with pre-defined structures (edges), the model
learns to construct attention edges to improve task-specific performances. In
this way, the model is able to select the most salient nodes and reduce the
quadratic complexity regardless of the sequence length. Based on SAC, we show
that previous variants of self-attention models are its special cases. Through
extensive experiments on neural machine translation, language modeling, graph
representation learning and image classification, we demonstrate SAC is
competitive with state-of-the-art models while significantly reducing memory
cost.
| 2,020 | Computation and Language |
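The SAC entry above computes attention only between linked nodes of a graph built over the input. The sketch below shows that restriction as a masked attention step; how SAC learns which edges to construct is not modeled here, and the adjacency mask is simply given.

```python
# Attention restricted to graph edges (illustrative; SAC additionally *learns* the edges).
import torch
import torch.nn.functional as F

def edge_masked_attention(q, k, v, adj):
    """q, k, v: (n, d) node features; adj: (n, n) bool, True where an edge exists."""
    adj = adj | torch.eye(adj.size(0), dtype=torch.bool)   # self-loops avoid empty rows
    scores = (q @ k.t()) / (q.size(-1) ** 0.5)
    scores = scores.masked_fill(~adj, float("-inf"))       # non-edges are never attended to
    return F.softmax(scores, dim=-1) @ v

n, d = 5, 16
q = k = v = torch.randn(n, d)
adj = torch.rand(n, n) < 0.3
out = edge_masked_attention(q, k, v, adj)                  # (n, d)
```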
Toward Tag-free Aspect Based Sentiment Analysis: A Multiple Attention
Network Approach | Existing aspect based sentiment analysis (ABSA) approaches leverage various
neural network models to extract the aspect sentiments via learning
aspect-specific feature representations. However, these approaches heavily rely
on manual tagging of user reviews according to the predefined aspects as the
input, a laborious and time-consuming process. Moreover, the underlying methods
do not explain how and why the opposing aspect level polarities in a user
review lead to the overall polarity. In this paper, we tackle these two
problems by designing and implementing a new Multiple-Attention Network (MAN)
approach for more powerful ABSA without the need for aspect tags using two new
tag-free data sets crawled directly from TripAdvisor
(https://www.tripadvisor.com). With the Self- and Position-Aware attention
mechanism, MAN is capable of extracting both aspect level and overall
sentiments from the text reviews using the aspect level and overall customer
ratings, and it can also detect the vital aspect(s) leading to the overall
sentiment polarity among different aspects via a new aspect ranking scheme. We
carry out extensive experiments to demonstrate the strong performance of MAN
compared to other state-of-the-art ABSA approaches and the explainability of
our approach by visualizing and interpreting attention weights in case studies.
| 2,020 | Computation and Language |
Caption Generation of Robot Behaviors based on Unsupervised Learning of
Action Segments | Bridging robot action sequences and their natural language captions is an
important task for increasing the explainability of human-assisting robots in this
recently evolving field. In this paper, we propose a system for generating
natural language captions that describe the behaviors of human-assisting robots.
The system describes robot actions using robot observations (histories from
actuator systems and cameras), working toward end-to-end bridging between robot actions
and natural language captions. Two reasons make it challenging to apply
existing sequence-to-sequence models to this mapping: 1) it is hard to prepare
a large-scale dataset for any kind of robots and their environment, and 2)
there is a gap between the number of samples obtained from robot action
observations and generated word sequences of captions. We introduced
unsupervised segmentation based on K-means clustering to unify typical robot
observation patterns into a class. This method makes it possible for the
network to learn the relationship from a small amount of data. Moreover, we
utilized a chunking method based on byte-pair encoding (BPE) to fill in the gap
between the number of samples of robot action observations and words in a
caption. We also applied an attention mechanism to the segmentation task.
Experimental results show that the proposed model based on unsupervised
learning can generate better descriptions than other methods. We also show that
the attention mechanism did not work well in our low-resource setting.
| 2,020 | Computation and Language |
E2EET: From Pipeline to End-to-end Entity Typing via Transformer-Based
Embeddings | Entity Typing (ET) is the process of identifying the semantic types of every
entity within a corpus. In contrast to Named Entity Recognition, where each
token in a sentence is labelled with zero or one class label, ET involves
labelling each entity mention with one or more class labels. Existing entity
typing models, which operate at the mention level, are limited by two key
factors: they do not make use of recently-proposed context-dependent
embeddings, and are trained on fixed context windows. They are therefore
sensitive to window size selection and are unable to incorporate the context of
the entire document. In light of these drawbacks we propose to incorporate
context using transformer-based embeddings for a mention-level model, and an
end-to-end model using a Bi-GRU to remove the dependency on window size. An
extensive ablative study demonstrates the effectiveness of contextualised
embeddings for mention-level models and the competitiveness of our end-to-end
model for entity typing.
| 2,020 | Computation and Language |
Unsupervised Word Polysemy Quantification with Multiresolution Grids of
Contextual Embeddings | The number of senses of a given word, or polysemy, is a very subjective
notion, which varies widely across annotators and resources. We propose a novel
method to estimate polysemy, based on simple geometry in the contextual
embedding space. Our approach is fully unsupervised and purely data-driven. We
show through rigorous experiments that our rankings are well correlated (with
strong statistical significance) with 6 different rankings derived from famous
human-constructed resources such as WordNet, OntoNotes, Oxford, Wikipedia etc.,
for 6 different standard metrics. We also visualize and analyze the correlation
between the human rankings. A valuable by-product of our method is the ability
to sample, at no extra cost, sentences containing different senses of a given
word. Finally, the fully unsupervised nature of our method makes it applicable
to any language.
Code and data are publicly available at
https://github.com/ksipos/polysemy-assessment.
The paper was accepted as a long paper at EACL 2021.
| 2,023 | Computation and Language |
Fast Cross-domain Data Augmentation through Neural Sentence Editing | Data augmentation promises to alleviate data scarcity. This is most important
in cases where the initial data is in short supply. This is, for existing
methods, also where augmenting is the most difficult, as learning the full data
distribution is impossible. For natural language, sentence editing offers a
solution, relying on small but meaningful changes to the original sentences.
Learning which changes are meaningful also requires large amounts of training
data. We thus aim to learn this in a source domain where data is abundant and
apply it in a different, target domain, where data is scarce - cross-domain
augmentation.
We create the Edit-transformer, a Transformer-based sentence editor that is
significantly faster than the state of the art and also works cross-domain. We
argue that, due to its structure, the Edit-transformer is better suited for
cross-domain environments than its edit-based predecessors. We show this
performance gap on the Yelp-Wikipedia domain pairs. Finally, we show that due
to this cross-domain performance advantage, the Edit-transformer leads to
meaningful performance gains in several downstream tasks.
| 2,020 | Computation and Language |
PathVQA: 30000+ Questions for Medical Visual Question Answering | Is it possible to develop an "AI Pathologist" to pass the board-certified
examination of the American Board of Pathology? To achieve this goal, the first
step is to create a visual question answering (VQA) dataset where the AI agent
is presented with a pathology image together with a question and is asked to
give the correct answer. Our work makes the first attempt to build such a
dataset. Different from creating general-domain VQA datasets where the images
are widely accessible and there are many crowdsourcing workers available and
capable of generating question-answer pairs, developing a medical VQA dataset
is much more challenging. First, due to privacy concerns, pathology images are
usually not publicly available. Second, only well-trained pathologists can
understand pathology images, but they barely have time to help create datasets
for AI research. To address these challenges, we resort to pathology textbooks
and online digital libraries. We develop a semi-automated pipeline to extract
pathology images and captions from textbooks and generate question-answer pairs
from captions using natural language processing. We collect 32,799 open-ended
questions from 4,998 pathology images where each question is manually checked
to ensure correctness. To the best of our knowledge, this is the first dataset for
pathology VQA. Our dataset will be released publicly to promote research in
medical VQA.
| 2,020 | Computation and Language |
Adaptive Name Entity Recognition under Highly Unbalanced Data | For several purposes in Natural Language Processing (NLP), such as
Information Extraction, Sentiment Analysis or Chatbot, Named Entity Recognition
(NER) holds an important role as it helps to determine and categorize entities
in text into predefined groups such as the names of persons, locations,
quantities, organizations or percentages, etc. In this report, we present our
experiments on a neural architecture composed of a Conditional Random Field
(CRF) layer stacked on top of a Bi-directional LSTM (BI-LSTM) layer for solving
NER tasks. Besides, we also employ a fusion input of embedding vectors (Glove,
BERT), which are pre-trained on the huge corpus to boost the generalization
capacity of the model. Unfortunately, due to the heavily unbalanced distribution
across the training data, both approaches attained poor performance on classes
with few training samples. To overcome this challenge, we introduce an add-on
classification model to split sentences into two different sets, Weak and
Strong classes, and then design two Bi-LSTM-CRF models to optimize
performance on each set. We evaluated our models on the test set and
discovered that our method can improve performance for the Weak classes
significantly while using a very small data set (approximately 0.45%) compared to
the remaining classes.
| 2,020 | Computation and Language |
Generating Natural Language Adversarial Examples on a Large Scale with
Generative Models | Today text classification models have been widely used. However, these
classifiers are found to be easily fooled by adversarial examples. Fortunately,
standard attacking methods generate adversarial texts in a pair-wise way, that
is, an adversarial text can only be created from a real-world text by replacing
a few words. In many applications, these texts are limited in numbers,
therefore their corresponding adversarial examples are often not diverse enough
and sometimes hard to read, thus can be easily detected by humans and cannot
create chaos at a large scale. In this paper, we propose an end-to-end solution
to efficiently generate adversarial texts from scratch using generative models,
which are not restricted to perturbing the given texts. We call it unrestricted
adversarial text generation. Specifically, we train a conditional variational
autoencoder (VAE) with an additional adversarial loss to guide the generation
of adversarial examples. Moreover, to improve the validity of adversarial
texts, we utilize discriminators and the training framework of generative
adversarial networks (GANs) to make adversarial texts consistent with real
data. Experimental results on sentiment analysis demonstrate the scalability
and efficiency of our method. It can attack text classification models with a
higher success rate than existing methods, and provide acceptable quality for
humans in the meantime.
| 2,020 | Computation and Language |
Multimodal Analytics for Real-world News using Measures of Cross-modal
Entity Consistency | The World Wide Web has become a popular source for gathering information and
news. Multimodal information, e.g., enriching text with photos, is typically
used to convey the news more effectively or to attract attention. Photo content
can range from purely decorative to depicting additional important information,
or can even contain misleading information. Therefore, automatic approaches to quantify
cross-modal consistency of entity representation can support human assessors to
evaluate the overall multimodal message, for instance, with regard to bias or
sentiment. In some cases such measures could give hints to detect fake news,
which is an increasingly important topic in today's society. In this paper, we
introduce a novel task of cross-modal consistency verification in real-world
news and present a multimodal approach to quantify the entity coherence between
image and text. Named entity linking is applied to extract persons, locations,
and events from news texts. Several measures are suggested to calculate
cross-modal similarity for these entities using state-of-the-art approaches. In
contrast to previous work, our system automatically gathers example data from
the Web and is applicable to real-world news. Results on two novel datasets
that cover different languages, topics, and domains demonstrate the feasibility
of our approach. Datasets and code are publicly available to foster research
towards this new direction.
| 2,020 | Computation and Language |
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than
Generators | Masked language modeling (MLM) pre-training methods such as BERT corrupt the
input by replacing some tokens with [MASK] and then train a model to
reconstruct the original tokens. While they produce good results when
transferred to downstream NLP tasks, they generally require large amounts of
compute to be effective. As an alternative, we propose a more sample-efficient
pre-training task called replaced token detection. Instead of masking the
input, our approach corrupts it by replacing some tokens with plausible
alternatives sampled from a small generator network. Then, instead of training
a model that predicts the original identities of the corrupted tokens, we train
a discriminative model that predicts whether each token in the corrupted input
was replaced by a generator sample or not. Thorough experiments demonstrate
this new pre-training task is more efficient than MLM because the task is
defined over all input tokens rather than just the small subset that was masked
out. As a result, the contextual representations learned by our approach
substantially outperform the ones learned by BERT given the same model size,
data, and compute. The gains are particularly strong for small models; for
example, we train a model on one GPU for 4 days that outperforms GPT (trained
using 30x more compute) on the GLUE natural language understanding benchmark.
Our approach also works well at scale, where it performs comparably to RoBERTa
and XLNet while using less than 1/4 of their compute and outperforms them when
using the same amount of compute.
| 2,020 | Computation and Language |
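A minimal sketch of the replaced-token-detection objective described in the ELECTRA abstract above, with tiny recurrent encoders standing in for the transformer generator and discriminator; the vocabulary size, masking rate, loss weight and toy batch are illustrative assumptions rather than the paper's configuration.
```python
# Toy replaced-token-detection setup (illustration only, not ELECTRA's code).
import torch
import torch.nn as nn
from torch.distributions import Categorical

vocab_size, hidden, seq_len, batch = 100, 32, 10, 4

class TinyEncoder(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

generator = TinyEncoder(vocab_size)   # small MLM-style generator
discriminator = TinyEncoder(1)        # per-token replaced/original classifier

tokens = torch.randint(0, vocab_size, (batch, seq_len))
mask = torch.rand(batch, seq_len) < 0.15   # positions to corrupt
mask[:, 0] = True                          # ensure at least one masked position

# The generator proposes plausible replacements at the masked positions.
gen_logits = generator(tokens)
samples = Categorical(logits=gen_logits).sample()
corrupted = torch.where(mask, samples, tokens)

# The discriminator is trained over ALL tokens: was each one replaced?
is_replaced = (corrupted != tokens).float()
disc_logits = discriminator(corrupted).squeeze(-1)
disc_loss = nn.functional.binary_cross_entropy_with_logits(disc_logits, is_replaced)

# The generator is trained with an MLM loss on the masked positions only.
mlm_loss = nn.functional.cross_entropy(gen_logits[mask], tokens[mask])
total_loss = mlm_loss + 50.0 * disc_loss   # relative loss weight is an assumption here
print(total_loss.item())
```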
Improving Yor\`ub\'a Diacritic Restoration | Yor\`ub\'a is a widely spoken West African language with a writing system
rich in orthographic and tonal diacritics. These diacritics provide
morphological information, are crucial for lexical disambiguation and
pronunciation, and are vital for any computational Speech or Natural Language
Processing task. However, diacritic marks are commonly excluded from electronic
texts due to
limited device and application support as well as general education on proper
usage. We report on recent efforts at dataset cultivation. By aggregating and
improving disparate texts from the web and various personal libraries, we were
able to significantly grow our clean Yor\`ub\'a dataset from a majority
Biblical text corpora with three sources to millions of tokens from over a
dozen sources. We evaluate updated diacritic restoration models on a new,
general purpose, public-domain Yor\`ub\'a evaluation dataset of modern
journalistic news text, selected to be multi-purpose and reflecting
contemporary usage. All pre-trained models, datasets and source-code have been
released as an open-source project to advance efforts on Yor\`ub\'a language
technology.
| 2,020 | Computation and Language |
Felix: Flexible Text Editing Through Tagging and Insertion | We present Felix --- a flexible text-editing approach for generation,
designed to derive the maximum benefit from the ideas of decoding with
bi-directional contexts and self-supervised pre-training. In contrast to
conventional sequence-to-sequence (seq2seq) models, Felix is efficient in
low-resource settings and fast at inference time, while being capable of
modeling flexible input-output transformations. We achieve this by decomposing
the text-editing task into two sub-tasks: tagging to decide on the subset of
input tokens and their order in the output text and insertion to in-fill the
missing tokens in the output not present in the input. The tagging model
employs a novel Pointer mechanism, while the insertion model is based on a
Masked Language Model. Both of these models are chosen to be non-autoregressive
to guarantee faster inference. Felix performs favourably when compared to
recent text-editing methods and strong seq2seq baselines when evaluated on four
NLG tasks: Sentence Fusion, Machine Translation Automatic Post-Editing,
Summarization, and Text Simplification.
| 2,020 | Computation and Language |
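To make the tagging-then-insertion decomposition in the Felix abstract above concrete, here is a hand-worked toy example; the tag sequence and the trivial in-filler are written by hand, whereas in Felix both steps are learned, non-autoregressive models.
```python
# Hand-worked illustration of text editing as tagging + insertion.
source = ["the", "cat", "sat", "on", "the", "mat"]

# Tagging step: keep a subset of source tokens and mark where new text is needed.
tags = ["KEEP", "KEEP", "DELETE", "KEEP", "KEEP", "KEEP"]
kept = [tok for tok, tag in zip(source, tags) if tag == "KEEP"]
with_slots = kept[:2] + ["[MASK]"] + kept[2:]   # slot where the deleted verb was

# Insertion step: a masked language model would in-fill the slot; hard-coded here.
infilled = ["slept" if tok == "[MASK]" else tok for tok in with_slots]
print(" ".join(infilled))   # -> the cat slept on the mat
```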
Towards Neural Machine Translation for Edoid Languages | Many Nigerian languages have relinquished their previous prestige and purpose
in modern society to English and Nigerian Pidgin. For the millions of L1
speakers of indigenous languages, there are inequalities that manifest
themselves as unequal access to information, communications, health care,
security as well as attenuated participation in political and civic life. To
minimize exclusion and promote socio-linguistic and economic empowerment, this
work explores the feasibility of Neural Machine Translation (NMT) for the Edoid
language family of Southern Nigeria. Using the new JW300 public dataset, we
trained and evaluated baseline translation models for four widely spoken
languages in this group: \`Ed\'o, \'Es\'an, Urhobo and Isoko. Trained models,
code and datasets have been open-sourced to advance future research efforts on
Edoid language technology.
| 2,020 | Computation and Language |
Generating Chinese Poetry from Images via Concrete and Abstract
Information | In recent years, the automatic generation of classical Chinese poetry has
made great progress. Besides focusing on improving the quality of the generated
poetry, there is a new topic about generating poetry from an image. However,
the existing methods for this topic still have the problem of topic drift and
semantic inconsistency, and an image-poem pair dataset is hard to build for
training these models. In this paper, we extract and integrate the
Concrete and Abstract information from images to address those issues. We
propose an infilling-based Chinese poetry generation model which can infill
the Concrete keywords into each line of poems in an explicit way, and an
abstract information embedding to integrate the Abstract information into
generated poems. In addition, we use non-parallel data during training and
construct separate image datasets and poem datasets to train the different
components in our framework. Both automatic and human evaluation results show
that our approach can generate poems which have better consistency with images
without losing quality.
| 2,020 | Computation and Language |
Cross-Lingual Adaptation Using Universal Dependencies | We describe a cross-lingual adaptation method based on syntactic parse trees
obtained from the Universal Dependencies (UD), which are consistent across
languages, to develop classifiers in low-resource languages. The idea of UD
parsing is to capture similarities as well as idiosyncrasies among
typologically different languages. In this paper, we show that models trained
using UD parse trees for complex NLP tasks can characterize very different
languages. We study two tasks of paraphrase identification and semantic
relation extraction as case studies. Based on UD parse trees, we develop
several models using tree kernels and show that these models trained on the
English dataset can correctly classify data of other languages e.g. French,
Farsi, and Arabic. The proposed approach opens up avenues for exploiting UD
parsing in solving similar cross-lingual tasks, which is very useful for
languages for which no labeled data is available.
| 2,020 | Computation and Language |
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating
Cross-lingual Generalization | Much recent progress in applications of machine learning models to NLP has
been driven by benchmarks that evaluate models across a wide variety of tasks.
However, these broad-coverage benchmarks have been mostly limited to English,
and despite an increasing interest in multilingual models, a benchmark that
enables the comprehensive evaluation of such methods on a diverse range of
languages and tasks is still missing. To this end, we introduce the
Cross-lingual TRansfer Evaluation of Multilingual Encoders XTREME benchmark, a
multi-task benchmark for evaluating the cross-lingual generalization
capabilities of multilingual representations across 40 languages and 9 tasks.
We demonstrate that while models tested on English reach human performance on
many tasks, there is still a sizable gap in the performance of cross-lingually
transferred models, particularly on syntactic and sentence retrieval tasks.
There is also a wide spread of results across languages. We release the
benchmark to encourage research on cross-lingual learning methods that transfer
linguistic knowledge across a diverse and representative set of languages and
tasks.
| 2,020 | Computation and Language |
Can Embeddings Adequately Represent Medical Terminology? New Large-Scale
Medical Term Similarity Datasets Have the Answer! | A large number of embeddings trained on medical data have emerged, but it
remains unclear how well they represent medical terminology, in particular
whether the close relationship of semantically similar medical terms is encoded
in these embeddings. To date, only small datasets for testing medical term
similarity are available, which does not allow conclusions to be drawn about the
generalisability of embeddings to the enormous amount of medical terms used by
doctors. We present multiple automatically created large-scale medical term
similarity datasets and confirm their high quality in an annotation study with
doctors. We evaluate state-of-the-art word and contextual embeddings on our new
datasets, comparing multiple vector similarity metrics and word vector
aggregation techniques. Our results show that current embeddings are limited in
their ability to adequately encode medical terms. The novel datasets thus form
a challenging new benchmark for the development of medical embeddings able to
accurately represent the whole medical terminology.
| 2,020 | Computation and Language |
Learning Syntactic and Dynamic Selective Encoding for Document
Summarization | Text summarization aims to generate a headline or a short summary consisting
of the major information of the source text. Recent studies employ the
sequence-to-sequence framework to encode the input with a neural network and
generate abstractive summary. However, most studies feed the encoder with the
semantic word embedding but ignore the syntactic information of the text.
Further, although previous studies proposed the selective gate to control the
information flow from the encoder to the decoder, it is static during the
decoding and cannot differentiate the information based on the decoder states.
In this paper, we propose a novel neural architecture for document
summarization. Our approach has the following contributions: first, we
incorporate syntactic information such as constituency parsing trees into the
encoding sequence to learn both the semantic and syntactic information from the
document, resulting in more accurate summaries; second, we propose a dynamic gate
network to select the salient information based on the context of the decoder
state, which is essential to document summarization. The proposed model has
been evaluated on CNN/Daily Mail summarization datasets and the experimental
results show that the proposed approach outperforms baseline approaches.
| 2,020 | Computation and Language |
Adversarial Multi-Binary Neural Network for Multi-class Classification | Multi-class text classification is one of the key problems in machine
learning and natural language processing. Emerging neural networks deal with
the problem using a multi-output softmax layer and achieve substantial
progress, but they do not explicitly learn the correlation among classes. In
this paper, we use a multi-task framework to address multi-class
classification, where a multi-class classifier and multiple binary classifiers
are trained together. Moreover, we employ adversarial training to distinguish
the class-specific features and the class-agnostic features. The model benefits
from better feature representation. We conduct experiments on two large-scale
multi-class text classification tasks and demonstrate that the proposed
architecture outperforms baseline approaches.
| 2,020 | Computation and Language |
BaitWatcher: A lightweight web interface for the detection of
incongruent news headlines | In digital environments where substantial amounts of information are shared
online, news headlines play essential roles in the selection and diffusion of
news articles. Some news articles attract audience attention by showing
exaggerated or misleading headlines. This study addresses the \textit{headline
incongruity} problem, in which a news headline makes claims that are either
unrelated or opposite to the contents of the corresponding article. We present
\textit{BaitWatcher}, which is a lightweight web interface that guides readers
in estimating the likelihood of incongruence in news articles before clicking
on the headlines. BaitWatcher utilizes a hierarchical recurrent encoder that
efficiently learns complex textual representations of a news headline and its
associated body text. For training the model, we construct a million-scale
dataset of news articles, which we also release for broader research use. Based
on the results of a focus group interview, we discuss the importance of
developing an interpretable AI agent for the design of a better interface for
mitigating the effects of online misinformation.
| 2,020 | Computation and Language |
Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings | In this work, we examine the extent to which embeddings may encode
marginalized populations differently, and how this may lead to a perpetuation
of biases and worsened performance on clinical tasks. We pretrain deep
embedding models (BERT) on medical notes from the MIMIC-III hospital dataset,
and quantify potential disparities using two approaches. First, we identify
dangerous latent relationships that are captured by the contextual word
embeddings using a fill-in-the-blank method with text from real clinical notes
and a log probability bias score quantification. Second, we evaluate
performance gaps across different definitions of fairness on over 50 downstream
clinical prediction tasks that include detection of acute and chronic
conditions. We find that classifiers trained from BERT representations exhibit
statistically significant differences in performance, often favoring the
majority group with regards to gender, language, ethnicity, and insurance
status. Finally, we explore shortcomings of using adversarial debiasing to
obfuscate subgroup information in contextual word embeddings, and recommend
best practices for such deep embedding models in clinical settings.
| 2,020 | Computation and Language |
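The fill-in-the-blank probing described above can be sketched as follows; the template, the target words, and the use of a general-domain BERT checkpoint (rather than a MIMIC-III pretrained model) are assumptions made purely for illustration, and the paper's log probability bias score involves a normalization not shown here.
```python
# Probe a masked LM for latent associations by comparing fill-in scores.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
mask = fill.tokenizer.mask_token

template = f"The patient is {mask} and was described as non-compliant."
for result in fill(template, targets=["male", "female"]):
    print(result["token_str"], result["score"])
```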
Keyword-Attentive Deep Semantic Matching | Deep Semantic Matching is a crucial component in various natural language
processing applications such as question and answering (QA), where an input
query is compared to each candidate question in a QA corpus in terms of
relevance. Measuring similarities between a query-question pair in an open
domain scenario can be challenging due to diverse word tokens in the
query-question pair. We propose a keyword-attentive approach to improve deep
semantic matching. We first leverage domain tags from a large corpus to
generate a domain-enhanced keyword dictionary. Built upon BERT, we stack a
keyword-attentive transformer layer to highlight the importance of keywords in
the query-question pair. During model training, we propose a new negative
sampling approach based on keyword coverage between the input pair. We evaluate
our approach on a Chinese QA corpus using various metrics, including precision
of retrieval candidates and accuracy of semantic matching. Experiments show
that our approach outperforms existing strong baselines. Our approach is
general and can be applied to other text matching tasks with little adaptation.
| 2,020 | Computation and Language |
From Algebraic Word Problem to Program: A Formalized Approach | In this paper, we propose a pipeline to convert grade school level algebraic
word problems into programs in a formal language, A-IMP. Using natural language
processing tools, we break the problem into sentence fragments which can then
be reduced to functions. The functions are categorized by the head verb of the
sentence and its structure, as defined by (Hosseini et al., 2014). We define
the function signature and extract its arguments from the text using dependency
parsing. We have a working implementation of the entire pipeline which can be
found on our github repository.
| 2,020 | Computation and Language |
Hybrid Attention-Based Transformer Block Model for Distant Supervision
Relation Extraction | With the exponential growth of digital text information, it
is challenging to efficiently obtain specific knowledge from massive
unstructured text information. As one basic task for natural language
processing (NLP), relation extraction aims to extract the semantic relation
between entity pairs based on the given text. To avoid manual labeling of
datasets, distant supervision relation extraction (DSRE) has been widely used,
aiming to utilize a knowledge base to automatically annotate datasets.
Unfortunately, this method heavily suffers from wrong labelling due to the
underlying strong assumptions. To address this issue, we propose a new
framework using hybrid attention-based Transformer block with multi-instance
learning to perform the DSRE task. More specifically, the Transformer block is
firstly used as the sentence encoder to capture syntactic information of
sentences, which mainly utilizes multi-head self-attention to extract features
from word level. Then, a more concise sentence-level attention mechanism is
adopted to constitute the bag representation, aiming to incorporate valid
information of each sentence to effectively represent the bag. Experimental
results on the public dataset New York Times (NYT) demonstrate that the
proposed approach can outperform the state-of-the-art algorithms on the
evaluation dataset, which verifies the effectiveness of our model for the DSRE
task.
| 2,020 | Computation and Language |
Vector logic allows counterfactual virtualization by The Square Root of
NOT | In this work we investigate the representation of counterfactual conditionals
using the vector logic, a matrix-vectors formalism for logical functions and
truth values. Within this formalism, counterfactuals can be transformed into
complex matrices by preprocessing an implication matrix with one of the square
roots of NOT, itself a complex matrix. This mathematical approach highlights the
virtual character of the counterfactuals. This happens because this
representation produces a valuation of a counterfactual that is the
superposition of the two opposite truth values weighted, respectively, by two
complex conjugated coefficients. This result shows that this procedure gives an
uncertain evaluation projected on the complex domain. After this basic
representation, the judgment of the plausibility of a given counterfactual
allows us to shift the decision towards an acceptance or a refusal. This shift
is the result of applying one of the two square roots of NOT a second time.
| 2,020 | Computation and Language |
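A small numerical sketch of the construction described above: a complex square root of NOT applied to a classical truth value yields a superposition of the two opposite values weighted by complex conjugate coefficients. The vector encodings follow the usual vector-logic convention; the specific matrix shown is one of the two possible roots and the example is illustrative only.
```python
import numpy as np

t = np.array([1.0, 0.0])                    # true
f = np.array([0.0, 1.0])                    # false
NOT = np.outer(f, t) + np.outer(t, f)       # swaps t and f

# One of the two square roots of NOT (a complex matrix).
S = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])
assert np.allclose(S @ S, NOT)

v = S @ t                                   # superposition of t and f
print(v)
print(abs(v[0]) ** 2, abs(v[1]) ** 2)       # equal weight on both truth values
```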
Joint Multiclass Debiasing of Word Embeddings | Bias in Word Embeddings has been a subject of recent interest, along with
efforts for its reduction. Current approaches show promising progress towards
debiasing single bias dimensions such as gender or race. In this paper, we
present a joint multiclass debiasing approach that is capable of debiasing
multiple bias dimensions simultaneously. In that direction, we present two
approaches, HardWEAT and SoftWEAT, that aim to reduce biases by minimizing the
scores of the Word Embeddings Association Test (WEAT). We demonstrate the
viability of our methods by debiasing Word Embeddings on three classes of
biases (religion, gender and race) in three different publicly available word
embeddings and show that our concepts can both reduce or even completely
eliminate bias, while maintaining meaningful relationships between vectors in
word embeddings. Our work strengthens the foundation for more unbiased neural
representations of textual data.
| 2,020 | Computation and Language |
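For reference, the WEAT effect size that methods such as HardWEAT and SoftWEAT aim to drive down can be computed as in the sketch below; the random toy vectors stand in for target and attribute word sets and are an assumption.
```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    sx = np.array([assoc(x, A, B) for x in X])
    sy = np.array([assoc(y, A, B) for y in Y])
    return (sx.mean() - sy.mean()) / np.concatenate([sx, sy]).std()

rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(5, 50)) for _ in range(4))  # toy word vectors
print(weat_effect_size(X, Y, A, B))
```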
Matching Text with Deep Mutual Information Estimation | Text matching is a core natural language processing research problem. How to
retain sufficient information on both content and structure is one
important challenge. In this paper, we present a neural approach for
general-purpose text matching with deep mutual information estimation
incorporated. Our approach, Text matching with Deep Info Max (TIM), is
integrated with a procedure of unsupervised learning of representations by
maximizing the mutual information between the text matching neural network's input
and output. We use both global and local mutual information to learn text
representations. We evaluate our text matching approach on several tasks
including natural language inference, paraphrase identification, and answer
selection. Compared to the state-of-the-art approaches, the experiments show
that our method integrated with mutual information estimation learns better
text representation and achieves better experimental results of text matching
tasks without exploiting pretraining on external data.
| 2,020 | Computation and Language |
Tigrinya Neural Machine Translation with Transfer Learning for
Humanitarian Response | We report our experiments in building a domain-specific Tigrinya-to-English
neural machine translation system. We use transfer learning from other Ge'ez
script languages and report an improvement of 1.3 BLEU points over a classic
neural baseline. We publish our development pipeline as an open-source library
and also provide a demonstration application.
| 2,020 | Computation and Language |
Generating Major Types of Chinese Classical Poetry in a Uniformed
Framework | Poetry generation is an interesting research topic in the field of text
generation. As one of the most valuable literary and cultural heritages of
China, Chinese classical poetry is very familiar and loved by Chinese people
from generation to generation. It has many particular characteristics in its
language structure, ranging from form and sound to meaning, and is thus regarded as an
ideal testing task for text generation. In this paper, we propose a GPT-2 based
uniformed framework for generating major types of Chinese classical poems. We
define a unified format for formulating all types of training samples by
integrating detailed form information, then present a simple form-stressed
weighting method in GPT-2 to strengthen the control over the form of the
generated poems, with special emphasis on those forms with longer body length.
Preliminary experimental results show this enhanced model can generate Chinese
classical poems of major types with high quality in both form and content,
validating the effectiveness of the proposed strategy. The model has been
incorporated into Jiuge, the most influential Chinese classical poetry
generation system developed by Tsinghua University (Guo et al., 2019).
| 2,020 | Computation and Language |
Masakhane -- Machine Translation For Africa | Africa has over 2000 languages. Despite this, African languages account for a
small portion of available resources and publications in Natural Language
Processing (NLP). This is due to multiple factors, including: a lack of focus
from government and funding, discoverability, a lack of community, sheer
language complexity, difficulty in reproducing papers and no benchmarks to
compare techniques. To begin to address the identified problems, MASAKHANE, an
open-source, continent-wide, distributed, online research effort for machine
translation for African languages, was founded. In this paper, we discuss our
methodology for building the community and spurring research from the African
continent, as well as outline the success of the community in terms of
addressing the identified problems affecting African NLP.
| 2,020 | Computation and Language |
Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving
Adversarial Text Generation | Training generative models that can generate high-quality text with
sufficient diversity is an important open problem for Natural Language
Generation (NLG) community. Recently, generative adversarial models have been
applied extensively on text generation tasks, where the adversarially trained
generators alleviate the exposure bias experienced by conventional maximum
likelihood approaches and result in promising generation quality. However, due
to the notorious defect of mode collapse for adversarial training, the
adversarially trained generators face a quality-diversity trade-off, i.e., the
generator models tend to sacrifice generation diversity severely for increasing
generation quality. In this paper, we propose a novel approach which aims to
improve the performance of adversarial text generation via efficiently
decelerating mode collapse of the adversarial training. To this end, we
introduce a cooperative training paradigm, where a language model is
cooperatively trained with the generator and we utilize the language model to
efficiently shape the data distribution of the generator against mode collapse.
Moreover, instead of engaging the cooperative update for the generator in a
principled way, we formulate a meta learning mechanism, where the cooperative
update to the generator serves as a high level meta task, with an intuition of
ensuring the parameters of the generator after the adversarial update would
stay resistant against mode collapse. In the experiment, we demonstrate our
proposed approach can efficiently slow down the pace of mode collapse for the
adversarial text generators. Overall, our proposed method is able to outperform
the baseline approaches with significant margins in terms of both generation
quality and diversity in the testified domains.
| 2,020 | Computation and Language |
The Medical Scribe: Corpus Development and Model Performance Analyses | There is a growing interest in creating tools to assist in clinical note
generation using the audio of provider-patient encounters. Motivated by this
goal and with the help of providers and medical scribes, we developed an
annotation scheme to extract relevant clinical concepts. We used this
annotation scheme to label a corpus of about 6k clinical encounters. This was
used to train a state-of-the-art tagging model. We report ontologies, labeling
results, model performances, and detailed analyses of the results. Our results
show that the entities related to medications can be extracted with a
relatively high accuracy of 0.90 F-score, followed by symptoms at 0.72 F-score,
and conditions at 0.57 F-score. In our task, we not only identify where the
symptoms are mentioned but also map them to canonical forms as they appear in
the clinical notes. Of the different types of errors, in about 19-38% of the
cases, we find that the model output was correct, and about 17-32% of the
errors do not impact the clinical note. Taken together, the models developed in
this work are more useful than the F-scores reflect, making it a promising
approach for practical applications.
| 2,020 | Computation and Language |
Forensic Authorship Analysis of Microblogging Texts Using N-Grams and
Stylometric Features | In recent years, messages and text posted on the Internet are used in
criminal investigations. Unfortunately, the authorship of many of them remains
unknown. In some channels, the problem of establishing authorship may be even
harder, since the length of digital texts is limited to a certain number of
characters. In this work, we aim at identifying authors of tweet messages,
which are limited to 280 characters. We evaluate popular features employed
traditionally in authorship attribution which capture properties of the writing
style at different levels. We use for our experiments a self-captured database
of 40 users, with 120 to 200 tweets per user. Results using this small set are
promising, with the different features providing a classification accuracy
between 92% and 98.5%. These results are competitive in comparison to existing
studies which employ short texts such as tweets or SMS.
| 2,020 | Computation and Language |
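A character n-gram stylometric baseline of the kind evaluated above can be set up in a few lines; the tweets, users and exact feature configuration below are invented for illustration and are not taken from the paper's 40-user database.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["off to the gym, no excuses today!!",
          "no excuses. gym first, then work",
          "Reading a great paper on NLP tonight",
          "tonight: more reading, more NLP"]
authors = ["user_a", "user_a", "user_b", "user_b"]

# Character n-grams capture low-level stylistic habits (punctuation, casing).
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), lowercase=False),
    LinearSVC())
clf.fit(tweets, authors)
print(clf.predict(["gym then NLP reading tonight?"]))
```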
Predicting Legal Proceedings Status: Approaches Based on Sequential Text
Data | The objective of this paper is to develop predictive models to classify
Brazilian legal proceedings in three possible classes of status: (i) archived
proceedings, (ii) active proceedings, and (iii) suspended proceedings. This
problem's resolution is intended to assist public and private institutions in
managing large portfolios of legal proceedings, providing gains in scale and
efficiency. In this paper, legal proceedings are made up of sequences of short
texts called "motions." We combined several natural language processing (NLP)
and machine learning techniques to solve the problem. Although working with
Portuguese NLP, which can be challenging due to lack of resources, our
approaches performed remarkably well in the classification task, achieving
maximum accuracy of .93 and top average F1 Scores of .89 (macro) and .93
(weighted). Furthermore, we could extract and interpret the patterns learned by
one of our models besides quantifying how those patterns relate to the
classification task. The interpretability step is important among machine
learning legal applications and gives us an exciting insight into how black-box
models make decisions.
| 2,021 | Computation and Language |
Finnish Language Modeling with Deep Transformer Models | Transformers have recently taken the center stage in language modeling after
LSTMs were considered the dominant model architecture for a long time. In this
project, we investigate the performance of the Transformer architectures BERT
and Transformer-XL for the language modeling task. We use a sub-word model
setting with the Finnish language and compare it to the previous State of the
art (SOTA) LSTM model. BERT achieves a pseudo-perplexity score of 14.5, which
is the first such measure achieved as far as we know. Transformer-XL improves
upon the perplexity score to 73.58, which is 27\% better than the LSTM model.
| 2,020 | Computation and Language |
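The pseudo-perplexity reported above for BERT can be computed by masking one position at a time and scoring the true token, roughly as in the sketch below; using an English bert-base checkpoint instead of the Finnish sub-word model is an assumption made so the snippet runs out of the box.
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"   # stand-in; the paper uses a Finnish sub-word model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def pseudo_perplexity(sentence):
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    nlls = []
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        nlls.append(-torch.log_softmax(logits, dim=-1)[ids[i]])
    return torch.exp(torch.stack(nlls).mean()).item()

print(pseudo_perplexity("language modeling with transformers"))
```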
Cost-Sensitive BERT for Generalisable Sentence Classification with
Imbalanced Data | The automatic identification of propaganda has gained significance in recent
years due to technological and social changes in the way news is generated and
consumed. That this task can be addressed effectively using BERT, a powerful
new architecture which can be fine-tuned for text classification tasks, is not
surprising. However, propaganda detection, like other tasks that deal with news
documents and other forms of decontextualized social communication (e.g.
sentiment analysis), inherently deals with data whose categories are
simultaneously imbalanced and dissimilar. We show that BERT, while capable of
handling imbalanced classes with no additional data augmentation, does not
generalise well when the training and test data are sufficiently dissimilar (as
is often the case with news sources, whose topics evolve over time). We show
how to address this problem by providing a statistical measure of similarity
between datasets and a method of incorporating cost-weighting into BERT when
the training and test sets are dissimilar. We test these methods on the
Propaganda Techniques Corpus (PTC) and achieve the second-highest score on
sentence-level propaganda classification.
| 2,020 | Computation and Language |
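The cost-weighting idea referred to above amounts to weighting the fine-tuning loss by class; a minimal sketch, assuming inverse-frequency weights and random stand-in logits in place of BERT's classifier output, is shown below.
```python
import torch
import torch.nn as nn

counts = torch.tensor([900.0, 100.0])             # imbalanced class counts
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights
loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                        # stand-in for [CLS] classifier logits
labels = torch.randint(0, 2, (8,))
print(loss_fn(logits, labels))
```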
Predicting Unplanned Readmissions with Highly Unstructured Data | Deep learning techniques have been successfully applied to predict unplanned
readmissions of patients in medical centers. The training data for these models
is usually based on historical medical records that contain a significant
amount of free-text from admission reports, referrals, exam notes, etc. Most of
the models proposed so far are tailored to English text data and assume that
electronic medical records follow standards common in developed countries.
These two characteristics make them difficult to apply in developing countries
that do not necessarily follow international standards for registering patient
information, or that store text information in languages other than English.
In this paper we propose a deep learning architecture for predicting
unplanned readmissions that consumes data that is significantly less structured
compared with previous models in the literature. We use it to present the first
results for this task in a large clinical dataset that mainly contains Spanish
text data. The dataset is composed of almost 10 years of records in a Chilean
medical center. On this dataset, our model achieves results that are comparable
to some of the most recent results obtained in US medical centers for the same
task (0.76 AUROC).
| 2,020 | Computation and Language |
Author2Vec: A Framework for Generating User Embedding | Online forums and social media platforms provide noisy but valuable data
every day. In this paper, we propose a novel end-to-end neural network-based
user embedding system, Author2Vec. The model incorporates sentence
representations generated by BERT (Bidirectional Encoder Representations from
Transformers) with a novel unsupervised pre-training objective, authorship
classification, to produce better user embeddings that encode useful
user-intrinsic properties. This user embedding system was pre-trained on post
data of 10k Reddit users and was analyzed and evaluated on two user
classification benchmarks: depression detection and personality classification,
in which the model proved to outperform traditional count-based and
prediction-based methods. We substantiate that Author2Vec successfully encoded
useful user attributes and the generated user embedding performs well in
downstream classification tasks without further finetuning.
| 2,020 | Computation and Language |
Sentiment Analysis in Drug Reviews using Supervised Machine Learning
Algorithms | Sentiment Analysis is an important algorithm in Natural Language Processing
which is used to detect sentiment within some text. In our project, we chose
to analyze reviews of various drugs which have been reviewed in the form of
texts and have also been given a rating on a scale from 1-10. We obtained this
data set from the UCI machine learning repository, which had two data sets:
train and test (split as 75-25\%). We split the numeric rating for each drug
into three general classes: positive (7-10), negative (1-4) or neutral (4-7).
There are multiple reviews for drugs that belong to a similar condition, and we
decided to investigate how the different words used in reviews for different
conditions impact the ratings of the drugs. Our intention was mainly to
implement supervised machine learning classification algorithms that predict
the class of the rating using the textual review. We primarily implemented
different embeddings such as Term Frequency Inverse Document Frequency (TFIDF)
and Count Vectors (CV). We trained models on the most popular conditions such
as "Birth Control", "Depression" and "Pain" within the data set and obtained
good results when predicting on the test data sets.
| 2,020 | Computation and Language |
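A minimal version of the setup described above, mapping the 1-10 ratings to three classes and training a TF-IDF classifier, might look like the sketch below; the in-line reviews are placeholders rather than the UCI drug-review data.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def rating_to_class(r):
    if r >= 7:
        return "positive"
    if r <= 4:
        return "negative"
    return "neutral"

reviews = ["worked wonders for my pain",
           "terrible side effects, stopped taking it",
           "does the job but made me dizzy",
           "no change at all after a month"]
ratings = [9, 2, 5, 3]
labels = [rating_to_class(r) for r in ratings]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(reviews, labels)
print(model.predict(["helped a lot, highly recommend"]))
```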
Multi-Label Text Classification using Attention-based Graph Neural
Network | In Multi-Label Text Classification (MLTC), one sample can belong to more than
one class. It is observed that in most MLTC tasks, there are dependencies or
correlations among labels. Existing methods tend to ignore the relationship
among labels. In this paper, a graph attention network-based model is proposed
to capture the attentive dependency structure among the labels. The graph
attention network uses a feature matrix and a correlation matrix to capture and
explore the crucial dependencies between the labels and generate classifiers
for the task. The generated classifiers are applied to sentence feature vectors
obtained from the text feature extraction network (BiLSTM) to enable end-to-end
training. Attention allows the system to assign different weights to neighbor
nodes per label, thus allowing it to learn the dependencies among labels
implicitly. The results of the proposed model are validated on five real-world
MLTC datasets. The proposed model achieves similar or better performance
compared to the previous state-of-the-art models.
| 2,020 | Computation and Language |
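Graph-based MLTC models of this kind typically consume a label correlation matrix estimated from the training set; a toy construction is sketched below, with the multi-hot label matrix and the binarization threshold chosen arbitrarily for illustration.
```python
import numpy as np

# Rows are training samples, columns are labels (multi-hot encoding).
Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [0, 0, 1]])

co_occurrence = Y.T @ Y                                # joint label counts
label_counts = np.diag(co_occurrence).astype(float)
correlation = co_occurrence / label_counts[:, None]    # P(column label | row label)
adjacency = (correlation >= 0.5).astype(float)         # binarized graph edges
print(correlation)
print(adjacency)
```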
Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream
Tasks | Word2Vec is a prominent model for natural language processing (NLP) tasks.
Similar inspiration is found in distributed embeddings for new state-of-the-art
(SotA) deep neural networks. However, the wrong combination of hyper-parameters can
produce poor quality vectors. The objective of this work is to empirically show
that an optimal combination of hyper-parameters exists and to evaluate various
combinations. We compare them with the released, pre-trained original word2vec
model. Both intrinsic and extrinsic (downstream) evaluations, including named
entity recognition (NER) and sentiment analysis (SA) were carried out. The
downstream tasks reveal that the best model is usually task-specific, high
analogy scores don't necessarily correlate positively with F1 scores and the
same applies to focusing on data alone. Increasing vector dimension size beyond a
point leads to poor quality or performance. If ethical considerations to save
time, energy and the environment are made, then reasonably smaller corpora may
do just as well or even better in some cases. Besides, using a small corpus, we
obtain better human-assigned WordSim scores, corresponding Spearman correlation
and better downstream performances (with significance tests) compared to the
original model, trained on 100 billion-word corpus.
| 2,021 | Computation and Language |
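A hyper-parameter sweep in the spirit of the study above can be scripted with gensim (version 4 API assumed); the toy corpus and the particular grid are illustrative assumptions, and each setting would in practice be scored intrinsically and on downstream tasks rather than printed.
```python
from gensim.models import Word2Vec

corpus = [["natural", "language", "processing", "with", "embeddings"],
          ["word", "vectors", "capture", "distributional", "similarity"],
          ["hyper", "parameters", "change", "embedding", "quality"]] * 100

for sg in (0, 1):                      # CBOW vs skip-gram
    for window in (4, 8):
        for size in (50, 100):
            model = Word2Vec(corpus, vector_size=size, window=window, sg=sg,
                             negative=5, min_count=1, epochs=5, seed=1)
            print(sg, window, size,
                  model.wv.most_similar("language", topn=1))
```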
Common-Knowledge Concept Recognition for SEVA | We build a common-knowledge concept recognition system for a Systems
Engineer's Virtual Assistant (SEVA) which can be used for downstream tasks such
as relation extraction, knowledge graph construction, and question-answering.
The problem is formulated as a token classification task similar to named
entity extraction. With the help of a domain expert and text processing
methods, we construct a dataset annotated at the word-level by carefully
defining a labelling scheme to train a sequence model to recognize systems
engineering concepts. We use a pre-trained language model and fine-tune it with
the labeled dataset of concepts. In addition, we also create some essential
datasets for information such as abbreviations and definitions from the systems
engineering domain. Finally, we construct a simple knowledge graph using these
extracted concepts along with some hyponym relations.
| 2,020 | Computation and Language |
Rat big, cat eaten! Ideas for a useful deep-agent protolanguage | Deep-agent communities developing their own language-like communication
protocol are a hot (or at least warm) topic in AI. Such agents could be very
useful in machine-machine and human-machine interaction scenarios long before
they have evolved a protocol as complex as human language. Here, I propose a
small set of priorities we should focus on, if we want to get as fast as
possible to a stage where deep agents speak a useful protolanguage.
| 2,020 | Computation and Language |
TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance
Generation | Natural Language Generation (NLG) models are prone to generating repetitive
utterances. In this work, we study the repetition problem for encoder-decoder
models, using both recurrent neural network (RNN) and transformer
architectures. To this end, we consider the chit-chat task, where the problem
is more prominent than in other tasks that need encoder-decoder architectures.
We first study the influence of model architectures. By using pre-attention and
highway connections for RNNs, we manage to achieve lower repetition rates.
However, this method does not generalize to other models such as transformers.
We hypothesize that the deeper reason is that in the training corpora, there
are hard tokens that are more difficult for a generative model to learn than
others and, once learning has finished, hard tokens are still under-learned, so
that repetitive generations are more likely to happen. Based on this
hypothesis, we propose token loss dynamic reweighting (TLDR) that applies
differentiable weights to individual token losses. By using higher weights for
hard tokens and lower weights for easy tokens, NLG models are able to learn
individual tokens at different paces. Experiments on chit-chat benchmark
datasets show that TLDR is more effective in repetition reduction for both RNN
and transformer architectures than baselines using different weighting
functions.
| 2,020 | Computation and Language |
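The core idea above, scaling each token's loss by a weight that grows with its current difficulty, can be sketched as follows; the specific weighting function (per-token loss normalized by the batch mean, computed without gradient) is an assumption made for illustration and not necessarily the one used in the paper.
```python
import torch
import torch.nn.functional as F

def reweighted_nll(logits, targets, pad_id=0):
    # logits: (batch, seq_len, vocab); targets: (batch, seq_len)
    per_token = F.cross_entropy(logits.transpose(1, 2), targets,
                                ignore_index=pad_id, reduction="none")
    mask = (targets != pad_id).float()
    with torch.no_grad():
        # Hard (high-loss) tokens get weights above 1, easy ones below 1.
        mean_loss = (per_token * mask).sum() / mask.sum().clamp(min=1)
        weights = per_token / mean_loss.clamp(min=1e-8)
    return (weights * per_token * mask).sum() / mask.sum().clamp(min=1)

logits = torch.randn(2, 5, 100, requires_grad=True)
targets = torch.randint(1, 100, (2, 5))
loss = reweighted_nll(logits, targets)
loss.backward()
print(loss.item())
```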
FFR V1.0: Fon-French Neural Machine Translation | Africa has the highest linguistic diversity in the world. On account of the
importance of language to communication, and the importance of reliable,
powerful and accurate machine translation models in modern inter-cultural
communication, there have been (and still are) efforts to create
state-of-the-art translation models for the many African languages. However,
the low-resources, diacritical and tonal complexities of African languages are
major issues facing African NLP today. The FFR is a major step towards creating
a robust translation model from Fon, a very low-resource and tonal language, to
French, for research and public use. In this paper, we describe our pilot
project: the creation of a large growing corpora for Fon-to-French translations
and our FFR v1.0 model, trained on this dataset. The dataset and model are made
publicly available.
| 2,020 | Computation and Language |
Integrating Crowdsourcing and Active Learning for Classification of
Work-Life Events from Tweets | Social media, especially Twitter, is being increasingly used for research
with predictive analytics. In social media studies, natural language processing
(NLP) techniques are used in conjunction with expert-based, manual and
qualitative analyses. However, social media data are unstructured and must
undergo complex manipulation for research use. Manual annotation is the most
resource- and time-consuming process, as multiple expert raters have to
reach consensus on every item, but it is essential for creating gold-standard
datasets for training NLP-based machine learning classifiers. To reduce the
burden of the manual annotation, yet maintaining its reliability, we devised a
crowdsourcing pipeline combined with active learning strategies. We
demonstrated its effectiveness through a case study that identifies job loss
events from individual tweets. We used Amazon Mechanical Turk platform to
recruit annotators from the Internet and designed a number of quality control
measures to assure annotation accuracy. We evaluated 4 different active
learning strategies (i.e., least confident, entropy, vote entropy, and
Kullback-Leibler divergence). The active learning strategies aim at reducing
the number of tweets needed to reach a desired performance of automated
classification. Results show that crowdsourcing is useful to create
high-quality annotations and active learning helps in reducing the number of
required tweets, although there was no substantial difference among the
strategies tested.
| 2,020 | Computation and Language |
Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak
Supervision | We created this CORD-NER dataset with comprehensive named entity recognition
(NER) on the COVID-19 Open Research Dataset Challenge (CORD-19) corpus
(2020-03-13). This CORD-NER dataset covers 75 fine-grained entity types: In
addition to the common biomedical entity types (e.g., genes, chemicals and
diseases), it covers many new entity types related explicitly to the COVID-19
studies (e.g., coronaviruses, viral proteins, evolution, materials, substrates
and immune responses), which may benefit research on COVID-19 related virus,
spreading mechanisms, and potential vaccines. CORD-NER annotation is a
combination of four sources with different NER methods. The quality of CORD-NER
annotation surpasses SciSpacy (over 10% higher on the F1 score based on a
sample set of documents), a fully supervised BioNER tool. Moreover, CORD-NER
supports incrementally adding new documents as well as adding new entity types
when needed by adding dozens of seeds as the input examples. We will constantly
update CORD-NER based on the incremental updates of the CORD-19 corpus and the
improvement of our system.
| 2,020 | Computation and Language |
Information-Theoretic Probing with Minimum Description Length | To measure how well pretrained representations encode some linguistic
property, it is common to use accuracy of a probe, i.e. a classifier trained to
predict the property from the representations. Despite widespread adoption of
probes, differences in their accuracy fail to adequately reflect differences in
representations. For example, they do not substantially favour pretrained
representations over randomly initialized ones. Analogously, their accuracy can
be similar when probing for genuine linguistic labels and probing for random
synthetic tasks. To see reasonable differences in accuracy with respect to
these random baselines, previous work had to constrain either the amount of
probe training data or its model size. Instead, we propose an alternative to
the standard probes, information-theoretic probing with minimum description
length (MDL). With MDL probing, training a probe to predict labels is recast as
teaching it to effectively transmit the data. Therefore, the measure of
interest changes from probe accuracy to the description length of labels given
representations. In addition to probe quality, the description length evaluates
"the amount of effort" needed to achieve the quality. This amount of effort
characterizes either (i) size of a probing model, or (ii) the amount of data
needed to achieve the high quality. We consider two methods for estimating MDL
which can be easily implemented on top of the standard probing pipelines:
variational coding and online coding. We show that these methods agree in
results and are more informative and stable than the standard probes.
| 2,020 | Computation and Language |
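Online (prequential) coding, one of the two estimators mentioned above, can be approximated with any off-the-shelf classifier as in the rough sketch below; the synthetic representations, labels, block boundaries and logistic-regression probe are all stand-ins chosen for illustration.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))            # stand-in for frozen representations
y = (X[:, 0] > 0).astype(int)              # stand-in for linguistic labels

fractions = [0.01, 0.05, 0.1, 0.25, 0.5, 1.0]
boundaries = [int(f * len(X)) for f in fractions]

codelength = boundaries[0] * np.log2(2)    # uniform code for the first block
for start, end in zip(boundaries[:-1], boundaries[1:]):
    probe = LogisticRegression(max_iter=1000).fit(X[:start], y[:start])
    proba = probe.predict_proba(X[start:end])
    # log_loss is in nats per example; convert the block total to bits.
    codelength += log_loss(y[start:end], proba, labels=[0, 1]) * (end - start) / np.log(2)

print(f"online codelength: {codelength:.1f} bits")
```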
Semantic Enrichment of Nigerian Pidgin English for Contextual Sentiment
Classification | Nigerian English adaptation, Pidgin, has evolved over the years through
multi-language code switching, code mixing and linguistic adaptation. While
Pidgin preserves many of the words in the normal English language corpus, both
in spelling and pronunciation, the fundamental meanings of these words have
changed significantly. For example, 'ginger' is not a plant but an expression of
motivation and 'tank' is not a container but an expression of gratitude. The
implication is that the current approach of using direct English sentiment
analysis of social media text from Nigeria is sub-optimal, as it will not be
able to capture the semantic variation and contextual evolution in the
contemporary meaning of these words. In practice, while many words in Nigerian
Pidgin adaptation are the same as the standard English, the full English
language based sentiment analysis models are not designed to capture the full
intent of the Nigerian pidgin when used alone or code-mixed. By augmenting
scarce human labelled code-changed text with ample synthetic code-reformatted
text and meaning, we achieve significant improvements in sentiment scoring. Our
research explores how to understand sentiment in an intrasentential code mixing
and switching context where there has been significant word localization. This
work presents 300 VADER-lexicon-compatible Nigerian Pidgin sentiment tokens
with their scores, and 14,000 gold-standard Nigerian Pidgin tweets with their
sentiment labels.
| 2,020 | Computation and Language |
Towards Supervised and Unsupervised Neural Machine Translation Baselines
for Nigerian Pidgin | Nigerian Pidgin is arguably the most widely spoken language in Nigeria.
Variants of this language are also spoken across West and Central Africa,
making it a very important language. This work aims to establish supervised and
unsupervised neural machine translation (NMT) baselines between English and
Nigerian Pidgin. We implement and compare NMT models with different
tokenization methods, creating a solid foundation for future works.
| 2,020 | Computation and Language |
Serialized Output Training for End-to-End Overlapped Speech Recognition | This paper proposes serialized output training (SOT), a novel framework for
multi-speaker overlapped speech recognition based on an attention-based
encoder-decoder approach. Instead of having multiple output layers as with the
permutation invariant training (PIT), SOT uses a model with only one output
layer that generates the transcriptions of multiple speakers one after another.
The attention and decoder modules take care of producing multiple
transcriptions from overlapped speech. SOT has two advantages over PIT: (1) no
limitation in the maximum number of speakers, and (2) an ability to model the
dependencies among outputs for different speakers. We also propose a simple
trick that allows SOT to be executed in $O(S)$, where $S$ is the number of
speakers in the training sample, by using the start times of the constituent
source utterances. Experimental results on LibriSpeech corpus show that the SOT
models can transcribe overlapped speech with variable numbers of speakers
significantly better than PIT-based models. We also show that the SOT models
can accurately count the number of speakers in the input audio.
| 2,020 | Computation and Language |
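The serialization scheme at the heart of SOT can be illustrated with plain Python: the reference transcriptions of the overlapping speakers are ordered by their start times and joined with a speaker-change token. The token name and data layout below are assumptions.
```python
SC = "<sc>"   # speaker-change symbol

def serialize(utterances):
    """utterances: list of (start_time_sec, transcription) for one mixture."""
    ordered = sorted(utterances, key=lambda u: u[0])
    return f" {SC} ".join(text for _, text in ordered)

mixture = [(1.7, "how are you"), (0.3, "good morning everyone")]
print(serialize(mixture))   # -> good morning everyone <sc> how are you
```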
A Streaming On-Device End-to-End Model Surpassing Server-Side
Conventional Model Quality and Latency | Thus far, end-to-end (E2E) models have not been shown to outperform
state-of-the-art conventional models with respect to both quality, i.e., word
error rate (WER), and latency, i.e., the time the hypothesis is finalized after
the user stops speaking. In this paper, we develop a first-pass Recurrent
Neural Network Transducer (RNN-T) model and a second-pass Listen, Attend, Spell
(LAS) rescorer that surpasses a conventional model in both quality and latency.
On the quality side, we incorporate a large number of utterances across varied
domains to increase acoustic diversity and the vocabulary seen by the model. We
also train with accented English speech to make the model more robust to
different pronunciations. In addition, given the increased amount of training
data, we explore a varied learning rate schedule. On the latency front, we
explore using the end-of-sentence decision emitted by the RNN-T model to close
the microphone, and also introduce various optimizations to improve the speed
of LAS rescoring. Overall, we find that RNN-T+LAS offers a better WER and
latency tradeoff compared to a conventional model. For example, for the same
latency, RNN-T+LAS obtains an 8% relative improvement in WER, while being more
than 400-times smaller in model size.
| 2,020 | Computation and Language |
Variational Transformers for Diverse Response Generation | Despite the great promise of Transformers in many sequence modeling tasks
(e.g., machine translation), their deterministic nature hinders them from
generalizing to high entropy tasks such as dialogue response generation.
Previous work proposes to capture the variability of dialogue responses with a
recurrent neural network (RNN)-based conditional variational autoencoder
(CVAE). However, the autoregressive computation of the RNN limits the training
efficiency. Therefore, we propose the Variational Transformer (VT), a
variational self-attentive feed-forward sequence model. The VT combines the
parallelizability and global receptive field of the Transformer with the
variational nature of the CVAE by incorporating stochastic latent variables
into Transformers. We explore two types of the VT: 1) modeling the
discourse-level diversity with a global latent variable; and 2) augmenting the
Transformer decoder with a sequence of fine-grained latent variables. Then, the
proposed models are evaluated on three conversational datasets with both
automatic metric and human evaluation. The experimental results show that our
models improve standard Transformers and other baselines in terms of diversity,
semantic relevance, and human judgment.
| 2,020 | Computation and Language |
HIN: Hierarchical Inference Network for Document-Level Relation
Extraction | Document-level RE requires reading, inferring and aggregating over multiple
sentences. From our point of view, it is necessary for document-level RE to
take advantage of multi-granularity inference information: entity level,
sentence level and document level. Thus, how to obtain and aggregate the
inference information with different granularity is challenging for
document-level RE, which has not been considered by previous work. In this
paper, we propose a Hierarchical Inference Network (HIN) to make full use of
the abundant information from entity level, sentence level and document level.
Translation constraint and bilinear transformation are applied to the target entity
pair in multiple subspaces to get entity-level inference information. Next, we
model the inference between entity-level information and sentence
representation to achieve sentence-level inference information. Finally, a
hierarchical aggregation approach is adopted to obtain the document-level
inference information. In this way, our model can effectively aggregate
inference information from these three different granularities. Experimental
results show that our method achieves state-of-the-art performance on the
large-scale DocRED dataset. We also demonstrate that using BERT representations
can further substantially boost the performance.
| 2,020 | Computation and Language |
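The entity-level step mentioned above (a translation constraint plus a bilinear transformation over the target entity pair) can be illustrated roughly as follows; the feature concatenation and dimensions are assumptions rather than the published HIN configuration.

```python
import torch
import torch.nn as nn

class EntityLevelInference(nn.Module):
    """Illustrative entity-pair feature extractor combining a translation
    constraint (tail - head, TransE-style) with a bilinear interaction."""
    def __init__(self, d: int):
        super().__init__()
        self.bilinear = nn.Bilinear(d, d, d)

    def forward(self, head: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        translation = tail - head                 # translation constraint
        interaction = self.bilinear(head, tail)   # bilinear transformation
        return torch.cat([translation, interaction], dim=-1)

if __name__ == "__main__":
    m = EntityLevelInference(d=128)
    h, t = torch.randn(4, 128), torch.randn(4, 128)
    print(m(h, t).shape)  # torch.Size([4, 256])
```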
Unsupervised feature learning for speech using correspondence and
Siamese networks | In zero-resource settings where transcribed speech audio is unavailable,
unsupervised feature learning is essential for downstream speech processing
tasks. Here we compare two recent methods for frame-level acoustic feature
learning. For both methods, unsupervised term discovery is used to find pairs
of word examples of the same unknown type. Dynamic programming is then used to
align the feature frames between each word pair, serving as weak top-down
supervision for the two models. For the correspondence autoencoder (CAE),
matching frames are presented as input-output pairs. The Triamese network uses
a contrastive loss to reduce the distance between frames of the same predicted
word type while increasing the distance between negative examples. For the
first time, these feature extractors are compared on the same discrimination
tasks using the same weak supervision pairs. We find that, on the two datasets
considered here, the CAE outperforms the Triamese network. However, we show
that a new hybrid correspondence-Triamese approach (CTriamese) consistently
outperforms both the CAE and Triamese models in terms of average precision and
ABX error rates on both English and Xitsonga evaluation data.
| 2,020 | Computation and Language |
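The two weak-supervision objectives being compared can be sketched side by side: the correspondence autoencoder (CAE) reconstructs one aligned frame from its paired frame, while the Triamese network optimizes a triplet-style contrastive loss. The network sizes and frame dimensionality below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frame dimensionality and hidden size are illustrative assumptions.
D_IN, D_HID = 39, 100  # e.g. MFCC frames -> bottleneck features

cae = nn.Sequential(nn.Linear(D_IN, D_HID), nn.ReLU(), nn.Linear(D_HID, D_IN))

def cae_loss(frame_a: torch.Tensor, frame_b: torch.Tensor) -> torch.Tensor:
    """Correspondence autoencoder: reconstruct one aligned frame from the other."""
    return F.mse_loss(cae(frame_a), frame_b)

def triamese_loss(anchor, positive, negative, margin: float = 0.2) -> torch.Tensor:
    """Triplet (Triamese-style) loss: pull same-type frames together,
    push negative examples away by at least the margin."""
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

if __name__ == "__main__":
    a, p, n = (torch.randn(8, D_IN) for _ in range(3))
    print(cae_loss(a, p).item(), triamese_loss(a, p, n).item())
```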
Orchestrating NLP Services for the Legal Domain | Legal technology is currently receiving a lot of attention from various
angles. In this contribution we describe the main technical components of a
system that is currently under development in the European innovation project
Lynx, which includes partners from industry and research. The key contribution
of this paper is a workflow manager that enables the flexible orchestration of
workflows based on a portfolio of Natural Language Processing and Content
Curation services as well as a Multilingual Legal Knowledge Graph that contains
semantic information and meaningful references to legal documents. We also
describe different use cases with which we experiment and develop prototypical
solutions.
| 2,020 | Computation and Language |
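A minimal sketch of the kind of service orchestration described here: a workflow manager applies a configured sequence of NLP and content-curation services to a document, each enriching it in turn. The service names and document schema are hypothetical, not the Lynx platform's API.

```python
from typing import Callable, Dict, List

Document = Dict[str, object]
Service = Callable[[Document], Document]

def run_workflow(doc: Document, services: List[Service]) -> Document:
    """Apply each configured service in order, letting it enrich the document."""
    for service in services:
        doc = service(doc)
    return doc

# Toy services standing in for, e.g., language detection and entity linking
# against a legal knowledge graph.
def detect_language(doc: Document) -> Document:
    return {**doc, "lang": "en"}

def link_legal_entities(doc: Document) -> Document:
    return {**doc, "entities": ["Regulation (EU) 2016/679"]}

if __name__ == "__main__":
    enriched = run_workflow({"text": "GDPR applies to ..."},
                            [detect_language, link_legal_entities])
    print(enriched)
```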
Noisy Text Data: Achilles' Heel of BERT | Owing to the phenomenal success of BERT on various NLP tasks and benchmark
datasets, industry practitioners are actively experimenting with fine-tuning
BERT to build NLP applications for solving industry use cases. For most
datasets that are used by practitioners to build industrial NLP applications,
it is hard to guarantee the absence of noise in the data. While BERT has
performed exceedingly well at transferring what it learns from one use case to
another, it remains unclear how BERT performs when fine-tuned on noisy text. In
this work, we explore the sensitivity of BERT to noise in the data. We work
with the most commonly occurring types of noise (spelling mistakes, typos) and show that
this results in significant degradation in the performance of BERT. We present
experimental results to show that BERT's performance on fundamental NLP tasks
like sentiment analysis and textual similarity drops significantly in the
presence of (simulated) noise on benchmark datasets viz. IMDB Movie Review,
STS-B, SST-2. Further, we identify shortcomings in the existing BERT pipeline
that are responsible for this drop in performance. Our findings suggest that
practitioners need to be wary of the presence of noise in their datasets while
fine-tuning BERT to solve industry use cases.
| 2,020 | Computation and Language |
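A minimal sketch of how simulated spelling noise of the kind used in these experiments might be injected into benchmark sentences before fine-tuning; the specific noise model (adjacent-character swaps and drops) and rate are assumptions, not the authors' exact procedure.

```python
import random

def add_typo_noise(text: str, noise_rate: float = 0.05, seed: int = 0) -> str:
    """Inject simple simulated typos (adjacent-character swaps and character
    drops) into a sentence at the given per-character rate."""
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if chars[i].isalpha() and rng.random() < noise_rate:
            if i + 1 < len(chars) and rng.random() < 0.5:
                out.extend([chars[i + 1], chars[i]])  # swap adjacent characters
                i += 2
                continue
            i += 1  # drop the character
            continue
        out.append(chars[i])
        i += 1
    return "".join(out)

if __name__ == "__main__":
    print(add_typo_noise("this movie was absolutely wonderful", noise_rate=0.15))
```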
Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining | Pre-trained neural language models bring significant improvement for various
NLP tasks, by fine-tuning the models on task-specific training sets. During
fine-tuning, the parameters are initialized from pre-trained models directly,
which ignores how the learning process of similar NLP tasks in different
domains is correlated and mutually reinforced. In this paper, we propose an
effective learning procedure named Meta Fine-Tuning (MFT), which serves as a
meta-learner to solve a group of similar NLP tasks for neural language models.
Instead of simply performing multi-task training over all the datasets, MFT learns only
from typical instances of various domains to acquire highly transferable
knowledge. It further encourages the language model to encode domain-invariant
representations by optimizing a series of novel domain corruption loss
functions. After MFT, the model can be fine-tuned for each domain with better
parameter initializations and higher generalization ability. We implement MFT
upon BERT to solve several multi-domain text mining tasks. Experimental results
confirm the effectiveness of MFT and its usefulness for few-shot learning.
| 2,020 | Computation and Language |
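As a rough, hedged illustration of a loss that encourages domain-invariant representations, the sketch below penalizes a domain classifier head whenever its predictions deviate from a uniform distribution over domains. This is only an illustrative proxy for the idea of domain corruption losses, not the MFT paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainInvariancePenalty(nn.Module):
    """Push domain predictions on pooled (e.g. [CLS]-like) features toward a
    uniform distribution, encouraging domain-invariant representations.
    Sizes and the KL-to-uniform formulation are illustrative assumptions."""
    def __init__(self, d_model: int, n_domains: int):
        super().__init__()
        self.domain_head = nn.Linear(d_model, n_domains)
        self.n_domains = n_domains

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        log_probs = F.log_softmax(self.domain_head(features), dim=-1)
        uniform = torch.full_like(log_probs, 1.0 / self.n_domains)
        return F.kl_div(log_probs, uniform, reduction="batchmean")

if __name__ == "__main__":
    penalty = DomainInvariancePenalty(d_model=768, n_domains=4)
    print(penalty(torch.randn(16, 768)).item())
```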